Signed-off-by: Mikel Olasagasti Uranga <mikel@olasagasti.info>
Signed-off-by: James Humphries <james@james-humphries.co.uk>
Co-authored-by: Mikel Olasagasti Uranga <mikel@olasagasti.info>
* Rename module name from "github.com/hashicorp/terraform" to "github.com/placeholderplaceholderplaceholder/opentf".
* Gofmt.
* Regenerate protobuf.
* Fix comments.
* Undo issue and pull request link changes.
* Undo comment changes.
* Fix comment.
* Undo some link changes.
* make generate && make protobuf
---------
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Add ability to specify Terraform Cloud Project in cloud block
Adds project configuration to the workspaces section of the cloud block.
Also configurable via the `TF_CLOUD_PROJECT` environment variable.
When a project is configured, the following behaviors will occur:
- `terraform init` with workspaces.name configured will create the workspace in the given project
- `terraform workspace new <name>` with workspaces.tags configured will create workspaces in the given project
- `terraform workspace list` will list workspaces only from the given project
The following behaviors are NOT affected by project configuration:
- `terraform workspace delete <name>` does not validate the workspace's inclusion in the given project
- When initializing a workspace that already exists in Terraform Cloud, the workspace's parent project is NOT validated against the given project
Adds tests for cloud block configuration of project
Update changelog
* Update cloud block docs
* Fix typos and changelog entry
* Add a speculative project lookup early in the cloud initialization process, so that a configured project that cannot be found is reported early
* Add project config for alias test
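A minimal sketch of how the configured project might be resolved, assuming a hypothetical `resolveProject` helper; the real attribute handling and precedence between the cloud block and the environment variable may differ:

```go
package cloud

import "os"

// resolveProject is a hypothetical helper illustrating the behavior described
// above: an explicit project set in the cloud block's workspaces section wins,
// otherwise the TF_CLOUD_PROJECT environment variable is consulted. An empty
// result means no project was configured.
func resolveProject(configured string) string {
	if configured != "" {
		return configured
	}
	return os.Getenv("TF_CLOUD_PROJECT")
}
```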
- Add plausible unredacted plan json for `plan-json-{basic,full}` testdata --
Created by just running the relevant terraform commands locally.
- Add plan-json-no-changes testdata --
The unredacted json was organically grown, but I edited the log and redacted
json by hand to match what I observed from a real but unrelated
planned-and-finished run in TFC.
- Add plan-json-basic-no-unredacted testdata --
This mimics a lack of admin permissions, resulting in a 404.
- Hook up `MockPlans.ReadJSONOutput` to test fixtures, when present (sketched after this list).
This method has existed for ages and has a backing store for unredacted plan
json, but it has been effectively a no-op, since nothing ever fills that
backing store. So, when creating a mock plan, make an attempt to read
unredacted json and stow it in the mocks on success.
- Make it possible to get the entire MockClient for a test backend
In order to test some things, I'm going to need to mess with the internal
state of runs and plans beyond what the go-tfe client API allows. I could add
magic special-casing to the mock API methods, or I could locate the
shenanigans next to the test that actually exploits it. The latter seems more
comprehensible, but I need access to the full mock client struct in order to
mess with its interior.
- Fill in some missing expectations around HasChanges when retrieving a run +
plan.
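A rough sketch of the fixture hookup described in the `MockPlans.ReadJSONOutput` bullet above; the struct fields, constructor, and fixture-path handling are illustrative assumptions rather than the actual mock's API:

```go
package cloud

import (
	"errors"
	"os"
)

// MockPlans is a trimmed-down stand-in for the test double described above;
// the real mock has more fields and methods. The idea is simply that creating
// a mock plan also tries to load an unredacted JSON fixture from disk and
// stows it in the backing store, so ReadJSONOutput stops being a no-op.
type MockPlans struct {
	unredactedJSON map[string][]byte // keyed by plan ID (assumption)
}

func NewMockPlans() *MockPlans {
	return &MockPlans{unredactedJSON: make(map[string][]byte)}
}

// createMockPlan registers a plan and, if an unredacted fixture exists at the
// given (hypothetical) path, records its contents for later retrieval.
func (m *MockPlans) createMockPlan(planID, fixturePath string) {
	if b, err := os.ReadFile(fixturePath); err == nil {
		m.unredactedJSON[planID] = b
	}
}

// ReadJSONOutput returns the stored unredacted plan JSON, mimicking the shape
// of the real client method.
func (m *MockPlans) ReadJSONOutput(planID string) ([]byte, error) {
	b, ok := m.unredactedJSON[planID]
	if !ok {
		return nil, errors.New("no unredacted JSON recorded for this plan")
	}
	return b, nil
}
```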
Since `terraform show -json` needs to get a raw hunk of json bytes and sling it
right back out again, it's going to be more convenient if plain `show` can ALSO
take in raw json. In order for that to happen, I need a function that basically
acts like `client.Plans.ReadJSONOutput()`, without eagerly unmarshalling that
`jsonformat.Plan` struct.
As a slight bonus, this also lets us make the tfe client mocks slightly
stupider.
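A sketch of what such a function might look like, assuming a hypothetical `jsonPlanReader` interface that stands in for whatever actually fetches the plan JSON; the point is only that the raw bytes are passed through without decoding into `jsonformat.Plan`:

```go
package cloud

import (
	"context"
	"fmt"
)

// jsonPlanReader is a hypothetical abstraction over the client call that
// fetches plan JSON (e.g. something shaped like client.Plans.ReadJSONOutput).
type jsonPlanReader interface {
	ReadJSONOutput(ctx context.Context, planID string) ([]byte, error)
}

// readRawPlanJSON returns the plan JSON exactly as received, so callers like
// plain `terraform show` can sling the bytes back out without eagerly
// unmarshalling them into a jsonformat.Plan struct.
func readRawPlanJSON(ctx context.Context, r jsonPlanReader, planID string) ([]byte, error) {
	b, err := r.ReadJSONOutput(ctx, planID)
	if err != nil {
		return nil, fmt.Errorf("reading plan JSON: %w", err)
	}
	return b, nil
}
```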
It displays a run header with a link to the web UI, like starting a new plan does, then confirms the run
and streams the apply logs. If you can't apply the run (it's from a different workspace, it's in an
unconfirmable state, etc.), it displays an error instead.
Notable points along the way:
* Implement `WrappedPlanFile` sum type, and update planfile consumers to use it instead of a plain `planfile.Reader` (sketched below).
* Enable applying a saved cloud plan
* Update TFC mocks — add org name to workspace, and minimal support for includes on MockRuns.ReadWithOptions.
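The `WrappedPlanFile` sum type from the first bullet could look roughly like the following; the field and constructor names here are assumptions, and the placeholder types stand in for the real local reader and saved cloud plan types. The shape is the point: exactly one variant is populated, and consumers ask which one they got.

```go
package planfile

// localReader and cloudPlan are placeholders for the real local plan file
// reader and saved cloud plan types; they exist only for this sketch.
type localReader struct{}
type cloudPlan struct{}

// WrappedPlanFile holds either a local plan file or a reference to a saved
// cloud plan, but never both.
type WrappedPlanFile struct {
	local *localReader
	cloud *cloudPlan
}

func NewWrappedLocal(r *localReader) *WrappedPlanFile { return &WrappedPlanFile{local: r} }
func NewWrappedCloud(c *cloudPlan) *WrappedPlanFile   { return &WrappedPlanFile{cloud: c} }

// Local returns the local plan file, if that is the variant being wrapped.
func (w *WrappedPlanFile) Local() (*localReader, bool) { return w.local, w.local != nil }

// Cloud returns the saved cloud plan reference, if that is the variant.
func (w *WrappedPlanFile) Cloud() (*cloudPlan, bool) { return w.cloud, w.cloud != nil }
```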
Create a pending state version followed by a separate state upload
When this version of the endpoint fails (it is not yet generally available, or when used with Terraform Enterprise), fall back to the original call with the state content included in the request.
This strategy will reduce the number of save failures due to network latency and gateway timeouts.
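A hedged sketch of the two-step upload with fallback; the client interface and method names here are invented for illustration and do not mirror the real go-tfe API:

```go
package cloud

import (
	"context"
	"fmt"
)

// stateClient is a hypothetical slice of the API client. CreatePending creates
// a state version without inlining the state content, UploadState uploads the
// content separately, and CreateWithContent is the original single-call
// behavior used as the fallback.
type stateClient interface {
	CreatePending(ctx context.Context) (stateVersionID string, err error)
	UploadState(ctx context.Context, stateVersionID string, state []byte) error
	CreateWithContent(ctx context.Context, state []byte) error
}

// saveState first tries the pending-version + separate-upload flow, then falls
// back to the original inline-content call when the newer endpoint is not
// available (e.g. Terraform Enterprise, or not yet generally available).
func saveState(ctx context.Context, c stateClient, state []byte) error {
	svID, err := c.CreatePending(ctx)
	if err != nil {
		// Endpoint not supported or failed; fall back to the original call.
		return c.CreateWithContent(ctx, state)
	}
	if err := c.UploadState(ctx, svID, state); err != nil {
		return fmt.Errorf("uploading state content: %w", err)
	}
	return nil
}
```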
The cloud backend, which communicates with TFC-like APIs, can create
runs which may have one or more configuration parameters altered. These
alterations are emitted as run-events on the run so that API clients
can consume and display them to users. This commit adds a step to the
plan operation to query the run-events once a run is created and then
emit specific run-event descriptions to the console as warnings for
the user.
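A sketch of that extra step in the plan operation. The `runEvent` type and the `ListRunEvents` call are invented for illustration, and the real change surfaces only specific event types rather than every event with a description:

```go
package cloud

import (
	"context"
	"fmt"
)

// runEvent is a stand-in for the run-event objects returned by the API when a
// run's configuration parameters were altered by the platform.
type runEvent struct {
	Action      string
	Description string
}

// runEventLister is a hypothetical slice of the API client used here.
type runEventLister interface {
	ListRunEvents(ctx context.Context, runID string) ([]runEvent, error)
}

// warnAboutRunEvents queries the run-events once a run has been created and
// emits their descriptions as warnings so the user can see any alterations.
func warnAboutRunEvents(ctx context.Context, c runEventLister, runID string, warnf func(format string, args ...interface{})) error {
	events, err := c.ListRunEvents(ctx, runID)
	if err != nil {
		return fmt.Errorf("listing run events: %w", err)
	}
	for _, ev := range events {
		if ev.Description != "" {
			warnf("Warning: %s", ev.Description)
		}
	}
	return nil
}
```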
* Implementation of structured logging.
These are the changes that enable the cloud backend to consume
structured logs and make use of the new plan renderer. This will enable
CLI-driven runs to view the structured output in the Terraform Cloud UI.
* Cloud structured logging unit tests
* Remove deferred logs logic, fix minor issues
Color formatting fixes, log type stop lists, default behavior for logs
that are unknown
* Use service disco path in redacted plan url
Normally, `terraform output` refreshes and reads the entire state in the command package before pulling output values out of it. This doesn't give Terraform Cloud the opportunity to apply the read-state-outputs organization permission, and instead applies the read-state-versions permission.
I decided to expand the state manager interface to provide a separate GetRootOutputValues function in order to give the cloud backend a more nuanced opportunity to fetch just the outputs. This required moving state Refresh/Read code that was previously in the command into the shared backend state as well as the filesystem state packages.
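A sketch of the expanded surface, assuming hypothetical names for the interface and the output value type; the real signature lives in the state manager and backend packages and may differ:

```go
package statemgr

import "context"

// OutputValue is a placeholder for however root module output values are
// represented in the states package.
type OutputValue struct {
	Value     interface{}
	Sensitive bool
}

// OutputReader captures the nuance added here: a state manager (or backend)
// can expose just the root output values, so the cloud backend can satisfy
// `terraform output` using the read-state-outputs permission instead of
// having to read whole state versions.
type OutputReader interface {
	GetRootOutputValues(ctx context.Context) (map[string]*OutputValue, error)
}
```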
* Convert uses of workspaces.operations into workspaces.executionMode
The cloud package currently uses a deprecated API on workspaces to determine a workspace's execution mode.
Deprecated: Operations (boolean)
New hotness: Execution mode (string - "local", "remote", or "agent")
More details: https://www.terraform.io/docs/cloud/api/workspaces.html#request-body
All uses of the Operations field coming from the client (within the cloud package) should be converted to the appropriate ExecutionMode equivalent.
We also need to update every reference to the operations field in the tests that exercise workspace behavior.
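A small sketch of the conversion being relied on; the mapping shown (true means "remote", false means "local") is the commonly documented equivalence, but verify it against the API docs linked above, and the helper names are illustrative:

```go
package cloud

// executionModeFromOperations maps the deprecated boolean Operations field to
// its ExecutionMode equivalent, for places (like old test fixtures) that still
// think in terms of the old field. "agent" mode has no Operations equivalent
// and must be set explicitly.
func executionModeFromOperations(operations bool) string {
	if operations {
		return "remote"
	}
	return "local"
}

// isRemoteExecution is the kind of check the cloud package should make now:
// consult ExecutionMode ("local", "remote", or "agent") rather than the
// deprecated Operations boolean. Treating "agent" as remote execution is an
// assumption for this sketch.
func isRemoteExecution(executionMode string) bool {
	return executionMode == "remote" || executionMode == "agent"
}
```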
Co-authored-by: Nick Fagerlund <nick.fagerlund@gmail.com>
The delete + assign at the end of `Update` and `UpdateByID` are meant to handle
renaming a workspace — (remove old name), (insert new name).
However, `UpdateByID` was doing (remove new name), (insert new name) and leaving
the old name in place. This commit changes it to match `Update` by grabbing the
original name off the workspace object _before_ potentially renaming it.
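A sketch of the fix using simplified stand-in types for the mock; the essential change is capturing the original name before the attribute updates run:

```go
package cloud

// mockWorkspace and mockWorkspaces are simplified stand-ins for the real mock
// client types; only the name index matters for this sketch.
type mockWorkspace struct {
	Name string
}

type mockWorkspaces struct {
	byName map[string]*mockWorkspace
}

// updateByID renames a workspace in the name index correctly: grab the
// original name *before* applying updates, then remove the old key and insert
// the new one. The previous buggy version read the name after updating, so it
// removed and re-inserted the new name and left the old entry in place.
func (m *mockWorkspaces) updateByID(w *mockWorkspace, newName string) {
	originalName := w.Name // capture before the rename

	// The shared attribute-update logic (consolidated with Update in the real
	// change) would run here; for this sketch only the rename matters.
	if newName != "" {
		w.Name = newName
	}

	delete(m.byName, originalName)
	m.byName[w.Name] = w
}
```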
Alas, there's not a very good way to test the message we're supposed to print to
the console in this situation; we just don't appear to have a mock terminal that
the test can read from. But we can at least test that the function returns
without erroring under the exact conditions where it was erroring before.
Note that the behaviors of mc.Workspaces.Update and UpdateByID were already
starting to drift, so I consolidated their actual attribute update logic into a
helper function before they drifted much further.
Implementing this test was quite a rabbit hole. In order to satisfy
backendTestBackendStates(), the workspaces returned from
backend.Workspaces() must match exactly, and the shortcut taken to test
pagination in 3cc58813f0 created an
impossible circumstance that got plastered over by the fact that
prefix filtering is done client-side, not by the API as it should be.
Tagging does not rely on client-side filtering, and expects that the
request made to the TFC API returns exactly those workspaces with the
given tags.
These changes include a better way to test pagination, wherein we
actually create over a page worth of valid workspaces in the mock client
and implement a simplified pagination behavior to match how the TFC API
actually works.
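A sketch of the simplified pagination behavior in the mock, assuming a 1-based page number and a page size similar to (but not necessarily identical to) the real list options; the default page size is an assumption:

```go
package cloud

// paginate slices a full result set the way the mock's List methods now do:
// the caller asks for a page number and a page size, and gets back that page
// plus the total page count, roughly matching how the TFC API paginates.
func paginate[T any](all []T, pageNumber, pageSize int) (page []T, totalPages int) {
	if pageSize <= 0 {
		pageSize = 20 // assumed default for this sketch
	}
	if pageNumber <= 0 {
		pageNumber = 1
	}
	totalPages = (len(all) + pageSize - 1) / pageSize

	start := (pageNumber - 1) * pageSize
	if start >= len(all) {
		return nil, totalPages
	}
	end := start + pageSize
	if end > len(all) {
		end = len(all)
	}
	return all[start:end], totalPages
}
```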
These changes include additions to fulfill the interface for the client
mock, plus moving all that logic (which needn't be duplicated across
both the remote and cloud packages) over to the cloud package under a
dedicated mock client file.