* Rename module name from "github.com/hashicorp/terraform" to "github.com/placeholderplaceholderplaceholder/opentf".
* Gofmt.
* Regenerate protobuf.
* Fix comments.
* Undo issue and pull request link changes.
* Undo comment changes.
* Fix comment.
* Undo some link changes.
* make generate && make protobuf
---------
Signed-off-by: Jakub Martin <kubam@spacelift.io>
* Add ability to specify Terraform Cloud Project in cloud block
Adds project configuration to the workspaces section of the cloud block.
Also configurable via the `TF_CLOUD_PROJECT` environment variable (precedence sketched below).
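As a minimal sketch of the intended precedence (the function and wiring here are hypothetical, not the actual backend code): an explicit `project` attribute in the cloud block wins, and the environment variable is the fallback.

```go
package main

import (
	"fmt"
	"os"
)

// resolveProject is a hypothetical sketch: an explicit project in the
// cloud block's workspaces section takes precedence, and TF_CLOUD_PROJECT
// is consulted only when the configuration leaves it unset. An empty
// result means no project was configured at all.
func resolveProject(configProject string) string {
	if configProject != "" {
		return configProject
	}
	return os.Getenv("TF_CLOUD_PROJECT")
}

func main() {
	os.Setenv("TF_CLOUD_PROJECT", "networking")
	fmt.Println(resolveProject(""))         // "networking" (env var fallback)
	fmt.Println(resolveProject("platform")) // "platform" (explicit config wins)
}
```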
When a project is configured, the following behaviors will occur:
- `terraform init` with workspaces.name configured will create the workspace in the given project
- `terraform workspace new <name>` with workspaces.tags configured will create workspaces in the given project
- `terraform workspace list` will list workspaces only from the given project
The following behaviors are NOT affected by project configuration:
- `terraform workspace delete <name>` does not validate the workspace's inclusion in the given project
- When initializing a workspace that already exists in Terraform Cloud, the workspace's parent project is NOT validated against the given project
Adds tests for cloud block project configuration
Update changelog
* Update cloud block docs
* Fix typos and changelog entry
* Add a speculative project lookup early in the cloud initialization process, so that a configured project that cannot be found is reported immediately
* Add project config for alias test
This commit uses Go's error wrapping features to transparently add some optional
info to certain planfile/state read errors. Specifically, we wrap errors when we
think we've identified the file type but are somehow unable to use it.
Callers that aren't interested in what we think about our input can just ignore
the wrapping; callers that ARE interested can use `errors.As()`.
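A minimal illustration of the pattern (the concrete error type here is invented for the example, not the actual type used in the planfile package):

```go
package main

import (
	"errors"
	"fmt"
)

// ErrUnusableLocalPlan is a stand-in for the kind of wrapper described
// above: we recognized the file type, but couldn't actually use it.
type ErrUnusableLocalPlan struct {
	inner error
}

func (e *ErrUnusableLocalPlan) Error() string {
	return fmt.Sprintf("couldn't use local plan file: %s", e.inner)
}

// Unwrap makes the wrapper transparent to errors.Is/errors.As.
func (e *ErrUnusableLocalPlan) Unwrap() error { return e.inner }

func readPlan() error {
	return &ErrUnusableLocalPlan{inner: errors.New("unsupported format version")}
}

func main() {
	err := readPlan()

	// Uninterested callers can treat err like any other error...
	fmt.Println(err)

	// ...while interested callers can recover the wrapper with errors.As.
	var unusable *ErrUnusableLocalPlan
	if errors.As(err, &unusable) {
		fmt.Println("recognized the file type but couldn't use it")
	}
}
```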
- Add plausible unredacted plan json for `plan-json-{basic,full}` testdata --
Created by just running the relevant terraform commands locally.
- Add plan-json-no-changes testdata --
The unredacted json was organically grown, but I edited the log and redacted
json by hand to match what I observed from a real but unrelated
planned-and-finished run in TFC.
- Add plan-json-basic-no-unredacted testdata --
This mimics a lack of admin permissions, resulting in a 404.
- Hook up `MockPlans.ReadJSONOutput` to test fixtures, when present.
This method has been implemented for ages, and has had a backing store for
unredacted plan json, but has been effectively a no-op since nothing ever
fills that backing store. So, when creating a mock plan, make an attempt to
read unredacted json and stow it in the mocks on success (see the sketch after this list).
- Make it possible to get the entire MockClient for a test backend
In order to test some things, I'm going to need to mess with the internal
state of runs and plans beyond what the go-tfe client API allows. I could add
magic special-casing to the mock API methods, or I could locate the
shenanigans next to the test that actually exploits it. The latter seems more
comprehensible, but I need access to the full mock client struct in order to
mess with its interior.
- Fill in some missing expectations around HasChanges when retrieving a run +
plan.
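A rough sketch of the `ReadJSONOutput` fixture hookup described above (the file path and backing-store shape are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// unredactedStore stands in for the backing store that the mock's
// ReadJSONOutput method already consults, keyed by plan ID.
var unredactedStore = map[string][]byte{}

// stowUnredactedJSON is a sketch of the new step when creating a mock
// plan: attempt to read an unredacted plan JSON fixture, and fill the
// backing store only on success. With no fixture present, the method
// remains a no-op, exactly as before.
func stowUnredactedJSON(fixtureDir, planID string) {
	raw, err := os.ReadFile(filepath.Join(fixtureDir, "plan-unredacted.json"))
	if err != nil {
		return // no fixture for this testdata dir; nothing to stow
	}
	unredactedStore[planID] = raw
}

func main() {
	stowUnredactedJSON("testdata/plan-json-basic", "plan-abc123")
	fmt.Println(len(unredactedStore))
}
```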
To do the "human" version of showing a plan, we need more than just the redacted
plan JSON itself; we also need a bunch of extra information that only the Cloud
backend is in a position to find out (since it's the only one holding a
configured go-tfe client instance). So, this method takes a run ID and hostname,
finds out everything we're going to need, and returns it wrapped up in a
RemotePlanJSON struct.
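In sketch form (the field set is a guess from the description above, not the exact struct):

```go
package cloudplan

// RemotePlanJSON is a sketch of the bundle described above: the
// redacted plan JSON itself plus the extra context that only the Cloud
// backend can gather with its configured go-tfe client.
type RemotePlanJSON struct {
	JSONBytes []byte // the redacted plan JSON, passed through verbatim
	Redacted  bool   // whether JSONBytes is the redacted variant
	RunHeader string // human-oriented run header, with the web UI link
	RunFooter string // next-steps footer to show after rendering
}
```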
Since `terraform show -json` needs to get a raw hunk of json bytes and sling it
right back out again, it's going to be more convenient if plain `show` can ALSO
take in raw json. In order for that to happen, I need a function that basically
acts like `client.Plans.ReadJSONOutput()`, without eagerly unmarshalling that
`jsonformat.Plan` struct.
As a slight bonus, this also lets us make the tfe client mocks slightly
stupider.
- Don't save errored plans.
- Call op.View.PlanNextStep for terraform plan in cloud mode (We never used to
show this footer, because we didn't support -out.)
- Create non-speculative config version if saving plan
- Rewrite TestCloud_planWithPath to expect success!
It displays a run header with link to web UI, like starting a new plan does, then confirms the run
and streams the apply logs. If you can't apply the run (it's from a different workspace, is in an
unconfirmable state, etc. etc.), it displays an error instead.
Notable points along the way:
* Implement `WrappedPlanFile` sum type (sketched below), and update planfile consumers to use it instead of a plain `planfile.Reader`.
* Enable applying a saved cloud plan
* Update TFC mocks — add org name to workspace, and minimal support for includes on MockRuns.ReadWithOptions.
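Roughly, the sum type looks like this (the member types are stand-ins; the real ones are the local planfile reader and the saved cloud plan):

```go
package planfile

// localReader and cloudBookmark are illustrative stand-ins for the two
// concrete plan file kinds a consumer might be handed.
type localReader struct{ /* zip-based plan file reader */ }
type cloudBookmark struct{ /* reference to a saved cloud plan */ }

// WrappedPlanFile sketches the sum type: exactly one member is set, and
// consumers must check which variant they received before using it.
type WrappedPlanFile struct {
	local *localReader
	cloud *cloudBookmark
}

// Local returns the local plan file reader, if that's what is wrapped.
func (w *WrappedPlanFile) Local() (*localReader, bool) {
	return w.local, w.local != nil
}

// Cloud returns the saved cloud plan reference, if that's what is wrapped.
func (w *WrappedPlanFile) Cloud() (*cloudBookmark, bool) {
	return w.cloud, w.cloud != nil
}
```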
Having this type sitting loose in `cloud` proved problematic because of circular
import dependencies. Putting it in a sub-package under cloud frees us up to
reference the type from places like `internal/backend`!
Previously, remote and cloud backends would automatically alias localterraform.com as the configured hostname during configuration. This turned out to be an issue with how backends could potentially be used within the builtin terraform_remote_state data source. Those data sources each configure the same service discovery with different targets for localterraform.com, and do so simultaneously, creating an occasional concurrent map read & write panic when multiple data sources are defined.
localterraform.com is obviously not useful for every backend configuration. Therefore, I relocated the alias configuration to the callers, so they may specify when to use it. The modified design adds a new method to backend.Enhanced to allow configurators to ask which aliases should be defined.
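The new hook is, schematically, something like this (the method and type names are a sketch based on the description, not the exact signature):

```go
package backend

import "fmt"

// HostAlias pairs an alias hostname with the target host it stands for.
type HostAlias struct {
	From string // e.g. "localterraform.com"
	To   string // the configured TFC/TFE hostname
}

// enhanced sketches the relevant slice of backend.Enhanced: a configured
// backend reports which aliases its caller should define, instead of
// mutating shared service discovery state itself.
type enhanced interface {
	ServiceDiscoveryAliases() ([]HostAlias, error)
}

// The caller now decides when (and whether) to apply the aliases, which
// avoids the concurrent map writes seen when multiple
// terraform_remote_state data sources configured discovery at once.
func applyAliases(b enhanced) error {
	aliases, err := b.ServiceDiscoveryAliases()
	if err != nil {
		return err
	}
	for _, a := range aliases {
		fmt.Printf("aliasing %s -> %s\n", a.From, a.To)
	}
	return nil
}
```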
Create a pending state version, followed by a separate state upload.
When this version of the endpoint fails (it is not yet generally available, and may be absent when using Terraform Enterprise), fall back to the original call with the state content included in the request.
This strategy will reduce the number of save failures due to network latency and gateway timeouts.
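Schematically (the client surface below is invented for the sketch; the real calls are go-tfe methods):

```go
package cloud

import (
	"context"
	"errors"
)

// stateClient is an illustrative client surface for the two strategies.
type stateClient interface {
	CreatePendingStateVersion(ctx context.Context, meta []byte) (uploadURL string, err error)
	UploadState(ctx context.Context, uploadURL string, state []byte) error
	CreateStateVersionInline(ctx context.Context, meta, state []byte) error
}

var errEndpointUnavailable = errors.New("pending state version endpoint unavailable")

// saveState sketches the two-step strategy described above: create a
// pending state version first, then upload the (possibly large) state
// body separately; if the newer endpoint isn't available (e.g. older
// Terraform Enterprise), fall back to the single inline request.
func saveState(ctx context.Context, c stateClient, meta, state []byte) error {
	url, err := c.CreatePendingStateVersion(ctx, meta)
	if err == nil {
		return c.UploadState(ctx, url, state)
	}
	if errors.Is(err, errEndpointUnavailable) {
		return c.CreateStateVersionInline(ctx, meta, state)
	}
	return err
}
```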
* cloud: assert import block compatibility
* check for import <> TFC compatibility during init
* imports are not in alphabetical order 🙃
---------
Co-authored-by: CJ Horton <cjhorton@hashicorp.com>
Previously we just made a hard rule that the state storage for Terraform
Cloud would never save any intermediate snapshots at all, as a coarse way
to mitigate concerns about the extra Terraform Enterprise storage usage
that intermediate snapshots would cause.
As a better compromise, we'll now create intermediate snapshots at the
default interval unless the Terraform Cloud API responds with a special
extra header field X-Terraform-Snapshot-Interval, which specifies a
different number of seconds (up to 1 hour) to wait before saving the next
snapshot.
This will then allow Terraform Cloud and Enterprise to provide some dynamic
backpressure when needed, either to reduce the disk usage in Terraform
Enterprise or in situations where Terraform Cloud is under unusual load
and needs to calm the periodic intermediate snapshot writes from clients.
This respects the "force persist" mode so that if Terraform CLI is
interrupted with SIGINT then it'll still be able to urgently persist
a snapshot of whatever state it currently has, in anticipation of probably
being terminated with a more aggressive signal very soon.
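A sketch of the client-side handling (the default shown is illustrative, not the exact CLI default):

```go
package cloud

import (
	"net/http"
	"strconv"
	"time"
)

// Use the normal default interval unless the server sends
// X-Terraform-Snapshot-Interval, and never wait more than one hour
// between snapshots.
const (
	defaultSnapshotInterval = 20 * time.Second // illustrative default
	maxSnapshotInterval     = time.Hour
)

// snapshotIntervalFrom sketches reading the backpressure header from a
// Terraform Cloud API response. The header value is a number of seconds.
func snapshotIntervalFrom(resp *http.Response) time.Duration {
	raw := resp.Header.Get("X-Terraform-Snapshot-Interval")
	if raw == "" {
		return defaultSnapshotInterval
	}
	secs, err := strconv.ParseUint(raw, 10, 32)
	if err != nil {
		return defaultSnapshotInterval
	}
	interval := time.Duration(secs) * time.Second
	if interval > maxSnapshotInterval {
		return maxSnapshotInterval
	}
	return interval
}
```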
We've seen some concern about the additional storage usage implied by
creating intermediate state snapshots for particularly long apply phases
that can arise when managing a large number of resource instances together
in a single workspace.
This is an initial coarse approach to solving that concern, just restoring
the original behavior when running inside Terraform Cloud or Enterprise
for now and not creating snapshots at all.
This is here as a solution of last resort in case we cannot find a better
compromise before the v1.5.0 final release. Hopefully a future commit
will implement a more subtle take on this which still gets some of the
benefits when running in a Terraform Enterprise environment but in a way
that will hopefully be less concerning for Terraform Enterprise
administrators.
This does not affect any other state storage implementation except the
Terraform Cloud integration and the "remote" backend's state storage when
running inside a TFC/TFE-driven remote execution environment.
The cloud backend, which communicates with TFC-like APIs, can create
runs which may have one or more configuration parameters altered. These
alterations are emitted as run-events on the run so that API clients
can consume and display them to users. This commit adds a step to the
plan operation to query the run-events once a run is created and then
emit specific run-event descriptions to the console as warnings for
the user.
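Schematically, the new step looks like this (the run-event shape and the filter are illustrative, not the real API objects):

```go
package cloud

import (
	"context"
	"fmt"
)

// RunEvent is an illustrative stand-in for the API's run-event object.
type RunEvent struct {
	Action      string
	Description string
}

type runEventLister interface {
	ListRunEvents(ctx context.Context, runID string) ([]RunEvent, error)
}

// emitRunEventWarnings sketches the step described above: after the run
// is created during the plan operation, query its run-events and surface
// the relevant descriptions to the user as warnings.
func emitRunEventWarnings(ctx context.Context, c runEventLister, runID string, warn func(string)) error {
	events, err := c.ListRunEvents(ctx, runID)
	if err != nil {
		return err
	}
	for _, ev := range events {
		// Only event kinds describing altered run parameters are shown;
		// the "altered" action name here is an assumption for the sketch.
		if ev.Action == "altered" && ev.Description != "" {
			warn(fmt.Sprintf("Warning: %s", ev.Description))
		}
	}
	return nil
}
```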
This test case was making a real DNS call in a non-acceptance test, and
since it was intended to fail it would introduce a several second delay.
This commit replaces the test with a similar one which uses the mocked
disco services for a non-TFE host.
Also restructure the test to use t.Run for clarity.
* Implementation of structured logging.
These are the changes that enable the cloud backend to consume
structured logs and make use of the new plan renderer. This will enable
CLI-driven runs to view the structured output in the Terraform Cloud UI.
* Cloud structured logging unit tests
* Remove deferred logs logic, fix minor issues
Color formatting fixes, log-type stop lists, and default behavior for
logs of unknown type (see the sketch after this list)
* Use service disco path in redacted plan url
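A minimal sketch of the filtering behavior described above (the type names and stop-list contents are illustrative):

```go
package cloud

import "encoding/json"

// logLine is a minimal slice of a structured (JSON) log record.
type logLine struct {
	Type    string `json:"type"`
	Message string `json:"@message"`
}

// stopTypes sketches a "stop list": structured log types the CLI
// swallows rather than rendering.
var stopTypes = map[string]bool{
	"test-internal": true,
}

// render decides what to do with one line of the log stream: drop
// stop-listed types, and fall back to printing the plain message for
// types the renderer doesn't recognize.
func render(raw []byte, print func(string)) {
	var line logLine
	if err := json.Unmarshal(raw, &line); err != nil {
		print(string(raw)) // not JSON at all: pass through verbatim
		return
	}
	if stopTypes[line.Type] {
		return
	}
	print(line.Message)
}
```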
This was clearly wrong, but it was also harmless -- in the event of a failing
test due to missing tags, they would get double-reported as both missing and
unexpected. This commit separates out the reporting as intended.
Previously the cloud backend only supported the pre and post plan stages.
This commit adds support to display the output of the new pre-apply task
stage as well.
Previously all of the logic to retrieve Task Stages was in the backend_plan.go file.
This commit:
* Moves the logic to the backend_taskStages.go file.
* Replaces the array with a map indexed by the Task Stage's stage. This makes
it easier to find the relevant stage in the list and means the
getTaskStageIDByName function is no longer required.
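Schematically (stand-in types; the real ones come from go-tfe):

```go
package cloud

// Stage and TaskStage are illustrative stand-ins for the go-tfe types.
type Stage string

const (
	PrePlan  Stage = "pre_plan"
	PostPlan Stage = "post_plan"
	PreApply Stage = "pre_apply"
)

type TaskStage struct {
	ID    string
	Stage Stage
}

// taskStagesByStage builds the map described above: a keyed lookup
// replaces scanning an array, so the old getTaskStageIDByName helper
// is no longer needed.
func taskStagesByStage(stages []*TaskStage) map[Stage]*TaskStage {
	byStage := make(map[Stage]*TaskStage, len(stages))
	for _, ts := range stages {
		byStage[ts.Stage] = ts
	}
	return byStage
}
```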
Go 1.19's "fmt" has some awareness of the new doc comment formatting
conventions and adjusts the presentation of the source comments to make
it clearer how godoc would interpret them. Therefore this commit includes
various updates made by "go fmt" to achieve that.
In line with our usual convention that we make stylistic/grammar/spelling
tweaks typically only when we're "in the area" changing something else
anyway, I also took this opportunity to review most of the comments that
this change updated, to see if there were any other opportunities to improve them.
Previously the cloud backend would only render post-plan run tasks. Now
that pre-plan tasks are in beta, this commit updates the plan phase to
render pre-plan run tasks. This commit also moves some common code to
the common backend as it will be used by other task stages in the
future.
Normally, `terraform output` refreshes and reads the entire state in the command package before pulling output values out of it. This doesn't give Terraform Cloud the opportunity to apply the read state outputs org permission and instead applies the read state versions permission.
I decided to expand the state manager interface to provide a separate GetRootOutputValues function in order to give the cloud backend a more nuanced opportunity to fetch just the outputs. This required moving state Refresh/Read code that was previously in the command into the shared backend state as well as the filesystem state packages.
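The new interface method is, roughly, as follows (the interface name and value type here are stand-ins, not the exact definitions):

```go
package statemgr

// OutputValue is a stand-in for the real root output value type.
type OutputValue struct {
	Value     interface{}
	Sensitive bool
}

// RootOutputValueReader sketches the new method. Implementations that
// can fetch just the root module outputs (like the cloud backend, via
// its own API call) avoid refreshing and reading the whole state, which
// lets Terraform Cloud apply the narrower "read state outputs"
// permission instead of "read state versions".
type RootOutputValueReader interface {
	GetRootOutputValues() (map[string]OutputValue, error)
}
```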