* Rename module name from "github.com/hashicorp/terraform" to "github.com/placeholderplaceholderplaceholder/opentf".
* Gofmt.
* Regenerate protobuf.
* Fix comments.
* Undo issue and pull request link changes.
* Undo comment changes.
* Fix comment.
* Undo some link changes.
* make generate && make protobuf
---------
Signed-off-by: Jakub Martin <kubam@spacelift.io>
It displays a run header with a link to the web UI, like starting a new plan does, then confirms the run
and streams the apply logs. If the run can't be applied (it's from a different workspace, is in an
unconfirmable state, etc.), it displays an error instead.
Notable points along the way:
* Implement `WrappedPlanFile` sum type, and update planfile consumers to use it instead of a plain `planfile.Reader` (see the sketch after this list).
* Enable applying a saved cloud plan
* Update TFC mocks — add org name to workspace, and minimal support for includes on MockRuns.ReadWithOptions.
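Go has no native sum types, so a wrapper like this is typically modeled as an opaque struct holding exactly one non-nil variant behind checked accessors. A minimal sketch of that shape, assuming a local `planfile.Reader` and a hypothetical cloud-plan variant; the constructor and accessor names are illustrative:

```go
// Sketch only: Go has no native sum types, so the wrapper holds exactly
// one non-nil variant. CloudPlan is a hypothetical stand-in.
package planfile

// Reader represents a parsed local plan file.
type Reader struct {
	// parsed plan contents would live here
}

// CloudPlan is a hypothetical stand-in for the saved-cloud-plan variant.
type CloudPlan struct {
	Hostname string
	RunID    string
}

// WrappedPlanFile holds exactly one of a local or cloud plan.
type WrappedPlanFile struct {
	local *Reader
	cloud *CloudPlan
}

func NewWrappedLocal(r *Reader) *WrappedPlanFile    { return &WrappedPlanFile{local: r} }
func NewWrappedCloud(c *CloudPlan) *WrappedPlanFile { return &WrappedPlanFile{cloud: c} }

// Local returns the local plan reader, and true only if this wraps a local plan.
func (w *WrappedPlanFile) Local() (*Reader, bool) { return w.local, w.local != nil }

// Cloud returns the cloud plan, and true only if this wraps a cloud plan.
func (w *WrappedPlanFile) Cloud() (*CloudPlan, bool) { return w.cloud, w.cloud != nil }
```

Consumers then branch with `if local, ok := w.Local(); ok { ... }` rather than touching fields directly, which keeps the invariant (exactly one variant set) inside the package.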
Previously, remote and cloud backends would automatically alias localterraform.com to the configured hostname during configuration. This turned out to be a problem with how backends can be used within the built-in terraform_remote_state data source: multiple data sources each configure the same service discovery object with different targets for localterraform.com, and do so simultaneously, occasionally causing a concurrent map read and write panic.
localterraform.com is obviously not useful for every backend configuration. Therefore, I relocated the alias configuration to the callers, so they may specify when to use it. The modified design adds a new method to backend.Enhanced to allow configurators to ask which aliases should be defined.
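A minimal sketch of what that caller-driven design could look like, assuming a simple `HostAlias` pair type; the `ServiceDiscoveryAliases` method name follows the description above, but its exact signature and the surrounding types are assumptions:

```go
// Sketch only: the method name follows the description above, but the
// exact signature and surrounding types are assumptions.
package backend

// HostAlias pairs a hostname with the hostname whose services it should
// resolve to, e.g. localterraform.com => the configured TFC/TFE host.
type HostAlias struct {
	From string
	To   string
}

// aliasingBackend stands in for the new method added to backend.Enhanced.
type aliasingBackend interface {
	ServiceDiscoveryAliases() ([]HostAlias, error)
}

// aliaser abstracts the service discovery object the caller owns.
type aliaser interface {
	Alias(from, to string)
}

// registerAliases lets the caller decide when to apply a backend's
// aliases, rather than the backend mutating shared discovery state
// during its own configuration.
func registerAliases(b aliasingBackend, d aliaser) error {
	aliases, err := b.ServiceDiscoveryAliases()
	if err != nil {
		return err
	}
	for _, a := range aliases {
		d.Alias(a.From, a.To)
	}
	return nil
}
```

Because only the caller touches the discovery object, two terraform_remote_state data sources configuring different backends no longer race on shared discovery state.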
Add a single global schema cache for providers. This allows multiple
provider instances to share a single copy of the schema, and prevents
loading the schema multiple times for a given provider type during a
single command.
This does not currently work with some provider releases, which use
GetProviderSchema to trigger certain initializations. A new server
capability will be introduced to trigger reloading their schemas
without storing duplicate results.
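A minimal sketch of such a process-global cache, assuming a placeholder `Schema` type and string keys; in the real codebase the key would presumably be the provider address and the value the full schema response:

```go
// Sketch only: a process-global cache with a placeholder Schema type;
// the real cache would key on the provider address and hold the full
// schema response.
package schemacache

import "sync"

// Schema stands in for a provider's full schema response.
type Schema struct {
	// resource, data source, and provider config schemas would live here
}

var (
	mu    sync.Mutex
	cache = map[string]*Schema{} // keyed by provider source address
)

// Get returns the cached schema for addr, calling fetch only when the
// schema isn't cached yet, so all provider instances share one copy.
func Get(addr string, fetch func() (*Schema, error)) (*Schema, error) {
	mu.Lock()
	defer mu.Unlock()
	if s, ok := cache[addr]; ok {
		return s, nil
	}
	s, err := fetch()
	if err != nil {
		return nil, err
	}
	cache[addr] = s
	return s, nil
}
```

Holding the lock across `fetch` deliberately serializes loads, which is the simplest way to guarantee the schema is fetched at most once per provider type during a command; a per-key lock could reduce contention if that mattered.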
Previously we just always used the same intermediate state persistence
behavior for all state storages. However, some storages might have access
to additional information that allows them to tailor when they persist,
such as reacting to API rate limit status headers in responses, or just
knowing that a particular storage isn't suited to intermediate snapshots
at all for some reason.
This commit doesn't actually change any observable behavior yet, but it
introduces an optional means for a state storage to customize the behavior
which we may make use of in certain storage implementations in future
commits.
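One plausible shape for that optional customization is an interface the caller probes with a type assertion; the interface, field, and helper names below are assumptions based on the description above, not the exact ones introduced:

```go
// Sketch only: the interface and field names are assumptions based on
// the description above.
package statemgr

import "time"

// PersistInfo carries context about a candidate intermediate snapshot.
type PersistInfo struct {
	LastPersist  time.Time
	ForcePersist bool // e.g. set once an interrupt has been received
}

// IntermediateStateConditionalPersister is an optional interface a state
// storage can implement to decide for itself when an intermediate
// snapshot is worth persisting, e.g. by reacting to rate-limit headers.
type IntermediateStateConditionalPersister interface {
	ShouldPersistIntermediateState(info *PersistInfo) bool
}

// shouldPersist asks the storage if it opts in, and otherwise applies
// whatever default rule the caller supplies.
func shouldPersist(mgr any, info *PersistInfo, fallback func(*PersistInfo) bool) bool {
	if cp, ok := mgr.(IntermediateStateConditionalPersister); ok {
		return cp.ShouldPersistIntermediateState(info)
	}
	return fallback(info)
}
```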
* checks: filter out check diagnostics during certain plans
* wrap diagnostics produced by check blocks in a dedicated check block diagnostic (a sketch follows this list)
* address comments
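A minimal sketch of the wrapping described in the second bullet, using a stand-in `Diagnostic` interface; the marker type and helper names are illustrative:

```go
// Sketch only: Diagnostic is a stand-in for the real diagnostics
// interface, and the marker type and helper names are illustrative.
package checks

// Diagnostic is a stand-in for the real diagnostics interface.
type Diagnostic interface {
	Description() string
}

// checkBlockDiagnostic marks a diagnostic as originating from a check
// block, so renderers can recognize and filter it during certain plans.
type checkBlockDiagnostic struct {
	wrapped Diagnostic
}

func (d checkBlockDiagnostic) Description() string {
	return d.wrapped.Description()
}

// WrapCheckBlockDiags wraps every diagnostic produced while evaluating a
// check block in the dedicated marker type.
func WrapCheckBlockDiags(diags []Diagnostic) []Diagnostic {
	out := make([]Diagnostic, len(diags))
	for i, d := range diags {
		out[i] = checkBlockDiagnostic{wrapped: d}
	}
	return out
}

// IsCheckBlockDiag reports whether a diagnostic came from a check block,
// which is what the filtering in the first bullet would key on.
func IsCheckBlockDiag(d Diagnostic) bool {
	_, ok := d.(checkBlockDiagnostic)
	return ok
}
```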
While the returned plan is checked for nil in most cases, there was
a single point where the plan was dereferenced which could panic. Rather
than always guarding the dereferences, return early when the plan is
nil.
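A toy illustration of that early-return pattern, with a stand-in `Plan` type:

```go
// Sketch only: Plan is a stand-in for the real plan type.
package main

import "fmt"

type Plan struct {
	ChangeCount int
}

// renderPlan returns early on a nil plan rather than guarding every
// dereference below the check.
func renderPlan(plan *Plan) {
	if plan == nil {
		return
	}
	fmt.Println("changes:", plan.ChangeCount) // safe to dereference here
}

func main() {
	renderPlan(nil) // no panic: the early return handles the nil plan
}
```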
Terraform Core emits a hook event every time it writes a change into the
in-memory state. Previously the local backend would just copy that into
the transient storage of the state manager, but for most state storage
implementations that doesn't really do anything useful because it just
makes another copy of the state in memory.
We originally added this hook mechanism with the intent of making
Terraform _persist_ the state each time, but we backed that out after
finding that it was a bit too aggressive and was making the state snapshot
history much harder to use in storage systems that can preserve historical
snapshots.
However, sometimes Terraform gets killed mid-apply for whatever reason and
in our previous implementation that meant always losing that transient
state, forcing the user to edit the state manually (or use "import") to
recover a useful state.
In an attempt to find a sweet spot between these extremes, here we
change the rule so that if an apply runs for longer than 20 seconds then
we'll try to persist the state to the backend on an update that arrives
at least 20 seconds after the first update, and then again for each
additional 20-second period as long as Terraform keeps announcing new
state snapshots.
This also introduces a special interruption mode where if the apply phase
gets interrupted by SIGINT (or equivalent) then the local backend will
try to persist the state immediately in anticipation of a
possibly-imminent SIGKILL, and will then immediately persist any
subsequent state update that arrives until the apply phase is complete.
After interruption Terraform will not start any new operations and will
instead just let any already-running operations run to completion, and so
this will persist the state once per resource instance that is able to
complete before being killed.
This does mean that long-running applies will now generate intermediate
state snapshots where they wouldn't before, but there should still be
considerably fewer snapshots than when we persisted every individual
state change. We can adjust the 20-second interval in future commits if
we find that this spot isn't as sweet as first assumed.
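Putting the two rules together, the decision could look roughly like this; the function and constant names are illustrative, and 20 seconds is the interval from the prose above:

```go
// Sketch only: names are illustrative; 20 seconds is the interval from
// the prose above.
package statemgr

import "time"

const persistInterval = 20 * time.Second

// defaultPersistRule persists immediately (and for every subsequent
// snapshot) once the apply has been interrupted, in anticipation of a
// possible SIGKILL, and otherwise only when at least 20 seconds have
// passed since the last persisted snapshot.
func defaultPersistRule(lastPersist time.Time, interrupted bool) bool {
	if interrupted {
		return true
	}
	return time.Since(lastPersist) >= persistInterval
}
```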
This makes the behavior of remote state consistent with local state with regard to the initial serial number of the generated and pushed state. Previously a remote state's initial push would have a serial number of 0, whereas local state starts with a serial greater than 0. That inconsistency caused issues with the logic that, for example, ensures a plan file cannot be applied if the state is stale (see https://github.com/hashicorp/terraform/issues/30670 for an example).
Make writing a plan file the default. We already create plan files for
plans with no changes, so automation must inspect the plan result
anyway; writing a plan file that records errors should not pose a problem.
If we find workflows which cannot handle a plan that can't be applied,
we can reevaluate the need for a specialized flag. In the meantime, it
feels more logical that the plan output would always describe the result
of the plan, even if that included errors.
This is a prototype of how the CLI layer might make use of Terraform
Core's ability to produce a partial plan if it encounters an error during
planning, with two new situations:
- When using the local CLI workflow, Terraform will show the partial
  plan before showing any errors.
- "terraform plan" has a new option -always-out=..., which is similar to
the existing -out=... but additionally instructs Terraform to produce
a plan file even if the plan is incomplete due to errors. This means
that the plan can still be inspected by external UI implementations.
This is just a prototype to explore how these parts might fit together.
It's not a complete implementation and so should not be shipped. In
particular, it doesn't include any mention of a plan being incomplete in
the "terraform show -json" output or in the "terraform plan -json" output,
both of which would be required for a complete solution.