* Rename module name from "github.com/hashicorp/terraform" to "github.com/placeholderplaceholderplaceholder/opentf".
* Gofmt.
* Regenerate protobuf.
* Fix comments.
* Undo issue and pull request link changes.
* Undo comment changes.
* Fix comment.
* Undo some link changes.
* make generate && make protobuf
---------
Signed-off-by: Jakub Martin <kubam@spacelift.io>
It displays a run header with a link to the web UI, like starting a new plan does, then confirms the run and streams the apply logs. If you can't apply the run (it's from a different workspace, is in an unconfirmable state, etc.), it displays an error instead.
Notable points along the way:
* Implement a `WrappedPlanFile` sum type (sketched after this list), and update planfile consumers to use it instead of a plain `planfile.Reader`.
* Enable applying a saved cloud plan
* Update TFC mocks — add org name to workspace, and minimal support for includes on MockRuns.ReadWithOptions.
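A minimal Go sketch of how such a sum type can be shaped; the type and method names below are illustrative assumptions, not the exact API:
```go
package plans

// Placeholder types standing in for the real local plan reader and the
// saved cloud plan reference.
type LocalPlanReader struct{ /* ... */ }
type CloudPlan struct{ Hostname, RunID string }

// WrappedPlanFile is a sum type: exactly one of its fields is set.
type WrappedPlanFile struct {
	local *LocalPlanReader
	cloud *CloudPlan
}

func NewLocalPlan(r *LocalPlanReader) WrappedPlanFile { return WrappedPlanFile{local: r} }
func NewCloudPlan(c *CloudPlan) WrappedPlanFile       { return WrappedPlanFile{cloud: c} }

// Consumers branch on which variant is present instead of assuming a
// plain local plan file reader.
func (w WrappedPlanFile) Local() (*LocalPlanReader, bool) { return w.local, w.local != nil }
func (w WrappedPlanFile) Cloud() (*CloudPlan, bool)       { return w.cloud, w.cloud != nil }
```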
Previously, remote and cloud backends would automatically alias localterraform.com as the configured hostname during configuration. This turned out to be an issue with how backends could be used within the built-in terraform_remote_state data source. Those data sources each configure the same service discovery with different targets for localterraform.com, and do so simultaneously, creating an occasional concurrent map read/write panic when multiple data sources are defined.
localterraform.com is not useful for every backend configuration. Therefore, I relocated the alias configuration to the callers, so they may specify when to use it. The modified design adds a new method to backend.Enhanced that lets configurators ask which aliases should be defined.
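A hedged sketch of what that new method can look like; the interface and type names here are assumptions, not the exact additions to backend.Enhanced:
```go
package backend

// HostAlias pairs an alias hostname with the target it should resolve to.
type HostAlias struct {
	Alias  string // e.g. "localterraform.com"
	Target string // the configured hostname
}

// ServiceDiscoveryAliaser sketches the new method: instead of the backend
// mutating shared service-discovery state itself, the caller asks which
// aliases it should register with its own discovery instance.
type ServiceDiscoveryAliaser interface {
	ServiceDiscoveryAliases() ([]HostAlias, error)
}
```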
Add a single global schema cache for providers. This allows multiple
provider instances to share a single copy of the schema, and prevents
loading the schema multiple times for a given provider type during a
single command.
This does not currently work with some provider releases, which are
using GetProviderSchema to trigger certain initializations. A new server
capability will be introduced so that their schemas can be reloaded when
needed without storing duplicate results.
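A minimal sketch of the idea, assuming a global map keyed by provider source address; the real cache's key and value types will differ:
```go
package schemacache

import "sync"

// ProviderSchema stands in for the full provider schema response type.
type ProviderSchema struct{ /* ... */ }

var (
	mu    sync.Mutex
	cache = map[string]*ProviderSchema{} // keyed by provider source address
)

// Get returns the cached schema for addr, calling load only when no
// schema has been fetched successfully before, so every provider
// instance in a command shares a single cached copy.
func Get(addr string, load func() (*ProviderSchema, error)) (*ProviderSchema, error) {
	mu.Lock()
	defer mu.Unlock()
	if s, ok := cache[addr]; ok {
		return s, nil
	}
	s, err := load()
	if err != nil {
		return nil, err
	}
	cache[addr] = s
	return s, nil
}
```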
We've seen some concern about the additional storage usage implied by
creating intermediate state snapshots for particularly long apply phases
that can arise when managing a large number of resource instances together
in a single workspace.
This is an initial coarse approach to solving that concern: for now, just
restore the original behavior of not creating intermediate snapshots at
all when running inside Terraform Cloud or Enterprise.
This is here as a solution of last resort in case we cannot find a better
compromise before the v1.5.0 final release. A future commit may implement
a more subtle take on this which still gets some of the benefits when
running in a Terraform Enterprise environment, but in a way that is less
concerning for Terraform Enterprise administrators.
This does not affect any other state storage implementation except the
Terraform Cloud integration and the "remote" backend's state storage when
running inside a TFC/TFE-driven remote execution environment.
Previously we just always used the same intermediate state persistence
behavior for all state storages. However, some storages might have access
to additional information that allows them to tailor when they persist,
such as reacting to API rate limit status headers in responses, or just
knowing that a particular storage isn't suited to intermediate snapshots
at all for some reason.
This commit doesn't actually change any observable behavior yet, but it
introduces an optional means for a state storage to customize the behavior
which we may make use of in certain storage implementations in future
commits.
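A hedged sketch of such an optional interface, under the assumption that the caller type-asserts the storage against it; all names and fields here are illustrative:
```go
package statemgr

import "time"

// IntermediatePersistInfo describes one candidate intermediate snapshot.
// The fields are assumptions about what a storage might want to know.
type IntermediatePersistInfo struct {
	RequestedPersistInterval time.Duration
	LastPersist              time.Time
	ForcePersist             bool // e.g. set after an interrupt
}

// IntermediateStateConditionalPersister is optionally implemented by a
// state storage that wants to customize when intermediate snapshots get
// persisted, e.g. in reaction to API rate limit headers.
type IntermediateStateConditionalPersister interface {
	ShouldPersistIntermediateSnapshot(info *IntermediatePersistInfo) bool
}
```
A caller can then type-assert the storage against this interface and fall back to the default persistence cadence when it isn't implemented.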
Ran into this error while running Terraform in a container and saving state to Consul. I suspect my policy needs tweaking, but it's impossible to tell with an error like this:
```
╷
│ Error: Failed to save state
│
│ Error saving state: consul CAS failed with transaction errors:
│ [0xc0006e93c8]
╵
```
This PR includes the error message in the details so I can continue debugging.
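A hedged sketch of the fix, using the Consul API's transaction error fields to build a readable message instead of printing the slice of pointers that produced the opaque output above:
```go
package consul

import (
	"fmt"
	"strings"

	consulapi "github.com/hashicorp/consul/api"
)

// casState runs the compare-and-swap transaction and, on failure, formats
// each transaction error's message rather than the raw pointer values.
func casState(kv *consulapi.KV, ops consulapi.KVTxnOps) error {
	ok, resp, _, err := kv.Txn(ops, nil)
	if err != nil {
		return err
	}
	if !ok {
		msgs := make([]string, 0, len(resp.Errors))
		for _, e := range resp.Errors {
			msgs = append(msgs, fmt.Sprintf("op %d: %s", e.OpIndex, e.What))
		}
		return fmt.Errorf("consul CAS failed with transaction errors: %s",
			strings.Join(msgs, "; "))
	}
	return nil
}
```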
* Improve environment variable support for the pg backend
This patch does two things:
- it adds environment variable support to the parameters that did not
have it (and uses `PG_CONN_STR` instead of `PGDATABASE`, which better
matches the behavior of other PostgreSQL utilities); see the sketch
after this list
- it better documents how to pass the connection parameters as
environment variables for the ones that were already supported, based
on the recommendation of @bsouth00
I will prepare a backport of the documentation part of this once it is
merged.
Closes https://github.com/hashicorp/terraform/issues/33024
* Remove global variable in test of the PG backend
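A hypothetical excerpt of what the environment-variable defaults can look like with the legacy helper/schema `EnvDefaultFunc` pattern; the attribute set shown is not exhaustive, and the names beyond `PG_CONN_STR` are assumptions:
```go
package pg

import (
	"github.com/hashicorp/terraform/internal/legacy/helper/schema"
)

// Each connection parameter gets an environment default so it can be
// supplied as a PG_* variable instead of in the backend configuration.
var connSchema = map[string]*schema.Schema{
	"conn_str": {
		Type:        schema.TypeString,
		Optional:    true,
		DefaultFunc: schema.EnvDefaultFunc("PG_CONN_STR", nil),
		Description: "PostgreSQL connection string; a postgres:// URL",
	},
	"schema_name": {
		Type:        schema.TypeString,
		Optional:    true,
		DefaultFunc: schema.EnvDefaultFunc("PG_SCHEMA_NAME", "terraform_remote_state"),
		Description: "Name of the automatically-managed Postgres schema",
	},
}
```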
The cloud backend, which communicates with TFC-like APIs, can create
runs which may have one or more configuration parameters altered. These
alterations are emitted as run-events on the run so that API clients
can consume and display them to users. This commit adds a step to the
plan operation to query the run-events once a run is created and then
emit specific run-event descriptions to the console as warnings for
the user.
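A hedged sketch of that step, assuming the go-tfe RunEvents API; the field names used below are best-effort assumptions:
```go
package cloud

import (
	"context"
	"fmt"

	tfe "github.com/hashicorp/go-tfe"
)

// emitRunEventWarnings queries the run's run-events after the run is
// created and surfaces their descriptions to the user as warnings.
func emitRunEventWarnings(ctx context.Context, client *tfe.Client, runID string) error {
	events, err := client.RunEvents.List(ctx, runID, nil)
	if err != nil {
		return err
	}
	for _, ev := range events.Items {
		if ev.Description != "" {
			fmt.Printf("Warning: %s\n", ev.Description)
		}
	}
	return nil
}
```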
* checks: filter out check diagnostics during certain plans
* wrap diagnostics produced by check blocks in a dedicated check block diagnostic
* address comments
While the returned plan is checked for nil in most cases, there was
a single point where the plan was dereferenced which could panic. Rather
than always guarding the dereferences, return early when the plan is
nil.
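The shape of the fix, as a hedged sketch; the function and diagnostic wording are illustrative:
```go
package backend

import (
	"github.com/hashicorp/terraform/internal/plans"
	"github.com/hashicorp/terraform/internal/tfdiags"
)

// validatePlan returns early on a nil plan, so the code after the guard
// may dereference plan freely instead of checking at every use.
func validatePlan(plan *plans.Plan) tfdiags.Diagnostics {
	var diags tfdiags.Diagnostics
	if plan == nil {
		diags = diags.Append(tfdiags.Sourceless(
			tfdiags.Error,
			"No plan returned", // hypothetical summary
			"The operation completed without producing a plan.",
		))
		return diags
	}
	// ... later code may dereference plan safely ...
	return diags
}
```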
Terraform Core emits a hook event every time it writes a change into the
in-memory state. Previously the local backend would just copy that into
the transient storage of the state manager, but for most state storage
implementations that doesn't really do anything useful because it just
makes another copy of the state in memory.
We originally added this hook mechanism with the intent of making
Terraform _persist_ the state each time, but we backed that out after
finding that it was a bit too aggressive and was making the state snapshot
history much harder to use in storage systems that can preserve historical
snapshots.
However, sometimes Terraform gets killed mid-apply for whatever reason and
in our previous implementation that meant always losing that transient
state, forcing the user to edit the state manually (or use "import") to
recover a useful state.
In an attempt at finding a sweet spot between these extremes, here we
change the rule: if an apply runs for longer than 20 seconds, then we'll
try to persist the state to the backend on an update that arrives
at least 20 seconds after the first update, and then again for each
additional 20-second period as long as Terraform keeps announcing new
state snapshots.
This also introduces a special interruption mode where if the apply phase
gets interrupted by SIGINT (or equivalent) then the local backend will
try to persist the state immediately in anticipation of a
possibly-imminent SIGKILL, and will then immediately persist any
subsequent state update that arrives until the apply phase is complete.
After interruption Terraform will not start any new operations and will
instead just let any already-running operations run to completion, and so
this will persist the state once per resource instance that is able to
complete before being killed.
This does mean that now long-running applies will generate intermediate
state snapshots where they wouldn't before, but there should still be
considerably fewer snapshots than were created when we were persisting
for each individual state change. We can adjust the 20-second interval
in future commits if we find that this spot isn't as sweet as first
assumed.
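An illustrative sketch of the interval rule described above, assuming lastPersist is initialized to the start of the apply phase; the real implementation lives in the local backend's apply loop:
```go
package local

import "time"

// persistInterval is the minimum time between intermediate state
// persistence attempts during apply.
const persistInterval = 20 * time.Second

type snapshotPersister struct {
	lastPersist time.Time
	interrupted bool // set once SIGINT (or equivalent) is received
}

// shouldPersist reports whether the state snapshot announced at "now"
// should be written through to the backend.
func (p *snapshotPersister) shouldPersist(now time.Time) bool {
	if p.interrupted {
		// After an interruption, persist every snapshot in anticipation
		// of a possibly-imminent SIGKILL.
		return true
	}
	// Otherwise persist at most once per 20-second period.
	if now.Sub(p.lastPersist) >= persistInterval {
		p.lastPersist = now
		return true
	}
	return false
}
```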
* go get github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/sts/v20180813@v1.0.588
* feat: support assume_role for COS backend
* update go.mod and go.sum
* change secret_id and secret_key from required to optional
* update cos doc
* update logic by comments
* rm sensitive info in log
This makes the behavior of remote state consistent with local state with regard to the initial serial number of the generated/pushed state. Previously, remote state's initial push would have a serial number of 0, whereas local state had a serial > 0. This causes issues with the logic around, for example, ensuring that a plan file cannot be applied if the state is stale (see https://github.com/hashicorp/terraform/issues/30670 for example).
* Add mTLS support for http backend by way of client cert & key, as well as enterprise cacert.
* Fix style.
* Skip cert validation to be sure the error is related to a missing client cert, not an untrusted server cert.
* Remove misplaced err check.
* Fix the size of test using http backend.
* Just for correctness, include all certs in the pem encoded cert - sometimes certs come with a chain of their signers.
* Adjusted names as recommended in PR comments.
* Adjusted names to be full-length and more descriptive.
* Added full-fledged testing with mTLS http server
* Fix goimports.
* Fix the names of the backend config.
* Exclusive lock for write and delete.
* Revert "Fix goimports."
This reverts commit 7d40f6099fbbb675fb2e25e35ee40aeafe3d0a22.
* goimports just for server test.
* Added the go:generation for the mock.
* Move the TLS configuration out to make it more readable - don't replace the HTTPClient, as retryablehttp already creates one; just configure its TLS (see the sketch after this list).
* Just switch the client/data params - felt more natural this way.
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/testdata/gencerts.sh
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* The location of the file name is not sensitive.
* Added error if only one of client_certificate_pem and client_private_key_pem are set.
* Remove testify from test cases; use t.Error* for assert and t.Fatal* for require.
* Fixed import consistency
* Just use default openssl.
* Since file(...) is so trivial to use, changed the client cert, key, and ca cert to be the data.
See also https://github.com/hashicorp/terraform-provider-http/pull/211
Co-authored-by: Sheridan C Rawlins <scr@ouryahoo.com>
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
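A hedged sketch of configuring mTLS on the client that retryablehttp already created, rather than replacing its HTTPClient; the function name and the PEM parameter mapping to the backend attributes are assumptions:
```go
package http

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"

	"github.com/hashicorp/go-retryablehttp"
)

// configureTLS wires the client certificate, key, and CA certificate
// (the PEM data from the backend attributes such as
// client_certificate_pem and client_private_key_pem) into the TLS
// config of the transport retryablehttp already set up.
func configureTLS(client *retryablehttp.Client, certPEM, keyPEM, caPEM []byte) error {
	cert, err := tls.X509KeyPair(certPEM, keyPEM)
	if err != nil {
		return fmt.Errorf("failed to load client certificate: %w", err)
	}
	caPool := x509.NewCertPool()
	if !caPool.AppendCertsFromPEM(caPEM) {
		return fmt.Errorf("failed to parse CA certificate")
	}
	transport, ok := client.HTTPClient.Transport.(*http.Transport)
	if !ok {
		return fmt.Errorf("unexpected transport type %T", client.HTTPClient.Transport)
	}
	transport.TLSClientConfig = &tls.Config{
		Certificates: []tls.Certificate{cert},
		RootCAs:      caPool,
	}
	return nil
}
```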
Make writing a plan file the default. We already create plans which have
no changes, so the plan result needs to be checked in automation anyway;
having plans with errors should not pose a problem.
If we find workflows which cannot handle a plan that can't be applied,
we can reevaluate the need for a specialized flag. In the meantime, it
feels more logical that the plan output should always describe the result
of the plan, even if that includes errors.