We've seen some concern about the additional storage usage implied by
creating intermediate state snapshots for particularly long apply phases
that can arise when managing a large number of resource instances together
in a single workspace.
This is an initial, coarse approach to addressing that concern: for now, when
running inside Terraform Cloud or Enterprise, we just restore the original
behavior and do not create intermediate snapshots at all.
This is here as a solution of last resort in case we cannot find a better
compromise before the v1.5.0 final release. Hopefully a future commit
will implement a more subtle take on this which still gets some of the
benefits when running in a Terraform Enterprise environment, but in a way
that is less concerning for Terraform Enterprise administrators.
This does not affect any other state storage implementation except the
Terraform Cloud integration and the "remote" backend's state storage when
running inside a TFC/TFE-driven remote execution environment.
Previously we just always used the same intermediate state persistence
behavior for all state storages. However, some storages might have access
to additional information that allows them to tailor when they persist,
such as reacting to API rate limit status headers in responses, or just
knowing that a particular storage isn't suited to intermediate snapshots
at all for some reason.
This commit doesn't actually change any observable behavior yet, but it
introduces an optional means for a state storage to customize the behavior
which we may make use of in certain storage implementations in future
commits.
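As a rough illustration of the kind of hook this describes (the interface and method names here are hypothetical, not the actual ones in the statemgr package), a storage could optionally implement a predicate that the caller consults before each intermediate persist:

```go
package statemgrsketch

import "time"

// Persister is the baseline behavior every state storage already has:
// write an intermediate snapshot now.
type Persister interface {
	PersistIntermediateState() error
}

// IntermediatePersistCustomizer is the optional extension: a storage that
// implements it decides for itself whether a given intermediate snapshot is
// worth persisting, e.g. after inspecting API rate-limit headers.
type IntermediatePersistCustomizer interface {
	ShouldPersistIntermediateState(requestedAt time.Time, forcePersist bool) bool
}

// maybePersist is what a caller like the local backend might do: honor the
// customization if the storage offers it, otherwise keep today's behavior.
func maybePersist(s Persister, force bool) error {
	if c, ok := s.(IntermediatePersistCustomizer); ok {
		if !c.ShouldPersistIntermediateState(time.Now(), force) {
			return nil // the storage opted out of this snapshot
		}
	}
	return s.PersistIntermediateState()
}
```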
When planning a destroy operation, locals referenced only by root
outputs do not need to be kept in the graph, because the root output
does not get evaluated. Rather than trying to prune the local based on
this condition, we can prevent the connection from being created by
ensuring that a root output destroy node has no references.
The separate plan+apply destroy fields used for outputs can be
simplified by combining them, since they are only ever referenced together.
* genconfig: fix nil nested block panic
* always InternalValidate test schemas
* genconfig: null NestingSingle blocks should be absent
A NestingSingle block that is null in state should be completely absent from config.
If a resource is already in state, do not attempt to import it again. Resources already in state are filtered out of the plan's import targets.
A change is only considered "importing" if it is adding a new resource instance to the state.
* command: keep our promises
* remove some nil config checks
Remove some of the safety checks that ensure plan nodes have config attached at the appropriate time.
* add GeneratedConfig to plan changes objects
Add a new GeneratedConfig field alongside Importing in plan changes.
* add config generation package
The genconfig package implements HCL config generation from provider state values.
Thanks to @mildwonkey whose implementation of terraform add is the basis for this package.
* generate config during plan
If a resource is being imported and does not already have config, attempt to generate that config during planning. The config is generated from the state as an HCL string, and then parsed back into an hcl.Body to attach to the plan graph node.
The generated config string is attached to the change emitted by the plan. (See the sketch after this commit message for the generate-then-parse round trip.)
* complete config generation prototype, and add tests
* plannable import: add a provider argument to the import block
* Update internal/configs/config.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/configs/config.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/configs/config.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* fix formatting and tests
---------
Co-authored-by: Katy Moe <katy@katy.moe>
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
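A rough sketch of the generate-then-parse round trip described in the "generate config during plan" commit above, assuming made-up resource and attribute names; the real genconfig package derives these from the provider schema and the imported state:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/hashicorp/hcl/v2/hclwrite"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	// Generate: render the imported object's attributes as an HCL string.
	f := hclwrite.NewEmptyFile()
	block := f.Body().AppendNewBlock("resource", []string{"example_thing", "imported"})
	block.Body().SetAttributeValue("name", cty.StringVal("from-imported-state"))
	src := f.Bytes()
	fmt.Printf("%s", src)

	// Parse back: the generated string becomes an hcl.Body that can be
	// attached to a plan graph node in place of user-written config.
	parsed, diags := hclsyntax.ParseConfig(src, "generated.tf", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags.Error())
	}
	var body hcl.Body = parsed.Body
	_ = body
}
```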
* Plannable import: Add generated config to json and human-readable plan output
---------
Co-authored-by: Katy Moe <katy@katy.moe>
The imported resource was being stored in the wrong state, and only
ended up in the refresh state because ReadResource was being called a
second time in the normal refresh path.
Make sure to only refresh the imported resource once. This is still done
separately within importState so that we can handle the error slightly
differently to let the user know if an imported instance does not exist.
* [plannable import] embed the resource id within the changes
* [Plannable Import] Implement streamed logs for -json plan
* use latest structs
* remove implementation plans from TODO
The logic used to prune unused providers was only taking into account
the common case of providers in the root module. The quick check of
looking for up edges doesn't work within a module, because the module
structures will create non-resource nodes connected to the providers.
Use a deeper check of looking for any dependent resources which may
require that provider to be configured.
During a plan, Terraform now checks for the presence of import blocks.
For each resource in config, if an import block is present with a matching address, planning that node will now trigger an ImportResourceState and ReadResource. The resulting state is treated as the node's "refresh state", and planning proceeds as normal from there.
The walkImport operation is now only used for the legacy "terraform import" CLI command. This is the only case under which the plan should produce graphNodeImportStates.
I ran into this error while running Terraform in a container and saving state to Consul. I suspect my policy needs tweaking, but it's impossible to tell with an error like this:
```
╷
│ Error: Failed to save state
│
│ Error saving state: consul CAS failed with transaction errors:
│ [0xc0006e93c8]
╵
```
This PR includes the error message in the details so I can continue debugging.
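A hedged sketch of the kind of change this asks for: format the Consul transaction errors' messages into the error text instead of printing the slice of pointers. The surrounding function is illustrative; only the TxnError fields come from the Consul API client.

```go
package main

import (
	"fmt"
	"strings"

	"github.com/hashicorp/consul/api"
)

// formatTxnErrors renders each transaction error's message so the user sees
// something actionable instead of "[0xc0006e93c8]".
func formatTxnErrors(errs api.TxnErrors) error {
	details := make([]string, 0, len(errs))
	for _, e := range errs {
		details = append(details, fmt.Sprintf("op %d: %s", e.OpIndex, e.What))
	}
	return fmt.Errorf("consul CAS failed with transaction errors: %s",
		strings.Join(details, "; "))
}
```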
Just like in the destroy apply, we can skip the inter-provider cycle
check when creating the destroy plan, which can be expensive when there
are a lot of resource instances with dependencies from another provider.
* Improve environment variable support for the pg backend
This patch does two things:
- it adds environment variable support to the parameters that did not have it
  (using `PG_CONN_STR` instead of `PGDATABASE`, which is more appropriate and
  matches the behavior of other PostgreSQL utilities)
- it better documents how to pass the connection parameters as environment
  variables for the ones that were already supported, based on the
  recommendation of @bsouth00
I will prepare a backport of the documentation part of this once it is
merged.
Closes https://github.com/hashicorp/terraform/issues/33024
* Remove global variable in test of the PG backend
The cloud backend, which communicates with TFC-like APIs, can create
runs which may have one or more configuration parameters altered. These
alterations are emitted as run-events on the run so that API clients
can consume and display them to users. This commit adds a step in the
plan operation to query the run-events once a run is created and then
emit specific run-event descriptions to the console as warnings for
the user.
* checks: filter out check diagnostics during certain plans
* wrap diagnostics produced by check blocks in a dedicated check block diagnostic
* address comments
When we plan to destroy an instance, the change recorded should use the
correct type for the resource rather than `DynamicPseudoType`. Most of
the time this is hidden when the change is encoded in the plan, because
any `null` is always encoded to the same value, and when decoded it will
be converted to the schema type. However, when apply requires creating a
second plan for an instance's replacement, that value is not going to be
encoded, and it remains a dynamic value which is sent to the provider.
Most providers won't see that either, as the grpc request also encodes
and decodes the value to conform with the correct schema. The builtin
terraform provider does get the raw cty value though, and when that
dynamic value is returned validation fails when the type does not match.
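A small illustration of the distinction above using cty directly (the object type here is made up): a null of the resource's actual type still carries that type, while a null of DynamicPseudoType does not, which is what trips up the builtin provider's validation.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// Stand-in for a resource type's schema-implied object type.
	objTy := cty.Object(map[string]cty.Type{"id": cty.String})

	typedNull := cty.NullVal(objTy)                   // "destroy" value with the correct type
	dynamicNull := cty.NullVal(cty.DynamicPseudoType) // type information is lost

	fmt.Println(typedNull.Type().Equals(objTy))                   // true
	fmt.Println(dynamicNull.Type().Equals(cty.DynamicPseudoType)) // true; fails schema type checks downstream
}
```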
It is not valid for a provider to return an unknown value for a
configured nested collection, but we need to check for unknowns before
comparing the number of values in the collection.
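A minimal sketch of the ordering this describes, using cty directly (the helper itself is made up): the unknown check has to come first, because the length of an unknown collection cannot be compared as an int.

```go
package main

import "github.com/zclconf/go-cty/cty"

// validNestedCollection reports whether a planned nested collection is valid
// with respect to the configured one: it must be known, and then it must have
// the same number of elements.
func validNestedCollection(planned, config cty.Value) bool {
	if !planned.IsKnown() {
		// A provider may not return an unknown value for a configured
		// nested collection, so this is already invalid; checking here
		// also avoids asking for the length of an unknown value.
		return false
	}
	return planned.LengthInt() == config.LengthInt()
}
```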
If a resource has a change in marks from the prior state, we need to
notify the user that an update is going to be necessary to at least
store that new value in the state. If, however, the provider returns the
prior state value in lieu of a new config value, we need to be sure to
filter any new marks for comparison as well. The comparison of the prior
marks and new marks must take into account whether those new marks could
even be applied, because if the value is unchanged the new marks may be
completely irrelevant.
* Add support for scoped resources
* refactor existing checks addrs and add check block addr
* Add configuration for check blocks
* introduce check blocks into the terraform node and transform graph
* address comments
* address comments
* don't execute checks during destroy operations
* don't even include check nodes for destroy operations
If a provider has been upgraded, there are external changes to resources
outside of Terraform, -target is being used, and resources which are not
targeted require a schema migration, then the untargeted resources will not
have been migrated and cannot be decoded for the external changes report.
Since there is no way to decode the resources which have been excluded
via -target, we can only skip over them when inspecting
driftedResources. Return warnings for now so that users know these
resources could not be decoded and will eventually need to apply these
changes.
Module outputs are evaluated from state, so in order to have detailed
information about sensitivity from non-root module outputs, we need to
store the value along with all sensitive marks. This aligns with the
usage of state being the in-memory store for other temporary values like
locals and variables.
When planning encounters an error we were returning early without
cleaning out any planned data sources which cannot be serialized. Move
the cleanup to the common walkPlan method where the PriorState is
assigned so that it cannot be missed.
We inadvertently incorporated the new minor release of cty into the 1.4
branch, and that's introduced some more refined handling of unknown values
that is too much of a change to introduce in a patch release.
Therefore this reverts back to the previous minor release for the v1.4
series, and then we'll separately get the main branch ready to work
correctly with the new cty before Terraform v1.5.
This reverts just the upgrade and the corresponding test changes from
#32775, while retaining the HCL upgrade and the new test case it
introduced for that bug it was trying to fix. That new test is still
passing so it seems that the cty upgrade is not crucial to that fix.
This test was previously not taking into account the fact that the
"Stopping" hook gets sent in the goroutine that calls ctx.Stop, whereas
all of the others get called from inside ctx.Apply, and so there are no
ordering guarantees for that event in relation to the others.
We now handle the stopping event as a special case that is allowed to
appear anywhere in the sequence as long as it appears. The other events
are still strongly ordered because their ordering is important for
correctness of Terraform Core's own behavior.
As some extra insurance we also now check whether the provider's
ApplyResourceChange and Stop functions both ran and reached a suitable
point of execution related to the stop request, which helps to ensure not
only that something called Stop but that Terraform Core correctly
interacted with the provider to handle the stop.
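A sketch of the relaxed assertion described above, with illustrative names: the stopping event only has to appear somewhere, while the remaining events must keep their strict order.

```go
package main

import (
	"reflect"
	"testing"
)

// checkHookEvents verifies that "Stopping" occurred at some point and that all
// other events arrived in exactly the expected order.
func checkHookEvents(t *testing.T, got, wantOrdered []string) {
	t.Helper()
	sawStopping := false
	var ordered []string
	for _, ev := range got {
		if ev == "Stopping" {
			sawStopping = true
			continue // allowed at any position in the sequence
		}
		ordered = append(ordered, ev)
	}
	if !sawStopping {
		t.Error("missing Stopping event")
	}
	if !reflect.DeepEqual(ordered, wantOrdered) {
		t.Errorf("wrong event order\ngot:  %v\nwant: %v", ordered, wantOrdered)
	}
}
```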
While the returned plan is checked for nil in most cases, there was
a single point where the plan was dereferenced which could panic. Rather
than always guarding the dereferences, return early when the plan is
nil.
This test case was making a real DNS call in a non-acceptance test, and
since it was intended to fail it would introduce a several second delay.
This commit replaces the test with a similar one which uses the mocked
disco services for a non-TFE host.
Also restructure the test to use t.Run for clarity.
This is a mostly mechanical refactor with a handful of changes which
are necessary due to the semantic difference between earlyconfig and
configs.
When parsing root and descendant modules in the module installer, we now
check the core version requirements inline. If the Terraform version is
incompatible, we drop any other module loader diagnostics. This ensures
that future language additions don't clutter the output and confuse the
user.
We also add two new checks during the module load process:
* Don't try to load a module with a `nil` source address. This is a
necessary change due to the move away from earlyconfig.
* Don't try to load a module with a blank name (i.e. `module ""`).
Because our module loading manifest uses the stringified module path
as its map key, this causes a collision with the root module, and a
later panic. This is the bug which triggered this refactor in the
first place.
Since it's already possible to activate the plugin cache directory using an
environment variable, we should allow opting in to its dependency-lock-breaking
behavior using the environment too.
It's kinda odd in retrospect that TF_PLUGIN_CACHE_DIR is the only setting
we allow to be configured both in the environment and the CLI
configuration. That means that the infrastructure for dealing with that
situation was relatively immature here and so I did some light refactoring
to make it unit-testable without actually modifying the test program's
environment.
With the demise of the early config loader, we want to show core
version errors first, followed by backend errors, and only then
show other errors with the configuration.
Terraform Core emits a hook event every time it writes a change into the
in-memory state. Previously the local backend would just copy that into
the transient storage of the state manager, but for most state storage
implementations that doesn't really do anything useful because it just
makes another copy of the state in memory.
We originally added this hook mechanism with the intent of making
Terraform _persist_ the state each time, but we backed that out after
finding that it was a bit too aggressive and was making the state snapshot
history much harder to use in storage systems that can preserve historical
snapshots.
However, sometimes Terraform gets killed mid-apply for whatever reason and
in our previous implementation that meant always losing that transient
state, forcing the user to edit the state manually (or use "import") to
recover a useful state.
In an attempt at finding a sweet spot between these extremes, here we
change the rule so that if an apply runs for longer than 20 seconds then
we'll try to persist the state to the backend in an update that arrives
at least 20 seconds after the first update, and then again for each
additional 20 second period as long as Terraform keeps announcing new
state snapshots.
This also introduces a special interruption mode where if the apply phase
gets interrupted by SIGINT (or equivalent) then the local backend will
try to persist the state immediately in anticipation of a
possibly-imminent SIGKILL, and will then immediately persist any
subsequent state update that arrives until the apply phase is complete.
After interruption Terraform will not start any new operations and will
instead just let any already-running operations run to completion, and so
this will persist the state once per resource instance that is able to
complete before being killed.
This does mean that now long-running applies will generate intermediate
state snapshots where they wouldn't before, but there should still be
considerably fewer snapshots than were created when we were persisting
for each individual state change. We can adjust the 20 second interval
in future commits if we find that this spot isn't as sweet as first
assumed.
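A simplified sketch of the rule described above; the real decision lives in the local backend's state hook and tracks more detail, but the shape is roughly this (names are illustrative):

```go
package main

import "time"

// persistThrottle decides which announced state snapshots get written through
// to the backend rather than only kept in memory.
type persistThrottle struct {
	interval    time.Duration // e.g. 20 * time.Second
	lastPersist time.Time     // start of the apply, then the last write-through
	interrupted bool          // set once SIGINT (or equivalent) arrives
}

func (p *persistThrottle) shouldPersist(now time.Time) bool {
	if p.interrupted {
		// After interruption, persist every subsequent snapshot in
		// anticipation of a possibly-imminent SIGKILL.
		return true
	}
	if now.Sub(p.lastPersist) >= p.interval {
		p.lastPersist = now
		return true
	}
	return false
}
```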
The terraform provider was panicking on import, because it didn't
previously have a resource type which could be imported at all. Add a
stub import function for terraform_data as a placeholder to allow the
call to complete successfully. While there's no need to actually import
a terraform_data resource, users will inevitably use this to construct
examples of import actions for learning purposes or bug reports.
This still isn't very useful even for examples however, because the
state-only nature of the terraform_data resource type means that we
can't fill in the state from only the import ID. This means that any
value in `triggers_replace` or `input` will cause a change in the next
plan. Once configuration data is available during import we can extend
this to create a logical final state based on config.
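A rough sketch of a placeholder import function along the lines described here. The request and response shapes follow Terraform's internal providers package, but the body is illustrative rather than the provider's actual code.

```go
package terraform

import (
	"github.com/hashicorp/terraform/internal/providers"
	"github.com/zclconf/go-cty/cty"
)

func importDataStore(req providers.ImportResourceStateRequest) providers.ImportResourceStateResponse {
	var resp providers.ImportResourceStateResponse

	// The import ID is the only information available, so return a state
	// object carrying just that ID; "input" and "triggers_replace" cannot
	// be reconstructed until configuration is available during import.
	state := cty.ObjectVal(map[string]cty.Value{
		"id":               cty.StringVal(req.ID),
		"input":            cty.NullVal(cty.DynamicPseudoType),
		"output":           cty.NullVal(cty.DynamicPseudoType),
		"triggers_replace": cty.NullVal(cty.DynamicPseudoType),
	})

	resp.ImportedResources = []providers.ImportedResource{{
		TypeName: req.TypeName,
		State:    state,
	}}
	return resp
}
```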
* Add metadata functions command skeleton
* Export functions as JSON via cli command
* Add metadata command
* Add tests to jsonfunction package
* WIP: Add metadata functions test
* Change return_type & type in JSON to json.RawMessage
This enables easier deserialisation of types when parsing the JSON.
* Skip is_nullable when false
* Update cli docs with metadata command
* Use tfdiags to report function marshal errors
* Ignore map, list and type functions
* Test Marshal function with diags
* Test metadata functions command output
* Simplify type marshaling by using cty.Type
* Add static function signatures for can and try
* Update internal/command/jsonfunction/function_test.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
---------
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* go get github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/sts/v20180813@v1.0.588
* feat:support assume_role for COS backend
* update go.mod and go.sum
* change secret_id and secret_key from required to optional
* update cos doc
* update logic by comments
* rm sensitive info in log
Resource instances with no current object in state should not have
orphan nodes added to the graph, as deposed objects are handled
separately. This was previously handled correctly for the non-expanded
case, but expanded resources were missing the appropriate check for a
current object.
Also update the comment in the non-expanded case to hopefully clarify
that we're checking for the presence of a current object, not the
absence of any deposed objects. An instance may have both a current
object and zero or more deposed objects in some circumstances, and if
so, we still want an orphan node to be added if the instance is not in
configuration.
* Implementation of structured logging.
These are the changes that enable the cloud backend to consume
structured logs and make use of the new plan renderer. This will enable
CLI-driven runs to view the structured output in the Terraform Cloud UI.
* Cloud structured logging unit tests
* Remove deferred logs logic, fix minor issues
Color formatting fixes, log type stop lists, default behavior for logs
that are unknown
* Use service disco path in redacted plan url
* Add viewType to Meta object and use it at the call sites
* Assign viewType passed from flags to state-locking cli commands
* Remove temp files
* Set correct mode for statelocker depending on json flag passed to commands
* Add StateLocker interface conformation check for StateLockerJSON
* Remove empty line at end of comment
* Pass correct ViewType to StateLocker from Backend call chain
* Pass viewType to backend migration and initialization functions
* Remove json processing info in process comment
* Restore documentation style of backendMigrateOpts
This makes the behavior of remote state consistent with local state with regard to the initial serial number of the generated / pushed state. Previously remote state's initial push would have a serial number of 0, whereas local state had a serial of > 0. This causes issues with the logic around, for example, ensuring that a plan file cannot be applied if the state is stale (see https://github.com/hashicorp/terraform/issues/30670).
* Add failing test case for the given issue
* pause
* don't use local when sending PR for review
* go get github.com/hashicorp/hcl/v2@v2.16.0
* Update go.mod
---------
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
As explained by the deleted comments, this package was used to identify situations where the `terraform 0.12upgrade` command can help migrate 0.11 syntax. Current versions of Terraform don't include this command, and it's unlikely that users are attempting upgrades from 0.11 to 1.4+.
The replacement init swaps the order of the module and backend initialization in order to prepare for the next commit.
Config initialization now takes the following approach:
1. Load the root module, but withhold diagnostic errors until after version check
2. Initialize the backend, but withhold diagnostic errors until after version check
3. Get modules
4. Load all config (root and modules)
5. Check terraform version requirements (this can be defined by nested modules) and display any errors. It's important to show these first because prior errors could be the result of a newer terraform version syntax
6. Finally, show any errors related to backend init or config loading
Although they are not serialized to the final stored state, all module
outputs must be saved in the state for evaluation. There is no defined
schema which is used to identify the overall type of module outputs, so
all outputs must exist in the state to build the correct type for proper
evaluation.
* Add mTLS support for http backend by way of client cert & key, as well as enterprise cacert.
* Fix style.
* Skip cert validation to be sure the error is related to a missing client cert, not an untrusted server cert.
* Remove misplaced err check.
* Fix the size of test using http backend.
* Just for correctness, include all certs in the pem encoded cert - sometimes certs come with a chain of their signers.
* Adjusted names as recommended in PR comments.
* Adjusted names to be full-length and more descriptive.
* Added full-fledged testing with mTLS http server
* Fix goimports.
* Fix the names of the backend config.
* Exclusive lock for write and delete.
* Revert "Fix goimports."
This reverts commit 7d40f6099fbbb675fb2e25e35ee40aeafe3d0a22.
* goimports just for server test.
* Added the go:generation for the mock.
* Move the TLS configuration out to make it more readable - don't replace the HTTPClient, as retryablehttp already creates one - just configure its TLS.
* Just switch the client/data params - felt more natural this way.
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/testdata/gencerts.sh
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* the location of the file name is not sensitive.
* Added an error if only one of client_certificate_pem and client_private_key_pem is set.
* Remove testify from test cases; use t.Error* for assert and t.Fatal* for require.
* Fixed import consistency
* Just use default openssl.
* Since file(...) is so trivial to use, changed the client cert, key, and ca cert to be the data.
See also https://github.com/hashicorp/terraform-provider-http/pull/211
Co-authored-by: Sheridan C Rawlins <scr@ouryahoo.com>
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
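A hedged sketch of the approach the mTLS commits above describe: build a tls.Config from the PEM-encoded client cert, key, and CA data supplied in the backend config, and apply it to the transport of the HTTP client that retryablehttp already created rather than replacing that client. Function and variable names are illustrative.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"net/http"

	"github.com/hashicorp/go-retryablehttp"
)

// configureMTLS wires client certificate authentication (and optionally a
// private CA) into the retryable client's existing transport.
func configureMTLS(client *retryablehttp.Client, certPEM, keyPEM, caPEM []byte) error {
	cert, err := tls.X509KeyPair(certPEM, keyPEM)
	if err != nil {
		return err
	}
	tlsConfig := &tls.Config{Certificates: []tls.Certificate{cert}}

	if len(caPEM) > 0 {
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(caPEM) {
			return errors.New("failed to parse CA certificate PEM")
		}
		tlsConfig.RootCAs = pool
	}

	// retryablehttp already created an HTTP client; configure its TLS
	// instead of replacing it.
	transport, ok := client.HTTPClient.Transport.(*http.Transport)
	if !ok {
		return errors.New("unexpected transport type")
	}
	transport.TLSClientConfig = tlsConfig
	return nil
}
```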