* main: disambiguate arg ordering test
Make it extra clear what order of args we are asserting.
* command: fix plan -refresh=false test
The test for plan -refresh=false was not functioning, since ReadResource will not be called if the resource is not in the prior state.
Add a new fixture directory with state, and also test the converse, to prevent regression.
* command: add test for refresh flag precedence
A consumer relies on the fact that running `terraform plan -refresh=false -refresh=true` gives the same result as `terraform plan -refresh=true`.
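This precedence follows from the stdlib flag semantics that the command's flag parsing builds on; a minimal standalone sketch (not Terraform's actual flag setup):
```
package main

import (
	"flag"
	"fmt"
)

func main() {
	fs := flag.NewFlagSet("plan", flag.ContinueOnError)
	refresh := fs.Bool("refresh", true, "refresh state before planning")

	// The stdlib flag package applies repeated flags in order, so the
	// last occurrence wins.
	fs.Parse([]string{"-refresh=false", "-refresh=true"})
	fmt.Println(*refresh) // true, same as passing only -refresh=true
}
```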
Use the global providers.SchemaCache and update all schema access to
use the providers.Schemas type, except where the
provider.GetProviderSchemaResponse type name would be expected.
Some tests that reuse provider factories needed a little more careful
handling. Change the fixed func to only reset the provider on the first
call.
Add a single global schema cache for providers. This allows multiple
provider instances to share a single copy of the schema, and prevents
loading the schema multiple times for a given provider type during a
single command.
This does not currently work with some provider releases, which are
using GetProviderSchema to trigger certain initializations. A new server
capability will be introduced to trigger reloading their schemas, but
not store duplicate results.
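A minimal sketch of the shape such a cache might take (type and method names are illustrative, not the actual providers.SchemaCache API):
```
package providers

import (
	"sync"

	"github.com/hashicorp/terraform/internal/addrs"
)

// schemaCache shares one copy of each provider's schema across all
// instances of that provider type within a single command.
type schemaCache struct {
	mu      sync.Mutex
	schemas map[addrs.Provider]*Schemas
}

func (c *schemaCache) Get(addr addrs.Provider) (*Schemas, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	s, ok := c.schemas[addr]
	return s, ok
}

func (c *schemaCache) Set(addr addrs.Provider, s *Schemas) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.schemas[addr] = s
}
```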
A module output is generally not used during destroy, however it must be
evaluated when its value is used by a provider for configuration,
because that configuration is not stored between walks.
There was an oversight in the output expansion node where the output
node was not created because the operation was destroy, and module
outputs have nothing to destroy. This however skipped evaluation when
the output is needed by a provider as mentioned above. Because of the
way an implied plan is stored internally when executing `terraform
destroy`, this went unnoticed by the test.
Allowing the output to be evaluated during destroy fixes the issue, and
should be acceptable because an output is classified as temporary in the
graph, and will be pruned when not actually needed.
Update the existing test to serialize the plan, which triggers the
failure.
In order to ensure that transitive dependencies are connected even when
there are no instances for a resource, we need to route the references
through the config ("expand") node. This happens naturally by having the
expand node report its config references; however, legacy configs can
contain self-references without the "self" identifier, so those need to
be filtered out.
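A minimal sketch of that filtering step, with illustrative names:
```
import "github.com/hashicorp/terraform/internal/addrs"

// filterSelfRefs drops references from a resource's expand node back to
// its own address, which legacy configs can contain even without using
// the "self" identifier.
func filterSelfRefs(self addrs.Resource, refs []*addrs.Reference) []*addrs.Reference {
	ret := refs[:0]
	for _, ref := range refs {
		if res, ok := ref.Subject.(addrs.Resource); ok && res.Equal(self) {
			continue
		}
		ret = append(ret, ref)
	}
	return ret
}
```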
* Add test structure to views package for rendering test output
* Add test file HCL configuration and parser functionality
* Adds a TestContext structure for evaluating assertions against the state and plan
* Add test command to Terraform CLI
Several parts of the objchange logic incorrectly use cty.Value.RawEquals
for value comparison, instead of more appropriate comparison methods like
cty.Value.Equals or cty.Value.Range().Includes. That makes them incorrectly
consider two unknown values with the same type but different refinements
as always non-equal, rather than evaluating based on the overlap between
the refinements (if any).
As a short-term fix for that we previously added this unrefinedValue shim
that just strips away the refinements for comparison, thus allowing
callers to continue using RawEquals as long as they've already taken care
of all of the other things that can make that go wrong, such as value
marks.
Unfortunately the shim was too simplistic and only supported direct
unknown values. Unknown values with refinements can also appear nested
inside known container values such as collections, so the shim needs to
recursively un-refine the entire data structure in that case.
This is still intended only as a temporary fix until we have time to
revisit all of the callers and make them use cty's own logic for
comparison. Using cty's own logic will make the results more precise,
because e.g. it can notice if two unknown strings have different known
prefixes and therefore cannot possibly be equal despite not being fully
known. For now this shim will accept any pair of unknown values of the
same type as equal, regardless of refinement.
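A minimal sketch of the recursive form of the shim, assuming the input has already had its marks handled as described above:
```
import "github.com/zclconf/go-cty/cty"

// unrefinedValue returns a value equivalent to v but with any refinements
// stripped from unknown values, at any level of nesting.
func unrefinedValue(v cty.Value) cty.Value {
	ret, err := cty.Transform(v, func(_ cty.Path, v cty.Value) (cty.Value, error) {
		if !v.IsKnown() {
			// A fresh unknown of the same type carries no refinements.
			return cty.UnknownVal(v.Type()), nil
		}
		return v, nil
	})
	if err != nil {
		// The callback above never returns an error.
		panic(err)
	}
	return ret
}
```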
Create a pending state version followed by a separate state upload.
If this version of the endpoint fails (it is not yet generally
available, and may be absent when using Terraform Enterprise), fall back
to the original call with the state content included in the request.
This strategy will reduce the number of save failures due to network
latency and gateway timeouts.
If a set contains partially known values the length is unknown which
causes assertPlannedObjectValid to fail valid plans.
Revert to the old method of using LengthInt for the set lengths, which
returns the maximum number of possible elements, with a guard for
entirely unknown set values.
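A minimal sketch of the guarded length check, with illustrative names:
```
import "github.com/zclconf/go-cty/cty"

// maxSetLength returns the maximum number of elements the given set could
// have once known, and false if the set itself is wholly unknown.
func maxSetLength(set cty.Value) (int, bool) {
	if !set.IsKnown() {
		return 0, false // entirely unknown set: no usable length bound
	}
	// For sets containing unknown values, LengthInt returns the maximum
	// possible element count, since unknowns may coalesce once known.
	return set.LengthInt(), true
}
```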
* cloud: assert import block compatibility
* check for import <> TFC compatibility during init
* imports are not in alphabetical order 🙃
---------
Co-authored-by: CJ Horton <cjhorton@hashicorp.com>
This temporary measure prevents a panic further down the line when there is an unmatched expanded resource instance import target when running in config gen mode.
HashiCorp legal now requires a copyright claim in a comment at the top of
every substantial file in this repository. If we don't add this ourselves
then a bot will open a PR to add missing entries, but that process adds
git history, pull request, and GitHub notification noise so instead we'll
deal with it proactively as part of our usual code generation steps.
This means that pull requests will fail their checks if there are any
files that lack copyright headers, so we can deal with those before we
merge rather than in a subsequent PR.
Providers that existed prior to refinements (all of them, at the time of
writing) cannot preserve refinements sent in unknown values in the
configuration, and even if one day providers _are_ aware of refinements,
we might add new ones that existing providers don't know how to
handle.
For that reason we'll absolve providers of the responsibility of
preserving refinements from config into plan by fixing some cases where
we were incorrectly using RawEquals to compare values; that function isn't
appropriate for comparing values that might be unknown.
However, to avoid a disruptive change right now this initial fix just
strips off the refinements before comparing. Ideally this should be using
Value.Equals and handling unknown values more explicitly, but we'll save
that for a possible later improvement.
This does not include a similar exception for validating whether a final
value conforms to a plan because the plan value and the final value are
both produced by the same provider and so providers ought to be able to
be consistent with their _own_ treatment of refinements, if any.
Configuration is special because Terraform itself generates that, and so
it can potentially contain refinements that a particular provider has no
awareness of.
If the original value was unknown but its range was refined then the
provider must return a value that is within the refined range, because
otherwise downstream planning decisions could be invalidated.
This relies on cty's definition of whether a value is in a refined range,
which has pretty good coverage for the "false" case and so should give a
pretty good signal, but it'll probably improve over time and so providers
must not rely on any loopholes in the current implementation and must
keep their promises even if Terraform can't currently check them.
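A minimal sketch of such a check, relying on cty's range logic (names illustrative):
```
import "github.com/zclconf/go-cty/cty"

// outsideRefinedRange reports whether a provider's planned value provably
// falls outside the refined range of the original unknown value.
func outsideRefinedRange(original, planned cty.Value) bool {
	if original.IsKnown() {
		return false // nothing refined to check against
	}
	in := original.Range().Includes(planned)
	// Includes returns an unknown bool when cty cannot decide, so only a
	// known false counts as a definite violation.
	return in.IsKnown() && in.False()
}
```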
If the string to be tested is an unknown value that's been refined with
a prefix and the prefix we're being asked to test is in turn a prefix of
that known prefix then we can return a known answer despite the inputs
not being fully known.
There are also some other similar deductions we can make about other
combinations of inputs.
This extra analysis could be useful in a custom condition check that
requires a string with a particular prefix, since it can allow the
condition to fail even on partially-unknown input, thereby giving earlier
feedback about a problem.
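A minimal sketch of the known-true deduction described above, using cty's refinement API:
```
import (
	"strings"

	"github.com/zclconf/go-cty/cty"
)

// startsWithKnown shows the deduction: the subject is unknown but refined
// with the prefix "https://", and the tested prefix "http" is itself a
// prefix of that, so the answer is known.
func startsWithKnown() cty.Value {
	subject := cty.UnknownVal(cty.String).Refine().
		StringPrefixFull("https://").
		NewValue()

	if strings.HasPrefix(subject.Range().StringPrefix(), "http") {
		return cty.True // known answer despite an unknown subject
	}
	return cty.UnknownVal(cty.Bool)
}
```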
The "id" attribute of this resource type is generated by the provider
itself and can never be null, so we'll refine the range of its unknown
result in case that helps downstream expressions to produce known results
even when the exact value hasn't yet been planned.
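A minimal sketch of that refinement, using cty's shorthand:
```
import "github.com/zclconf/go-cty/cty"

// The planned "id" is unknown until apply, but refining it as non-null
// lets downstream expressions like `id != null` resolve immediately.
var plannedID = cty.UnknownVal(cty.String).RefineNotNull()
```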
cty's new "refinements" concept allows us to reduce the range of unknown
values from our functions. This initial changeset focuses only on
declaring which functions are guaranteed to return a non-null result,
which is a helpful baseline refinement because it allows "== null" and
"!= null" tests to produce known results even when the given value is
otherwise unknown.
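A minimal sketch of declaring such a refinement on a cty function (the function itself is a made-up example):
```
import (
	"strings"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
)

// UpperFunc declares that its result is never null, so upper(x) != null
// is known even when x itself is unknown.
var UpperFunc = function.New(&function.Spec{
	Params: []function.Parameter{
		{Name: "str", Type: cty.String},
	},
	Type: function.StaticReturnType(cty.String),
	RefineResult: func(b *cty.RefinementBuilder) *cty.RefinementBuilder {
		return b.NotNull()
	},
	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
		return cty.StringVal(strings.ToUpper(args[0].AsString())), nil
	},
})
```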
This commit also includes some updates to test results that are now
refined based on cty's own built-in refinement behaviors, just as a
result of us having updated cty in the previous commit.
* genconfig: fix nil nested block panic
* genconfig: null NestingSingle blocks should be absent
A NestingSingle block that is null in state should be completely absent from config.
* configschema: make FilterOr variadic
* configschema: apply filters to nested types
* configschema: filter helper/schema id attribute
The legacy SDK adds an Optional+Computed "id" attribute to the
resource schema even if not defined in provider code.
During validation, however, the presence of an extraneous "id"
attribute in config will cause an error.
Remove this attribute so we do not generate an "id" attribute
where there is a risk that it is not in the real resource schema.
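A minimal sketch of a predicate matching that injected attribute (the predicate name is illustrative):
```
import "github.com/hashicorp/terraform/internal/configs/configschema"

// filterHelperSchemaIdAttribute matches the Optional+Computed "id"
// attribute that the legacy SDK injects, so generated config omits it.
func filterHelperSchemaIdAttribute(name string, att *configschema.Attribute) bool {
	return name == "id" && att.Optional && att.Computed
}
```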
* configschema: filter test
* terraform: do not pre-validate generated config
Config generated from a resource's import state may fail validation in
the case of schema behaviours such as ExactlyOneOf and ConflictsWith.
We don't want to fail the plan now, because that would give the user no
way to proceed and fix the config to make it valid. We allow the plan to
complete and output the generated config.
* generate config alongside import process
Rather than waiting until we call `plan()`, generate the configuration
at the point of the import call, so we have the necessary data to return
in case planning fails later.
The `plan` and `state` predeclared variables in the plan() method were
obfuscating the actual return of nil throughout, so those identifiers
were removed for clarity.
* move generateHCLStringAttributes closer to caller
* store generated config in plan on error
* test for config gen with error
* add simple warning when generating config
---------
Co-authored-by: James Bardin <j.bardin@gmail.com>
Previously we just made a hard rule that the state storage for Terraform
Cloud would never save any intermediate snapshots at all, as a coarse way
to mitigate concerns over heightened Terraform Enterprise storage usage
caused by saving intermediate snapshots.
As a better compromise, we'll now create intermediate snapshots at the
default interval unless the Terraform Cloud API responds with a special
extra header field X-Terraform-Snapshot-Interval, which specifies a
different number of seconds (up to 1 hour) to wait before saving the next
snapshot.
This will then allow Terraform Cloud and Enterprise to provide some dynamic
backpressure when needed, either to reduce the disk usage in Terraform
Enterprise or in situations where Terraform Cloud is under unusual load
and needs to calm the periodic intermediate snapshot writes from clients.
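A minimal sketch of honoring that header, assuming the 20-second default interval described elsewhere in this series:
```
import (
	"net/http"
	"strconv"
	"time"
)

const defaultSnapshotInterval = 20 * time.Second

// snapshotInterval honors a server-requested delay between intermediate
// snapshots, capped at one hour as described above.
func snapshotInterval(h http.Header) time.Duration {
	if raw := h.Get("X-Terraform-Snapshot-Interval"); raw != "" {
		if secs, err := strconv.ParseUint(raw, 10, 32); err == nil && secs <= 3600 {
			return time.Duration(secs) * time.Second
		}
	}
	return defaultSnapshotInterval
}
```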
This respects the "force persist" mode so that if Terraform CLI is
interrupted with SIGINT then it'll still be able to urgently persist
a snapshot of whatever state it currently has, in anticipation of probably
being terminated with a more aggressive signal very soon.
We've seen some concern about the additional storage usage implied by
creating intermediate state snapshots for particularly long apply phases
that can arise when managing a large number of resource instances together
in a single workspace.
This is an initial coarse approach to solving that concern, just restoring
the original behavior when running inside Terraform Cloud or Enterprise
for now and not creating snapshots at all.
This is here as a solution of last resort in case we cannot find a better
compromise before the v1.5.0 final release. Hopefully a future commit
will implement a more subtle take on this which still gets some of the
benefits when running in a Terraform Enterprise environment but in a way
that will hopefully be less concerning for Terraform Enterprise
administrators.
This does not affect any other state storage implementation except the
Terraform Cloud integration and the "remote" backend's state storage when
running inside a TFC/TFE-driven remote execution environment.
Previously we just always used the same intermediate state persistence
behavior for all state storages. However, some storages might have access
to additional information that allows them to tailor when they persist,
such as reacting to API rate limit status headers in responses, or just
knowing that a particular storage isn't suited to intermediate snapshots
at all for some reason.
This commit doesn't actually change any observable behavior yet, but it
introduces an optional means for a state storage to customize the behavior
which we may make use of in certain storage implementations in future
commits.
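A minimal sketch of what such an optional interface could look like (names and fields are illustrative):
```
import "time"

// IntermediateStateConditionalPersister is an optional interface that a
// state storage can implement to decide when intermediate snapshots get
// persisted.
type IntermediateStateConditionalPersister interface {
	ShouldPersistIntermediateState(info *IntermediateStatePersistInfo) bool
}

// IntermediateStatePersistInfo carries whatever context the decision needs.
type IntermediateStatePersistInfo struct {
	RequestedPersistInterval time.Duration
	LastPersist              time.Time
	ForcePersist             bool // true when interrupted, e.g. by SIGINT
}
```
Callers would type-assert the state manager against this interface and fall back to the default heuristic when it isn't implemented.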
When planning a destroy operation, locals referenced only by root
outputs do not need to be kept in the graph, because the root output
does not get evaluated. Rather than trying to prune the local based on
this condition, we can prevent the connection from being created by
ensuring that a root output destroy node has no references.
The separate plan+apply destroy fields used for outputs can be
simplified by combining them, since they are only ever referenced together.
* always InternalValidate test schemas
If a resource is already in state, do not attempt to import it again. Resources already in state are filtered out of the plan's import targets.
A change is only considered "importing" if it is adding a new resource instance to the state.
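A minimal sketch of that filtering, with illustrative names:
```
import (
	"github.com/hashicorp/terraform/internal/addrs"
	"github.com/hashicorp/terraform/internal/states"
)

// importTarget pairs an instance address with its import ID (illustrative).
type importTarget struct {
	Addr addrs.AbsResourceInstance
	ID   string
}

// pendingImportTargets keeps only targets whose instance is not already
// in state, so that only genuinely new instances count as "importing".
func pendingImportTargets(targets []importTarget, state *states.State) []importTarget {
	var pending []importTarget
	for _, t := range targets {
		if state.ResourceInstance(t.Addr) != nil {
			continue // already managed; nothing to import
		}
		pending = append(pending, t)
	}
	return pending
}
```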
* command: keep our promises
* remove some nil config checks
Remove some of the safety checks that ensure plan nodes have config attached at the appropriate time.
* add GeneratedConfig to plan changes objects
Add a new GeneratedConfig field alongside Importing in plan changes.
* add config generation package
The genconfig package implements HCL config generation from provider state values.
Thanks to @mildwonkey whose implementation of terraform add is the basis for this package.
* generate config during plan
If a resource is being imported and does not already have config, attempt to generate that config during planning. The config is generated from the state as an HCL string, and then parsed back into an hcl.Body to attach to the plan graph node.
The generated config string is attached to the change emitted by the plan.
* complete config generation prototype, and add tests
* plannable import: add a provider argument to the import block
* Update internal/configs/config.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/configs/config.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/configs/config.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* fix formatting and tests
---------
Co-authored-by: Katy Moe <katy@katy.moe>
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
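A minimal sketch of the generate-then-parse step from "generate config during plan" above (helper names are illustrative):
```
import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

// parseGeneratedConfig renders the generated resource block to an HCL
// string and parses it back so the body can be attached to the plan
// graph node.
func parseGeneratedConfig(resType, resName, attrsHCL string) (hcl.Body, string, hcl.Diagnostics) {
	src := fmt.Sprintf("resource %q %q {\n%s}\n", resType, resName, attrsHCL)
	f, diags := hclsyntax.ParseConfig([]byte(src), "generated_resources.tf", hcl.InitialPos)
	if diags.HasErrors() {
		return nil, src, diags
	}
	return f.Body, src, diags
}
```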
* Plannable import: Add generated config to json and human-readable plan output
---------
Co-authored-by: Katy Moe <katy@katy.moe>
The imported resource was being stored in the wrong state, and only
ended up in the refresh state because ReadResource was being called a
second time in the normal refresh path.
Make sure to only refresh the imported resource once. This is still done
separately within importState so that we can handle the error slightly
differently to let the user know if an imported instance does not exist.
* [plannable import] embed the resource id within the changes
* [Plannable Import] Implement streamed logs for -json plan
* use latest structs
* remove implementation plans from TODO
The logic used to prune unused providers was only taking into account
the common case of providers in the root module. The quick check of
looking for up edges doesn't work within a module, because the module
structures will create non-resource nodes connected to the providers.
Use a deeper check of looking for any dependent resources which may
require that provider to be configured.
During a plan, Terraform now checks for the presence of import blocks.
For each resource in config, if an import block is present with a matching address, planning that node will now trigger an ImportResourceState and ReadResource. The resulting state is treated as the node's "refresh state", and planning proceeds as normal from there.
The walkImport operation is now only used for the legacy "terraform import" CLI command. This is the only case under which the plan should produce graphNodeImportStates.
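A minimal sketch of the per-node import step, eliding the diagnostics handling (function name illustrative):
```
import "github.com/hashicorp/terraform/internal/providers"

// importAndRead sketches the per-resource import step during plan: import
// by ID, then read the object once to produce this node's refresh state.
func importAndRead(p providers.Interface, typeName, id string) providers.ReadResourceResponse {
	imp := p.ImportResourceState(providers.ImportResourceStateRequest{
		TypeName: typeName,
		ID:       id,
	})
	// Error handling elided; the real code surfaces diagnostics and also
	// reports when the imported instance does not exist.
	return p.ReadResource(providers.ReadResourceRequest{
		TypeName:   typeName,
		PriorState: imp.ImportedResources[0].State,
	})
}
```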
I ran into this error while running Terraform in a container and saving state to Consul. I suspect my policy needs tweaking, but it's impossible to tell with an error like this:
```
╷
│ Error: Failed to save state
│
│ Error saving state: consul CAS failed with transaction errors:
│ [0xc0006e93c8]
╵
```
This PR includes the error message in the details so I can continue debugging.
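A minimal sketch of rendering those transaction errors, using the Consul API client's types:
```
import (
	"fmt"
	"strings"

	"github.com/hashicorp/consul/api"
)

// describeTxnErrors renders each transaction error's message, instead of
// printing the slice of pointers as in the error shown above.
func describeTxnErrors(errs api.TxnErrors) string {
	parts := make([]string, 0, len(errs))
	for _, e := range errs {
		parts = append(parts, fmt.Sprintf("op %d: %s", e.OpIndex, e.What))
	}
	return strings.Join(parts, "; ")
}
```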
Just like in the destroy apply, we can skip the inter-provider cycle
check when creating the destroy plan, which can be expensive when there
are a lot of resource instances with dependencies from another provider.
* Improve environment variable support for the pg backend
This patch does two things:
- it adds environment variable support to the parameters that did
not have it (and uses `PG_CONN_STR` instead of `PGDATABASE`, which
more appropriately matches the behavior of other PostgreSQL
utilities)
- better documents how to give the connection parameters as environment
variables for the ones that were already supported based on the
recommendation of @bsouth00
I will prepare a backport of the documentation part of this once it is
merged.
Closes https://github.com/hashicorp/terraform/issues/33024
* Remove global variable in test of the PG backend
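A minimal sketch of wiring one parameter to its environment variable in the backend's legacy schema (field layout illustrative):
```
import "github.com/hashicorp/terraform/internal/legacy/helper/schema"

// The connection string falls back to PG_CONN_STR when not set in config,
// mirroring how the other parameters read their PG* variables.
var connStrSchema = &schema.Schema{
	Type:        schema.TypeString,
	Optional:    true,
	DefaultFunc: schema.EnvDefaultFunc("PG_CONN_STR", nil),
	Description: "Postgres connection string; a postgres:// URL",
}
```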
The cloud backend, which communicates with TFC-like APIs, can create
runs which may have one or more configuration parameters altered. These
alterations are emitted as run-events on the run so that API clients
can consume and display them to users. This commit adds a step in
plan operation to query the run-events once a run is created and then
emit specific run-event descriptions to the console as warnings for
the user.
* checks: filter out check diagnostics during certain plans
* wrap diagnostics produced by check blocks in a dedicated check block diagnostic
* address comments
When we plan to destroy an instance, the change recorded should use the
correct type for the resource rather than `DynamicPseudoType`. Most of
the time this is hidden when the change is encoded in the plan, because
any `null` is always encoded to the same value, and when decoded it will
be converted to the schema type. However when apply requires creating a
second plan for an instance's replacement that value is not going to be
encoded, and remains a dynamic value which is sent to the provider.
Most providers won't see that either, as the grpc request also encodes
and decodes the value to conform with the correct schema. The builtin
terraform provider does get the raw cty value though, and when that
dynamic value is returned validation fails when the type does not match.
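A minimal sketch of the corrected value construction:
```
import "github.com/zclconf/go-cty/cty"

// plannedDestroyValue builds the "after" value for a destroy change using
// the resource schema's implied type, not cty.DynamicPseudoType.
func plannedDestroyValue(schemaType cty.Type) cty.Value {
	return cty.NullVal(schemaType) // schemaType from schema.Block.ImpliedType()
}
```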
It is not valid for a provider to return an unknown value for a
configured nested collection, but we need to check for unknowns before
comparing the number of values in the collection.
If a resource has a change in marks from the prior state, we need to
notify the user that an update is going to be necessary to at least
store that new value in the state. If the provider however returns the
prior state value in lieu of a new config value, we need to be sure to
filter any new marks for comparison as well. The comparison of the prior
marks and new marks must take into account whether those new marks could
even be applied, because if the value is unchanged the new marks may be
completely irrelevant.
* Add support for scoped resources
* refactor existing checks addrs and add check block addr
* Add configuration for check blocks
* introduce check blocks into the terraform node and transform graph
* address comments
* address comments
* don't execute checks during destroy operations
* don't even include check nodes for destroy operations
When a provider has been upgraded, there are external changes to
resources outside of Terraform, -target is in use, and untargeted
resources require a schema migration, those untargeted resources will
not have been migrated and cannot be decoded for the external changes
report.
Since there is no way to decode the resources which have been excluded
via -target, we can only skip over them when inspecting
driftedResources. Return warnings for now to indicate that these
resources could not be decoded to help indicate that users will need to
eventually apply these changes.
Module outputs are evaluated from state, so in order to have detailed
information about sensitivity from non-root module outputs, we need to
store the value along with all sensitive marks. This aligns with the
usage of state being the in-memory store for other temporary values like
locals and variables.
When planning encounters an error we were returning early without
cleaning out any planned data sources which cannot be serialized. Move
the cleanup to the common walkPlan method where the PriorState is
assigned so that it cannot be missed.
We inadvertently incorporated the new minor release of cty into the 1.4
branch, and that's introduced some more refined handling of unknown values
that is too much of a change to introduce in a patch release.
Therefore this reverts back to the previous minor release for the v1.4
series, and then we'll separately get the main branch ready to work
correctly with the new cty before Terraform v1.5.
This reverts just the upgrade and the corresponding test changes from
#32775, while retaining the HCL upgrade and the new test case it
introduced for that bug it was trying to fix. That new test is still
passing so it seems that the cty upgrade is not crucial to that fix.
This test was previously not taking into account the fact that the
"Stopping" hook gets sent in the goroutine that calls ctx.Stop, whereas
all of the others get called from inside ctx.Apply, and so there are no
ordering guarantees for that event in relation to the others.
We now handle the stopping event as a special case that is allowed to
appear anywhere in the sequence as long as it appears. The other events
are still strongly ordered because their ordering is important for
correctness of Terraform Core's own behavior.
As some extra insurance we also now check whether the provider's
ApplyResourceChange and Stop functions both ran and reached a suitable
point of execution related to the stop request, which help to ensure not
only that something called Stop but that Terraform Core correctly
interacted with the provider to handle the stop.
While the returned plan is checked for nil in most cases, there was
a single point where the plan was dereferenced, which could panic. Rather
than always guarding the dereferences, return early when the plan is
nil.
This test case was making a real DNS call in a non-acceptance test, and
since it was intended to fail it would introduce a several second delay.
This commit replaces the test with a similar one which uses the mocked
disco services for a non-TFE host.
Also restructure the test to use t.Run for clarity.
This is a mostly mechanical refactor with a handful of changes which
are necessary due to the semantic difference between earlyconfig and
configs.
When parsing root and descendant modules in the module installer, we now
check the core version requirements inline. If the Terraform version is
incompatible, we drop any other module loader diagnostics. This ensures
that future language additions don't clutter the output and confuse the
user.
We also add two new checks during the module load process:
* Don't try to load a module with a `nil` source address. This is a
necessary change due to the move away from earlyconfig.
* Don't try to load a module with a blank name (i.e. `module ""`).
Because our module loading manifest uses the stringified module path
as its map key, this causes a collision with the root module, and a
later panic. This is the bug which triggered this refactor in the
first place.
Since it's already possible to activate the dependency lock file using an
environment variable, we should allow opting in to it having broken
behavior using the environment too.
It's kinda odd in retrospect that TF_PLUGIN_CACHE_DIR is the only setting
we allow to be configured both in the environment and the CLI
configuration. That means that the infrastructure for dealing with that
situation was relatively immature here and so I did some light refactoring
to make it unit-testable without actually modifying the test program's
environment.
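A minimal sketch of the resulting precedence, assuming the environment variable overrides the CLI configuration value when both are set:
```
import "os"

// effectivePluginCacheDir returns the plugin cache directory, letting
// the environment variable override the CLI configuration setting.
func effectivePluginCacheDir(fromCLIConfig string) string {
	if dir := os.Getenv("TF_PLUGIN_CACHE_DIR"); dir != "" {
		return dir
	}
	return fromCLIConfig
}
```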
With the demise of the early config loader, we want to show core
version errors first, followed by backend errors, and only then
show other errors with the configuration.
Terraform Core emits a hook event every time it writes a change into the
in-memory state. Previously the local backend would just copy that into
the transient storage of the state manager, but for most state storage
implementations that doesn't really do anything useful because it just
makes another copy of the state in memory.
We originally added this hook mechanism with the intent of making
Terraform _persist_ the state each time, but we backed that out after
finding that it was a bit too aggressive and was making the state snapshot
history much harder to use in storage systems that can preserve historical
snapshots.
However, sometimes Terraform gets killed mid-apply for whatever reason and
in our previous implementation that meant always losing that transient
state, forcing the user to edit the state manually (or use "import") to
recover a useful state.
In an attempt at finding a sweet spot between these extremes, here we
change the rule so that if an apply runs for longer than 20 seconds then
we'll try to persist the state to the backend in an update that arrives
at least 20 seconds after the first update, and then again for each
additional 20 second period as long as Terraform keeps announcing new
state snapshots.
This also introduces a special interruption mode where if the apply phase
gets interrupted by SIGINT (or equivalent) then the local backend will
try to persist the state immediately in anticipation of a
possibly-imminent SIGKILL, and will then immediately persist any
subsequent state update that arrives until the apply phase is complete.
After interruption Terraform will not start any new operations and will
instead just let any already-running operations run to completion, and so
this will persist the state once per resource instance that is able to
complete before being killed.
This does mean that now long-running applies will generate intermediate
state snapshots where they wouldn't before, but there should still be
considerably fewer snapshots than were created when we were persisting
for each individual state change. We can adjust the 20 second interval
in future commits if we find that this spot isn't as sweet as first
assumed.
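A minimal sketch of the persistence rule described above:
```
import "time"

const persistInterval = 20 * time.Second

// shouldPersist applies the rule above: persist when the interval has
// elapsed since the last persisted snapshot, or immediately once an
// interrupt has been received.
func shouldPersist(lastPersist time.Time, interrupted bool) bool {
	if interrupted {
		return true // anticipate a possibly-imminent SIGKILL
	}
	return time.Since(lastPersist) >= persistInterval
}
```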
The terraform provider was panicking on import, because it didn't
previously have a resource type which could be imported at all. Add a
stub import function for terraform_data as a placeholder to allow the
call to complete successfully. While there's no need to actually import
a terraform_data resource, users will inevitably use this to construct
examples of import actions for learning purposes or bug reports.
This still isn't very useful even for examples however, because the
state-only nature of the terraform_data resource type means that we
can't fill in the state from only the import ID. This means that any
value in `triggers_replace` or `input` will cause a change in the next
plan. Once configuration data is available during import we can extend
this to create a logical final state based on config.
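A minimal sketch of such a stub importer, assuming terraform_data's attribute names:
```
import (
	"github.com/hashicorp/terraform/internal/providers"
	"github.com/zclconf/go-cty/cty"
)

// importDataStore is a stub importer: it can only record the given ID,
// because a state-only resource has no remote object to read from.
func importDataStore(req providers.ImportResourceStateRequest) providers.ImportResourceStateResponse {
	state := cty.ObjectVal(map[string]cty.Value{
		"id":               cty.StringVal(req.ID),
		"input":            cty.NullVal(cty.DynamicPseudoType),
		"output":           cty.NullVal(cty.DynamicPseudoType),
		"triggers_replace": cty.NullVal(cty.DynamicPseudoType),
	})
	return providers.ImportResourceStateResponse{
		ImportedResources: []providers.ImportedResource{
			{TypeName: req.TypeName, State: state},
		},
	}
}
```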
* Add metadata functions command skeleton
* Export functions as JSON via cli command
* Add metadata command
* Add tests to jsonfunction package
* WIP: Add metadata functions test
* Change return_type & type in JSON to json.RawMessage
This enables easier deserialisation of types when parsing the JSON.
* Skip is_nullable when false
* Update cli docs with metadata command
* Use tfdiags to report function marshal errors
* Ignore map, list and type functions
* Test Marshal function with diags
* Test metadata functions command output
* Simplify type marshaling by using cty.Type
* Add static function signatures for can and try
* Update internal/command/jsonfunction/function_test.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
---------
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
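A minimal sketch of the JSON shape implied by the bullets above (field names are illustrative):
```
import "encoding/json"

// FunctionSignature sketches the JSON emitted per function by the
// metadata functions command.
type FunctionSignature struct {
	Description string          `json:"description,omitempty"`
	ReturnType  json.RawMessage `json:"return_type"`
	Parameters  []*Parameter    `json:"parameters,omitempty"`
}

type Parameter struct {
	Name        string          `json:"name"`
	Description string          `json:"description,omitempty"`
	IsNullable  bool            `json:"is_nullable,omitempty"` // skipped when false
	Type        json.RawMessage `json:"type"`
}
```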
* go get github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/sts/v20180813@v1.0.588
* feat: support assume_role for COS backend
* update go.mod and go.sum
* change secret_id and secret_key from required to optional
* update cos doc
* update logic by comments
* rm sensitive info in log
Resource instances with no current object in state should not have
orphan nodes added to the graph, as deposed objects are handled
separately. This was previously handled correctly for the non-expanded
case, but expanded resources were missing the appropriate check for a
current object.
Also update the comment in the non-expanded case to hopefully clarify
that we're checking for the presence of a current object, not the
absence of any deposed objects. An instance may have both a current
object and zero or more deposed objects in some circumstances, and if
so, we still want an orphan node to be added if the instance is not in
configuration.
* Implementation of structured logging.
These are the changes that enable the cloud backend to consume
structured logs and make use of the new plan renderer. This will enable
CLI-driven runs to view the structured output in the Terraform Cloud UI.
* Cloud structured logging unit tests
* Remove deferred logs logic, fix minor issues
Color formatting fixes, log type stop lists, default behavior for logs
that are unknown
* Use service disco path in redacted plan url