Audit the graph builder to remove unused transformers (planning does not
need to close provisioners, for example) and re-order them. While
many of the transformations are commutative, using the same order
ensures the same behavior between operations when the commutative
property is lost or changed.
Outputs were not being evaluated during import, because they were not
added to the walk filter.
Remove any unnecessary walk filters from all the Execute nodes.
We do not currently need to evaluate module variables in order to
import a resource.
This will likely change once we can select the import provider
automatically, and have a more dynamic method for dispatching providers
to module instances. In the meantime we can avoid the evaluation and
prevent a certain class of import errors.
The change was passed into the provisioner node because the normal
NodeApplyableResourceInstance overwrites the prior state with the new
state. This however doesn't matter here, because the resource destroy
node does not do this. Also, even if the updated state were to be used
for some reason with a create provisioner, it would be the correct state
to use at that point.
This new-ish package ended up under "helper" during the 0.12 cycle for
want of some other place to put it, but in retrospect that was an odd
choice because the "helper/" tree is otherwise a bunch of legacy code from
when the SDK lived in this repository.
Here we move it over into the "internal" directory just to distance it
from the guidance of not using "helper/" packages in new projects;
didyoumean is a package we actively use as part of error message hints.
Remove marks for object compatibility tests to allow apply
to continue. Add a block to the test provider for use
in testing, and extend the sensitivity apply test to include a block.
The test for this behavior did not work, because the old mock diff
function does not work correctly. Write a PlanResourceChange function to
return a correct plan.
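As a rough sketch (the mock-provider hook and field names here are assumptions), a trivially correct plan function just accepts the proposed new state computed by core:

p.PlanResourceChangeFn = func(req providers.PlanResourceChangeRequest) (resp providers.PlanResourceChangeResponse) {
    // a correct default plan simply echoes the proposed new state back,
    // carrying the prior private data through unchanged
    resp.PlannedState = req.ProposedNewState
    resp.PlannedPrivate = req.PriorPrivate
    return resp
}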
Allow the evaluation of resources pending deletion only during a full
destroy. With this change we can ensure deposed instances are not
evaluated under normal circumstances, but can be referenced when needed.
This also allows us to remove the fixup transformer that added
connections so temporary values would evaluate in the correct order when
referencing destroy nodes.
In the majority of cases, we do not want to evaluate resources that are
pending deletion, since configuration references can only refer to
resources that are intended to be managed by the configuration. An
exception to that rule is when Terraform is performing a full `destroy`
operation, and providers need to evaluate existing resources for their
configuration.
The loading of the initial instance state was inadvertently skipped when
-refresh=false, causing all resources to appear to be missing from the
state during plan.
If a data source refers to an indexed managed resource, we need to
re-target that reference to the containing resource for planning. Since
data sources use the same mechanism as depends_on for managed resource
references, they can only refer to resources as a whole.
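A rough sketch of that widening, assuming the addrs package helpers:

// if a data source reference points at a particular instance, widen it to
// the containing resource so it behaves like a depends_on reference
if instAddr, ok := ref.Subject.(addrs.ResourceInstance); ok {
    ref.Subject = instAddr.ContainingResource()
}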
There are situations when a user may want to keep or exclude a map key
using `ignore_changes` which may not be listed directly in the
configuration. This didn't work previously because the transformation
always started off with the configuration, and would never encounter a
key if it was only present in the prior value.
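A minimal sketch of the new behavior, with hypothetical variable names; the traversal now seeds its key set from both the configuration and the prior value, so a key present only in the prior value can still match an ignore_changes entry:

keys := map[string]struct{}{}
for it := configMap.ElementIterator(); it.Next(); {
    k, _ := it.Element()
    keys[k.AsString()] = struct{}{}
}
for it := priorMap.ElementIterator(); it.Next(); {
    k, _ := it.Element()
    keys[k.AsString()] = struct{}{}
}
// each key in the union is then checked against the ignore_changes rules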
* Split node_resource_abstract.go into two files, putting
NodeAbstractResourceInstance methods in their own file - it was getting
large enough to be tricky for (my) human eyeballs.
* un-exported the functions that were created as part of the EvalTree()
refactor; they did not need to be public.
The original EvalReadState node is used only by `NodeAbstractResource`s,
so I've created a new method on NodeAbstractResource which does the same
thing as EvalReadState. When the EvalNode refactor project is complete,
EvalReadState will be removed entirely.
In order to ensure all the starting values agree, and since
ignore_changes is only meant to apply to the configuration, we need to
process the ignore_changes values on the config itself rather than the
proposed value.
This ensures the proposed new value and the config value seen by
providers are coordinated, and still allows us to use the rules laid out
by objchange.AssertPlanValid to compare the result to the configuration.
ignore_changes should only exclude changes to the resource arguments,
and not alter the returned value from PlanResourceChange. This would
affect very few providers, since most current providers don't actively
create their plan, and those that do should be generating computed
values here rather than modifying existing ones.
Sensitive values may not be used in outputs which are not also marked
as sensitive. This includes values nested within complex structures.
Note that sensitive values are unmarked before writing to state. This
means that sensitive values used in module outputs will have the
sensitive mark removed. At the moment, we have not implemented
sensitivity propagation from module outputs back to value marks.
This commit also reworks the tests for NodeApplyableOutput to cover
more existing behaviour, as well as this change.
A data source referencing another data source through depends_on should
not be forced to defer until apply. Data sources have no side effects,
so nothing should need to be applied. If the dependency has a
planned change due to a managed resource, the original data source will
also encounter that further down the list of dependencies.
This prevents a data source being read during plan for any reason from
causing other data sources to be deferred until apply. It does not
change the behavior noticeably in 0.14, but because 0.13 still had
separate refresh and plan phases which could read the data source, the
deferral could cause many things downstream to become unexpectedly
unknown until apply.
When a sensitive variable has a complex type, any traversal of the
variable should still result in a sensitive value. This test uses a
sensitive `map(string)` and verifies that both plan and state output
include the appropriate sensitive marks for the resource attribute.
NodePlannableResource and NodeApplyableResource EvalTree()s have been
replaced with Execute() nodes and straight-through code. Both called
EvalWriteResourceState and were the only functions to use it, so I chose
to replace EvalWriteResourceState entirely with straight-through code
(by copying the contents into the two locations).
* Add creation test and simplify in-place test
* Add deletion test
* Start adding marking from state
Start storing paths that should be marked when pulled out of state.
Implements deep copy for attr paths. This commit also includes some
comment noise from investigations, and fixes to the diff test.
* Fix apply stripping marks
* Expand diff tests
* Basic apply test
* Update comments on equality checks to clarify current understanding
* Add JSON serialization for sensitive paths
We need to serialize a slice of cty.Path values to be used to re-mark
the sensitive values of a resource instance when loading the state file.
Paths consist of a list of steps, each of which may be either getting an
attribute value by name, or indexing into a collection by string or
number.
To serialize these without building a complex parser for a compact
string form, we render a nested array of small objects, like so:
[
  [
    { type: "get_attr", value: "foo" },
    { type: "index", value: { "type": "number", "value": 2 } }
  ]
]
The above example is equivalent to a path `foo[2]`.
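A rough sketch of the marshalling in Go; local names are illustrative, and it assumes encoding/json, fmt, cty "github.com/zclconf/go-cty/cty" and ctyjson "github.com/zclconf/go-cty/cty/json":

type pathStep struct {
    Type  string          `json:"type"`
    Value json.RawMessage `json:"value"`
}

func marshalPath(path cty.Path) ([]pathStep, error) {
    steps := make([]pathStep, 0, len(path))
    for _, rawStep := range path {
        switch step := rawStep.(type) {
        case cty.GetAttrStep:
            name, err := json.Marshal(step.Name)
            if err != nil {
                return nil, err
            }
            steps = append(steps, pathStep{Type: "get_attr", Value: name})
        case cty.IndexStep:
            // index keys carry their own type (string or number), so wrap
            // them as {"type": ..., "value": ...} as in the example above
            keyType, err := ctyjson.MarshalType(step.Key.Type())
            if err != nil {
                return nil, err
            }
            keyValue, err := ctyjson.Marshal(step.Key, step.Key.Type())
            if err != nil {
                return nil, err
            }
            wrapped, err := json.Marshal(map[string]json.RawMessage{
                "type":  keyType,
                "value": keyValue,
            })
            if err != nil {
                return nil, err
            }
            steps = append(steps, pathStep{Type: "index", Value: wrapped})
        default:
            return nil, fmt.Errorf("unsupported path step %#v", rawStep)
        }
    }
    return steps, nil
}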
* Format diffs with map types
Comparisons need unmarked values to operate on,
so create unmarked values for those operations. Additionally,
change diff to cover map types
* Remove debugging printing
* Fix bug with marking non-sensitive values
When pulling a sensitive value from state,
we were previously using those marks to remark
the planned new value, but that new value
might *not* be sensitive, so let's not do that
* Fix apply test
Apply was not passing the second state
through to the third pass at apply
* Consistency in checking for length of paths vs inspecting into value
* In apply, don't mark with before paths
* AttrPaths test coverage for DeepCopy
* Revert format changes
Reverts format changes in format/diff for this
branch so those changes can be discussed on a separate PR
* Refactor name of AttrPaths to AttrSensitivePaths
* Rename AttributePaths/attributePaths for naming consistency
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
In order to save any changes to lifecycle options, we need to record
those changes during refresh, otherwise they would only be updated when
there is a change in the resource to be applied.
This evaluation was required when refresh ran in a separate walk and
managed resources were only partly handled by configuration. Now that we
have the correct dependency information available when refreshing
configured resources, we can update their state accordingly. Since
orphaned resources are not refreshed, they can retain their stored
dependencies for correct ordering.
This also prevents users from introducing cycles with nodes they can't
"see", since only orphaned nodes will retain their stored dependencies,
and the remaining nodes will be updated according to the configuration.
This adds a test for GetInputVariable, and includes
a variable with a "sensitive" attribute in configuration,
to test that the value is marked as sensitive.
Now that we don't have to handle data sources that may or may not have
been updated during a refresh phase, and the plan phase can save the
data source to the refreshed state, we can remove a lot of the logic
involved in detecting whether the data source needs to be planned or
not.
When there is no separate refresh phase, we always must attempt to read
the data source during planning, and the only conditions are based on
having a known configuration, and not having any dependencies on which
we're waiting. If the data source is read during plan, we can now save
that directly to the refreshed state, and don't need to smuggle the
value as a change to be saved during apply.
Now that the planning process generates a refreshed state, and can
handle changes between the configuration and the state which the refresh
process cannot, we can use the plan for the refresh command.
Treat any reference from a data source to a managed resource as a
dependency on the entire resource. While a resource's
attribute may be statically resolvable from the configuration, if the
user added a reference to that resource, it stands to reason that the
user intended there to be a dependency which we need to wait on.
This is an extension of an implicit behavior that existed previously in
Terraform, but was lost in the 0.13 release. That behavior was emergent
from the fact that the Refresh walk did not process the configuration
for managed resources, so any new resources in the config would be
evaluated as entirely unknown during Refresh, even if some attributes
were statically resolvable at that point.
This new implementation restores the old behavior, and extends it to
updates and replacements of the referenced resource.
This only changes the refreshed state stored in the plan file. Since the
change is stored in the plan, the applied result would be the same, but
we should still store the refreshed data in the plan file for tools that
consume the plan file.
This will also be needed in order to implement a new refresh command
based on the plan itself.
All resources use EvalWriteState to store their state, so we need a way
to switch the states when the resource is refreshing vs when it is
planning. (this will likely change once we factor out the EvalNode pattern)
Since plan uses the state as a scratch space for evaluation, we need an
entirely separate state to store the refreshed resources values during
planning. Add a RefreshState method to the EvalContext to retrieve a
state used only for refreshing resources.
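A rough sketch of the interface change; the method names follow the description above but the exact signatures are assumptions:

type EvalContext interface {
    // ...existing methods elided...

    // State returns the state used as scratch space for evaluation while
    // planning.
    State() *states.SyncState

    // RefreshState returns a separate state used only to store refreshed
    // resource values during planning.
    RefreshState() *states.SyncState
}

EvalWriteState can then be pointed at either state depending on whether the node is currently refreshing or planning.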
Test that the changes which use the sensitive variable have
corresponding path value marks. Also remove the unrelated validate
function from this test.
* remove leftover debug line
* terraform: refactor ProviderEvalTree
This PR refactors the ProviderEvalTree by folding the entire tree into
straight-through code in NodeApplyableProvider Execute (formerly
EvalTree). The EvalConfigProvider functions were refactored into
NodeApplyableProvider functions (since that was the only place they were
used).
I also removed the unused node_provider_disabled code.
* revert removal of graphNodeCloseProvider EvalTree, replace with Execute
* terraform: remove unused eval node
* add UpdateStateHook function to replace EvalUpdateStateHook
* early exit error isn't
* terraform: NodeDestroyResourceInstance refactor
This PR refactors NodeDestroyResourceInstance EvalTree() into an
Execute() node. EvalRequireState and evalWriteEmptyState were used only
by this node, and they have been removed in favor of straight code.
There are still many calls to someEvalNode.Eval() in NodeDestroyResourceInstance: I plan on refactoring the remaining EvalTree()s before tackling those Eval()s (all of which are used by many graph nodes).
I've added a new function, UpdateStateHook, that is effectively the same
as EvalUpdateStateHook. The latter will be removed when the larger
EvalNode refactor project is complete.
EvalModuleCallArguments is now a method on nodeModuleVariable, its only
caller, and the other functions have been replaced with straight-through
code (or in the case of evalVariableValidations, a standalone function).
I was unable to add tests for nodeModuleVariable.Execute, which requires
fixtures that aren't part of the MockEvalContext (a scope.evalContext is
one); it's not ideal but that function should be well covered by the
context tests so I chose to leave it as-is.
Finally, I removed the unused function hclTypeName. Deleting code is
fun!
While this was easier to spot during plan, it is also possible to
evaluate resources with 0 instances during apply.
This doesn't affect the failure when scaling CBD instances; it only
changes the fact that the inconsistent value is no longer unknown.
Remove the check for CreateBeforeDestroyOverride which can't happen in a
destroy node.
Remove the unnecessary GraphNodeAttachDestroyer interface, since we
don't use it now that plans can record the create+destroy order.
The import EvalTree()s have been replaced with Execute +
straight-through code, which let me remove eval_import_state.go.
graphNodeImportStateSub's Execute() function is still using the
old-style (not-refactored) calls to EvalRefresh and EvalWriteState;
those are being saved for a later refactor once all (or at least most)
of the EvalTree()s are gone.
I've added incredibly basic tests for both Execute() functions; they are
only enough to verify basics - the real testing happens in
context_import_test.go.
* terraform: refactor signature of EvalContext.InitProvisioner
Nothing was using the returned provisioners.Interface, so I simplified
the signature.
* terraform: remove provisioner-related EvalTree()s
The various Evals in eval_provisioner were removed from all callers and
replaced with straight-through code.
`NodeValidatableResource.EvalTree()` to `Execute()` was more involved. I
chose to leave the `EvalValidateResource` and `EvalValidateProvisioner`
Eval functions mostly as-is, changing the main function to `.Validate` to
make it clear that these are no longer eval nodes. Eventually I expect
to rename the file (perhaps to just Validate).
This also unearthed that the marking must happen
earlier in the eval_diff in order to produce a valid plan
(so that the planned marked value matches the marked config
value).
Using markedPlannedNewVal caused many test
failures with ignoreChanges, and I noted plannedNewVal
itself is modified in the eval_diff. plannedNewVal is now marked closer
to where it is needed.
There is also a test fixture update to remove interpolation warnings.
Mark sensitivity on a value. However, when the value is encoded to be sent to
the provider to produce a changeset we must remove the marks, so we unmark the
value and remark it with the saved paths afterwards.
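A minimal sketch using cty's path-aware mark helpers (the surrounding variable names are illustrative):

// strip the marks before encoding the value for the provider, remembering
// which paths carried them
unmarkedConfig, pathMarks := configVal.UnmarkDeepWithPaths()

// ... call the provider with unmarkedConfig to produce plannedNewVal ...

// reapply the saved marks to the provider's planned value
plannedNewVal = plannedNewVal.MarkWithPaths(pathMarks)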
When applying a plan, a forced CreateBeforeDestroy may not be set during
the apply walk when downstream resources are no longer present in the
graph. We still need to stick to that plan, and both the
NodeApplyableResourceInstance EvalTree and the individual Eval nodes
need to operate on that planned value.
Ensure that we always check for an existing plan when determining
CreateBeforeDestroy status. This must happen in 2 different code paths
due to the eval node pattern currently in-use. Future refactoring may be
able to unify these code-paths to make this less fragile.
If a resource is forced CreateBeforeDestroy from a dependent resource,
and that dependent has no changes, the plan is changed from
CreateThenDelete to DeleteThenCreate causing an apply error.
One of the tenets of the graph transformations is that resource destroy
nodes can only be ordered relative to other resources, and can't be
referenced directly. This was broken by the module close node which
naively connected to all module nodes, creating cycles in some cases
when edges are reversed from CreateBeforeDestroy.
This commit refactors NodeApplyableOutput and NodeDestroyableOutput into
the new Execute() pattern, collapsing the functions in eval_output.go
into one place.
I also reverted a recent decision to have Execute take a _pointer_ to a
walkOperation: I was thinking of interfaces, not constant bytes, so all
it did was cause problems.
And finally I removed eval_lang.go, which was unused.
This commit continues the overall EvalNode removal project.
Something to note: the NodeRefreshableDataResourceInstance's Execute()
function is intentionally refactored in the bare minimum,
hardly-a-refactor style, because we have another ongoing project which
aims to remove NodeRefreshable*s. It is not worth the effort at this
time. We may revisit this decision in the future.
This new option is intended to address the previous inconsistencies where
some older subcommands supported partially changing the target directory
(and Terraform would then use the new directory inconsistently), while
newer commands did not support that override at all.
Instead, now Terraform will accept a -chdir argument at the start of the
command line (before the subcommand) and will interpret it as a request
to direct all actions that would normally be taken in the current working
directory into the target directory instead. This is similar to options
offered by some other similar tools, such as the -C option in "make".
The new option is only accepted at the start of the command line (before
the subcommand) as a way to reflect that it is a global command (not
specific to a particular subcommand) and that it takes effect _before_
executing the subcommand. This also means it'll be forced to appear before
any other command-specific arguments that take file paths, which hopefully
communicates that those other arguments are interpreted relative to the
overridden path.
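For example, with a hypothetical directory layout:

terraform -chdir=environments/production plan

directs Terraform to read the configuration, the .terraform working directory,
and any default state file from environments/production instead of the current
directory.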
As a measure of pragmatism for existing uses, the path.cwd object in
the Terraform language will continue to return the _original_ working
directory (ignoring -chdir), in case that is important in some exceptional
workflows. The path.root object gives the root module directory, which
will always match the overridden working directory unless the user
simultaneously uses one of the legacy directory override arguments, which
is not a pattern we intend to support in the long run.
As a first step down the deprecation path, this commit adjusts the
documentation to de-emphasize the inconsistent old command line arguments,
including specific guidance on what to use instead for the main three
workflow commands, but all of those options remain supported in the same
way as they were before. In a later commit we'll make those arguments
produce a visible deprecation warning in Terraform's output, and then
in an even later commit we'll remove them entirely so that -chdir is the
single supported way to run Terraform from a directory other than the
one containing the root module configuration.
This introduces a new GraphNode, GraphNodeExecutable, which will
gradually replace GraphNodeEvalable as part of the overall removal of
EvalTree()s. Terraform's Graph.walk function will now check if a node is
GraphNodeExecutable and run walker.Execute instead of running through
the EvalTree() and Eval().
For the time being, terraform will panic if a node implements both
GraphNodeExecutable and GraphNodeEvalable. This will be removed when
we've finished removing all GraphNodeEvalable implementations.
The new GraphWalker function, Execute(), is meant to replace both
EnterEvalTree and ExitEvalTree, and wraps the call to the
GraphNodeExecutable's Execute function.
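A rough sketch of the new interface and the dispatch in Graph.walk; the signatures here are approximations:

type GraphNodeExecutable interface {
    Execute(ctx EvalContext, op walkOperation) error
}

// in Graph.walk, executable nodes bypass the EvalTree path entirely
if en, ok := v.(GraphNodeExecutable); ok {
    if _, alsoEvalable := v.(GraphNodeEvalable); alsoEvalable {
        panic(fmt.Sprintf("%T implements both GraphNodeExecutable and GraphNodeEvalable", v))
    }
    return walker.Execute(vertexCtx, en)
}
// otherwise fall through to the existing GraphNodeEvalable handling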
We've not been using HIL in the main codepaths since Terraform 0.12, but
some references to it (and some supporting functionality in Terraform)
stuck around due to interactions with types we'd kept around to support
legacy shims.
However, removing the configs.RawConfig field from
terraform.ResourceConfig disconnects that subtree of dependencies from
everything else, allowing us to remove it. This is safe because the only
remaining uses of terraform.ResourceConfig are shims from values that
were already evaluated using the HCL 2 API, and thus they never need
the "just in time" HIL evaluation that ResourceConfig.interpolateForce
used to do.
We also had some HIL references in configs/hcl2shim that were previously
in support of the "terraform 0.12upgrade" command, but the implementation
of that command is now removed.
There was one remaining reference to HIL in a now-unused function in the
helper/schema package, which I removed entirely here.
This then allows us to remove the HIL dependency entirely, and also to
clean up some remaining old remnants of the legacy "config" package that
we'd recently moved into the "configs" package pending further pruning.
This pull reverts a recent change to backend/local which created two contexts, one with and one without state. Instead I have removed the state entirely from the validate graph (by explicitly passing a states.NewState() to the validate graph builder).
This change caused a test failure, which (ty so much for the help) @jbardin discovered was inaccurate all along: the test's call to `Validate()` was actually what was removing the output from state. The new expected test output matches terraform's actual behavior on the command line: if you use -target to destroy a resource, an output that references only that resource is *not* removed from state even though that test would lead you to believe it did.
This includes two tests to cover the expected behavior:
TestPlan_varsUnset has been updated so it will panic if it gets more than one request to input a variable
TestPlan_providerArgumentUnset covers #26035
Fixes #26035, #26027
When calling createEmptyBlocks, we only intend to process legacy SDK
blocks (NestingList or NestingSet), but the function also needs to pass
through NestingSingle block types untouched. To do so we must only look
up the container's element type after we've established that the block
is backed by a container.
When we need to select a qualified provider address based on an implied
provider name, we have a special case that the name "terraform" maps to
terraform.io/builtin/terraform instead of
registry.terraform.io/hashicorp/terraform as would be the case for other
prefixes.
However, in order for that to work properly we need to use
addrs.ImpliedProviderForUnqualifiedType instead of
addrs.NewDefaultProvider, because the latter just unconditionally always
produces a "default" provider configuration (belonging to the "hashicorp"
namespace on the public registry).
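In short (a small illustrative sketch):

// always produces an address in the default "hashicorp" namespace
addrs.NewDefaultProvider("terraform")
// => registry.terraform.io/hashicorp/terraform

// knows about the special case for the built-in provider
addrs.ImpliedProviderForUnqualifiedType("terraform")
// => terraform.io/builtin/terraform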
There was a missing outer loop for catching inverse module dependencies
when pruning nodes for destroy. Since the need to "register" the fully
destroyed modules no longer exists, the extra complication of pruning
the modules as a whole from the leaves inward is no longer required.
While it is technically still a valid optimization to reduce iterations,
the extra comparisons required to backtrack for transitive dependencies
don't amount to much, and having a single nested loop is much easier to
maintain.
If a module has multiple terraform.required_version constraints, any
failures would point at the last constraint in the error diagnostics. If
an earlier constraint was the actual problem, this leads to confusing
errors like this:
    Error: Unsupported Terraform Core version

      on main.tf line 6, in terraform:
       6: required_version = ">= 0.13.0"

    This configuration does not support Terraform version 0.13.0.
The error was due to storing the declaration range of the constraint as
a pointer to the contents of a loop variable, which was overwritten in
later iterations of the loop. Instead we now use HCL's handy Ptr() method
to take a pointer to a copy of the range struct.
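A minimal sketch of the bug and the fix, with hypothetical surrounding names:

var failedRange *hcl.Range
for _, constraint := range module.CoreVersionConstraints {
    if !constraint.Required.Check(currentVersion) {
        // buggy: this aliases the loop variable, which is overwritten on
        // the next iteration, so the diagnostic ends up pointing at the
        // last constraint in the file
        failedRange = &constraint.DeclRange

        // fixed: Ptr() copies the range value and returns a pointer to
        // the copy
        failedRange = constraint.DeclRange.Ptr()
    }
}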
Include the import walk in the list of operations for which we create an
EvalModuleCallArgument node. This causes module call arguments to be
evaluated even if the module variables have defaults, ensuring that
invalid default values (such as the common "{}" for variables thought of
as maps) do not cause failures specific to import.
This fixes a bug where a child module evaluates an input variable in its
locals block, assuming that it is a nested object structure. The bug
report includes a default value of "{}", which is overridden by a root
variable value. Without the eval node added in this commit, the default
value is used and the local evaluation errors.
In order to determine if we need to re-read a data source during plan,
we need to compare the newly evaluated configuration with the stored
state. To do that we create a ProposedNewVal, which if there are no
changes, should match the existing state exactly.
A problem arises if the remote data source contains any blocks, and they
are not set in the configuration. Terraform always decodes configuration
blocks as empty containers, however the legacy SDK cannot correctly
handle empty blocks and may return a null block which is saved to the
state. In order to correctly make the comparison for planning, we need
to reify those null blocks as empty containers in the cty value.
The createEmptyBlocks helper converts any null NestingList or NestingSet
blocks to empty list or set cty values. We only need to be concerned
with List and Set, because those are the only types that can be defined
with the legacy SDK. In hindsight these could have been normalized in
the legacy SDK shims had this problem been uncovered earlier, but for the
sake of compatibility we will now normalize these in core.
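A minimal sketch of that normalization, assuming the configschema and cty packages; the signature is illustrative and the real helper also needs to recurse into nested blocks:

func createEmptyBlocks(schema *configschema.Block, val cty.Value) cty.Value {
    if val.IsNull() || !val.IsKnown() {
        return val
    }
    attrs := val.AsValueMap()
    for name, blockS := range schema.BlockTypes {
        switch blockS.Nesting {
        case configschema.NestingList, configschema.NestingSet:
            // only look up the element type once we know the block is
            // backed by a container; NestingSingle passes through untouched
            ety := blockS.Block.ImpliedType()
            if attrs[name].IsNull() {
                if blockS.Nesting == configschema.NestingList {
                    attrs[name] = cty.ListValEmpty(ety)
                } else {
                    attrs[name] = cty.SetValEmpty(ety)
                }
            }
        }
    }
    return cty.ObjectVal(attrs)
}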
When working with a ConfigResource, the generalization of a
ModuleInstance to a Module was inadvertently dropped, and there was no
test coverage for that type of target.
Ensure we can target a specific module instance alone.
Before expansion happens, we only have expansion resource nodes that
know their ConfigResource address. In order to properly compare these to
targets within a module instance, we need to generalize the target to
also be a ConfigResource.
We can also remove the IgnoreIndices field from the transformer, since
we have addresses that are properly scoped and can compare them in the
correct context.
While removal of attributes can be handled by providers through the
UpgradeResourceState call, data sources may need to be evaluated before
reading, and they have no upgrade path in the provider protocol.
Strip out extra attributes during state decoding when they are no longer
present in the schema, and there is no schema upgrade pending.
When looking up a resource during plan, we need to return an empty
container type when we're certain there are going to be no instances.
It's now more common to reference resources in a context that needs to
be known during plan (e.g. for_each), and always returning a DynamicVal
here would block plan from succeeding.
This is the known case broken by the changes to allow resources pending
destruction to be evaluated from state. When a resource references
another that is create_before_destroy, and that resource is being scaled
in, the first resource will not be updated correctly.
Since we have to allow destroy nodes to be evaluated for providers
during a full destroy, this is adding a transformer to connect temporary
values to any destroy versions of their references when possible. The
ensures that the destroy happens before evaluation, even when there
isn't a full create-then-destroy set of instances.
The cases where the connection can't be made are when the temporary
value has a provider descendant, which means it must evaluate early in
the case of a full destroy. This means the value may contain incorrect
data when referencing resources that are create_before_destroy, or being
scaled-in via count or for_each. That will need to be addressed later by
reevaluating how we handle the full destroy case in terraform.
During a full destroy, providers may reference resources that are going
to be destroyed as well. We currently cannot change this behavior, so we
need to allow the evaluation and try to prevent it from leaking into as
many other places as possible. Another transformer to try and protect
the values in locals, variables and outputs will be added to enforce
destroy ordering when possible.
Outputs and locals cannot refer to destroy nodes. Since those node
types do not have different ordering for create and destroy operations,
connecting them directly to destroy nodes can cause cycles.
The destroy graph builder test requires state in order to be correct,
which it didn't have. The other test hits the edge case where a planned
destroy cannot remove outputs, because the apply phase does not know it
was created from a destroy.
Since data source destruction is only state removal, and other resources
cannot depend on them creating any physical resources, the destroy
dependencies were not tracked in the state. It turns out that there is a
special case which requires this: running terraform destroy where the
provider depends on a data source. In that case the resources using that
provider need to record their indirect dependence on the data source, so
that they can be deleted before the data source is removed from the
state.
Our reference transformer analyses and our destroy transformer analyses
are built around static (not-yet-expanded) addresses so that they can
correctly handle mixtures of expanded and not-yet-expanded objects in the
same graph.
However, this characteristic also makes them unnecessarily conservative
in their handling of references between resources within different
instances of the same module: we know they can never interact with each
other in practice because the dependencies for all instances of a module
are the same and so one instance cannot possibly depend on another.
As a compromise then, here we introduce a new helper function that can
recognize when a proposed edge is between two resource instances that
belong to different instances of the same module, and thus allow us to
skip actually creating those edges even though our imprecise analyses
believe them to be needed.
As well as significantly reducing the number of edges in situations where
multi-instance resources appear inside multi-instance modules, this also
fixes some potential cycles in situations where a single plan includes
both destroying an instance of a module and creating a new instance of the
same module: the dependencies between the objects in the instance being
destroyed and the objects in the instance being created can, if allowed
to connect, cause Terraform to believe that the create and the destroy
both depend on one another even though there is no need for that to be
true in practice.
This involves a very specialized helper function to encode the situation
where this exception applies. This function has an ugly name to reflect
how specialized it is; it's not intended to be of any use outside of these
three situations in particular.
The AbstractResourceInstance type was storing the entire Resource from
the state, when it only needs the actual instance state. This would
cause resources to consume memory on the order of n^2, where n is the
number of instances of the resource.
Rather than attaching the entire resource state, which includes copying
each individual instance, only attach the ResourceInstance state, and
extract out the provider address from the Resource.
The pruneUnusedNodes transformer was skipping root level locals and
variables, causing them to be left in the graph during a full destroy.
Use the return value from temporaryValue to indicate if the node is
truly temporary or not, rather than keeping the entire root module.
If we're adding a node to remove a root output from the state, the
output itself does not need to be re-evaluated. The exception for root
outputs caused them to be missed when we refactored resource destruction
to only use the existing state.
Have the output reference the expansion of a resource (via the whole
resource object), so that we can be sure we don't attempt to evaluate
that expansion during destroy.
When configuring providers, it is normally valid to refer to any value
which is known at apply time. This can include resource instance
attributes, variables, locals, and so on.
The import command has a simpler graph evaluation, which means that
many of these values are unknown. We previously prevented this from
happening by restricting provider configuration references to input
variables (#22862), but this was more restrictive than is necessary.
This commit changes how we verify provider configuration for import.
We no longer inspect the configuration references during graph building,
because this is too early to determine if these values will become known
or not.
Instead, when the provider is configured during evaluation, we
check if the configuration value is wholly known. If not, we fail with a
diagnostic error.
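A minimal sketch of the check; the diagnostic wording and surrounding names are illustrative:

// configVal is the fully evaluated provider configuration object
if !configVal.IsWhollyKnown() {
    diags = diags.Append(tfdiags.Sourceless(
        tfdiags.Error,
        "Invalid provider configuration",
        "The provider configuration depends on values that cannot be determined until apply, so it cannot be used during import.",
    ))
    return diags
}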
Includes a test case which verifies that providers can now be configured
using locals as well as vars, and an updated test case which verifies
that providers cannot be configured with references to resources.
The error message when evaluateForEachExpression encountered an unknown
value of cty.DynamicPseudoType was not clear:
The given "for_each" argument value is unsuitable: "for_each" supports maps
and sets of strings, but you have provided a set containing type dynamic.
By moving the check for unknowns before the check for set element types,
the following error is returned instead:
"The "for_each" value depends on resource attributes that cannot be
determined until apply (...)"
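A rough sketch of the reordering; the helper names for building the diagnostics are hypothetical:

// check for unknowns first, so an unknown set never reaches the element
// type validation below
if !forEachVal.IsKnown() {
    return nil, diags.Append(unknownForEachDiag(expr))
}
if ty := forEachVal.Type(); ty.IsSetType() && ty.ElementType() != cty.String {
    return nil, diags.Append(invalidForEachSetDiag(expr))
}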
Orphaned instances that are create_before_destroy will still be in the
state when their references are evaluated. We need to skip instances
that are planned to be destroyed altogether, as they can't be part of an
evaluation.
When the DestroyEdgeTransformer was updated to handle stored
dependencies, the addrs.ConfigResource type did not yet exist. The lookup
map keys in the transformer needed to be updated to remove module
indexes.
This is for consistency with other commands which use prompts, all of
which require "yes" rather than "y" to confirm.
We also migrate the login command to use UIInput, which now supports
securely asking for passwords or secrets via the speakeasy library.
This simplifies the initial targeting logic, and removes the complex
algorithm for finding descendants that result in output changes, which
hid bugs that caused failures with modules.
The targeting is handled in 2 phases. First we find all individual
resource nodes that are targeted, then add all their dependencies to the
set of targets. This in essence is all we need for targeting, and is
straightforward to understand.
The next phase is to add any root module outputs that can be solely
derived from the set of targeted resources. There is currently no way to
target outputs themselves, so this is how we can allow these to be
updated as part of a target.
Rather than attempting to backtrack through the graph to find candidate
outputs, requiring each node on the chain to properly advertise if it
could be traversed, then backtracking again to determine if the
candidate is valid (which often got "off course"), we can start directly
from the outputs themselves. The algorithm here is simpler: if all the
root output's resource dependencies are targeted, add that output and
its dependencies to the targeted set.
We previously intentionally removed support for the allow-missing-config
option to terraform import, requiring that all imported resources have
matching config. See #24412.
However, the option was not removed from the import command, and it is
widely used. This commit reintroduces support for importing with a
missing configuration by falling back to implying the provider FQN based
on the resource type.
If a data source is storing a value that doesn't comply precisely with
the schema, it will now show up as a perpetual diff during plan.
Since we can easily detect if there is no resulting change from the
stored value, rather than presenting a planned read each time, we can
change the plan to a NoOp and log the incongruity as a warning.
If depends_on is allowed for outputs, we should validate that the
expressions are valid. Since outputs are always evaluated, and
validation is just done by this evaluation, we can check the
depends_on validation during evaluation too.
There aren't going to be any nodes specifically for module call
instances during plan, so we have to switch the reference subject to the
general module call.
The output destroy node only needs to connect to each of the output's
up-edges in order to be connected transitively to all of the outputs
dependencies. In large, highly-connected graphs, this may save
considerable time for each output.
The TargetsTransformer ignored resource indices before expansion could
happen, but was not handling module indices. Ensure that we collapse all
pre-expansion addresses to "configuration" addresses, with no module or
resource keys.
The recursive call should only return immediately on error.
The switch statement to find the current path should not use
ReferenceOutside, as we are getting the path for configuration, not for
references. This case would not have been taken currently, since all
GraphNodeReferenceOutside are also GraphNodeModulePath.