We need to avoid re-writing the state for every NoOp apply. We may
still be evaluating the instance to account for any side-effects in the
condition checks; however, the state of the instance has not changed.
Re-writing the state is not a cheap operation: it may require encoding a
fairly large instance state and re-serializing the entire state blob, so
it is best avoided if possible.
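As a rough sketch of the idea (using hypothetical stand-in types rather
than Terraform's internal plans and states packages), the apply step can
still run its evaluation for side effects but return the prior object
untouched when the planned action is a no-op:

    package main

    import "fmt"

    // Hypothetical stand-ins for Terraform's internal change and state types.
    type Action int

    const (
        NoOp Action = iota
        Update
    )

    type instanceState struct{ attrs map[string]string }

    func applyInstance(action Action, prior *instanceState) *instanceState {
        // Condition checks may still be evaluated here for their side effects.
        evaluateConditions(prior)

        if action == NoOp {
            // The object is unchanged, so skip re-encoding it and
            // re-serializing the state blob; keep the prior object as-is.
            return prior
        }

        // ...otherwise build and persist the updated object as usual.
        return &instanceState{attrs: map[string]string{"updated": "true"}}
    }

    func evaluateConditions(s *instanceState) { /* pre/postcondition checks */ }

    func main() {
        prior := &instanceState{attrs: map[string]string{"id": "abc"}}
        fmt.Println(applyInstance(NoOp, prior) == prior) // true: state not rewritten
    }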
If there are no changes, then there is no reason to create an apply
graph since all objects are known. We do, however, need the walk to match
the expected state structure. This is probably only cleanup of empty
nested modules and outputs, but some investigation is needed before
making the full change.
For now we can store the checks from the plan directly into the new
state, since the apply walk overwrote the results we had already.
NoOp changes should not participate in a destroy sequence, but because
they are included as normal update nodes the usual connections were
still being made.
Refreshing for a destroy should use the refresh-only plan to avoid
planning new objects or evaluating conditions. This should also be
skipped if there is no state, since there would be nothing to refresh.
IsFullDestroy was a workaround during apply to detect when the change
set was created by a destroy plan. This no longer works correctly, and
we need to fall back to the UIMode set in the plan.
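For illustration, here is a minimal, hypothetical sketch of deciding
whether we are applying a destroy plan from a mode recorded on the plan
itself rather than by inspecting the change set; the type and constant
names are placeholders, not Terraform's internal API:

    package main

    import "fmt"

    // Placeholder for a plan mode recorded when the plan is created.
    type planMode int

    const (
        normalMode planMode = iota
        destroyMode
        refreshOnlyMode
    )

    type plan struct {
        uiMode planMode
        // changes elided
    }

    // isDestroyPlan answers the question from the plan's recorded mode,
    // rather than by heuristically inspecting the change set (the old
    // IsFullDestroy-style approach, which breaks once no-op changes are kept).
    func isDestroyPlan(p *plan) bool {
        return p.uiMode == destroyMode
    }

    func main() {
        fmt.Println(isDestroyPlan(&plan{uiMode: destroyMode})) // true
    }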
Not all root output instances were going through proper expansion when
destroy operations were involved, leading to cases where they would be
evaluated even though the expected result was only to remove them from
the state.
Normally destroy nodes stand alone in the graph, and do not produce
references to other nodes. Because root output nodes were replaced by
expansion nodes, these were being connected via normal references, even
in the case where we were working with a destroy graph.
Module outputs may need to be evaluated during destroy, since they may
be used by providers. The final state, however, is that all objects are
destroyed, so preconditions should not be evaluated.
The graph walking mechanism is specified as requiring a graph with a single
root, which in practice means there's exactly one node in the graph
which doesn't have any dependencies.
However, we previously weren't verifying that this invariant holds for
subgraphs returned from DynamicExpand. It was working anyway, but it's not
ideal to be relying on a behavior that isn't guaranteed by our underlying
infrastructure.
We also previously had the RootTransformer being a bit clever and trying
to avoid adding a new node if there is already only a single node with
no dependencies. That special case isn't particularly valuable since
there's no harm in turning a one-node graph into a two-node graph with
an explicit separate root node, and doing that allows us to assume that
the root node is always present and is always exactly terraform.rootNode.
Many existing DynamicExpand implementations were not producing valid
graphs and were previously getting away with it. All of them now produce
properly-rooted graphs that should pass validation, and we will guarantee
that with an explicit check of the DynamicExpand return value before we
try to walk that subgraph. For good measure we also verify that the root
node is exactly terraform.rootNode, even though that isn't strictly
required by our graph walker, just to help us catch potential future bugs
where a DynamicExpand implementation neglects to add our singleton root
node.
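A toy illustration of the invariant being validated (not Terraform's dag
package; the graph representation here is hypothetical) might look like
this:

    package main

    import "fmt"

    // A tiny stand-in for a dependency graph: each node maps to the set of
    // nodes it depends on.
    type graph map[string][]string

    // validateSingleRoot checks that exactly one node has no dependencies,
    // and that this node is the expected singleton root.
    func validateSingleRoot(g graph, wantRoot string) error {
        var roots []string
        for node, deps := range g {
            if len(deps) == 0 {
                roots = append(roots, node)
            }
        }
        if len(roots) != 1 {
            return fmt.Errorf("graph has %d candidate roots, want exactly 1", len(roots))
        }
        if roots[0] != wantRoot {
            return fmt.Errorf("graph root is %q, want %q", roots[0], wantRoot)
        }
        return nil
    }

    func main() {
        g := graph{
            "root":       nil,
            "resource.a": {"root"},
            "resource.b": {"root", "resource.a"},
        }
        fmt.Println(validateSingleRoot(g, "root")) // <nil>
    }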
An orphaned resource which plans as a NoOp change will have no config.
This is not an error, but there is nothing to do since there are also
no checks to validate. We still leave the change in the plan to keep the
plan as complete as possible, noting all possible changes.
Preventing the node from being added to the graph is awkward, because
the config is attached separately from the diff transformer. This should
not pose any problems however, because there is no longer any state or
config linking the instance to any dependencies in the graph.
Because import does not yet plan new instances as part of the import
process, we can end up evaluating references to resources which have no
state at all. The fallback for this situation can be improved to produce
slightly better values during import. The count and for_each values were
technically incorrect, since the length is not known to be zero, and the
single instance does have a concrete type which we can return.
When we checked for cycles with destroy edges around providers, it was
only for providers of a different type, but one can do the same thing
with the same provider under different local aliases. Check to see if
the provider also contains an alias, or is defined absolutely in some
other way. The absolute accuracy here isn't critical, since in most
cases these edges are not required for correct results, but finding a
correct and consistent method for determining when these edges are
needed is going to take more research.
There was also an oversight fixed here where the basic
creator->destroyer edges were added _after_ the cycle checks, limiting
their utility. The ordering of the additions was swapped to make sure
all cycles are noticed.
If a previously deposed object is deleted outside of Terraform, the
next plan will result in a NoOp change for the deposed object. Fix the
check to verify that the deposed object has an acceptable action rather
than use the `update` flag.
Legacy providers expect Terraform to be able to clean up invalid plans
and computed attributes. Add a special case for the LegacyTypeSystem to
revert `ignore_changes = all` to the complete prior state.
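A hedged sketch of that special case, with simplified stand-in types
rather than the real plan-time values:

    package main

    import "fmt"

    // Hypothetical, simplified stand-in for a plan-time object value.
    type objectVal map[string]interface{}

    // planNewValue sketches the special case described above: when a
    // legacy-SDK provider returns a plan for a resource with
    // ignore_changes = all, fall back to the complete prior state rather
    // than trusting the provider's planned value.
    func planNewValue(legacyTypeSystem, ignoreAll bool, prior, plannedByProvider objectVal) objectVal {
        if legacyTypeSystem && ignoreAll {
            // Legacy providers may return invalid plans or re-computed
            // attributes here, so revert to the prior state wholesale.
            return prior
        }
        return plannedByProvider
    }

    func main() {
        prior := objectVal{"id": "abc", "tag": "keep-me"}
        planned := objectVal{"id": "abc", "tag": "provider-changed"}
        fmt.Println(planNewValue(true, true, prior, planned)["tag"]) // keep-me
    }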
Once again we're caught out by sharing the same output value node type
between the plan phase and the apply phase. To allow for some slight
variation between plan and apply without drastic refactoring here we just
add a new flag to nodeExpandOutput which is true only during the planning
phase.
This then allows us to register the checkable objects only during the
planning phase and not incorrectly re-register them during the apply phase.
It's incorrect to re-register during apply because we carry over the
planned checkable objects from the plan phase into the apply phase so we
can guarantee that the final state will have all of the same checkable
objects that the plan did.
This avoids a panic during the apply phase from the incorrect duplicate
registration.
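A simplified sketch of the approach, with hypothetical stand-ins for the
output node and the check registry (only the planning-phase flag and the
guarded registration are the point here):

    package main

    import "fmt"

    // checkRegistry is a stand-in for the object that tracks checkable objects.
    type checkRegistry struct{ registered map[string]bool }

    func (r *checkRegistry) ReportCheckableObjects(addr string) {
        if r.registered[addr] {
            // Registering the same checkable object twice is the bug being avoided.
            panic("duplicate checkable object registration: " + addr)
        }
        r.registered[addr] = true
    }

    // expandOutputNode is a simplified stand-in for the shared output node
    // type, with a flag that is set only when building the plan graph.
    type expandOutputNode struct {
        addr     string
        planning bool
    }

    func (n *expandOutputNode) execute(reg *checkRegistry) {
        if n.planning {
            // Only the plan phase registers checkable objects; the apply
            // phase carries the plan's results forward instead.
            reg.ReportCheckableObjects(n.addr)
        }
        // ...evaluate the output value as usual in both phases.
    }

    func main() {
        reg := &checkRegistry{registered: map[string]bool{}}
        (&expandOutputNode{addr: "output.example", planning: true}).execute(reg)  // plan
        (&expandOutputNode{addr: "output.example", planning: false}).execute(reg) // apply: no panic
        fmt.Println(len(reg.registered)) // 1
    }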
We use a non-pointer value for this particular node, which means that
there can never be two root nodes in the same graph: the graph
implementation will just coalesce them together when a second one is added.
Our resource expansion code is relying on that coalescing so that it can
subsume together multiple graphs for different module instances into a
single mega-graph with all instances across all module instances, with
any root nodes coalescing together to produce a single root.
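The coalescing behavior itself is just a consequence of how Go treats
comparable values; a small standalone demonstration (using a toy vertex
set, not the real graph implementation):

    package main

    import "fmt"

    type rootNode struct{}           // value type: all instances are equal
    type otherNode struct{ id int }

    func main() {
        // A toy vertex set keyed by the node value itself, similar in
        // spirit to how a graph implementation can dedupe vertices on Add.
        vertices := map[interface{}]bool{}

        // Adding the value-typed root twice coalesces into a single vertex...
        vertices[rootNode{}] = true
        vertices[rootNode{}] = true

        // ...whereas two pointer-typed nodes remain distinct vertices.
        vertices[&otherNode{id: 1}] = true
        vertices[&otherNode{id: 1}] = true

        fmt.Println(len(vertices)) // 3: one rootNode, two *otherNode
    }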
This also updates one of the context tests that exercises resource
expansion so that it will generate multiple resource instance nodes per
module and thus potentially have multiple roots to coalesce together.
However, we aren't currently explicitly validating the return values from
DynamicExpand and so this test doesn't actually fail if the coalescing
doesn't happen. We may choose to validate the DynamicExpand result in a
later commit in order to make it more obvious if future modifications fail
to uphold this invariant.
We previously did two levels of DynamicExpand to go from ConfigResource to
AbsResource and then from AbsResource to AbsResourceInstance.
We'll now do the full expansion from ConfigResource to AbsResourceInstance
in a single DynamicExpand step inside nodeExpandPlannableResource.
The new approach is essentially functionally equivalent to the old except
that it fixes a bug in the previous implementation: we will now call
checkState.ReportCheckableObjects only once for the entire set of
instances for a particular resource, which is what the checkable objects
infrastructure expects so that it can always mention all of the checkable
objects in the check report even if we bail out partway through due to
a downstream error.
This is essentially the same code but now turned into additional methods
on nodeExpandPlannableResource instead of having the extra graph node
type. This has the further advantage that this is now straight-through
code with standard control flow, instead of the unusual inversion of
control we were doing before bouncing in and out of different Execute and
DynamicExpand implementations to get this done.
We were previously _trying_ to handle diagnostics here but were not quite
doing it right because we were testing whether the resulting error was
nil rather than appending it to the diagnostics and then seeing if the
result has errors.
The difference here is important because it allows DynamicExpand to return
warnings without associated errors when needed. Previously the graph
walker would treat a warnings-only result as if it were an error.
Ideally we'd change DynamicExpand to return diagnostics directly, but we
previously decided against that because there were so many implementors
to update, and my intent for this change is to be surgical in the update
so we minimize risk of backporting the change into patch releases.
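The corrected pattern, sketched with a minimal stand-in diagnostics type
rather than the real tfdiags package:

    package main

    import "fmt"

    // A minimal diagnostics type, just to illustrate the pattern;
    // Terraform's real tfdiags package is richer than this.
    type severity int

    const (
        warning severity = iota
        errorSev
    )

    type diagnostic struct {
        sev severity
        msg string
    }

    type diagnostics []diagnostic

    func (d diagnostics) HasErrors() bool {
        for _, diag := range d {
            if diag.sev == errorSev {
                return true
            }
        }
        return false
    }

    // walkSubgraph sketches the corrected handling: append whatever the
    // expansion produced and only stop if the combined result actually
    // contains errors.
    func walkSubgraph(expandDiags diagnostics) (diagnostics, bool) {
        var diags diagnostics
        diags = append(diags, expandDiags...)
        if diags.HasErrors() {
            return diags, false // genuine failure
        }
        return diags, true // warnings-only results continue the walk
    }

    func main() {
        diags, ok := walkSubgraph(diagnostics{{sev: warning, msg: "deprecated argument"}})
        fmt.Println(ok, len(diags)) // true 1: a warning no longer aborts the walk
    }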
When adding destroy edges between resources from different providers,
and a provider itself depends on the other provider's resources, we can
get cycles in the final dependency graph.
The problem is a little deeper than simply not connecting these nodes,
since the edges are still needed when doing a full destroy operation.
For now we can get by assuming the edges are required, and reverting
them only if they result in a cycle. This works because destroy edges
are the last edges added to managed resources during graph building.
This was rarely a problem before v1.3, because noop nodes were not added
to the apply graph, and unused values were aggressively pruned. In v1.3
however all nodes are kept in the graph so that postcondition blocks are
always evaluated during apply, increasing the chances of the cycles
appearing.
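A self-contained sketch of the "add the edge, then revert it if it
creates a cycle" approach, using a toy directed graph rather than
Terraform's graph builder:

    package main

    import "fmt"

    // A toy directed graph: edges[from] lists the nodes from depends on.
    type digraph map[string][]string

    // hasCycle reports whether the graph contains any cycle, using a
    // colored depth-first search.
    func hasCycle(g digraph) bool {
        const (
            white = 0 // unvisited
            gray  = 1 // on the current DFS stack
            black = 2 // fully explored
        )
        color := map[string]int{}
        var visit func(n string) bool
        visit = func(n string) bool {
            color[n] = gray
            for _, m := range g[n] {
                if color[m] == gray {
                    return true
                }
                if color[m] == white && visit(m) {
                    return true
                }
            }
            color[n] = black
            return false
        }
        for n := range g {
            if color[n] == white && visit(n) {
                return true
            }
        }
        return false
    }

    // connectDestroyEdge adds the edge optimistically and reverts it if it
    // introduces a cycle, mirroring the approach described above.
    func connectDestroyEdge(g digraph, from, to string) bool {
        g[from] = append(g[from], to)
        if hasCycle(g) {
            g[from] = g[from][:len(g[from])-1] // revert
            return false
        }
        return true
    }

    func main() {
        g := digraph{"a": {"b"}, "b": {"c"}, "c": nil}
        fmt.Println(connectDestroyEdge(g, "c", "a")) // false: would form a cycle, reverted
        fmt.Println(connectDestroyEdge(g, "a", "c")) // true: kept
    }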
Because import uses the complete planning process, it must also call
RemovePlannedResourceInstanceObjects. This is required to serialize the
resulting state if there are data sources with an ObjectPlanned status
because they could not be read during the import process.
We may need to prune nodes from a full destroy plan graph which cannot
be evaluated if there is no current state.
Add missing method to nodeExpandPlannableResource to ensure planned
resources are handled correctly when pruning nodes.
Also add regression test coverage of the crash. This would occur when
objects with optional attributes had default values of different type
from the attribute type, and the objects were members of a collection.
For example:
    list(object({
      a = optional(set(string), [])
    }))
If this type constraint is applied to a variable value where one object
has a set(string) value for a, and the other object applies the empty
tuple default, Terraform would crash.
Applying object type defaults to null values can convert null to an
object with partial attributes. This means that even a specified default
value of null will not remain null after variable evaluation.
In turn, the result can then be invalid, if not all attributes in an
object type have defaults specified.
This commit skips the type default application step during config load
and variable evaluation if the default or given value is null of any
type. We still perform type conversion.
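A minimal sketch of that check using the cty and cty/convert packages;
the surrounding helper and its name are hypothetical, and only the null
guard before applying defaults reflects the change described above:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/convert"
    )

    // applyDefaultsFn stands in for whatever applies object type defaults;
    // the point of the sketch is only the null check that skips it.
    type applyDefaultsFn func(cty.Value) cty.Value

    // coerceVariable sketches the fix: if the given or default value is
    // null, only convert its type and do not apply object attribute
    // defaults, so a null stays null instead of becoming a partially
    // populated object.
    func coerceVariable(val cty.Value, want cty.Type, applyDefaults applyDefaultsFn) (cty.Value, error) {
        if val.IsNull() {
            return convert.Convert(val, want)
        }
        return convert.Convert(applyDefaults(val), want)
    }

    func main() {
        objTy := cty.Object(map[string]cty.Type{"a": cty.String})
        got, err := coerceVariable(cty.NullVal(objTy), objTy, func(v cty.Value) cty.Value { return v })
        fmt.Println(got.IsNull(), err) // true <nil>: null remains null
    }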
When handling ignore_changes=all, we must filter computed attributes
from the prior state to prevent them showing in the configuration. Since
it's not valid for the user to have set computed attributes in the
config, the provider should expect to never see any values there. The
oversight has only now become apparent, as more providers adopt the
plugin-framework which has direct access to the plan-time configuration
value.
Add a new ChangeReason, ReasonDeleteBecauseNoMoveTarget, to provide better
information in cases where a planned deletion is due to moving a resource to
a target not in configuration.
Consider a case in which a resource instance exists in state at address A, and
the user adds a moved block to move A to address B. Whether by the user's
intention or not, address B does not exist in configuration.
Terraform combines the move from A to B, and the lack of configuration for B,
into a single delete action for the (previously nonexistent) entity B.
Prior to this commit, the Terraform plan will report that resource B will be
destroyed because it does not exist in configuration, without explicitly
connecting this to the move.
This commit provides the user an additional clue as to what has happened, in a
case in which Terraform has elided a user's action and inaction into one
potentially destructive change.
In an earlier commit we changed the states.CheckResults model to
explicitly model the config object vs. dynamic checkable object hierarchy,
but neglected to update the logic in Terraform Core to take that into
account when propagating the object expansion decisions from the plan
phase to the apply phase. That meant that we were incorrectly classifying
zero-instance resources always as having an unknown number of instances,
rather than possibly being known to have zero instances.
This now follows the two-level hierarchy of the data structure, which has
the nice side-effect that we can remove some of the special-case methods
from checks.State that we were using to bulk-load data: the data is now
shaped in the appropriate way to reload the data using the same method
the plan phase would've used to record the results in the first place.
A significant goal of the design changes around checks in earlier commits
(with the introduction of package "checks") was to allow us to
differentiate between a configuration object that we didn't expand at all
due to an upstream error, which has _unknown_ check status, and a
configuration object that expanded to zero dynamic objects, which
therefore has a _passing_ check status.
However, our initial lowering of checks.State into states.CheckResults
stayed with the older model of just recording each leaf check in isolation,
without any tracking of the containers.
This commit therefore lightly reworks our representation of check results
in the state and plan with two main goals:
- The results are grouped by the static configuration object they came
from, and we capture an aggregate status for each of those so that
we can differentiate an unknown aggregate result from a passing
aggregate result which has zero dynamic associated objects.
- The granularity of results is whole checkable objects rather than
individual checks, because checkable objects have durable addresses
between runs, but individual checks for an object are more of a
syntactic convenience to make it easier for module authors to declare
many independent conditions that each have their own error messages.
Since v1.2 exposed some details of our checks model into the JSON plan
output there are some unanswered questions here about how we can shift to
reporting in the two-level hierarchy described above. For now I've
preserved structural compatibility but not semantic compatibility: any
parser that was written against that format should still function but will
now see fewer results. We'll revisit this in a later commit and consider
other structures and what to do about our compatibility constraint on the
v1.2 structure.
Otherwise though, this is an internal-only change which preserves all of
the existing main behaviors of conditions as before, and just gets us
ready to build user-facing features in terms of this new structure.
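As a rough sketch of the two-level shape described above (the type and
field names here are illustrative, not the actual states.CheckResults
API):

    package main

    import "fmt"

    // Status of a configuration object's checks in aggregate, or of one
    // dynamic checkable object.
    type checkStatus int

    const (
        statusUnknown checkStatus = iota // e.g. never expanded due to an upstream error
        statusPass
        statusFail
    )

    // objectResult records the outcome for one dynamic checkable object
    // (for example, one resource instance), not for each individual condition.
    type objectResult struct {
        objectAddr string
        status     checkStatus
    }

    // configResult is the aggregate for one static configuration object, so
    // a config object that expanded to zero instances can still report
    // "pass" rather than "unknown".
    type configResult struct {
        configAddr string
        aggregate  checkStatus
        objects    []objectResult
    }

    func main() {
        results := []configResult{
            {
                configAddr: "aws_instance.example",
                aggregate:  statusPass, // expanded to zero instances: known, and passing
            },
            {
                configAddr: "module.app.aws_lb.main",
                aggregate:  statusUnknown, // never expanded due to an upstream error
            },
        }
        for _, r := range results {
            fmt.Println(r.configAddr, r.aggregate, len(r.objects))
        }
    }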
The "checks" package is an expansion what we previously called
plans.Conditions to accommodate a new requirement that we be able to track
which checks we're expecting to run even if we don't actually get around
to running them, which will be helpful when we start using checks as part
of our module testing story because test reporting tools appreciate there
being a relatively consistent set of test cases from one run to the next.
So far this should be essentially a no-op change from an external
functionality standpoint, aside from some minor adjustments to how we
report some of the error and warning cases from condition evaluation in
light of the fact that the "checks" package can now track errors as a
different outcome than a failure of a valid check.
As is often the case with anything which changes what we track
in the EvalContext and persist between plan and apply, Terraform Core is
pretty brittle and so this had knock-on effects elsewhere too. Again, the
goal is for these changes to not create any material externally-visible
difference, and just to accommodate the new assumption that there will
always be a "checks" object available for tracking during a graph walk.
Go 1.19's "fmt" has some awareness of the new doc comment formatting
conventions and adjusts the presentation of the source comments to make
it clearer how godoc would interpret them. Therefore this commit includes
various updates made by "go fmt" to achieve that.
In line with our usual convention that we make stylistic/grammar/spelling
tweaks typically only when we're "in the area" changing something else
anyway, I also took this opportunity to review most of the comments that
this change updated, to see if there were any other opportunities to
improve them.