Made a change to the code example within the *Preconditions and Postconditions* section so that it technically makes sense. Previously it was missing the data resource that was being referenced by the precondition lifecycle event on line 135, and the aws_instance resource was not using the ami provided by the data source on line 129, so I changed that as well.
We use a non-pointer value for this particular node, which means that
there can never be two root nodes in the same graph: the graph
implementation will just coalesce them together when a second one is added.
Our resource expansion code relies on that coalescing so that it can
merge multiple graphs for different module instances into a single
mega-graph of all resource instances across all module instances, with
any root nodes coalescing together to produce a single root.
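
As an illustration, here is a minimal sketch (not Terraform's actual
graph implementation) of why a value-typed node coalesces: a graph that
keys its vertex set by the vertex value itself can only ever hold one
copy of a zero-size struct value.

package main

import "fmt"

// rootNode is a value type, so every rootNode{} compares equal.
type rootNode struct{}

type graph struct {
    vertices map[any]struct{}
}

func (g *graph) Add(v any) {
    if g.vertices == nil {
        g.vertices = make(map[any]struct{})
    }
    g.vertices[v] = struct{}{} // equal values coalesce into one entry
}

func main() {
    var g graph
    g.Add(rootNode{})
    g.Add(rootNode{}) // coalesces with the first
    fmt.Println(len(g.vertices)) // prints 1
}

A pointer-typed node would defeat this, because two pointers to
identical structs are still distinct map keys.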
This also updates one of the context tests that exercises resource
expansion so that it will generate multiple resource instance nodes per
module and thus potentially have multiple roots to coalesce together.
However, we aren't currently explicitly validating the return values from
DynamicExpand and so this test doesn't actually fail if the coalescing
doesn't happen. We may choose to validate the DynamicExpand result in a
later commit in order to make it more obvious if future modifications fail
to uphold this invariant.
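
If we did add that validation, it might look roughly like this
(hypothetical test code, assuming the dag package's Root helper, which
fails unless the graph has exactly one root):

g, err := node.DynamicExpand(ctx)
if err != nil {
    t.Fatalf("unexpected error: %s", err)
}
if _, err := g.Root(); err != nil {
    // Root fails when coalescing left zero or multiple roots.
    t.Fatalf("graph does not have a single root: %s", err)
}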
We previously did two levels of DynamicExpand to go from ConfigResource to
AbsResource and then from AbsResource to AbsResourceInstance.
We'll now do the full expansion from ConfigResource to AbsResourceInstance
in a single DynamicExpand step inside nodeExpandPlannableResource.
The new approach is essentially functionally equivalent to the old except
that it fixes a bug in the previous implementation: we will now call
checkState.ReportCheckableObjects only once for the entire set of
instances for a particular resource, which is what the checkable objects
infrastructure expects so that it can always mention all of the checkable
objects in the check report even if we bail out partway through due to
a downstream error.
This is essentially the same code but now turned into additional methods
on nodeExpandPlannableResource instead of having the extra graph node
type. This has the further advantage that the result is now
straight-through code with standard control flow, instead of the unusual
inversion of control we were doing before, bouncing in and out of
different Execute and DynamicExpand implementations to get this done.
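
For orientation, here is a rough sketch of the shape of the single-step
expansion, using the real instance expander API but hypothetical helper
names (reportCheckables, resourceInstanceGraph) in place of the actual
methods:

func (n *nodeExpandPlannableResource) DynamicExpand(ctx EvalContext) (*Graph, error) {
    expander := ctx.InstanceExpander()

    // Expand ConfigResource -> AbsResourceInstance in one pass, across
    // every instance of every containing module.
    var instAddrs []addrs.AbsResourceInstance
    for _, module := range expander.ExpandModule(n.Addr.Module) {
        resAddr := n.Addr.Resource.Absolute(module)
        instAddrs = append(instAddrs, expander.ExpandResource(resAddr)...)
    }

    // Report the complete set of checkable objects exactly once, so the
    // check report can mention every object even after a partial failure.
    n.reportCheckables(instAddrs)

    return n.resourceInstanceGraph(ctx, instAddrs)
}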
We were previously _trying_ to handle diagnostics here but were not quite
doing it right because we were testing whether the resulting error was
nil rather than appending it to the diagnostics and then seeing if the
result has errors.
The difference here is important because it allows DynamicExpand to return
warnings without associated errors when needed. Previously the graph
walker would treat a warnings-only result as if it were an error.
Ideally we'd change DynamicExpand to return diagnostics directly, but we
previously decided against that because there were so many implementors
to update, and my intent for this change is to be surgical so that we
minimize the risk involved in backporting it into patch releases.
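
The corrected pattern, sketched with Terraform's internal tfdiags
package (surrounding walker code elided):

var diags tfdiags.Diagnostics
g, err := n.DynamicExpand(ctx)
diags = diags.Append(err) // err may unwrap to warnings, errors, or nothing
if diags.HasErrors() {
    return diags // abort only on a real error
}
// A warnings-only result now continues the walk instead of failing it.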
When adding destroy edges between resources from different providers,
where one provider itself depends on the other provider's resources, we
can get cycles in the final dependency graph.
The problem is a little deeper than simply not connecting these nodes,
since the edges are still needed when doing a full destroy operation.
For now we can get by assuming the edges are required, and reverting
them only if they result in a cycle. This works because destroy edges
are the last edges added to managed resources during graph building.
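
Sketched with Terraform's internal dag package (Connect, Cycles,
RemoveEdge); the real transformer differs in detail:

func connectDestroyEdge(g *dag.AcyclicGraph, from, to dag.Vertex) {
    edge := dag.BasicEdge(from, to)
    g.Connect(edge)
    if len(g.Cycles()) > 0 {
        // The destroy edge closed a cycle through a provider
        // dependency, so revert it. This is safe because destroy edges
        // are the last edges added during graph building.
        g.RemoveEdge(edge)
    }
}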
This was rarely a problem before v1.3, because noop nodes were not added
to the apply graph, and unused values were aggressively pruned. In v1.3
however all nodes are kept in the graph so that postcondition blocks are
always evaluated during apply, increasing the chances of the cycles
appearing.
Because import uses the complete planning process, it must also call
RemovePlannedResourceInstanceObjects. This is required to serialize the
resulting state if there are data sources with an ObjectPlanned status,
because they could not be read during the import process.
We may need to prune nodes from a full destroy plan graph which cannot
be evaluated if there is no current state.
Add a missing method to nodeExpandPlannableResource to ensure planned
resources are handled correctly when pruning nodes.
Also add regression test coverage of the crash. This would occur when
objects with optional attributes had default values of different type
from the attribute type, and the objects were members of a collection.
For example:
list(object({
  a = optional(set(string), [])
}))
If this type constraint is applied to a variable value where one object
has a set(string) value for a, and the other object applies the empty
tuple default, Terraform would crash.
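
The mechanism can be illustrated outside Terraform with go-cty (an
analogy for the failure, not the actual code path):

package main

import "github.com/zclconf/go-cty/cty"

func main() {
    // One object whose "a" really is a set(string)...
    withSet := cty.ObjectVal(map[string]cty.Value{
        "a": cty.SetVal([]cty.Value{cty.StringVal("x")}),
    })
    // ...and one where the empty-tuple default was applied without
    // converting it to set(string), so the two object types disagree.
    withDefault := cty.ObjectVal(map[string]cty.Value{
        "a": cty.EmptyTupleVal,
    })
    // cty.ListVal panics on inconsistent element types, which is the
    // shape of the crash described above.
    _ = cty.ListVal([]cty.Value{withSet, withDefault})
}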
The remote-state-data documentation page refers to the aws_s3_bucket_object resource and data source, which are deprecated; the documentation says to use aws_s3_object instead. This change updates the links for the S3 object solution for remote state.
Previously the cloud backend only supported the pre-plan and post-plan
task stages. This commit adds support for displaying the output of the
new pre-apply task stage as well.
Previously all of the logic to retrieve Task Stages was in the backend_plan.go file.
This commit:
* Moves the logic to the backend_taskStages.go file.
* Replaces the array with a map indexed by the Task Stage's stage. This makes it
easier to find the relevant stage and means the getTaskStageIDByName
function is no longer required.
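
A sketch of the map-based structure, assuming go-tfe's Run.TaskStages
and TaskStage.Stage fields (the real backend_taskStages.go may differ in
detail):

type taskStages map[tfe.Stage]*tfe.TaskStage

func newTaskStages(run *tfe.Run) taskStages {
    ts := make(taskStages)
    for _, stage := range run.TaskStages {
        if stage != nil {
            ts[stage.Stage] = stage
        }
    }
    return ts
}

With this, looking up a stage is a direct map access, e.g.
ts[tfe.PreApply], instead of scanning a slice and matching by name.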