Commit Graph

235 Commits

Author SHA1 Message Date
James Bardin
efd77159dd use key data from plan method for apply 2022-11-01 16:18:38 -04:00
James Bardin
ffe2e3935e avoid re-writing state for noop applies
We need to avoid re-writing the state for every NoOp apply. We may
still evaluate the instance to account for any side effects of the
condition checks, but the state of the instance has not changed.
Re-writing the state is a non-trivial operation, which may require
encoding a fairly large instance state and re-serializing the entire
state blob, so it is best avoided if possible.
2022-11-01 16:18:38 -04:00
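
A minimal Go sketch of the guard this commit describes, using
hypothetical names rather than Terraform's real internals: the state
write is skipped when the planned action was NoOp and nothing changed.

    package sketch

    // Action is a stand-in for the planned change's action.
    type Action int

    const (
        NoOp Action = iota
        Update
    )

    // maybeWriteState persists the instance only when something actually
    // changed, avoiding the cost of encoding the instance and
    // re-serializing the whole state blob on every NoOp apply.
    func maybeWriteState(action Action, write func() error) error {
        if action == NoOp {
            return nil // conditions were still evaluated; state is unchanged
        }
        return write()
    }
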
James Bardin
eb88ccbc7b only add NoOp nodes with conditions
Only add NoOp changes to the apply graph if they have conditions which
need to be evaluated.
2022-11-01 16:18:38 -04:00
James Bardin
19152e7ba5 fix log message 2022-11-01 16:18:38 -04:00
James Bardin
fa4c652013 changes are mutated during apply 2022-11-01 16:18:38 -04:00
James Bardin
b61c02da05 don't lose checks from refresh-only plan
If there are no changes, then there is no reason to create an apply
graph, since all objects are already known. We do, however, need the
walk to match the expected state structure. This is probably only
cleanup of empty nested modules and outputs, but some investigation is
needed before making the full change.

For now we can store the checks from the plan directly into the new
state, since the apply walk overwrote the results we had already.
2022-11-01 16:18:38 -04:00
James Bardin
92c8c76684 fix variable name 2022-10-20 13:14:16 -04:00
James Bardin
ac99cd6051 remove extra import line 2022-10-20 11:03:58 -04:00
James Bardin
28d5a5bf63 NoOp nodes should not have destroy edges
NoOp changes should not participate in a destroy sequence, but because
they are included as normal update nodes, the usual connections were
still being made.
2022-10-20 10:59:08 -04:00
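
A hedged sketch of the edge filtering described above, with hypothetical
types rather than Terraform's graph package: creators planned as NoOp
are skipped when destroy ordering edges are wired up.

    package sketch

    type Action int

    const (
        NoOp Action = iota
        Create
        Delete
    )

    type node struct {
        id     string
        action Action
    }

    type graph map[*node][]*node // adjacency: node -> nodes it depends on

    // connectDestroyEdges makes each creator wait on its destroyer, but a
    // NoOp node performs no real action and must not join the sequence.
    func (g graph) connectDestroyEdges(creators, destroyers []*node) {
        for _, c := range creators {
            if c.action == NoOp {
                continue // no destroy edges for NoOp changes
            }
            for _, d := range destroyers {
                g[c] = append(g[c], d)
            }
        }
    }
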
James Bardin
586401aeea make naming consistent 2022-10-20 09:36:10 -04:00
James Bardin
8a4883fd13 don't eval checks on destroy 2022-10-19 17:47:53 -04:00
James Bardin
8d11c7f524 the destroy refresh plan should be refresh-only
Refreshing for a destroy should use the refresh-only plan to avoid
planning new objects or evaluating conditions. This should also be
skipped if there is no state, since there would be nothing to refresh.
2022-10-19 17:47:53 -04:00
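
A small sketch of the behavior described, under assumed names: the
pre-destroy refresh runs as a refresh-only plan, and is skipped entirely
when there is no prior state.

    package sketch

    type planMode int

    const (
        normalMode planMode = iota
        refreshOnlyMode
        destroyMode
    )

    // refreshForDestroy never plans new objects or evaluates conditions,
    // because it uses refresh-only mode; with no state it is a no-op.
    func refreshForDestroy(hasState bool, plan func(planMode) error) error {
        if !hasState {
            return nil // nothing to refresh
        }
        return plan(refreshOnlyMode)
    }
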
James Bardin
08081097cb check console with preconditions 2022-10-19 17:47:53 -04:00
James Bardin
a0723442b9 test for incorrectly evaluated outputs 2022-10-19 17:47:53 -04:00
James Bardin
333bdecf39 checks must be registered during eval 2022-10-19 17:47:53 -04:00
James Bardin
1eb22fd94a fix output transformer names
The removeRootOutputs field was not strictly used for that purpose, and
was also copied to another DestroyPlan field.
2022-10-19 17:47:42 -04:00
James Bardin
47b6386348 remove IsFullDestroy workaround
IsFullDestroy was a workaround during apply to detect when the change
set was created by a destroy plan. This no longer works correctly, and
we need to fall back to the UIMode set in the plan.
2022-10-19 14:47:06 -04:00
James Bardin
bcb792ee00 complete the root output expansion
Not all root output instances were going through proper expansion when
destroy operations were involved, leading to cases where they would be
evaluated even though the expected result was only to remove them from
the state.

Normally destroy nodes stand alone in the graph, and do not produce
references to other nodes. Because root output nodes were replaced by
expansion nodes, these were being connected via normal references, even
in the case where we were working with a destroy graph.
2022-10-19 14:42:55 -04:00
James Bardin
8a24d73d15 outputs should not be checked during destroy
Module outputs may need to be evaluated during destroy because they may
be used by providers. The final state, however, is that all objects are
destroyed, so preconditions should not be evaluated.
2022-10-19 14:39:21 -04:00
James Bardin
71837d187b walkDestroy op in apply graph 2022-10-19 14:39:00 -04:00
James Bardin
5085ccdfbd fix error message 2022-10-19 14:29:34 -04:00
Martin Atkins
4bc1696fd1 core: Simplify our idea of "root node" and require it for DynamicExpand
The graph walking mechanism is specified as requiring a graph with a single
root, which in practice means there's exactly one node in the graph
which doesn't have any dependencies.

However, we previously weren't verifying that this invariant holds for
subgraphs returned from DynamicExpand. It was working anyway, but it's not
ideal to rely on a behavior that isn't guaranteed by our underlying
infrastructure.

We also previously had the RootTransformer being a bit clever and trying
to avoid adding a new node if there is already a single node with no
dependencies. That special case isn't particularly valuable since
there's no harm in turning a one-node graph into a two-node graph with
an explicit separate root node, and doing that allows us to assume that
the root node is always present and is always exactly terraform.rootNode.

Many existing DynamicExpand implementations were not producing valid
graphs and were previously getting away with it. All of them now produce
properly-rooted graphs that should pass validation, and we will guarantee
that with an explicit check of the DynamicExpand return value before we
try to walk that subgraph. For good measure we also verify that the root
node is exactly terraform.rootNode, even though that isn't strictly
required by our graph walker, just to help us catch potential future bugs
where a DynamicExpand implementation neglects to add our singleton root
node.
2022-10-13 14:01:08 -07:00
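
A sketch of the invariant check described here, with hypothetical types
standing in for the real dag package: the subgraph must contain exactly
one node with no dependencies, and it must be the singleton root node.

    package sketch

    import "fmt"

    type graph struct {
        nodes []interface{}
        deps  map[interface{}][]interface{}
    }

    type rootNodeType struct{}

    var rootNode = rootNodeType{} // stand-in for terraform.rootNode

    func validateRootedGraph(g *graph) error {
        var roots []interface{}
        for _, n := range g.nodes {
            if len(g.deps[n]) == 0 {
                roots = append(roots, n)
            }
        }
        if len(roots) != 1 {
            return fmt.Errorf("subgraph has %d root candidates; want exactly 1", len(roots))
        }
        if roots[0] != rootNode {
            return fmt.Errorf("subgraph root is not the singleton root node")
        }
        return nil
    }
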
James Bardin
3779dbc2af noop orphan change has nothing to apply
An orphaned resource which plans as a NoOp change will have no config.
This is not an error, but there is nothing to do since there are also
no checks to validate. We still leave the change in the plan to keep the
plan as complete as possible, noting all possible changes.

Preventing the node from being added to the graph is awkward, because
the config is attached separately from the diff transformer. This should
not pose any problems, however, because there is no longer any state or
config linking the instance to any dependencies in the graph.
2022-10-11 14:30:29 -04:00
James Bardin
041d9d3eec unknown evaluation of missing instances in import
Because import does not yet plan new instances as part of the import
process, we can end up evaluating references to resources which have no
state at all. The fallback for this situation can now produce slightly
better values during import: the count and for_each values were
technically incorrect, since the length is not known to be zero, and a
single instance does have a concrete type which we can return.
2022-10-04 11:07:16 -04:00
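
A sketch with cty of the improved fallback described above (a
hypothetical helper, not the actual implementation): with no instances
in state, count is unknown rather than zero.

    package sketch

    import "github.com/zclconf/go-cty/cty"

    // evalCount returns the count for a resource with no state during
    // import: the length is not known to be zero, so it is unknown.
    func evalCount(instancesKnown bool, n int) cty.Value {
        if !instancesKnown {
            return cty.UnknownVal(cty.Number)
        }
        return cty.NumberIntVal(int64(n))
    }
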
James Bardin
c296172be7 test for cycle around aliased provider 2022-10-04 10:59:51 -04:00
James Bardin
036fb9c1bf check detailed provider for destroy edge cycles
When we checked for cycles with destroy edges around providers, it was
only for providers of a different type, but one can do the same thing
with the same provider under different local aliases. Check to see if
the provider also contains an alias, or is defined absolutely in some
other way. The absolute accuracy here isn't critical, since in most
cases these edges are not required for correct results, but finding a
correct and consistent method for determining when these edges are
needed is going to take more research.

There was also an oversight fixed here where the basic
creator->destroyer edges were added _after_ the cycle checks, limiting
their utility. The ordering of the additions was swapped to make sure
all cycles are noticed.
2022-10-04 10:58:36 -04:00
James Bardin
162b7274fa
Merge pull request #31914 from hashicorp/jbardin/ignore-all-legacy
special handling for legacy `ignore_changes = all`
2022-10-04 10:57:23 -04:00
James Bardin
fcec9e2c4f
Merge pull request #31902 from hashicorp/jbardin/noop-deposed
Prevent errors from NoOp deposed changes
2022-10-03 12:10:54 -04:00
James Bardin
f78ecef5e7 prevent errors from NoOp deposed changes
If a previously deposed object is deleted outside of Terraform, the
next plan will result in a NoOp change for the deposed object. Fix the
check to verify that the deposed object has an acceptable action rather
than using the `update` flag.
2022-10-03 09:24:23 -04:00
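
A hedged sketch of the corrected check, with hypothetical Action values:
the decision is based on the set of acceptable actions for a deposed
object, instead of a boolean update flag.

    package sketch

    type Action int

    const (
        NoOp Action = iota
        Delete
        Update
    )

    // acceptableForDeposed reports whether a planned action makes sense
    // for a deposed object: it can be destroyed, or already be gone.
    func acceptableForDeposed(a Action) bool {
        switch a {
        case Delete, NoOp:
            return true
        default:
            return false
        }
    }
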
James Bardin
92f3a83530 special handling for legacy ignore_changes = all
Legacy providers expect Terraform to be able to clean up invalid plans
and computed attributes. Add a special case for the LegacyTypeSystem to
revert `ignore_changes = all` to the complete prior state.
2022-09-30 09:19:29 -04:00
Martin Atkins
6b290cf163 core: Don't re-register checkable outputs during the apply step
Once again we're caught out by sharing the same output value node type
between the plan phase and the apply phase. To allow for some slight
variation between plan and apply without drastic refactoring here we just
add a new flag to nodeExpandOutput which is true only during the planning
phase.

This then allows us to register the checkable objects only during the
planning phase and not incorrectly re-register them during the apply phase.
It's incorrect to re-register during apply because we carry over the
planned checkable objects from the plan phase into the apply phase so we
can guarantee that the final state will have all of the same checkable
objects that the plan did.

This avoids a panic during the apply phase from the incorrect duplicate
registration.
2022-09-29 08:23:51 -07:00
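
A sketch of the flag described above, with hypothetical names: checkable
objects are registered only on the planning walk, since the apply walk
carries the plan's registrations over.

    package sketch

    type checkState struct {
        objects map[string]bool
    }

    func (s *checkState) reportCheckable(addr string) {
        s.objects[addr] = true
    }

    type expandOutput struct {
        addr     string
        planning bool // true only during the plan phase
    }

    // execute registers the output as checkable only while planning;
    // re-registering during apply would duplicate the carried-over
    // plan results and panic in the real system.
    func (n *expandOutput) execute(checks *checkState) {
        if n.planning {
            checks.reportCheckable(n.addr)
        }
    }
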
James Bardin
8c98e1f4a4
Merge pull request #31747 from hashicorp/jbardin/ignore-changes-all-computed
filter computed attrs from `ignore_changes=all`
2022-09-28 09:27:34 -04:00
Martin Atkins
cc964e6b0b core: Document that TransformRoot must produce coalescable node
We use a non-pointer value for this particular node, which means that
there can never be two root nodes in the same graph: the graph
implementation will just coalesce them together when a second one is added.

Our resource expansion code is relying on that coalescing so that it can
subsume together multiple graphs for different modules instances into a
single mega-graph with all instances across all module instances, with
any root nodes coalescing together to produce a single root.

This also updates one of the context tests that exercises resource
expansion so that it will generate multiple resource instance nodes per
module and thus potentially have multiple roots to coalesce together.
However, we aren't currently explicitly validating the return values from
DynamicExpand and so this test doesn't actually fail if the coalescing
doesn't happen. We may choose to validate the DynamicExpand result in a
later commit in order to make it more obvious if future modifications fail
to uphold this invariant.
2022-09-26 13:46:25 -07:00
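
A self-contained illustration of the coalescing behavior this relies on:
in a graph whose node set is a map, two value-typed root nodes compare
equal and collapse to one entry, while pointer nodes would stay distinct.

    package sketch

    type rootNode struct{} // non-pointer value: every instance is equal

    type graph struct {
        nodes map[interface{}]struct{}
    }

    func (g *graph) add(n interface{}) {
        g.nodes[n] = struct{}{}
    }

    func demo() int {
        g := &graph{nodes: map[interface{}]struct{}{}}
        g.add(rootNode{})
        g.add(rootNode{}) // same map key: coalesces with the first
        return len(g.nodes) // 1
    }
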
Martin Atkins
2e177cd632 core: Eliminate NodePlannableResource indirection
We previously did two levels of DynamicExpand to go from ConfigResource to
AbsResource and then from AbsResource to AbsResourceInstance.

We'll now do the full expansion from ConfigResource to AbsResourceInstance
in a single DynamicExpand step inside nodeExpandPlannableResource.

The new approach is essentially functionally equivalent to the old except
that it fixes a bug in the previous implementation: we will now call
checkState.ReportCheckableObjects only once for the entire set of
instances for a particular resource, which is what the checkable objects
infrastructure expects so that it can always mention all of the checkable
objects in the check report even if we bail out partway through due to
a downstream error.

This is essentially the same code but now turned into additional methods
on nodeExpandPlannableResource instead of having the extra graph node
type. This has the further advantage of this now being straight-through
code with standard control flow, instead of the unusual inversion of
control we were doing before bouncing in and out of different Execute and
DynamicExpand implementations to get this done.
2022-09-26 13:46:25 -07:00
Martin Atkins
a9bd4099d3 core: DynamicExpand can return diagnostics
We were previously _trying_ to handle diagnostics here but were not quite
doing it right because we were testing whether the resulting error was
nil rather than appending it to the diagnostics and then seeing if the
result has errors.

The difference here is important because it allows DynamicExpand to return
warnings without associated errors when needed. Previously the graph
walker would treat a warnings-only result as if it were an error.

Ideally we'd change DynamicExpand to return diagnostics directly, but we
previously decided against that because there were so many implementors
to update, and my intent for this change is to be surgical in the update
so we minimize risk of backporting the change into patch releases.
2022-09-26 13:46:25 -07:00
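
A sketch of the corrected handling, using a hypothetical diagnostics
type: the error is appended to the diagnostics and the walk aborts only
if the accumulated result actually contains errors, so a warnings-only
DynamicExpand result no longer reads as failure.

    package sketch

    type diagnostic struct {
        isError bool
        summary string
    }

    type diagnostics []diagnostic

    func (d diagnostics) hasErrors() bool {
        for _, diag := range d {
            if diag.isError {
                return true
            }
        }
        return false
    }

    // Before: `if err != nil { abort }`, which also aborted on warnings.
    // After: fold everything into diags and test for real errors.
    func afterExpand(diags, fromExpand diagnostics) (diagnostics, bool) {
        diags = append(diags, fromExpand...)
        return diags, !diags.hasErrors()
    }
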
James Bardin
dbaf6d63f3
Merge pull request #31871 from hashicorp/jbardin/remove-planned-during-import
RemovePlannedResourceInstanceObjects during import
2022-09-26 14:24:23 -04:00
James Bardin
008810f593
Merge pull request #31858 from hashicorp/jbardin/prune-plan-destroy
prune unused nodes from a destroy plan graph
2022-09-26 14:24:07 -04:00
James Bardin
1c8352d926
Merge pull request #31857 from hashicorp/jbardin/destroy-edge-cycles
prevent cycles when connecting destroy nodes
2022-09-26 14:23:45 -04:00
James Bardin
ce02344589 prevent cycles when connecting destroy nodes
When adding destroy edges between resources from different providers,
and a provider itself depends on the other provider's resources, we can
get cycles in the final dependency graph.

The problem is a little deeper than simply not connecting these nodes,
since the edges are still needed when doing a full destroy operation.
For now we can get by assuming the edges are required, and reverting
them only if they result in a cycle. This works because destroy edges
are the last edges added to managed resources during graph building.

This was rarely a problem before v1.3, because noop nodes were not added
to the apply graph, and unused values were aggressively pruned. In v1.3
however all nodes are kept in the graph so that postcondition blocks are
always evaluated during apply, increasing the chances of the cycles
appearing.
2022-09-26 13:38:17 -04:00
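
A sketch of the optimistic strategy described here, with a toy DFS-based
cycle check: because destroy edges are the last ones added, an edge that
closes a cycle can simply be removed again.

    package sketch

    type edge struct{ from, to string }

    type graph struct{ edges []edge }

    // tryConnectDestroyEdge adds the edge, then reverts it if it
    // introduced a cycle.
    func (g *graph) tryConnectDestroyEdge(e edge) bool {
        g.edges = append(g.edges, e)
        if g.hasCycle() {
            g.edges = g.edges[:len(g.edges)-1] // revert the offending edge
            return false
        }
        return true
    }

    func (g *graph) hasCycle() bool {
        adj := map[string][]string{}
        for _, e := range g.edges {
            adj[e.from] = append(adj[e.from], e.to)
        }
        const (white, gray, black = 0, 1, 2)
        color := map[string]int{}
        var visit func(string) bool
        visit = func(n string) bool {
            color[n] = gray
            for _, m := range adj[n] {
                if color[m] == gray || (color[m] == white && visit(m)) {
                    return true
                }
            }
            color[n] = black
            return false
        }
        for n := range adj {
            if color[n] == white && visit(n) {
                return true
            }
        }
        return false
    }
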
James Bardin
5bca0c609b RemovePlannedResourceInstanceObjects during import
Because import uses the complete planning process, it must also call
RemovePlannedResourceInstanceObjects. This is required to serialize the
resulting state if there are data sources with an ObjectPlanned status,
because they could not be read during the import process.
2022-09-25 14:41:53 -04:00
James Bardin
aedd95a1ee prune unused nodes from a destroy plan graph
We may need to prune nodes from a full destroy plan graph which cannot
be evaluated if there is no current state.

Add the missing method to nodeExpandPlannableResource to ensure planned
resources are handled correctly when pruning nodes.
2022-09-23 14:56:04 -04:00
Alisdair McDiarmid
10ae444ee2 Upgrade HCL to fix optional attr default crash
Also add regression test coverage of the crash. This would occur when
objects with optional attributes had default values of a different type
from the attribute type, and the objects were members of a collection.

For example:

list(object({
  a = optional(set(string), [])
}))

If this type constraint is applied to a variable value where one object
has a set(string) value for a and the other object takes the empty
tuple default, Terraform would crash.
2022-09-23 10:34:54 -04:00
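
The actual crash was inside HCL's defaults handling, but a cty sketch
shows the underlying type disagreement: a list's elements must share one
type, so an object carrying the raw empty-tuple default cannot sit next
to one carrying a real set(string).

    package sketch

    import "github.com/zclconf/go-cty/cty"

    func demo() cty.Value {
        withSet := cty.ObjectVal(map[string]cty.Value{
            "a": cty.SetVal([]cty.Value{cty.StringVal("x")}),
        })
        withDefault := cty.ObjectVal(map[string]cty.Value{
            "a": cty.EmptyTupleVal, // default not converted to set(string)
        })
        // panics: the two object types are inconsistent list elements
        return cty.ListVal([]cty.Value{withSet, withDefault})
    }
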
James Bardin
3e281e1eda default optional+computed to prior value 2022-09-20 11:00:19 -04:00
Alisdair McDiarmid
551f6316fa Do not apply type defaults to null values
Applying object type defaults to null values can convert null to an
object with partial attributes. This means that even a specified default
value of null will not remain null after variable evaluation.

In turn, the result can then be invalid, if not all attributes in an
object type have defaults specified.

This commit skips the type default application step during config load
and variable evaluation if the default or given value is null of any
type. We still perform type conversion.
2022-09-15 15:32:36 -04:00
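
A minimal sketch of the skip described above (a hypothetical wrapper,
with the real defaults application abstracted away): null values pass
through untouched, while type conversion still happens separately.

    package sketch

    import "github.com/zclconf/go-cty/cty"

    // applyTypeDefaults leaves nulls alone so that a default value of
    // null stays null instead of becoming an object with partial
    // attributes; conversion to the target type happens elsewhere.
    func applyTypeDefaults(v cty.Value, apply func(cty.Value) cty.Value) cty.Value {
        if v.IsNull() {
            return v
        }
        return apply(v)
    }
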
James Bardin
673d6ca2bc filter computed attrs from ignore_changes=all
When handling ignore_changes=all, we must filter computed attributes
from the prior state to prevent them from showing up in the
configuration. Since it's not valid for the user to have set computed
attributes in the config, the provider should expect to never see any
values there. The oversight has only now become apparent as more
providers adopt the plugin framework, which has direct access to the
plan-time configuration value.
2022-09-06 16:29:08 -04:00
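
A sketch with cty of the filtering described above, using a hypothetical
mini-schema: attributes that are computed-only get nulled out of the
prior value before it is used as configuration.

    package sketch

    import "github.com/zclconf/go-cty/cty"

    type attr struct {
        computed bool
        optional bool
    }

    // filterComputed rewrites a prior object so computed-only attributes
    // are null, since a user could never have set them in config.
    func filterComputed(prior cty.Value, schema map[string]attr) cty.Value {
        out := map[string]cty.Value{}
        for name, a := range schema {
            v := prior.GetAttr(name)
            if a.computed && !a.optional {
                v = cty.NullVal(v.Type())
            }
            out[name] = v
        }
        return cty.ObjectVal(out)
    }
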
kmoe
dec48a8510
plans: indicate when resource deleted due to move (#31695)
Add a new ChangeReason, ReasonDeleteBecauseNoMoveTarget, to provide better
information in cases where a planned deletion is due to moving a resource to
a target not in configuration.

Consider a case in which a resource instance exists in state at address A, and
the user adds a moved block to move A to address B. Whether by the user's
intention or not, address B does not exist in configuration.
Terraform combines the move from A to B, and the lack of configuration for B,
into a single delete action for the (previously nonexistent) entity B.
Prior to this commit, the Terraform plan will report that resource B will be
destroyed because it does not exist in configuration, without explicitly
connecting this to the move.

This commit provides the user an additional clue as to what has happened, in a
case in which Terraform has elided a user's action and inaction into one
potentially destructive change.
2022-08-30 18:01:29 +01:00
Martin Atkins
d63871f70d core: Propagate check results accurately from plan to apply
In an earlier commit we changed the states.CheckResults model to
explicitly model the config object vs. dynamic checkable object hierarchy,
but neglected to update the logic in Terraform Core to take that into
account when propagating the object expansion decisions from the plan
phase to the apply phase. That meant that we were incorrectly classifying
zero-instance resources always as having an unknown number of instances,
rather than possibly being known to have zero instances.

This now follows the two-level hierarchy of the data structure, which has
the nice side-effect that we can remove some of the special-case methods
from checks.State that we were using to bulk-load data: the data is now
shaped in the appropriate way to reload the data using the same method
the plan phase would've used to record the results in the first place.
2022-08-26 15:47:29 -07:00
Martin Atkins
9e4861adbb states: Two-level representation of check results
A significant goal of the design changes around checks in earlier commits
(with the introduction of package "checks") was to allow us to
differentiate between a configuration object that we didn't expand at all
due to an upstream error, which has _unknown_ check status, and a
configuration object that expanded to zero dynamic objects, which
therefore has a _passing_ check status.

However, our initial lowering of checks.State into states.CheckResults
stayed with the older model of just recording each leaf check in isolation,
without any tracking of the containers.

This commit therefore lightly reworks our representation of check results
in the state and plan with two main goals:
- The results are grouped by the static configuration object they came
  from, and we capture an aggregate status for each of those so that
  we can differentiate an unknown aggregate result from a passing
  aggregate result which has zero dynamic associated objects.
- The granularity of results is whole checkable objects rather than
  individual checks, because checkable objects have durable addresses
  between runs, but individual checks for an object are more of a
  syntactic convenience to make it easier for module authors to declare
  many independent conditions that each have their own error messages.

Since v1.2 exposed some details of our checks model into the JSON plan
output there are some unanswered questions here about how we can shift to
reporting in the two-level hierarchy described above. For now I've
preserved structural compatibility but not semantic compatibility: any
parser that was written against that format should still function but will
now see fewer results. We'll revisit this in a later commit and consider
other structures and what to do about our compatibility constraint on the
v1.2 structure.

Otherwise though, this is an internal-only change which preserves all of
the existing main behaviors of conditions as before, and just gets us
ready to build user-facing features in terms of this new structure.
2022-08-26 15:47:29 -07:00
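
A data-structure sketch of the two-level shape described in this commit
(hypothetical names): results group by static config object, each
carrying an aggregate status plus per-object results keyed by durable
checkable addresses.

    package sketch

    type aggregateStatus int

    const (
        statusUnknown aggregateStatus = iota // never expanded due to upstream error
        statusPass                           // all (possibly zero) objects passed
        statusFail
    )

    type objectResult struct {
        passed          bool
        failureMessages []string // per failed check, for reporting only
    }

    type configResult struct {
        aggregate aggregateStatus         // distinguishes unknown from a zero-object pass
        objects   map[string]objectResult // keyed by checkable object address
    }

    // checkResults is keyed by the static configuration object address.
    type checkResults map[string]configResult
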
Martin Atkins
3785619f93 core: Use the new checks package for condition tracking
The "checks" package is an expansion of what we previously called
plans.Conditions to accommodate a new requirement that we be able to track
which checks we're expecting to run even if we don't actually get around
to running them, which will be helpful when we start using checks as part
of our module testing story because test reporting tools appreciate there
being a relatively consistent set of test cases from one run to the next.

So far this should be essentially a no-op change from an external
functionality standpoint, aside from some minor adjustments to how we
report some of the error and warning cases from condition evaluation in
light of the fact that the "checks" package can now track errors as a
different outcome than a failure of a valid check.

As is often the case with anything which changes what we track
in the EvalContext and persist between plan and apply, Terraform Core is
pretty brittle and so this had knock-on effects elsewhere too. Again, the
goal is for these changes to not create any material externally-visible
difference, and just to accommodate the new assumption that there will
always be a "checks" object available for tracking during a graph walk.
2022-08-26 15:47:29 -07:00
Martin Atkins
783a07d9e8 build: Use Go 1.19
Go 1.19's "fmt" has some awareness of the new doc comment formatting
conventions and adjusts the presentation of the source comments to make
it clearer how godoc would interpret them. Therefore this commit includes
various updates made by "go fmt" to achieve that.

In line with our usual convention that we make stylistic/grammar/spelling
tweaks typically only when we're "in the area" changing something else
anyway, I also took this opportunity to review most of the comments that
this updated to see if there were any other opportunities to improve them.
2022-08-22 10:59:12 -07:00