Commit Graph

81 Commits

Author SHA1 Message Date
Pam Selle
02c48f8071 Comment fixing 2020-10-18 13:00:09 -04:00
James Bardin
0f5bf21983 remove last use of the apply graph Destroy flag!
The apply graph builder no longer uses the destroy flag, which is not
always known since it is not stored in the plan file.
2020-10-12 17:29:45 -04:00
James Bardin
ff21cc3c8d remove the need for destroyRootOutputTransformer
Since root outputs can now use the planned changes, we can directly
insert the correct applyable or destroyable node into the graph during
plan and apply, which will remove the outputs if they are being
destroyed.
2020-10-12 17:29:45 -04:00
James Bardin
d8e6d66362 use recorded changes for outputs
We record output changes in the plan, but don't currently use them for
anything other than display. If we have a wholly known output value
stored in the plan, we should prefer that for apply in order to ensure
consistency with the planned values. This also avoids cases where
evaluation during apply cannot happen correctly, like when all resources
are being removed or we are executing a destroy.

We also need to record output Delete changes when the plan is for a
destroy operation. Otherwise, without a change, the apply step will
attempt to evaluate the outputs, causing errors or leaving them in the
state with stale values.
2020-10-09 13:13:27 -04:00
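
As a rough sketch of that preference order, with stand-in types (plannedValue and outputValue are illustrative names, not Terraform's plans API):

```go
package main

import "fmt"

// plannedValue is a simplified stand-in for an output change recorded
// in the plan; Known is false if the planned value contained unknowns.
type plannedValue struct {
	Known bool
	Value string
}

// outputValue prefers a wholly known value recorded in the plan and
// falls back to re-evaluating the expression only when it has to,
// keeping apply consistent with the planned values.
func outputValue(planned *plannedValue, evaluate func() (string, error)) (string, error) {
	if planned != nil && planned.Known {
		return planned.Value, nil
	}
	return evaluate()
}

func main() {
	// During a destroy, evaluation would fail, but the recorded change
	// still yields the right value.
	v, err := outputValue(&plannedValue{Known: true, Value: "vpc-123"},
		func() (string, error) { return "", fmt.Errorf("resources already destroyed") })
	fmt.Println(v, err) // vpc-123 <nil>
}
```
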
James Bardin
35714e61e6 audit graph builders to make them more similar
Audit the graph builders to remove unused transformers (planning does
not need to close provisioners, for example) and re-order them. While
many of the transformations are commutative, using the same order
ensures the same behavior between operations when the commutative
property is lost or changed.
2020-10-06 17:39:53 -04:00
James Bardin
07af1c6225 destroy time eval of resources pending deletion
Allow the evaluation of resources pending deletion only during a full
destroy. With this change we can ensure deposed instances are not
evaluated under normal circumstances, but can be referenced when needed.
This also allows us to remove the fixup transformer that added
connections so temporary values would evaluate in the correct order when
referencing destroy nodes.

In the majority of cases, we do not want to evaluate resources that are
pending deletion, since configuration references can only refer to
resources that are intended to be managed by the configuration. An
exception to that rule is when Terraform is performing a full `destroy`
operation, and providers need to evaluate existing resources for their
configuration.
2020-10-01 17:11:46 -04:00
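
A minimal sketch of the gating rule, assuming simplified state types (instance, lookup, and the fullDestroy flag are illustrative, not the real evaluator):

```go
package main

import "fmt"

// instance is a simplified stand-in for a resource instance in state.
type instance struct {
	value           string
	pendingDeletion bool
}

// lookup returns an instance for evaluation, skipping instances that
// are pending deletion unless this walk is a full destroy, where
// providers may still need to read existing objects.
func lookup(state map[string]instance, addr string, fullDestroy bool) (instance, bool) {
	inst, ok := state[addr]
	if !ok || (inst.pendingDeletion && !fullDestroy) {
		return instance{}, false
	}
	return inst, true
}

func main() {
	state := map[string]instance{
		"aws_instance.a": {value: "i-abc123", pendingDeletion: true},
	}
	_, ok := lookup(state, "aws_instance.a", false)
	fmt.Println(ok) // false: not visible during a normal apply
	inst, _ := lookup(state, "aws_instance.a", true)
	fmt.Println(inst.value) // i-abc123: readable during a full destroy
}
```
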
Kristin Laemmert
c9f710ac29
terraform: remove DisableReduce from refresh, plan and apply graphs (#25824) 2020-08-14 14:13:33 -04:00
James Bardin
5b8010b5b9 add a fixup transformer to connect destroy refs
Since we have to allow destroy nodes to be evaluated for providers
during a full destroy, this adds a transformer to connect temporary
values to any destroy versions of their references when possible. This
ensures that the destroy happens before evaluation, even when there
isn't a full create-then-destroy set of instances.

The cases where the connection can't be made are when the temporary
value has a provider descendant, which means it must evaluate early in
the case of a full destroy. This means the value may contain incorrect
data when referencing resources that are create_before_destroy, or being
scaled-in via count or for_each. That will need to be addressed later by
reevaluating how we handle the full destroy case in terraform.
2020-07-20 09:49:47 -04:00
James Bardin
2555f6f988 remove root output eval nodes from destroy
If we're adding a node to remove a root output from the state, the
output itself does not need to be re-evaluated. The exception for root
outputs caused them to be missed when we refactored resource destruction
to only use the existing state.
2020-07-07 11:10:15 -04:00
James Bardin
308eb5f47f add CountBoundaryTransformer after targeting
no need to have the extra nodes and edges in the graph when we're
traversing everything for targeting
2020-06-23 17:22:44 -04:00
James Bardin
48757bcbe0 get rid of the NodeOutputOrphan
We don't need another node type for orphaned outputs; they are just
outputs being removed for a different reason than destroy. Use the
NodeDestroyableOutput implementation.

Destroy outputs also don't need to be referencers, since they are being
removed.

Rename DestroyOutputTransformer to destroyRootOutputTransformer, and add
an explanation as to why it is the only transformer that requires an
exception to know when it's invoked from the destroy command.
2020-05-28 21:30:44 -04:00
James Bardin
c638252210 pruneUnusedNodesTransformer
Create a single transformer to remove all unused nodes from the apply
graph. This is similar to the combination of the resource pruning done
in the destroy edge transformer, and the unused values transformer. In
addition to resources, variables, locals, and outputs, we now need to
remove unused module expansion nodes as well. Since these can all be
interdependent, we need to process them as a whole in a single
transformation.
2020-05-28 21:30:42 -04:00
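
A self-contained sketch of the fixed-point pruning described above, over a simplified graph representation (graph, prunable, and pruneUnused are assumed names, not the real transformer):

```go
package main

import (
	"fmt"
	"strings"
)

// A minimal digraph: deps[v] lists the nodes v depends on.
type graph struct {
	nodes map[string]bool
	deps  map[string][]string
}

// hasDependents reports whether any remaining node depends on target.
func (g *graph) hasDependents(target string) bool {
	for n := range g.nodes {
		for _, d := range g.deps[n] {
			if d == target {
				return true
			}
		}
	}
	return false
}

// prunable reports whether a node may be dropped when unused:
// variables, locals, outputs, and module expansion nodes.
func prunable(name string) bool {
	for _, p := range []string{"var.", "local.", "output.", "module."} {
		if strings.HasPrefix(name, p) {
			return true
		}
	}
	return false
}

// pruneUnused removes prunable nodes with no dependents, looping to a
// fixed point because removing one node can orphan another; this is
// why the nodes must be processed as a whole in one transformation.
func pruneUnused(g *graph) {
	for {
		removed := false
		for n := range g.nodes {
			if prunable(n) && !g.hasDependents(n) {
				delete(g.nodes, n)
				delete(g.deps, n)
				removed = true
			}
		}
		if !removed {
			return
		}
	}
}

func main() {
	g := &graph{
		nodes: map[string]bool{"local.a": true, "var.b": true, "aws_instance.c": true},
		deps:  map[string][]string{"local.a": {"var.b"}},
	}
	pruneUnused(g)       // removing local.a orphans var.b, which goes next
	fmt.Println(g.nodes) // map[aws_instance.c:true]
}
```
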
James Bardin
009a136fa2 requiresInstanceExpansion and instanceExpander
create interfaces that nodes can implement to declare whether they
expand into instances of some sort, using the instances.Expander, and/or
whether they use the instances.Expander to find instances.

included is a rough transformer implementation to remove these nodes
from the apply graph.
2020-05-28 21:30:12 -04:00
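
The interface names below follow the commit, but the method signatures are assumptions — a sketch of how such capability interfaces could be detected dynamically by a transformer:

```go
package main

import "fmt"

// Expander is a stand-in for instances.Expander, which tracks how
// resources and modules expand into zero or more instances.
type Expander struct {
	counts map[string]int
}

// requiresInstanceExpansion marks nodes that expand into instances.
type requiresInstanceExpansion interface {
	expandsInstances()
}

// instanceExpander marks nodes that consult the Expander to find
// their instances.
type instanceExpander interface {
	expandedInstances(exp *Expander) []string
}

// resourceNode is an example node implementing both interfaces.
type resourceNode struct{ addr string }

func (n *resourceNode) expandsInstances() {}

func (n *resourceNode) expandedInstances(exp *Expander) []string {
	var out []string
	for i := 0; i < exp.counts[n.addr]; i++ {
		out = append(out, fmt.Sprintf("%s[%d]", n.addr, i))
	}
	return out
}

func main() {
	exp := &Expander{counts: map[string]int{"aws_instance.a": 2}}
	var node interface{} = &resourceNode{addr: "aws_instance.a"}
	// A transformer can detect these capabilities via type assertion:
	if n, ok := node.(instanceExpander); ok {
		fmt.Println(n.expandedInstances(exp)) // [aws_instance.a[0] aws_instance.a[1]]
	}
}
```
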
James Bardin
cf9b6de03e force cbd during apply too
We need to run the force CBD transformer during apply too, both to
ensure we can rely on the `CreateBeforeDestroy()` status for dependents
during apply, but also to ensure that the correct status is stored into
state.
2020-05-14 15:46:08 -04:00
James Bardin
0dcca1bc37 make the root node a nodeCloseModule for root
Replace the graphNodeRoot for the main graph with a nodeCloseModule for
the root module. Use a new transformer as well, so as not to change any
behavior of DynamicExpand graphs.

Closing out the root module like we do with sub modules means we no
longer need the OrphanResourceTransformer, or the NodeDestroyResource.
The old resource destroy logic has mostly moved into the instance nodes,
and the remaining resource node was just for cleanup, which now needs to
be done by the module since there isn't always a NodeDestroyResource
to be evaluated.

The more-correct state caused a few tests to fail, which needed to be
updated to match the state without empty resource husks.
2020-04-02 16:00:36 -04:00
James Bardin
026a45a390 remove abstract resource node from destroy node
NodeDestroyResource does not require a provider, so a temporary
GraphNodeNoProvider was used to differentiate it from other
resource nodes. We can now de-couple the destroy node from the abstract
resource which was adding the ProvidedBy method, and remove the
NoProvider method.
2020-04-02 16:00:35 -04:00
James Bardin
23cebc5205 create nodeExpandApplyableResource
Resources also need to be expanded during apply, which cannot be done
via EvalTree due to the lack of EvalContext.
2020-03-25 17:03:06 -04:00
Kristin Laemmert
c7cc0afb80
Mildwonkey/ps schema (#24312)
* add Config to AttachSchemaTransformer for providerFqn lookup
* terraform: refactor ProvidedBy() to return nil when provider is not set
in config or state
2020-03-10 14:43:57 -04:00
Martin Atkins
68b900928d core: Use instances.Expander to handle resource count and for_each
This is a minimal integration of instances.Expander used just for resource
count and for_each, for now just forcing modules to always be singletons
because the rest of Terraform Core isn't ready to deal with expanding
module calls yet.

This doesn't integrate super cleanly yet because we still have some
cleanup work to do in the design of the plan walk, to make it explicit
that the nodes in the plan graph represent static configuration objects
rather than expanded instances, including for modules. To make this work
in the meantime, there is some shimming between addrs.Module and
addrs.ModuleInstance to correct for the discontinuities that result from
the fact that Terraform currently assumes that modules are always
singletons.
2020-02-14 15:20:07 -08:00
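
A simplified model of the count/for_each expansion being integrated here (expander and instanceKeys are stand-ins for the real instances.Expander):

```go
package main

import "fmt"

// expander records, per resource, how the configuration expands it:
// a count, a set of for_each keys, or a single unkeyed instance.
type expander struct {
	counts   map[string]int
	forEachs map[string][]string
}

// instanceKeys returns the instance addresses for one resource, the
// core of what instances.Expander does for count and for_each.
func (e *expander) instanceKeys(addr string) []string {
	if n, ok := e.counts[addr]; ok {
		keys := make([]string, 0, n)
		for i := 0; i < n; i++ {
			keys = append(keys, fmt.Sprintf("%s[%d]", addr, i))
		}
		return keys
	}
	if ks, ok := e.forEachs[addr]; ok {
		keys := make([]string, 0, len(ks))
		for _, k := range ks {
			keys = append(keys, fmt.Sprintf("%s[%q]", addr, k))
		}
		return keys
	}
	return []string{addr} // singleton, as modules are forced to be at this commit
}

func main() {
	e := &expander{
		counts:   map[string]int{"aws_instance.a": 2},
		forEachs: map[string][]string{"aws_instance.b": {"x", "y"}},
	}
	fmt.Println(e.instanceKeys("aws_instance.a")) // [aws_instance.a[0] aws_instance.a[1]]
	fmt.Println(e.instanceKeys("aws_instance.b")) // [aws_instance.b["x"] aws_instance.b["y"]]
}
```
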
James Bardin
d4d99be2db remove some destroy special cases
We no longer need special cases for most things during a full destroy,
so remove those from the graph transformations.

The only remaining cases are:
 - remove the root outputs, so destroy ends up with a clean state
 - reverse the target deps when targeting a destroy.
2020-02-13 15:43:52 -05:00
James Bardin
ca5b0e6894 no longer need DestroyValueReferenceTransformer
since destroy nodes are no longer connected to values, there's no need
to try and wrangle their edges to prevent cycles during destroy.
2020-02-13 15:43:52 -05:00
James Bardin
fe3edb8e46 more aggressively prune unused values
Since a planned destroy can no longer indicate it is a full destroy,
unused values were being left in the apply graph for evaluation. If
these values contain interpolations that can fail (for example, a
zipmap with mismatched list sizes), it will cause the apply to abort.

The PruneUnusedValuesTransformer was previously run only during destroy,
more out of conservatism than for any particular reason. Adapt it
to always remove unused values from the graph, with the exception being
the root module outputs, which must be retained when we don't have a
clear indication that a full destroy is being executed.
2019-12-19 09:09:38 -05:00
James Bardin
42bb4a644c make use of the new state Dependencies
Make use of the new Dependencies field in the instance state.

The inter-instance dependencies will be determined from the complete
reference graph, so that absolute addresses can be stored, rather than
just references within a module. The Dependencies are added to the node
in the same manner as state, i.e. via an "attacher" interface and
transformer.  This is because dependencies are calculated from the graph
itself, and not from the config.
2019-11-07 17:49:03 -05:00
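
A sketch of the "attacher" pattern described here, with assumed names and signatures (the real interface and transformer differ):

```go
package main

import "fmt"

// graphNodeAttachDependencies models the attacher interface: a
// transformer computes dependencies from the complete reference graph
// and hands them to the node as absolute addresses.
type graphNodeAttachDependencies interface {
	name() string
	attachDependencies(deps []string)
}

type instanceNode struct {
	addr string
	deps []string
}

func (n *instanceNode) name() string                     { return n.addr }
func (n *instanceNode) attachDependencies(deps []string) { n.deps = deps }

// attachDependenciesTransformer attaches, to every capable node, the
// dependency set computed from the graph itself, not from the config.
func attachDependenciesTransformer(nodes []interface{}, depsFor func(string) []string) {
	for _, v := range nodes {
		if n, ok := v.(graphNodeAttachDependencies); ok {
			n.attachDependencies(depsFor(n.name()))
		}
	}
}

func main() {
	n := &instanceNode{addr: "module.a.aws_instance.x"}
	attachDependenciesTransformer([]interface{}{n}, func(string) []string {
		return []string{"module.a.aws_vpc.main"} // absolute, not module-relative
	})
	fmt.Println(n.deps) // [module.a.aws_vpc.main]
}
```
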
James Bardin
470362b051 fix CBD tests to work on real data
The CBDEdgeTransformer tests worked on fake data structures, with a
synthetic graph, and configs that didn't match. Update them to generate
a more complete graph, with real node implementations, from real
configs.

The output graph is filtered down to instances, and the results still
functionally match the original expected test results, with some minor
additions due to using the real implementation.
2019-10-24 12:01:50 -04:00
James Bardin
7c703b1bbf apply edge transformations after references
We can't correctly resolve the destroy ordering until all references
have been assigned to each node.
2019-10-24 12:01:50 -04:00
Martin Atkins
2eea07750a core: Clean up resource states when they are orphaned
We previously had mechanisms to clean up only individual instance states,
leaving behind empty resource husks in the state after they were all
destroyed.

This takes care of it in the "orphan" case. It does not yet do it in the
"terraform destroy" or "terraform plan -destroy" cases because we don't
have anywhere to record in the plan that we're actually destroying and so
the resource configurations should be ignored and _everything_ should be
cleaned. We'll let the state be not-quite-empty in that case for now,
since it doesn't really hurt; cleaning up orphans is the main case because
the state will live on afterwards and so leftover cruft will accumulate
over the course of many changes.
2018-10-16 19:14:11 -07:00
Martin Atkins
a43b7df282 core: Handle forced-create_before_destroy during the plan walk
Previously we used a single plan action "Replace" to represent both the
destroy-before-create and the create-before-destroy variants of replacing.
However, this forces the apply graph builder to jump through a lot of
hoops to figure out which nodes need it forced on and rebuild parts of
the graph to represent that.

If we instead decide between these two cases at plan time, the actual
determination of it is more straightforward because each resource is
represented by only one node in the plan graph, and then we can ensure
we put the right nodes in the graph during DiffTransformer and thus avoid
the logic for dealing with deposed instances being spread across various
different transformers and node types.

As a nice side-effect, this also allows us to show the difference between
destroy-then-create and create-then-destroy in the rendered diff in the
CLI, although this change doesn't fully implement that yet.
2018-10-16 19:14:11 -07:00
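
A simplified model of that plan-time decision (the enum and helper here are illustrative; the action names mirror the two replacement variants):

```go
package main

import "fmt"

// action is a simplified version of the plan action enum after this
// change: the replacement ordering is decided at plan time instead of
// a single generic Replace.
type action int

const (
	create action = iota
	update
	remove
	deleteThenCreate
	createThenDelete
)

// replaceAction picks the replacement variant while planning, so the
// apply graph builder can insert the right nodes directly via
// DiffTransformer instead of rewriting the graph afterwards.
func replaceAction(createBeforeDestroy bool) action {
	if createBeforeDestroy {
		return createThenDelete
	}
	return deleteThenCreate
}

func main() {
	fmt.Println(replaceAction(true) == createThenDelete)  // true
	fmt.Println(replaceAction(false) == deleteThenCreate) // true
}
```
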
Martin Atkins
334c6f1c2c core: Be more explicit in how we handle create_before_destroy
Previously our handling of create_before_destroy -- and of deposed objects
in particular -- was rather "implicit" and spread over various different
subsystems. We'd quietly just destroy every deposed object during a
destroy operation, without any user-visible plan to do so.

Here we make things more explicit by tracking each deposed object
individually by its pseudorandomly-allocated key. There are two different
mechanisms at play here, building on the same concepts:

- During a replace operation with create_before_destroy, we *pre-allocate*
  a DeposedKey to use for the prior object in the "apply" node and then
  pass that exact id to the destroy node, ensuring that we only destroy
  the single object we planned to destroy. In the happy path here the
  user never actually sees the allocated deposed key because we use it and
  then immediately destroy it within the same operation. However, that
  destroy may fail, which brings us to the second mechanism:

- If any deposed objects are already present in state during _plan_, we
  insert a destroy change for them into the plan so that it's explicit to
  the user that we are going to destroy these additional objects, and then
  create an individual graph node for each one in DiffTransformer.

The main motivation here is to be more careful in how we handle these
destroys so that from a user's standpoint we never destroy something
without the user knowing about it ahead of time.

However, this new organization also hopefully makes the code itself a
little easier to follow because the connection between the create and
destroy steps of a Replace is represented in a single place (in
DiffTransformer) and deposed instances each have their own explicit graph
node rather than being secretly handled as part of the main instance-level
graph node.
2018-10-16 19:14:11 -07:00
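
A sketch of the pseudorandom key-allocation idea (deposedKey and newDeposedKey are illustrative; the real implementation lives elsewhere in core):

```go
package main

import (
	"fmt"
	"math/rand"
)

// deposedKey is a pseudorandom identifier for one deposed object, so
// a destroy node can target exactly the object that was deposed.
type deposedKey string

// newDeposedKey allocates a short random hex key; during a
// create_before_destroy replace it would be pre-allocated in the apply
// node and passed to the matching destroy node.
func newDeposedKey(r *rand.Rand) deposedKey {
	const digits = "0123456789abcdef"
	b := make([]byte, 8)
	for i := range b {
		b[i] = digits[r.Intn(len(digits))]
	}
	return deposedKey(b)
}

func main() {
	r := rand.New(rand.NewSource(1)) // seeded for a reproducible example
	key := newDeposedKey(r)
	fmt.Printf("deposed object %s is the one we planned to destroy\n", key)
}
```
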
Martin Atkins
0a97daf3de core: Always update resource metadata in state during apply
Previously we had a bug where we would fail to populate resource-level
metadata in the state during apply when count = 0, because the apply
graph would contain only instance nodes, not whole-resource nodes.

To address this, we add to the apply graph a node for each resource in
the configuration alongside the separate resource instance nodes. This
node's job is just to populate the state metadata for the resource, which
ensures it gets updated correctly even when count = 0.

When count is not zero this ends up doing some redundant work that
would've happened as a side-effect of applying individual resource
instances anyway, but it's harmless and makes the updating of our
resource-level metadata more explicit.
2018-10-16 19:14:11 -07:00
Martin Atkins
7d760c09fb core: Update EvalCountFixZeroOneBoundaryGlobal for new state types 2018-10-16 19:14:11 -07:00
Martin Atkins
a3403f2766 terraform: Ugly huge change to weave in new State and Plan types
Due to how often the state and plan types are referenced throughout
Terraform, there isn't a great way to switch them out gradually. As a
consequence, this huge commit gets us from the old world to a _compilable_
new world, but still has a large number of known test failures due to
key functionality being stubbed out.

The stubs here are for anything that interacts with providers, since we
now need to do the follow-up work to similarly replace the old
terraform.ResourceProvider interface with its replacement in the new
"providers" package. That work, along with work to fix the remaining
failing tests, will follow in subsequent commits.

The aim here was to replace all references to terraform.State and its
downstream types with states.State, terraform.Plan with plans.Plan,
state.State with statemgr.State, and switch to the new implementations of
the state and plan file formats. However, due to the number of times those
types are used, this also ended up affecting numerous other parts of core
such as terraform.Hook, the backend.Backend interface, and most of the CLI
commands.

Just as with 5861dbf3fc49b19587a31816eb06f511ab861bb4 before, I apologize
in advance to the person who inevitably just found this huge commit while
spelunking through the commit history.
2018-10-16 19:11:09 -07:00
Martin Atkins
71cedf19a4 core: Don't create indirect provider dependencies for references
The prior commit changed the schema-access model so that all schemas are
fetched up front during context creation and are then readily available
for use throughout graph building and evaluation.

As a result, we no longer need to create dependency edges to a provider
when one of its resources is referenced by another node, and so the
ProviderTransformer needs only to worry about direct ownership
dependencies.

This also avoids the need for us to run AttachSchemaTransformer twice,
since ProviderTransformer no longer needs schema and we can therefore
defer attaching until just before ReferenceTransformer, when all of the
referencable and referencing nodes are already present in the graph.
2018-10-16 18:49:20 -07:00
Martin Atkins
f7aa06726a core: Run AttachSchemaTransformer twice to catch provider nodes too
Both ProviderTransformer and ReferenceTransformer need schema information,
and so there's a chicken-and-egg problem here where previously the schemas
were not getting attached to provider nodes created during
ProviderTransformer.

As a stop-gap measure for now we'll just run AttachSchemaTransformer
twice, so we can catch any new nodes created during the provider
transforms.
2018-10-16 18:49:20 -07:00
Martin Atkins
88b5607a7a core: Fetch schemas during context construction
Previously we fetched schemas during the AttachSchemaTransformer,
potentially multiple times as that was re-run for each graph built. Now
we fetch the schemas just once during context construction, passing that
result into each of the graph builders.

This only addresses the schema accesses during graph construction. We're
still separately loading schemas during the main walk for evaluation
purposes. This will be addressed in a later commit.
2018-10-16 18:49:20 -07:00
Martin Atkins
b144497041 core: Attach schemas before dealing with provider edges
Since ProviderTransformer now needs the schema in order to infer indirect
references to providers, we must run AttachSchemaTransformer before the
provider transformers in order to calculate the correct ordering of
operations.
2018-10-16 18:49:20 -07:00
James Bardin
4d2da4d733 connect non-resources to providers they reference
Any non-resource (outputs, variables, locals) that references a resource
type must also be connected to that resource's provider. This is required
during apply, because the graph built from the diff may not include the
referenced resources because they are being evaluated from the state.

If the provider isn't present already, add a NodeEvalableProvider to
fetch the provider schema.

The provider transformers now need to happen after the outputs, locals,
and variables are transformed.
2018-10-16 18:49:20 -07:00
Martin Atkins
bec0f56808 core: Pass components through to the destroy transformers
These transformers both construct temporary graphs using many of the same
transformers used in the apply graph, and properly doing this now requires
access to the providers and provisioners in order to obtain their schemas.

Along with this, we also update the tests here to use the
simpleMockComponentFactory helper to get a mock provider with a schema
already configured, which means we also need to update the test fixtures
and assertions to use the resource type and attributes defined in that
mock factory.
2018-10-16 18:48:28 -07:00
Martin Atkins
d4285dd27f core: Attach resource and provider config schemas during graph build
This is a little awkward since we need to instantiate the providers much
earlier than before. To avoid a lot of reshuffling here we just spin each
one up and then immediately shut it down again, letting our existing init
functionality during the graph walk still do the main initialization.
2018-10-16 18:46:46 -07:00
Martin Atkins
c937c06a03 terraform: ugly huge change to weave in new HCL2-oriented types
Due to how deeply the configuration types go into Terraform Core, there
isn't a great way to switch out to HCL2 gradually. As a consequence, this
huge commit gets us from the old state to a _compilable_ new state, but
does not yet attempt to fix any tests and has a number of known missing
parts and bugs. We will continue to iterate on this in forthcoming
commits, heading back towards passing tests and making Terraform
fully-functional again.

The three main goals here are:
- Use the configuration models from the "configs" package instead of the
  older models in the "config" package, which is now deprecated and
  preserved only to help us write our migration tool.
- Do expression inspection and evaluation using the functionality of the
  new "lang" package, instead of the Interpolator type and related
  functionality in the main "terraform" package.
- Represent addresses of various objects using types in the addrs package,
  rather than hand-constructed strings. This is not critical to support
  the above, but was a big help during the implementation of these other
  points since it made it much more explicit what kind of address is
  expected in each context.

Since our new packages are built to accommodate some future planned
features that are not yet implemented (e.g. the "for_each" argument on
resources, "count"/"for_each" on modules), and since there's still a fair
amount of functionality still using old-style APIs, there is a moderate
amount of shimming here to connect new assumptions with old, hopefully in
a way that makes it easier to find and eliminate these shims later.

I apologize in advance to the person who inevitably just found this huge
commit while spelunking through the commit history.
2018-10-16 18:46:46 -07:00
James Bardin
99867f0082 add PruneUnusedValuesTransformer
Since outputs and local nodes are always evaluated, if they reference a
resource from the configuration that isn't in the state, the
interpolation could fail.

Prune any local or output values that have no references in the graph.
2018-01-30 10:47:17 -05:00
James Bardin
d31fe5ab9d delete outputs during destroy
Now that outputs are always evaluated, we still need a way to remove
them from state when they are destroyed.

Previously, outputs were removed during destroy from the same
"Applyable" node type that evaluates them. Now that we need to possibly
both evaluate and remove output during an apply, we add a new node -
NodeDestroyableOutput.

This new node is added to the graph by the DestroyOutputTransformer,
which makes the new destroy node depend on all descendants of the output
node. This ensures that the output remains in the state as long as
everything which may interpolate the output still exists.
2018-01-29 19:30:04 -05:00
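
A small sketch of the descendant walk such a transformer needs, over a simplified dependency map (descendants is an assumed helper, not the real DestroyOutputTransformer):

```go
package main

import "fmt"

// deps[v] lists what v depends on; descendants walks the transitive
// closure from a starting node.
func descendants(deps map[string][]string, start string) []string {
	seen := map[string]bool{}
	var out []string
	var walk func(string)
	walk = func(n string) {
		for _, d := range deps[n] {
			if !seen[d] {
				seen[d] = true
				out = append(out, d)
				walk(d)
			}
		}
	}
	walk(start)
	return out
}

func main() {
	deps := map[string][]string{
		"output.ip":      {"aws_instance.a"},
		"aws_instance.a": {"var.region"},
	}
	// Per the commit, the destroy node is made to depend on every
	// descendant of the output node, so the output stays in state
	// while anything that may interpolate it still exists.
	fmt.Println(descendants(deps, "output.ip")) // [aws_instance.a var.region]
}
```
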
James Bardin
7da1a39480 always evaluate locals, even during destroy
Destroy-time provisioners require us to re-evaluate during destroy.

Rather than destroying local values, which doesn't do much since they
aren't persisted to state, we always evaluate them regardless of the
type of apply. Since the destroy-time local node is no longer a
"destroy" operation, the order of evaluation need to be reversed. Take
the existing DestroyValueReferenceTransformer and change it to reverse
the outgoing edges, rather than in incoming edges. This makes it so that
any dependencies of a local or output node are destroyed after
evaluation.

Having locals evaluated during destroy failed one other test, but that
was the odd case where we need `id` to exist as an attribute as well as
a field.
2018-01-29 16:16:41 -05:00
James Bardin
7e4dcdb9f0 run RemovedModuleTransformer before References
Also add RemovedModuleTransformer to the plan graph for parity.
2017-11-09 10:34:56 -05:00
James Bardin
15ea04af8a remove modules from state
Remove the module entry from the state if a module is no longer in the
configuration. Modules are not removed if there are any existing
resources with the module path as a prefix. The only time this should be
the case is if a module was removed in the config, but the apply didn't
target that module.

Create a NodeModuleRemoved and an associated EvalDeleteModule to track
the module in the graph, then remove it from the state. The
NodeModuleRemoved dependencies are simply any other nodes that contain
the module path as a prefix in their own path.

This could have probably been done much easier as a step in pruning the
state, but modules are going to have to be promoted to full graph nodes
anyway in order to support count.
2017-11-08 19:11:53 -05:00
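
A minimal sketch of the prefix check that decides whether a module entry can be removed (moduleStillUsed is an illustrative helper):

```go
package main

import (
	"fmt"
	"strings"
)

// moduleStillUsed reports whether any resource path in state has the
// module path as a prefix, which blocks removing the module entry
// (e.g. when an apply didn't target a module removed from config).
func moduleStillUsed(modulePath string, resourcePaths []string) bool {
	for _, p := range resourcePaths {
		if p == modulePath || strings.HasPrefix(p, modulePath+".") {
			return true
		}
	}
	return false
}

func main() {
	state := []string{"module.a.aws_instance.x"}
	fmt.Println(moduleStillUsed("module.a", state)) // true: keep the entry
	fmt.Println(moduleStillUsed("module.b", state)) // false: safe to remove
}
```
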
James Bardin
2f91007999 group the provider transformations
The series of provider transformations is important, and often repeated.
Group these together in a single transform function.
2017-11-02 15:00:06 -04:00
James Bardin
0986d01223 add providers directly from the configuration
The first step in only using the required provider nodes in a graph is
to be able to specifically add them from the configuration.

The MissingProviderTransformer was previously responsible for adding
all providers. Now it is really just adding any that are missing from
the config.
2017-11-02 15:00:06 -04:00
James Bardin
35c6a4e89d add DestroyValueReferenceTransformer
DestroyValueReferenceTransformer is used during destroy to reverse the
edges for output and local values. Because destruction is going to
remove these from the state, nodes that depend on their value need to be
visited first.
2017-10-02 16:20:29 -04:00
Martin Atkins
3a30bfe845 core: evaluate locals and return them for interpolation
We stash the locals in the module state in a map that is ignored for JSON
serialization. We don't include locals in the persisted state because they
can be trivially recomputed and this allows us to assume that they will
pass through verbatim, without any normalization or other transforms
caused by the JSON serialization.

From a user standpoint a local is just a named alias for an expression,
so it's desirable that the result passes through here in as raw a form
as possible, so it behaves as closely as possible to simply using the
given expression directly.
2017-08-21 15:15:25 -07:00
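
A tiny illustration of keeping locals out of the serialized state via a field the JSON encoder ignores (moduleState here is a simplified stand-in for the real module state struct):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// moduleState stashes locals in a map that the JSON encoder skips, so
// they are trivially recomputed rather than persisted, and pass
// through without serialization-induced normalization.
type moduleState struct {
	Outputs map[string]string `json:"outputs"`
	Locals  map[string]string `json:"-"` // never written to the state file
}

func main() {
	m := moduleState{
		Outputs: map[string]string{"ip": "10.0.0.1"},
		Locals:  map[string]string{"name": "web"},
	}
	b, _ := json.Marshal(m)
	fmt.Println(string(b)) // {"outputs":{"ip":"10.0.0.1"}} — locals omitted
}
```
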
Sander van Harmelen
051582d32a Add the close provider and provisioner transformers (#13102) 2017-04-12 23:25:15 +02:00
Mitchell Hashimoto
4d6085b46a
terraform: outputs should not be included if not targeted
Fixes #10911

Outputs that aren't targeted shouldn't be included in the graph.

This requires passing targets to the apply graph. This is unfortunate
but long term should be removable since I'd like to move output changes
to the diff as well.
2017-02-13 12:52:45 -08:00