Commit Graph

65 Commits

James Bardin
23cebc5205 create nodeExpandApplyableResource
Resources also need to be expanded during apply, which cannot be done
via EvalTree due to the lack of an EvalContext.
2020-03-25 17:03:06 -04:00
Kristin Laemmert
c7cc0afb80
Mildwonkey/ps schema (#24312)
* add Config to AttachSchemaTransformer for providerFqn lookup
* terraform: refactor ProvidedBy() to return nil when provider is not set
in config or state
2020-03-10 14:43:57 -04:00
Martin Atkins
68b900928d core: Use instances.Expander to handle resource count and for_each
This is a minimal integration of instances.Expander used just for resource
count and for_each, for now just forcing modules to always be singletons
because the rest of Terraform Core isn't ready to deal with expanding
module calls yet.

This doesn't integrate super cleanly yet because we still have some
cleanup work to do in the design of the plan walk, to make it explicit
that the nodes in the plan graph represent static configuration objects
rather than expanded instances, including for modules. To make this work
in the meantime, there is some shimming between addrs.Module and
addrs.ModuleInstance to correct for the discontinuities that result from
the fact that Terraform currently assumes that modules are always
singletons.
2020-02-14 15:20:07 -08:00
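A rough sketch of the expansion idea this commit describes — a static resource address plus its count/for_each arguments expands into zero or more instance addresses. The names here (expandCount, InstanceAddr) are hypothetical, not the real instances.Expander API:

```go
package main

import "fmt"

// InstanceAddr is a stand-in for an expanded resource instance address.
type InstanceAddr struct {
	Resource string
	Index    int // -1 means "no index" (a singleton)
}

// expandCount turns one configuration-level resource into its instances.
func expandCount(resource string, count int) []InstanceAddr {
	if count < 0 { // no count argument: a single unindexed instance
		return []InstanceAddr{{Resource: resource, Index: -1}}
	}
	instances := make([]InstanceAddr, 0, count)
	for i := 0; i < count; i++ {
		instances = append(instances, InstanceAddr{Resource: resource, Index: i})
	}
	return instances
}

func main() {
	for _, addr := range expandCount("aws_instance.web", 3) {
		fmt.Printf("%s[%d]\n", addr.Resource, addr.Index)
	}
}
```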
James Bardin
d4d99be2db remove some destroy special cases
We no longer need special cases for most things during a full destroy,
so remove those from the graph transformations.

The only remaining cases are:
 - remove the root outputs, so destroy ends up with a clean state
 - reverse the target deps when targeting a destroy.
2020-02-13 15:43:52 -05:00
James Bardin
ca5b0e6894 no longer need DestroyValueReferenceTransformer
Since destroy nodes are no longer connected to values, there's no need
to try to wrangle their edges to prevent cycles during destroy.
2020-02-13 15:43:52 -05:00
James Bardin
fe3edb8e46 more aggressively prune unused values
Since a planned destroy can no longer indicate it is a full destroy,
unused values were being left in the apply graph for evaluation. If
these values contain interpolations that can fail (for example, a
zipmap with mismatched list sizes), they will cause the apply to abort.

The PruneUnusedValuesTransformer was previously only run during destroy,
more out of conservatism than for any particular reason. Adapt it
to always remove unused values from the graph, with the exception being
the root module outputs, which must be retained when we don't have a
clear indication that a full destroy is being executed.
2019-12-19 09:09:38 -05:00
James Bardin
42bb4a644c make use of the new state Dependencies
Make use of the new Dependencies field in the instance state.

The inter-instance dependencies will be determined from the complete
reference graph, so that absolute addresses can be stored, rather than
just references within a module. The Dependencies are added to the node
in the same manner as state, i.e. via an "attacher" interface and
transformer.  This is because dependencies are calculated from the graph
itself, and not from the config.
2019-11-07 17:49:03 -05:00
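The "attacher" pattern mentioned here, in rough form — a hypothetical interface and transformer, not Terraform core's exact types. The point is that a transformer walks the graph and hands each interested node its dependencies, which come from the already-built graph rather than from the configuration:

```go
package main

import "fmt"

// graphNodeAttachDependencies is implemented by nodes that want the
// dependency addresses computed from the reference graph (invented name).
type graphNodeAttachDependencies interface {
	AttachDependencies(deps []string)
}

type resourceNode struct {
	addr string
	deps []string
}

func (n *resourceNode) AttachDependencies(deps []string) { n.deps = deps }

// attachDependencies plays the role of the transformer: for every node that
// implements the interface, attach the dependencies computed for it.
func attachDependencies(nodes []interface{}, depsFor func(node interface{}) []string) {
	for _, n := range nodes {
		if attacher, ok := n.(graphNodeAttachDependencies); ok {
			attacher.AttachDependencies(depsFor(n))
		}
	}
}

func main() {
	web := &resourceNode{addr: "aws_instance.web"}
	attachDependencies([]interface{}{web}, func(interface{}) []string {
		return []string{"module.network.aws_subnet.main"} // absolute addresses
	})
	fmt.Println(web.deps)
}
```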
James Bardin
470362b051 fix CBD tests to work on real data
The CBDEdgeTransformer tests worked on fake data structures, with a
synthetic graph, and configs that didn't match. Update them to generate
a more complete graph, with real node implementations, from real
configs.

The output graph is filtered down to instances, and the results still
functionally match the original expected test results, with some minor
additions due to using the real implementation.
2019-10-24 12:01:50 -04:00
James Bardin
7c703b1bbf apply edge transformations after references
We can't correctly resolve the destroy ordering if all references
haven't been assigned to each node.
2019-10-24 12:01:50 -04:00
Martin Atkins
2eea07750a core: Clean up resource states when they are orphaned
We previously had mechanisms to clean up only individual instance states,
leaving behind empty resource husks in the state after they were all
destroyed.

This takes care of it in the "orphan" case. It does not yet do it in the
"terraform destroy" or "terraform plan -destroy" cases because we don't
have anywhere to record in the plan that we're actually destroying and so
the resource configurations should be ignored and _everything_ should be
cleaned. We'll let the state be not-quite-empty in that case for now,
since it doesn't really hurt; cleaning up orphans is the main case because
the state will live on afterwards and so leftover cruft will accumulate
over the course of many changes.
2018-10-16 19:14:11 -07:00
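The orphan cleanup this commit performs, in schematic form with invented state types: once every instance of a resource is gone, the resource-level entry itself is removed from the state.

```go
package main

import "fmt"

// resourceState and moduleState are invented stand-ins for the real state
// types, just to illustrate the pruning rule.
type resourceState struct {
	Instances map[int]string // instance key -> serialized object
}

type moduleState struct {
	Resources map[string]*resourceState
}

// pruneEmptyResources removes resource entries whose instances are all gone,
// so no empty "husk" is left behind after a destroy.
func pruneEmptyResources(mod *moduleState) {
	for addr, rs := range mod.Resources {
		if len(rs.Instances) == 0 {
			delete(mod.Resources, addr)
		}
	}
}

func main() {
	mod := &moduleState{Resources: map[string]*resourceState{
		"aws_instance.web": {Instances: map[int]string{}},
		"aws_instance.db":  {Instances: map[int]string{0: "..."}},
	}}
	pruneEmptyResources(mod)
	fmt.Println(len(mod.Resources)) // 1: only aws_instance.db remains
}
```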
Martin Atkins
a43b7df282 core: Handle forced-create_before_destroy during the plan walk
Previously we used a single plan action "Replace" to represent both the
destroy-before-create and the create-before-destroy variants of replacing.
However, this forces the apply graph builder to jump through a lot of
hoops to figure out which nodes need it forced on and rebuild parts of
the graph to represent that.

If we instead decide between these two cases at plan time, the actual
determination of it is more straightforward because each resource is
represented by only one node in the plan graph, and then we can ensure
we put the right nodes in the graph during DiffTransformer and thus avoid
the logic for dealing with deposed instances being spread across various
different transformers and node types.

As a nice side-effect, this also allows us to show the difference between
destroy-then-create and create-then-destroy in the rendered diff in the
CLI, although this change doesn't fully implement that yet.
2018-10-16 19:14:11 -07:00
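A minimal mirror of the plan-time split described here. The constant names below echo Terraform's plans.Action values for the two replace variants, but the surrounding types are simplified for illustration:

```go
package main

import "fmt"

// Action is a simplified stand-in for a plan action.
type Action rune

const (
	Create           Action = '+'
	Delete           Action = '-'
	DeleteThenCreate Action = '∓' // destroy-before-create replacement
	CreateThenDelete Action = '±' // create-before-destroy replacement
)

// replaceAction picks the variant once, at plan time, so the apply graph
// builder no longer has to rediscover which nodes need it forced on.
func replaceAction(createBeforeDestroy bool) Action {
	if createBeforeDestroy {
		return CreateThenDelete
	}
	return DeleteThenCreate
}

func main() {
	fmt.Printf("%c\n", replaceAction(true)) // ±
}
```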
Martin Atkins
334c6f1c2c core: Be more explicit in how we handle create_before_destroy
Previously our handling of create_before_destroy -- and of deposed objects
in particular -- was rather "implicit" and spread over various different
subsystems. We'd quietly just destroy every deposed object during a
destroy operation, without any user-visible plan to do so.

Here we make things more explicit by tracking each deposed object
individually by its pseudorandomly-allocated key. There are two different
mechanisms at play here, building on the same concepts:

- During a replace operation with create_before_destroy, we *pre-allocate*
  a DeposedKey to use for the prior object in the "apply" node and then
  pass that exact id to the destroy node, ensuring that we only destroy
  the single object we planned to destroy. In the happy path here the
  user never actually sees the allocated deposed key because we use it and
  then immediately destroy it within the same operation. However, that
  destroy may fail, which brings us to the second mechanism:

- If any deposed objects are already present in state during _plan_, we
  insert a destroy change for them into the plan so that it's explicit to
  the user that we are going to destroy these additional objects, and then
  create an individual graph node for each one in DiffTransformer.

The main motivation here is to be more careful in how we handle these
destroys so that from a user's standpoint we never destroy something
without the user knowing about it ahead of time.

However, this new organization also hopefully makes the code itself a
little easier to follow because the connection between the create and
destroy steps of a Replace is represented in a single place (in
DiffTransformer) and deposed instances each have their own explicit graph
node rather than being secretly handled as part of the main instance-level
graph node.
2018-10-16 19:14:11 -07:00
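A sketch of the pre-allocation mechanism described above: a short pseudorandom key generated ahead of time so the later destroy node targets exactly the object that was deposed. The helper below is hypothetical, not the real states implementation:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// DeposedKey identifies one deposed object of a resource instance.
type DeposedKey string

// newDeposedKey pseudorandomly allocates a key (hypothetical helper).
func newDeposedKey() DeposedKey {
	buf := make([]byte, 4)
	if _, err := rand.Read(buf); err != nil {
		panic(err) // crypto/rand should not fail on supported platforms
	}
	return DeposedKey(fmt.Sprintf("%08x", buf))
}

func main() {
	key := newDeposedKey() // pre-allocated in the "apply" node...
	fmt.Println(key)       // ...and passed verbatim to the destroy node
}
```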
Martin Atkins
0a97daf3de core: Always update resource metadata in state during apply
Previously we had a bug where we would fail to populate resource-level
metadata in the state during apply when count = 0, because the apply
graph would contain only instance nodes, not whole-resource nodes.

To address this, we add to the apply graph a node for each resource in
the configuration alongside the separate resource instance nodes. This
node's job is just to populate the state metadata for the resource, which
ensures it gets updated correctly even when count = 0.

When count is not zero this ends up doing some redundant work that
would've happened as a side-effect of applying individual resource
instances anyway, but it's harmless and makes the updating of our
resource-level metadata more explicit.
2018-10-16 19:14:11 -07:00
Martin Atkins
7d760c09fb core: Update EvalCountFixZeroOneBoundaryGlobal for new state types 2018-10-16 19:14:11 -07:00
Martin Atkins
a3403f2766 terraform: Ugly huge change to weave in new State and Plan types
Due to how often the state and plan types are referenced throughout
Terraform, there isn't a great way to switch them out gradually. As a
consequence, this huge commit gets us from the old world to a _compilable_
new world, but still has a large number of known test failures due to
key functionality being stubbed out.

The stubs here are for anything that interacts with providers, since we
now need to do the follow-up work to similarly replace the old
terraform.ResourceProvider interface with its replacement in the new
"providers" package. That work, along with work to fix the remaining
failing tests, will follow in subsequent commits.

The aim here was to replace all references to terraform.State and its
downstream types with states.State, terraform.Plan with plans.Plan,
state.State with statemgr.State, and switch to the new implementations of
the state and plan file formats. However, due to the number of times those
types are used, this also ended up affecting numerous other parts of core
such as terraform.Hook, the backend.Backend interface, and most of the CLI
commands.

Just as with 5861dbf3fc49b19587a31816eb06f511ab861bb4 before, I apologize
in advance to the person who inevitably just found this huge commit while
spelunking through the commit history.
2018-10-16 19:11:09 -07:00
Martin Atkins
71cedf19a4 core: Don't create indirect provider dependencies for references
The prior commit changed the schema-access model so that all schemas are
fetched up front during context creation and are then readily available
for use throughout graph building and evaluation.

As a result, we no longer need to create dependency edges to a provider
when one of its resources is referenced by another node, and so the
ProviderTransformer needs only to worry about direct ownership
dependencies.

This also avoids the need for us to run AttachSchemaTransformer twice,
since ProviderTransformer no longer needs schema and we can therefore
defer attaching until just before ReferenceTransformer, when all of the
referencable and referencing nodes are already present in the graph.
2018-10-16 18:49:20 -07:00
Martin Atkins
f7aa06726a core: Run AttachSchemaTransformer twice to catch provider nodes too
Both ProviderTransformer and ReferenceTransformer need schema information,
and so there's a chicken-and-egg problem here where previously the schemas
were not getting attached to provider nodes created during
ProviderTransformer.

As a stop-gap measure for now we'll just run AttachSchemaTransformer
twice, so we can catch any new nodes created during the provider
transforms.
2018-10-16 18:49:20 -07:00
Martin Atkins
88b5607a7a core: Fetch schemas during context construction
Previously we fetched schemas during the AttachSchemaTransformer,
potentially multiple times as that was re-run for each graph built. Now
we fetch the schemas just once during context construction, passing that
result into each of the graph builders.

This only addresses the schema accesses during graph construction. We're
still separately loading schemas during the main walk for evaluation
purposes. This will be addressed in a later commit.
2018-10-16 18:49:20 -07:00
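The fetch-once approach in schematic form, with invented types standing in for the real schema and builder structures: schemas are loaded a single time during context construction and the same result is handed to every graph builder.

```go
package main

import "fmt"

// Schemas holds pre-fetched provider schemas (stand-in type).
type Schemas struct {
	Providers map[string]string // provider name -> schema
}

// loadSchemas runs once at context creation. In real Terraform this would
// start each provider and ask it for its schema; here we record placeholders.
func loadSchemas(providerNames []string) *Schemas {
	s := &Schemas{Providers: map[string]string{}}
	for _, name := range providerNames {
		s.Providers[name] = "schema for " + name
	}
	return s
}

type graphBuilder struct {
	schemas *Schemas // shared, pre-fetched; never re-loaded per build
}

func main() {
	schemas := loadSchemas([]string{"aws", "random"}) // once, up front
	plan := graphBuilder{schemas: schemas}
	apply := graphBuilder{schemas: schemas}
	fmt.Println(plan.schemas == apply.schemas) // true: same result reused
}
```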
Martin Atkins
b144497041 core: Attach schemas before dealing with provider edges
Since ProviderTransformer now needs the schema in order to infer indirect
references to providers, we must run AttachSchemaTransformer before the
provider transformers in order to calculate the correct ordering of
operations.
2018-10-16 18:49:20 -07:00
James Bardin
4d2da4d733 connect non-resources to providers they reference
Any non-resource node (outputs, variables, locals) that references a
resource type must also be connected to that resource's provider. This
is required during apply, because the graph built from the diff may not
include the referenced resources, which are instead evaluated from the
state.

If the provider isn't present already, add a NodeEvalableProvider to
fetch the provider schema.

The provider transformers now need to happen after the outputs, locals,
and variables are transformed.
2018-10-16 18:49:20 -07:00
Martin Atkins
bec0f56808 core: Pass components through to the destroy transformers
These transformers both construct temporary graphs using many of the same
transformers used in the apply graph, and properly doing this now requires
access to the providers and provisioners in order to obtain their schemas.

Along with this, we also update the tests here to use the
simpleMockComponentFactory helper to get a mock provider with a schema
already configured, which means we also need to update the test fixtures
and assertions to use the resource type and attributes defined in that
mock factory.
2018-10-16 18:48:28 -07:00
Martin Atkins
d4285dd27f core: Attach resource and provider config schemas during graph build
This is a little awkward since we need to instantiate the providers much
earlier than before. To avoid a lot of reshuffling here we just spin each
one up and then immediately shut it down again, letting our existing init
functionality during the graph walk still do the main initialization.
2018-10-16 18:46:46 -07:00
Martin Atkins
c937c06a03 terraform: ugly huge change to weave in new HCL2-oriented types
Due to how deeply the configuration types go into Terraform Core, there
isn't a great way to switch out to HCL2 gradually. As a consequence, this
huge commit gets us from the old state to a _compilable_ new state, but
does not yet attempt to fix any tests and has a number of known missing
parts and bugs. We will continue to iterate on this in forthcoming
commits, heading back towards passing tests and making Terraform
fully-functional again.

The three main goals here are:
- Use the configuration models from the "configs" package instead of the
  older models in the "config" package, which is now deprecated and
  preserved only to help us write our migration tool.
- Do expression inspection and evaluation using the functionality of the
  new "lang" package, instead of the Interpolator type and related
  functionality in the main "terraform" package.
- Represent addresses of various objects using types in the addrs package,
  rather than hand-constructed strings. This is not critical to support
  the above, but was a big help during the implementation of these other
  points since it made it much more explicit what kind of address is
  expected in each context.

Since our new packages are built to accommodate some future planned
features that are not yet implemented (e.g. the "for_each" argument on
resources, "count"/"for_each" on modules), and since there's still a fair
amount of functionality still using old-style APIs, there is a moderate
amount of shimming here to connect new assumptions with old, hopefully in
a way that makes it easier to find and eliminate these shims later.

I apologize in advance to the person who inevitably just found this huge
commit while spelunking through the commit history.
2018-10-16 18:46:46 -07:00
James Bardin
99867f0082 add PruneUnusedValuesTransformer
Since output and local value nodes are always evaluated, if they
reference a resource from the configuration that isn't in the state, the
interpolation could fail.

Prune any local or output values that have no references in the graph.
2018-01-30 10:47:17 -05:00
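A sketch of the pruning rule over a simplified graph representation: repeatedly remove value nodes that nothing references, since removing one value may orphan another.

```go
package main

import "fmt"

// graph is a simplified stand-in: a node set plus referencer -> referenced edges.
type graph struct {
	nodes map[string]bool
	edges map[string][]string
}

// referencers counts how many nodes still reference the target.
func (g *graph) referencers(target string) int {
	n := 0
	for _, deps := range g.edges {
		for _, d := range deps {
			if d == target {
				n++
			}
		}
	}
	return n
}

// pruneUnusedValues removes value nodes with no remaining referencers,
// looping until a fixed point is reached.
func (g *graph) pruneUnusedValues(valueNodes []string) {
	for changed := true; changed; {
		changed = false
		for _, v := range valueNodes {
			if g.nodes[v] && g.referencers(v) == 0 {
				delete(g.nodes, v)
				delete(g.edges, v)
				changed = true
			}
		}
	}
}

func main() {
	g := &graph{
		nodes: map[string]bool{"local.a": true, "local.b": true, "aws_instance.web": true},
		edges: map[string][]string{"local.b": {"local.a"}},
	}
	g.pruneUnusedValues([]string{"local.a", "local.b"})
	fmt.Println(len(g.nodes)) // 1: both locals were unreferenced, directly or transitively
}
```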
James Bardin
d31fe5ab9d delete outputs during destroy
Now that outputs are always evaluated, we still need a way to remove
them from state when they are destroyed.

Previously, outputs were removed during destroy from the same
"Applyable" node type that evaluates them. Now that we need to possibly
both evaluate and remove output during an apply, we add a new node -
NodeDestroyableOutput.

This new node is added to the graph by the DestroyOutputTransformer,
which makes the new destroy node depend on all descendants of the output
node. This ensures that the output remains in the state as long as
everything which may interpolate the output still exists.
2018-01-29 19:30:04 -05:00
James Bardin
7da1a39480 always evaluate locals, even during destroy
Destroy-time provisioners require us to re-evaluate during destroy.

Rather than destroying local values, which doesn't do much since they
aren't persisted to state, we always evaluate them regardless of the
type of apply. Since the destroy-time local node is no longer a
"destroy" operation, the order of evaluation need to be reversed. Take
the existing DestroyValueReferenceTransformer and change it to reverse
the outgoing edges, rather than in incoming edges. This makes it so that
any dependencies of a local or output node are destroyed after
evaluation.

Having locals evaluated during destroy failed one other test, but that
was the odd case where we need `id` to exist as an attribute as well as
a field.
2018-01-29 16:16:41 -05:00
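The edge-reversal idea, sketched over a minimal edge list: instead of reversing a value node's incoming edges, flip its outgoing ones, so anything the local or output depends on is destroyed only after the value has been evaluated.

```go
package main

import "fmt"

type edge struct{ from, to string }

// reverseOutgoing flips every edge that originates at the given value node,
// so the node's dependencies now wait on the value's evaluation.
func reverseOutgoing(edges []edge, valueNode string) []edge {
	out := make([]edge, len(edges))
	for i, e := range edges {
		if e.from == valueNode {
			out[i] = edge{from: e.to, to: e.from}
		} else {
			out[i] = e
		}
	}
	return out
}

func main() {
	edges := []edge{{from: "local.name", to: "aws_instance.web (destroy)"}}
	fmt.Println(reverseOutgoing(edges, "local.name"))
}
```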
James Bardin
7e4dcdb9f0 run RemovedModuleTransformer before References
Also add RemovedModuleTransformer to the plan graph for parity.
2017-11-09 10:34:56 -05:00
James Bardin
15ea04af8a remove modules from state
Remove the module entry from the state if a module is no longer in the
configuration. Modules are not removed if there are any existing
resources with the module path as a prefix. The only time this should be
the case is if a module was removed in the config, but the apply didn't
target that module.

Create a NodeModuleRemoved and an associated EvalDeleteModule to track
the module in the graph and then remove it from the state. The
NodeModuleRemoved dependencies are simply any other nodes whose paths
contain the module path as a prefix.

This probably could have been done much more easily as a step in
pruning the state, but modules are going to have to be promoted to full
graph nodes anyway in order to support count.
2017-11-08 19:11:53 -05:00
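The prefix-based dependency rule, sketched with string paths (the real code works on structured module paths): a "module removed" node depends on every node inside the removed module, so the module entry is deleted from state only after everything within it has been handled.

```go
package main

import (
	"fmt"
	"strings"
)

// dependsOnRemovedModule reports whether a node's path lies inside the
// removed module. The "+." guard avoids false prefix matches between
// sibling modules with similar names.
func dependsOnRemovedModule(modulePath, nodePath string) bool {
	return nodePath == modulePath || strings.HasPrefix(nodePath, modulePath+".")
}

func main() {
	removed := "module.network"
	for _, p := range []string{
		"module.network.aws_subnet.main", // inside: a dependency
		"module.networking.aws_vpc.main", // different module: not a dependency
	} {
		fmt.Println(p, dependsOnRemovedModule(removed, p))
	}
}
```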
James Bardin
2f91007999 group the provider transformations
The series of provider transformations is important, and often repeated.
Group these together in a single transform function.
2017-11-02 15:00:06 -04:00
James Bardin
0986d01223 add providers directly from the configuration
The first step in only using the required provider nodes in a graph is
to be able to specifically add them from the configuration.

The MissingProviderTransformer was previously responsible for adding
all providers. Now it is really just adding any that are missing from
the config.
2017-11-02 15:00:06 -04:00
James Bardin
35c6a4e89d add DestroyValueReferenceTransformer
DestroyValueReferenceTransformer is used during destroy to reverse the
edges for output and local values. Because destruction is going to
remove these from the state, nodes that depend on their value need to be
visited first.
2017-10-02 16:20:29 -04:00
Martin Atkins
3a30bfe845 core: evaluate locals and return them for interpolation
We stash the locals in the module state in a map that is ignored for JSON
serialization. We don't include locals in the persisted state because they
can be trivially recomputed and this allows us to assume that they will
pass through verbatim, without any normalization or other transforms
caused by the JSON serialization.

From a user standpoint a local is just a named alias for an expression,
so it's desirable that the result passes through here in as raw a form
as possible, so it behaves as closely as possible to simply using the
given expression directly.
2017-08-21 15:15:25 -07:00
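A minimal illustration of the serialization choice described here: stash locals in the module state under a field the JSON encoder skips, so they pass through verbatim in memory but are never persisted. Field names are illustrative, not the real ModuleState layout:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ModuleState is an illustrative stand-in for the module state.
type ModuleState struct {
	Outputs map[string]string      `json:"outputs"`
	Locals  map[string]interface{} `json:"-"` // trivially recomputed; never persisted
}

func main() {
	mod := ModuleState{
		Outputs: map[string]string{"addr": "10.0.0.1"},
		Locals:  map[string]interface{}{"name": "web"},
	}
	out, _ := json.Marshal(mod)
	fmt.Println(string(out)) // {"outputs":{"addr":"10.0.0.1"}} — no locals
}
```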
Sander van Harmelen
051582d32a Add the close provider and provisioner transformers (#13102) 2017-04-12 23:25:15 +02:00
Mitchell Hashimoto
4d6085b46a
terraform: outputs should not be included if not targeted
Fixes #10911

Outputs that aren't targeted shouldn't be included in the graph.

This requires passing targets to the apply graph. This is unfortunate
but long term should be removable since I'd like to move output changes
to the diff as well.
2017-02-13 12:52:45 -08:00
Mitchell Hashimoto
e9f6c9c429
terraform: run destroy provisioners on destroy 2017-01-20 18:07:51 -08:00
Mitchell Hashimoto
14d079f914
terraform: destroy resources in dependent providers first
Fixes #4645

This is something that never worked (even in legacy graphs), but as we
push forward towards encouraging multi-provider usage especially with
things like the Vault data source, I want to make sure we have this
right for 0.8.

When you have a config like this:

```
resource "foo_type" "name" {}
provider "bar" { attr = "${foo_type.name.value}" }
resource "bar_type" "name" {}
```

Then the destruction ordering MUST be:

  1. `bar_type`
  2. `foo_type`

Since configuring the client for `bar_type` requires accessing data from
`foo_type`. Prior to this PR, these two would be done in parallel. This
properly pushes forward the dependency.

There are more cases I want to test but this is a basic case that is
fixed.
2016-12-10 20:11:24 -05:00
Mitchell Hashimoto
26ac58bc97
terraform: refactor NodeApplyableProvider to use NodeAbstractProvider
This is important so that the graph looks correct.
2016-12-03 15:27:38 -08:00
Mitchell Hashimoto
fb8f2e2753
terraform: new Graph API that can return the graph for each op 2016-12-02 22:56:22 -05:00
James Bardin
3df3b99276 Make sure each GraphBuilder has a Name
Ensure that each instance of BasicGraphBuilder gets a name corresponding
to the Builder which created it. This allows us to differentiate the
graphs in the logs.
2016-11-15 16:40:10 -05:00
Mitchell Hashimoto
a5df3973a4
terraform: module variables should be pruned if nothing depends on them 2016-11-04 18:58:03 -07:00
James Bardin
797a1b339d DebugInfo and DebugGraph
Implement debugInfo and the DebugGraph

DebugInfo will be a global variable through which graph debug
information can we written to a compressed archive. The DebugInfo
methods are all safe for concurrent use, and noop with a nil receiver.
The API outside of the terraform package will be to call SetDebugInfo
to create the archive, and CloseDebugInfo() to properly close the file.
Each write to the archive will be flushed and sync'ed individually, so
in the event of a crash or a missing call to Close, the archive can
still be recovered.

The DebugGraph is a representation of a terraform Graph to be written to
the debug archive, currently in dot format. The DebugGraph also contains
an internal buffer with Printf and Write methods to add to this buffer.
The buffer will be written to an accompanying file in the debug archive
along with the graph.

This also adds a GraphNodeDebugger interface. Any node implementing
`NodeDebug() string` can output information to annotate the debug graph
node, and add the data to the log. This interface may change or be
removed to provide richer options for debugging graph nodes.

The new graph builders all delegate the build to the BasicGraphBuilder.
Having a Name field lets us differentiate the actual builder
implementation in the debug graphs.
2016-11-04 11:30:51 -04:00
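A simplified stand-in for the archive mechanics described here, using only the Go standard library: each graph dump becomes its own entry in a gzipped tar archive, flushed and synced immediately so the archive stays recoverable after a crash. This is not the DebugInfo implementation itself.

```go
package main

import (
	"archive/tar"
	"compress/gzip"
	"os"
	"time"
)

func main() {
	f, err := os.Create("debug.tar.gz")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	gz := gzip.NewWriter(f)
	tw := tar.NewWriter(gz)

	// One graph rendered in dot format, written as a single archive entry.
	dot := []byte("digraph {\n  \"root\" -> \"aws_instance.web\"\n}\n")
	hdr := &tar.Header{
		Name:    "graphs/apply.dot",
		Mode:    0o644,
		Size:    int64(len(dot)),
		ModTime: time.Now(),
	}
	if err := tw.WriteHeader(hdr); err != nil {
		panic(err)
	}
	if _, err := tw.Write(dot); err != nil {
		panic(err)
	}

	// Flush and sync after every entry so a crash or a missing Close
	// still leaves a readable archive, as the commit message describes.
	tw.Flush()
	gz.Flush()
	f.Sync()

	tw.Close()
	gz.Close()
}
```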
Mitchell Hashimoto
5a8ec482a2
terraform: unify destroy/apply graph builders
They're so similar that we unify them; they differ in only a select few
places. This is very similar to the old graph but is still much simpler.
2016-10-22 12:12:30 -07:00
Mitchell Hashimoto
e4ef1fe553
terraform: disable providers in new apply graph
This adds the proper logic for "disabling" providers to the new apply
graph: interpolating and storing the config for inheritance but not
actually initializing and configuring the provider.

This is important since parent modules will often contain incomplete
provider configurations for the purpose of inheritance, which would
error if Terraform actually attempted to configure them (since they're
incomplete). If the provider is not used, it should be "disabled".
2016-10-19 14:54:00 -07:00
Mitchell Hashimoto
c1664d2eaa
terraform: cbd works! 2016-10-19 13:38:53 -07:00
Mitchell Hashimoto
aaee4df363
terraform: working on enabling CBD, some cycles 2016-10-19 13:38:53 -07:00
Mitchell Hashimoto
6622ca001d
terraform: abstract resource nodes 2016-10-19 13:38:53 -07:00
Mitchell Hashimoto
311d27108e
terraform: Enable DestroyEdgeTransformer 2016-10-19 13:38:52 -07:00
Mitchell Hashimoto
bd5d97f9f5
terraform: transform to attach resource configs 2016-10-19 13:38:52 -07:00
Mitchell Hashimoto
ebc7d209a7
terraform: new graph fixes ".0" and "" boundaries on counts 2016-10-19 13:38:52 -07:00
Mitchell Hashimoto
2e8cb94a5e
terraform: orphan outputs are deleted from the state 2016-10-19 13:38:52 -07:00