Because CBD now runs after a RootTransformer, it operates on a graph
that _may_ have had a graphNodeRoot added to it (a noop node whose only
purpose is to be a root).
CBD includes a step that tells the destroy node to depend on any parents
of the create node. When one of those parents was "root", this was
causing the destroy node to depend on "root", making it cease to be an
actual root node.
Because graphNodeRoot is a singleton, the follow-up RootTransformer was
not sufficient to slap another root on top - it wasn't being seen as a
fresh node, so edges were just accumulating, and we ended up in a state
with "no roots".
refs #1903 (not sure if this will fix all the "no root found" cases, or
just the one I bumped into)
fixes #1947
Root cause was a bad edge made by the CBD transform, going from the
flattened destroy node to the unflattened create node, which was no
longer in the graph. The destroy node therefore had a dependency that
could never be satisfied, which locked up the walk.
Adds the ability to target resources within modules, like:
module.mymod.aws_instance.foo
And the ability to target all resources inside a module, like:
module.mymod
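For example, via the `-target` flag:

terraform plan -target=module.mymod.aws_instance.foo
terraform plan -target=module.mymod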
Closes #1434
The `TargetTransform` was dropping provisioner nodes, which caused graph
validation to fail with messages about uninitialized provisioners when a
`terraform destroy` was attempted.
This was because `destroy` flips the dependency calculation to address
any nodes in the graph that "depend on" the target node. But we still
need to keep the provisioner node in the graph.
Here we switch the strategy for filtering nodes to only drop
addressable, non-targeted nodes. This should prevent us from having to
whitelist nodes to keep in the future.
closes #1541
Most CBD-related cycles include destroy nodes, and destroy nodes were
all being pruned from the graph before starting the Validate walk.
In practice this meant that we had scenarios that would error out with
graph cycles on Apply that _seemed_ fine during Plan.
This introduces a Verbose option to the GraphBuilder that tells it to
generate a "worst-case" graph. Validate sets this to true so that cycle
errors will always trigger at this step if they're going to happen.
(This Verbose option will be exposed as a CLI flag to `terraform graph`
in a follow-up PR.)
refs #1651
Adds an "alias" field to the provider which allows creating multiple instances
of a provider under different names. This provides support for configurations
such as multiple AWS providers for different regions. In each resource, the
provider can be set with the "provider" field.
(thanks to Cisco Cloud for their support)
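A minimal sketch of the usage (regions and resource name are
illustrative):

    provider "aws" {
      region = "us-east-1"
    }

    provider "aws" {
      alias  = "west"
      region = "us-west-2"
    }

    resource "aws_instance" "foo" {
      # created in us-west-2 via the aliased provider
      provider = "aws.west"
      # ...
    }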
When the `prevent_destroy` flag is set on a resource, any plan that
would destroy that resource instead returns an error. This has the
effect of preventing the resource from being unexpectedly destroyed by
Terraform until the flag is removed from the config.
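For example (resource name illustrative):

    resource "aws_db_instance" "main" {
      # ...

      lifecycle {
        prevent_destroy = true
      }
    }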
The provider config was not being properly merged across module
boundaries during the Input walk over the graph, so when a provider was
configured at the top level, resources in modules could incorrectly
trigger a prompt for a provider attribute that was already defined.
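A minimal sketch of the scenario (paths and names illustrative):

    # root config: provider fully configured here
    provider "aws" {
      region = "us-west-2"
    }

    module "child" {
      source = "./child" # contains aws_* resources
    }

Before this fix, the Input walk over the module's resources could still
prompt for provider attributes the root config already sets.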
Instead of returning UnknownVariableValue every time, attempt to return
the real value; if we can't find it, fall back to the unknown value.
This fixes removing outputs from state on refresh.
Only used in targets for now. The plan is to use this for interpolation
as well.
This allows us to target:
* individual resources expanded by `count` using bracket / index notation.
* deposed / tainted resources with an `InstanceType` field after the name.
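For example (resource name illustrative; `deposed` being one of the
instance types alongside `primary` and `tainted`):

aws_instance.web[2]
aws_instance.web.deposed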
Docs to follow.
Add `-target=resource` flag to core operations, allowing users to
target specific resources in their infrastructure. When `-target` is
used, the operation will only apply to that resource and its
dependencies.
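For example (resource name illustrative):

terraform plan -target=aws_instance.foo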
The calculated dependencies are different depending on whether we're
running a normal operation or a `terraform destroy`.
Generally, "dependencies" refers to ancestors: resources falling
_before_ the target in the graph, because their changes are required to
accurately act on the target.
For destroys, "dependencies" are descendants: those resources which fall
_after_ the target. These resources depend on our target, which is going
to be destroyed, so they should also be destroyed.
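For example (names illustrative): if aws_instance.web depends on
aws_security_group.sg, then targeting aws_instance.web for a plan or
apply pulls in aws_security_group.sg as well, while running
`terraform destroy -target=aws_security_group.sg` also destroys
aws_instance.web, since it can't survive without its security group.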
Deposed instances need to be stored as a list for certain pathological
cases where destroys fail for some reason (e.g. upstream API failure,
Terraform interrupted mid-run). Terraform needs to be able to remember
all Deposed nodes so that it can clean them up properly in subsequent
runs.
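For example, with create_before_destroy: run 1 creates a replacement
instance and deposes the old one, but destroying the deposed instance
fails; run 2 replaces and deposes again before cleanup succeeds, leaving
two deposed instances that Terraform must remember and destroy later.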
Deposed instances will now never touch the Tainted list - they're fully
managed from within their own list.
Added a "multiDepose" test case that walks through a scenario to
exercise this.