If a data source has explicit dependencies in `depends_on`, we can
assume the user has added those because of a dependency not tracked
directly in the config. If there are any entries in `depends_on`, don't
apply the data source early during Refresh.
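As a rough sketch, the check reduces to something like the following (the type and function names here are illustrative stand-ins, not Terraform's actual API):

```go
package main

import "fmt"

// dataSourceConfig is an illustrative stand-in for the real config node.
type dataSourceConfig struct {
	Name      string
	DependsOn []string
}

// canReadDuringRefresh reports whether a data source may be read early,
// during Refresh, rather than deferred to the plan/apply cycle.
func canReadDuringRefresh(cfg *dataSourceConfig) bool {
	// Any explicit depends_on entry signals a dependency Terraform cannot
	// see in the config, so the early read must be skipped.
	return len(cfg.DependsOn) == 0
}

func main() {
	cfg := &dataSourceConfig{
		Name:      "data.template_file.example",
		DependsOn: []string{"aws_instance.app"},
	}
	fmt.Println(canReadDuringRefresh(cfg)) // false: defer until apply
}
```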
This adds a new function to get a unique identifier scoped to the graph
walk in order to identify operations against the same instance. This is
used by the shadow to namespace provider function calls.
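A minimal sketch of a walk-scoped counter along these lines (names are illustrative; the real plumbing hangs off the walk's context):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// walk is an illustrative stand-in for per-graph-walk state.
type walk struct {
	counter uint64
}

// uniqueID mints an identifier scoped to this walk, so repeated operations
// against the same instance (e.g. shadowed provider calls) can be
// namespaced without colliding across walks.
func (w *walk) uniqueID(prefix string) string {
	return fmt.Sprintf("%s-%d", prefix, atomic.AddUint64(&w.counter, 1))
}

func main() {
	w := &walk{}
	fmt.Println(w.uniqueID("provider.aws")) // provider.aws-1
	fmt.Println(w.uniqueID("provider.aws")) // provider.aws-2
}
```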
This set of changes addresses two bug scenarios:
(1) When an ignored change canceled a resource replacement, any
downstream resources referencing computed attributes on that resource
would get "diffs didn't match" errors. This happened because the
`EvalDiff` implementation was calling `state.MergeDiff(diff)` on the
unfiltered diff. Generally this is what you want, so that downstream
references catch the "incoming" values. When there's a potential for the
diff to change, though, this results in problems with references.
Here we solve this by doing away with the separate `EvalNode` for
`ignore_changes` processing and integrating it into `EvalDiff`. This
allows us to only call `MergeDiff` with the final, filtered diff.
(2) When a resource had an ignored change but was still being replaced
anyway, the diff was being improperly filtered. This would cause
problems during apply when not all attributes were available to perform
the replacement.
We solve that by deferring actual attribute removal until after we've
decided that we do not have to replace the resource.
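Roughly, the resulting ordering looks like the sketch below, using trimmed stand-in types; it glosses over the step where ignored attributes can cancel the replacement itself:

```go
package sketch

// diff and state are trimmed stand-ins for Terraform's real types.
type diff struct {
	Attributes  map[string]string
	RequiresNew bool
}

type state struct {
	Attributes map[string]string
}

// mergeDiff overlays a diff's attributes onto the state, as MergeDiff does.
func (s *state) mergeDiff(d *diff) *state {
	out := &state{Attributes: map[string]string{}}
	for k, v := range s.Attributes {
		out.Attributes[k] = v
	}
	for k, v := range d.Attributes {
		out.Attributes[k] = v
	}
	return out
}

// planWithIgnoreChanges filters ignored attributes first and merges second,
// so downstream references never see values the filter later removes.
// Attribute removal is skipped entirely while the resource is still being
// replaced, because the replacement needs the full attribute set.
func planWithIgnoreChanges(s *state, raw *diff, ignore map[string]bool) *state {
	final := raw
	if !raw.RequiresNew {
		final = &diff{Attributes: map[string]string{}}
		for k, v := range raw.Attributes {
			if !ignore[k] {
				final.Attributes[k] = v
			}
		}
	}
	return s.mergeDiff(final)
}
```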
In #7170 we found two scenarios where the type checking done during the
`context.Validate()` graph walk was circumvented, and the subsequent
assumption of type safety in the provider's `Diff()` implementation
caused panics.
Both scenarios have to do with interpolations that reference Computed
values. The sentinel we use to indicate that a value is Computed does
not carry any type information with it yet.
That means that an incorrect reference to a list or a map in a string
attribute can "sneak through" validation only to crop up...
1. ...during Plan for data source references
2. ...during Apply for resource references
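To make the failure mode concrete, here is a toy illustration; the sentinel value below is a stand-in, but like the real one it is a bare string with no type attached:

```go
package main

import "fmt"

// unknownValue is an illustrative sentinel; Terraform's real one is a
// constant string in its config package.
const unknownValue = "<computed>"

func main() {
	attrs := map[string]string{
		// A computed list collapses to the same sentinel as a computed
		// string, so a string-context reference to a list attribute
		// passes Validate and only blows up later in Diff or Apply.
		"security_groups.#": unknownValue,
		"ami":               unknownValue,
	}
	fmt.Println(attrs["security_groups.#"] == attrs["ami"]) // true: indistinguishable
}
```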
In order to address this, we:
* add high-level tests for each of these two scenarios in `provider/test`
* add context-level tests for the same two scenarios in `terraform`
(these tests proved _really_ tricky to write!)
* place an `EvalValidateResource` just before `EvalDiff` and `EvalApply` to
catch these errors
* add some plumbing to `Plan()` and `Apply()` to return validation
errors, which were previously only generated during `Validate()`
* wrap unit-tests around `EvalValidateResource`
* add an `IgnoreWarnings` option to `EvalValidateResource` to prevent
active warnings from halting execution on the second-pass validation
Eventually, we might be able to attach type information to Computed
values, which would allow for these errors to be caught earlier. For
now, this solution keeps us safe from panics and raises the proper
errors to the user.
Fixes #7170
During acceptance tests of some of the first data sources (see
hashicorp/terraform#6881 and hashicorp/terraform#6911),
"unknown resource type" errors have been coming up. These trace down to
the ResourceCountTransformer, which transforms destroy nodes to a
graphNodeExpandedResourceDestroy node. This node's EvalTree() was still
indiscriminately using EvalApply for all resource types, versus
EvalReadDataApply. This change accounts for both cases via EvalIf.
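The dispatch amounts to something like this sketch (simplified types; the real branch lives inside the node's EvalTree):

```go
package main

import "fmt"

type resourceMode int

const (
	managedResourceMode resourceMode = iota
	dataResourceMode
)

// applyNodeFor mirrors the EvalIf branch: data resources must run
// EvalReadDataApply rather than the managed-resource EvalApply.
func applyNodeFor(mode resourceMode) string {
	if mode == dataResourceMode {
		return "EvalReadDataApply"
	}
	return "EvalApply"
}

func main() {
	fmt.Println(applyNodeFor(dataResourceMode)) // EvalReadDataApply
}
```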
Previously the plan phase would produce a data diff only if no state was
already present. However, this is a faulty approach because a state will
already be present in the case where the data resource depends on a
managed resource that existed in state during refresh but became
computed during plan, due to a "forces new resource" diff.
Now we will produce a data diff regardless of the presence of the state
when the configuration is computed during the plan phase.
This fixes #6824.
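For reference, the revised rule reduces to roughly this (illustrative names):

```go
package sketch

// needsDataDiff captures the revised rule: a data diff is produced when no
// state exists (the old behavior) OR whenever the configuration is computed
// during plan, e.g. because an upstream managed resource is being replaced.
func needsDataDiff(hasState, configComputed bool) bool {
	return !hasState || configComputed
}
```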
This means it’s shown correctly in a plan and takes into account any
actions that are dependent on the tainted resource and, vice versa, any
actions that the tainted resource depends on.
So this changes the behaviour from saying "this resource is tainted, so
just forget about it and make sure it gets deleted in the background" to
saying "I want that resource to be recreated" (taking into account the
existing resource and its place in the graph).
Previously the "planDestroy" pass would correctly produce a destroy diff,
but the "apply" pass would just ignore it and make a fresh diff, turning
it back into a "create" because data resources are always eager to
refresh.
Now we consider the previous diff when re-diffing during apply and so
we can preserve the plan to destroy and then ultimately "destroy"
the data resource (remove from the state) when we get to ReadDataApply.
This ensures that the state is left empty after "terraform destroy";
previously we would leave behind data resource states.
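Sketched with trimmed stand-ins, the apply-time re-diff now looks something like:

```go
package sketch

// diff is a trimmed stand-in for Terraform's instance diff.
type diff struct {
	Destroy bool
}

// reDiffDuringApply consults the planned diff before re-diffing. Previously
// apply discarded the plan and eagerly re-read, turning a planned destroy
// back into a create; now a planned destroy survives through to
// ReadDataApply, which removes the data resource from state.
func reDiffDuringApply(planned *diff, freshDiff func() *diff) *diff {
	if planned != nil && planned.Destroy {
		return planned
	}
	return freshDiff()
}
```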
The ResourceAddress struct grows a new "Mode" field to match with
Resource, and its parser learns to recognize the "data." prefix so it
can set that field.
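A minimal sketch of the prefix handling (illustrative; the real parser also deals with module paths, count indexes, and instance types):

```go
package main

import (
	"fmt"
	"strings"
)

type resourceMode int

const (
	managedResourceMode resourceMode = iota
	dataResourceMode
)

// resourceAddress is a trimmed stand-in showing only the new Mode field
// and the fields needed for the prefix parse.
type resourceAddress struct {
	Mode resourceMode
	Type string
	Name string
}

// parseAddress recognizes the "data." prefix and sets Mode accordingly.
func parseAddress(s string) resourceAddress {
	mode := managedResourceMode
	if strings.HasPrefix(s, "data.") {
		mode = dataResourceMode
		s = strings.TrimPrefix(s, "data.")
	}
	parts := strings.SplitN(s, ".", 2)
	return resourceAddress{Mode: mode, Type: parts[0], Name: parts[1]}
}

func main() {
	fmt.Printf("%+v\n", parseAddress("data.aws_ami.ubuntu")) // {Mode:1 Type:aws_ami Name:ubuntu}
}
```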
Allows -target to be applied to data sources, although that is arguably
not a very useful thing to do. Other future uses of resource addressing,
like the state plumbing commands, may be better uses of this.
Previously they would get left behind in the state because we had no
support for planning their destruction. Now we'll create a "destroy" plan
and act on it by just producing an empty state on apply, thus ensuring
that the data resources don't get left behind in the state after
everything else is gone.
This implements the main behavior of data resources, including both the
early read in cases where the configuration is non-computed and the split
plan/apply read for cases where full configuration can't be known until
apply time.
The key difference between data and managed resources is in their
respective lifecycles. Now the expanded resource EvalTree switches on
the resource mode, generating a different lifecycle for each mode.
For this initial change only managed resources are implemented, using the
same implementation as before; data resources are no-ops. The data
resource implementation will follow in a subsequent change.
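Schematically, the switch looks like the sketch below (illustrative; the real EvalTree returns eval nodes, not strings):

```go
package sketch

type resourceMode int

const (
	managedResourceMode resourceMode = iota
	dataResourceMode
)

// evalTree picks a lifecycle per mode. At this stage the data branch is
// deliberately empty; its real lifecycle lands in a follow-up change.
func evalTree(mode resourceMode) []string {
	switch mode {
	case dataResourceMode:
		return nil // no-op for now
	default:
		return []string{"EvalDiff", "EvalApply"} // managed lifecycle, unchanged
	}
}
```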
Data resources are a separate namespace of resources from managed
resources, so we need to call a different provider method depending on
what mode of resource we're visiting.
Managed resources use ValidateResource, while data resources use
ValidateDataSource, since at the provider level of abstraction each
provider has separate sets of resources and data sources respectively.
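The dispatch is roughly the following, with a trimmed stand-in for the provider interface:

```go
package sketch

// provider is a trimmed stand-in for the resource provider interface,
// which exposes separate validation entry points for each namespace.
type provider interface {
	ValidateResource(t string, c map[string]interface{}) ([]string, []error)
	ValidateDataSource(t string, c map[string]interface{}) ([]string, []error)
}

type resourceMode int

const (
	managedResourceMode resourceMode = iota
	dataResourceMode
)

// validateFor dispatches on mode, since a managed resource type and a data
// source type with the same name are entirely different things.
func validateFor(p provider, mode resourceMode, t string, c map[string]interface{}) ([]string, []error) {
	if mode == dataResourceMode {
		return p.ValidateDataSource(t, c)
	}
	return p.ValidateResource(t, c)
}
```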
The ignore_changes diff filter was stripping out attributes on Create
but the diff was still making it down to the provider, so Create would
end up missing attributes, causing a full failure if any required
attributes were being ignored.
In addition, any changes that required a replacement of the resource
were causing problems with `ignore_changes`, which didn't properly filter
out the replacement when the triggering attributes were filtered out.
Refs #5627
Adds the ability to target resources within modules, like:
module.mymod.aws_instance.foo
And the ability to target all resources inside a module, like:
module.mymod
Closes #1434
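Conceptually the matching is a prefix test over the address path, as in this illustrative sketch (the real implementation compares parsed ResourceAddress fields rather than raw strings):

```go
package main

import (
	"fmt"
	"strings"
)

// matchesTarget reports whether addr is selected by target: either an
// exact match, or addr nested under a module target like "module.mymod".
func matchesTarget(target, addr string) bool {
	return addr == target || strings.HasPrefix(addr, target+".")
}

func main() {
	fmt.Println(matchesTarget("module.mymod", "module.mymod.aws_instance.foo")) // true
	fmt.Println(matchesTarget("module.my", "module.mymod.aws_instance.foo"))    // false: no partial-name match
}
```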
Adds an "alias" field to the provider which allows creating multiple instances
of a provider under different names. This provides support for configurations
such as multiple AWS providers for different regions. In each resource, the
provider can be set with the "provider" field.
(thanks to Cisco Cloud for their support)
When the `prevent_destroy` flag is set on a resource, any plan that
would destroy that resource instead returns an error. This has the
effect of preventing the resource from being unexpectedly destroyed by
Terraform until the flag is removed from the config.
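The plan-time guard amounts to roughly this check (illustrative names):

```go
package sketch

import "errors"

// checkPreventDestroy mirrors the plan-time guard: a destroy diff against
// a resource with prevent_destroy set becomes a hard error instead.
func checkPreventDestroy(preventDestroy, diffDestroys bool) error {
	if preventDestroy && diffDestroys {
		return errors.New("plan would destroy a resource with prevent_destroy set; remove the flag to proceed")
	}
	return nil
}
```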
CBD (create_before_destroy) used to "borrow" the last slot of the
Tainted list in the State, but as of #1078 it uses its own list. This
seems like a block of
old code that didn't get cleaned up from the refactor. Deposed resources
get their own destroy nodes, so they don't have to mess with the main
destroy node.
Only used in targets for now. The plan is to use this for interpolation
as well.
This allows us to target:
* individual resources expanded by `count` using bracket / index notation.
* deposed / tainted resources with an `InstanceType` field after name
Docs to follow.
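In the meantime, a rough sketch of the bracket parse (illustrative; the real parser also handles the instance-type suffix):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// splitIndex pulls an optional [N] index off a target like
// "aws_instance.foo[2]"; -1 means "all instances".
func splitIndex(s string) (name string, index int) {
	open := strings.IndexByte(s, '[')
	if open == -1 || !strings.HasSuffix(s, "]") {
		return s, -1
	}
	n, err := strconv.Atoi(s[open+1 : len(s)-1])
	if err != nil {
		return s, -1
	}
	return s[:open], n
}

func main() {
	name, idx := splitIndex("aws_instance.foo[2]")
	fmt.Println(name, idx) // aws_instance.foo 2
}
```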
Deposed instances need to be stored as a list for certain pathological
cases where destroys fail for some reason (e.g. upstream API failure,
Terraform interrupted mid-run). Terraform needs to be able to remember
all Deposed nodes so that it can clean them up properly in subsequent
runs.
Deposed instances will now never touch the Tainted list - they're fully
managed from within their own list.
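The state shape after this change looks roughly like the following trimmed sketch:

```go
package sketch

// instanceState is a trimmed stand-in for a single instance's state.
type instanceState struct {
	ID string
}

// resourceState shows the shape after this change: Deposed is a list, so
// a failed destroy of one deposed instance never overwrites another, and
// all of them stay visible for cleanup on subsequent runs.
type resourceState struct {
	Primary *instanceState
	Deposed []*instanceState
}

// deposePrimary moves the current primary onto the Deposed list (e.g. for
// create_before_destroy) instead of occupying a single borrowed slot.
func (r *resourceState) deposePrimary() {
	if r.Primary != nil {
		r.Deposed = append(r.Deposed, r.Primary)
		r.Primary = nil
	}
}
```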
Added a "multiDepose" test case that walks through a scenario to
exercise this.