Fixes #10729
Destruction ordering wasn't taking into account ordering implied through
variables across module boundaries.
This is because, to build the destruction ordering, we create a
non-destruction graph to determine the _creation_ ordering (so we can
properly flip the edges). That creation graph wasn't including module
variables. This PR adds that transform to the graph.
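As a rough illustration (resource and module names here are hypothetical), this is the kind of cross-module reference whose implied destroy ordering was being lost:
```
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

module "app" {
  source = "./app"

  # The module consumes the VPC ID through a variable, so everything inside
  # the module must be destroyed before aws_vpc.main can be destroyed.
  vpc_id = "${aws_vpc.main.id}"
}
```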
Fixes #10711
The `ModuleVariablesTransformer` only adds module variables that are in
use. This was missing module variables used by providers, since we ran
the provider transforms too late. This moves the transformer and adds a
test for this.
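As a sketch of the case (hypothetical module and names), a provider inside a module configured from a module variable:
```
# ./child/main.tf
variable "region" {}

# The provider is configured from a module variable, so that variable must
# be present in the graph for the provider to be configurable.
provider "aws" {
  region = "${var.region}"
}

resource "aws_instance" "web" {
  ami           = "ami-abc123"
  instance_type = "t2.micro"
}
```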
Fixes #10680
This moves the TargetsTransformer to run after the transforms that add
module variables are run. This makes targeting work across modules (test
added).
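As a sketch (module and resource names are hypothetical), targeting an address such as `-target=module.child.aws_instance.web` now also pulls in the module variables the resource depends on:
```
# main.tf (root module)
module "child" {
  source        = "./child"
  instance_type = "t2.micro"
}

# ./child/main.tf
variable "instance_type" {}

resource "aws_instance" "web" {
  ami           = "ami-abc123"
  instance_type = "${var.instance_type}"
}
```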
This is a bug that only exists in the new graph, but was caught by a
shadow error in #10680. Tests were added to protect against regressions.
If a data source has explicit dependencies in `depends_on`, we can
assume the user has added those because of a dependency not tracked
directly in the config. If there are any entries in `depends_on`, don't
apply the data source early during Refresh.
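For illustration (resource and data source names are hypothetical), a configuration like this now defers reading the data source until apply:
```
resource "aws_instance" "db" {
  ami           = "ami-abc123"
  instance_type = "t2.micro"
}

data "template_file" "config" {
  # No direct reference to aws_instance.db, but the rendered value only
  # makes sense once the database exists, so don't read this early.
  depends_on = ["aws_instance.db"]
  template   = "db is ready"
}
```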
Fixes #4645
This is something that never worked (even in legacy graphs), but as we
push forward towards encouraging multi-provider usage especially with
things like the Vault data source, I want to make sure we have this
right for 0.8.
When you have a config like this:
```
resource "foo_type" "name" {}

provider "bar" {
  attr = "${foo_type.name.value}"
}

resource "bar_type" "name" {}
```
Then the destruction ordering MUST be:
1. `bar_type`
2. `foo_type`
Since configuring the provider client for `bar_type` requires accessing
data from `foo_type`. Prior to this PR, these two destroys would happen
in parallel. This PR properly propagates the dependency.
There are more cases I want to test but this is a basic case that is
fixed.
Fixes #8695
When a list count was computed in a multi-resource access
(`foo.bar.*.list`), we were returning the value as an empty string. I
don't actually know the historical reasoning for this, but it can't be
correct: we must return unknown.
When changing this to unknown, the new tests passed and none of the old
tests failed. This leads me to further believe that returning an empty
string is probably a holdover from long ago to just avoid crashes or
UUIDs in the plan output, and not actually the correct behavior.
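As a rough sketch of the shape involved (hypothetical names), a splat access to a list attribute that is still computed at plan time:
```
resource "aws_instance" "app" {
  count         = 2
  ami           = "ami-abc123"
  instance_type = "t2.micro"
}

output "groups" {
  # security_groups is a computed list attribute; at plan time this splat
  # access must be unknown, not an empty string
  value = ["${aws_instance.app.*.security_groups}"]
}
```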
Related to #8036
We have had this behavior for a _long_ time now (since 0.7.0), but it
seems people are still periodically getting bitten by it. This adds an
explicit error message explaining that this kind of override isn't
allowed anymore.
Fixes #10440
This updates the behavior of "apply" resources to depend on the
destroy versions of their dependencies.
We make an exception to this behavior when the "apply" resource is CBD.
This is odd and not 100% correct, but it mimics the behavior of the
legacy graphs and avoids us having to do major core work to support the
100% correct solution.
I'll explain this in examples...
Given the following configuration:
resource "null_resource" "a" {
count = "${var.count}"
}
resource "null_resource" "b" {
triggers { key = "${join(",", null_resource.a.*.id)}" }
}
Assume we've successfully created this configuration with count = 2.
When going from count = 2 to count = 1, `null_resource.b` should wait
for `null_resource.a.1` to destroy.
If it doesn't, then it is a race: depending when we interpolate the
`triggers.key` attribute of `null_resource.b`, we may get 1 value or 2.
If `null_resource.a.1` is destroyed, we'll get 1. Otherwise, we'll get
2. This was the root cause of #10440
In the legacy graphs, `null_resource.b` would depend on the destruction
of any `null_resource.a` (orphans, tainted, anything!). This would
ensure proper ordering. We mimic that behavior here.
The difference is CBD. If `null_resource.b` has CBD enabled, then the
ordering **in the legacy graph** becomes:
1. null_resource.b (create)
2. null_resource.b (destroy)
3. null_resource.a (destroy)
In this case, the update would always have 2 values for `triggers.key`,
even though we were destroying a resource later! This scenario required
two `terraform apply` operations.
This is what the CBD check is for in this PR. We do this to mimic the
behavior of the legacy graph.
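For reference, a minimal sketch of the CBD variant of the example above, which just adds a `lifecycle` block to `null_resource.b`:
```
resource "null_resource" "b" {
  triggers {
    key = "${join(",", null_resource.a.*.id)}"
  }

  lifecycle {
    create_before_destroy = true
  }
}
```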
The correct solution to do one day is to allow splat references
(`null_resource.a.*.id`) to happen in parallel and only read up to
the `count` amount in the state. This requires some fairly significant
work close to the 0.8 release date, so we can defer this to later and
adopt the 0.7.x behavior for now.
Fixes #10439
When a CBD resource depends on a non-CBD resource, the non-CBD resource
is auto-promoted to CBD. This was done in
cf3a259. This PR makes it so that we
also set the config CBD to true. This causes the proper runtime
execution behavior to occur where we depose state and so on.
So in addition to simple graph edge tricks we also treat the non-CBD
resources as CBD resources.
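As a sketch of the common pattern (resource types here are just an example), the non-CBD dependency is now treated as CBD at runtime as well, not only in the graph:
```
# No lifecycle block here, but because a CBD resource depends on it,
# this resource is auto-promoted to CBD and now also deposes state.
resource "aws_launch_configuration" "lc" {
  image_id      = "ami-abc123"
  instance_type = "t2.micro"
}

resource "aws_autoscaling_group" "asg" {
  launch_configuration = "${aws_launch_configuration.lc.name}"
  min_size             = 1
  max_size             = 2

  lifecycle {
    create_before_destroy = true
  }
}
```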
Fixes #10313
The new graph wasn't properly recording a resource's dependency on a
specific index of itself. For example: `foo.bar.2` depending on
`foo.bar.0` wasn't shown in the state when it should have been.
This adds a test to verify this and fixes it.
People with `uuid()` usage in their configurations would receive shadow
errors every time on plan because the UUID would change.
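For example, any configuration along these lines (a hypothetical use of `uuid()`) would trigger it:
```
resource "null_resource" "example" {
  triggers {
    # uuid() returns a different value on every evaluation, so the real
    # graph and the shadow graph can never agree on this value
    run_id = "${uuid()}"
  }
}
```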
This is a hacky fix but I also believe it is correct: if a shadow error
contains uuid() then we ignore the shadow error completely. This feels
wrong, but I'll explain why it is likely right:
The "right" feeling solution is to create deterministic random output
across graph runs. This would require using math/rand and seeding it
with the same value each run. However, this alone probably won't work
due to Terraform's parallelism and potential to call uuid() in different
orders. In addition to this, you can't seed crypto/rand, and it's
unlikely that we'll NEVER use crypto/rand in the future even if we
switched uuid() to use math/rand.
Therefore, the solution is simple: if there is no shadow error, no
problem. If there is a shadow error and it contains uuid(), then ignore
it.
This fixes: `TestContext2Apply_moduleDestroyOrder`
The new destroy graph wasn't properly creating edges that happened
_through_ an output; it was only creating the edges for _direct_
dependents.
To fix this, the DestroyEdgeTransformer now creates the full transitive
list of destroy edges by walking all ancestors. This will create more
edges than are necessary but also will no longer miss resources through
an output.
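A rough sketch of the shape involved (hypothetical names), where the dependency on the module's resource exists only through an output:
```
# ./child/main.tf
resource "aws_instance" "web" {
  ami           = "ami-abc123"
  instance_type = "t2.micro"
}

output "id" {
  value = "${aws_instance.web.id}"
}

# main.tf (root module)
module "child" {
  source = "./child"
}

resource "aws_eip" "ip" {
  # The destroy edge to aws_instance.web only exists through the module
  # output, so it has to be derived transitively.
  instance = "${module.child.id}"
}
```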
This detects computed counts (which we don't currently support) and
changes the error to a more informative one explaining that computed
counts aren't allowed. Prior to this, the error would instead be
something like `strconv.ParseInt: "${var.foo}" cannot be parsed as int`.
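A minimal sketch of the situation, assuming a hypothetical module variable `num` that the calling module feeds from a computed value:
```
variable "num" {}

resource "aws_instance" "app" {
  # If var.num comes from a value only known after apply, this count is
  # "computed"; the error now says so instead of a ParseInt failure.
  count         = "${var.num}"
  ami           = "ami-abc123"
  instance_type = "t2.micro"
}
```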
The map output from the module "mod" loses the computed value from the
template when we validate. If the "extra" field is removed from the map,
the validation fails earlier with the error: map "does not have any
elements so cannot determine type".
Apply will work, because the computed value will exist in the map.
Fixes #9920
This was an issue caught with the shadow graph. Self references in
provisioners were causing a self-edge on destroy apply graphs.
We need to explicitly check that we're not creating an edge to ourself.
This is also how the reference transformer works.
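For illustration (a hypothetical provisioner), a self reference like this was producing the self-edge on the destroy apply graph:
```
resource "aws_instance" "web" {
  ami           = "ami-abc123"
  instance_type = "t2.micro"

  provisioner "local-exec" {
    # "self" refers to this resource's own attributes and must not create
    # an edge from the resource back to itself
    command = "echo ${self.private_ip}"
  }
}
```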
Fixes a shadow graph error found during usage.
The new apply graph was only adding module variables that referenced
data that existed _in the graph_. This isn't a valid optimization since
the data it is referencing may be in the state with no diff, and
therefore available but not in the graph.
This just removes that optimization logic; doing so causes no test
failures. It also adds a test that would expose the bug if we still had
the pruning logic.
Found via a shadow graph failure:
Provider aliases weren't being configured by the new apply graph.
This was caused by the transform that attaches configs to provider nodes
not being able to handle aliases and therefore not attaching a config.
Added a test for this and fixed it.
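For reference (names hypothetical), this is the kind of configuration involved: an aliased provider plus a resource that selects it:
```
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

resource "aws_instance" "web" {
  # Selects the aliased provider, which the config-attachment transform
  # was failing to match.
  provider      = "aws.west"
  ami           = "ami-abc123"
  instance_type = "t2.micro"
}
```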
Fixes #9444
This appears to be a regression from 0.7.0, but there were no tests
covering it so we missed it and changed the behavior at some point! Oh
no.
This PR makes the ordering of multi-var access (`resource.name.*.attr`)
consistent: it is the ordering of the count, not the lexical ordering of
the value. This allows behavior where two lists are indexed by count
index and can be assumed to be related (for example user data for an AWS
instance, as shown in the above referenced issue).
Two new context tests added to cover this case.
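A small sketch of what this guarantees (hypothetical resources): splat lists are ordered by count index, so parallel lists line up:
```
resource "aws_instance" "web" {
  count         = 3
  ami           = "ami-abc123"
  instance_type = "t2.micro"
}

resource "aws_eip" "ip" {
  count = 3

  # aws_instance.web.*.id is ordered by count index, so ip[N] is reliably
  # attached to web[N]
  instance = "${element(aws_instance.web.*.id, count.index)}"
}
```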
Fixes #9840
The new apply graph wasn't properly nesting provisioners. This resulted
in the provisioners being nil when read during apply in the shadow graph,
which caused the crash in the above issue.
The actual cause of this is that the new graphs we're moving towards do
not have any "flattening" (they are flat to begin with): all modules are
in the root graph from the beginning of construction versus building a
number of different graphs and flattening them. The transform that adds
the provisioners wasn't modified to handle already-flat graphs and so
was only adding provisioners to the root module, not children.
The change modifies the `MissingProvisionerTransformer` (primarily) to
support already-flat graphs and add provisioners for all module levels.
Tests are there to cover this as well.
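A minimal sketch (hypothetical module layout) of the case that now gets provisioners attached correctly:
```
# ./child/main.tf
resource "null_resource" "setup" {
  provisioner "local-exec" {
    command = "echo provisioning inside a child module"
  }
}

# main.tf (root module)
module "child" {
  source = "./child"
}
```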
**NOTE:** This PR focuses on fixing that specific issue. I'm going to follow up
this PR with another PR that is more focused on being robust against
crashing (more nil checks, recover() for shadow graph, etc.). In the
interest of focus and keeping a PR reviewable this focuses only on the
issue itself.