Fixes #10911
Outputs that aren't targeted shouldn't be included in the graph.
This requires passing targets to the apply graph. This is unfortunate,
but it should be removable in the long term since I'd like to move
output changes into the diff as well.
This disables the computed value check for `count` during the validation
pass. This enables partial support for #3888 or #1497: as long as the
value is non-computed during the plan, complex values will work in
counts.
**Notably, this allows data source values to be present in counts!**
The "count" value can be disabled during validation safely because we
can treat it as if any field that uses `count.index` is computed for
validation. We then validate a single instance (as if `count = 1`) just
to make sure all required fields are set.
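For illustration, here is a minimal sketch of what this allows, assuming the remote state's `web_count` output is known (non-computed) by plan time; the backend, paths, and names below are made up:
```
data "terraform_remote_state" "core" {
  backend = "local"
  config {
    path = "../core/terraform.tfstate"
  }
}

resource "aws_instance" "web" {
  # Previously rejected outright during validation; now allowed as long
  # as the value isn't computed at plan time.
  count         = "${data.terraform_remote_state.core.web_count}"
  ami           = "ami-123456"
  instance_type = "t2.micro"
}
```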
This switches to the Go "context" package for cancellation and threads
the context through all the way to evaluation to allow behavior based on
stopping deep within graph execution.
This also adds the Stop API to provisioners so they can quickly exit
when stop is called.
Fixes #11212
The import graph builder was missing the transform to set up links to
parent providers, so provider inheritance didn't work properly. This
adds that transform.
This also removes the `PruneProviderTransform`, since it has no value
in this graph: we never add an unused provider.
This was possible with test fixtures, but it is also conceivably
possible with older or corrupted states. We can also extract the type
from the key, so we now do that to make StateFilter more robust.
Fixes #10729
Destruction ordering wasn't taking into account ordering implied through
variables across module boundaries.
This is because, to build the destruction ordering, we create a
non-destruction graph to determine the _creation_ ordering (so we can
properly flip edges). That creation graph wasn't including module
variables. This PR adds that transform to the graph.
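For reference, a minimal sketch of the kind of cross-module dependency involved (the file layout, resource types, and names here are illustrative, not from the original issue):
```
# root module
resource "aws_instance" "parent" {}

module "child" {
  source      = "./child"
  instance_id = "${aws_instance.parent.id}"
}

# ./child/main.tf
variable "instance_id" {}

resource "aws_eip" "addr" {
  instance = "${var.instance_id}"
}
```
On destroy, `aws_eip.addr` inside the module must be destroyed before `aws_instance.parent`, and that ordering is only implied through the module variable.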
Fixes #10711
The `ModuleVariablesTransformer` only adds module variables in use. This
was missing module variables used by providers since we ran the provider
too late. This moves the transformer and adds a test for this.
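As a hypothetical illustration (the provider and names are made up), the missed case is a module variable that is referenced only by a provider configuration:
```
# Inside a module
variable "region" {}

provider "aws" {
  # The only reference to var.region is here, in the provider config,
  # which the transformer previously missed.
  region = "${var.region}"
}

resource "aws_instance" "web" {
  ami           = "ami-123456"
  instance_type = "t2.micro"
}
```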
Fixes #10680
This moves the TargetsTransformer to run after the transforms that add
module variables. This makes targeting work across modules (test
added).
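As a hypothetical example of the cross-module targeting this enables (module, variable, and resource names are made up):
```
# terraform plan -target=module.app.aws_instance.web
module "app" {
  source        = "./app"
  instance_type = "${var.instance_type}"
}
```
Targeting `aws_instance.web` inside `module.app` now also pulls in the module variables (like `instance_type`) that it depends on.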
This is a bug that only exists in the new graph, but was caught by a
shadow error in #10680. Tests were added to protect against regressions.
If a data source has explicit dependencies in `depends_on`, we can
assume the user has added those because of a dependency not tracked
directly in the config. If there are any entries in `depends_on`, don't
apply the data source early during Refresh.
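A hypothetical configuration that hits this case (the resource and data source types are only illustrative):
```
resource "aws_instance" "db" {
  # ... something the data source depends on indirectly ...
  ami           = "ami-123456"
  instance_type = "t2.micro"
}

data "aws_ami" "app" {
  most_recent = true
  name_regex  = "app-.*"

  # The dependency isn't visible in the arguments above, so with this
  # change the data source is no longer applied early during Refresh.
  depends_on = ["aws_instance.db"]
}
```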
Fixes #4645
This is something that never worked (even in legacy graphs), but as we
push forward towards encouraging multi-provider usage especially with
things like the Vault data source, I want to make sure we have this
right for 0.8.
When you have a config like this:
```
resource "foo_type" "name" {}
provider "bar" { attr = "${foo_type.name.value}" }
resource "bar_type" "name" {}
```
Then the destruction ordering MUST be:
1. `bar_type`
2. `foo_type`
Since configuring the client for `bar_type` requires accessing data from
`foo_type`, the `foo_type` destroy must wait for `bar_type`. Prior to
this PR, these two destroys would run in parallel. This PR properly
propagates that dependency.
There are more cases I want to test but this is a basic case that is
fixed.
Fixes #8695
When a list count was computed in a multi-resource access
(`foo.bar.*.list`), we were returning the value as an empty string. I
don't actually know the historical reasoning for this, but it can't be
correct: we must return unknown.
When changing this to unknown, the new tests passed and none of the old
tests failed. This leads me to further believe that returning an empty
string is probably a holdover from long ago to avoid crashes or
UUIDs in the plan output, and not actually the correct behavior.
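To make the shape concrete, here's a hypothetical configuration of this kind (`foo_type` and its `list` attribute are made-up names):
```
resource "foo_type" "bar" {
  count = "${var.num}"   # may be computed at plan time
}

output "all_lists" {
  # With a computed count, this splat now evaluates to unknown
  # instead of an empty string.
  value = ["${foo_type.bar.*.list}"]
}
```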
Related to #8036
We have had this behavior for a _long_ time now (since 0.7.0), but it
seems people are still periodically getting bitten by it. This adds an
explicit error message that explains that this kind of override isn't
allowed anymore.
Fixes #10440
This updates the behavior of "apply" resources to depend on the
destroy versions of their dependencies.
We make an exception to this behavior when the "apply" resource is CBD
(create before destroy). This is odd and not 100% correct, but it
mimics the behavior of the legacy graphs and avoids us having to do
major core work to support the 100% correct solution.
I'll explain this in examples...
Given the following configuration:
resource "null_resource" "a" {
count = "${var.count}"
}
resource "null_resource" "b" {
triggers { key = "${join(",", null_resource.a.*.id)}" }
}
Assume we've successfully created this configuration with count = 2.
When going from count = 2 to count = 1, `null_resource.b` should wait
for `null_resource.a.1` to destroy.
If it doesn't, then it is a race: depending on when we interpolate the
`triggers.key` attribute of `null_resource.b`, we may get 1 value or 2.
If `null_resource.a.1` has already been destroyed, we'll get 1.
Otherwise, we'll get 2. This was the root cause of #10440.
In the legacy graphs, `null_resource.b` would depend on the destruction
of any `null_resource.a` (orphans, tainted, anything!). This would
ensure proper ordering. We mimic that behavior here.
The difference is CBD. If `null_resource.b` has CBD enabled, then the
ordering **in the legacy graph** becomes:
1. null_resource.b (create)
2. null_resource.b (destroy)
3. null_resource.a (destroy)
In this case, the update would always have 2 values for `triggers.key`,
even though we were destroying a resource later! This scenario required
two `terraform apply` operations.
This is what the CBD check is for in this PR. We do this to mimic the
behavior of the legacy graph.
The correct solution to implement one day is to allow splat references
(`null_resource.a.*.id`) to happen in parallel and only read up to
the `count` amount from the state. That requires some fairly significant
work, and we're close to the 0.8 release date, so we can defer this to
later and adopt the 0.7.x behavior for now.
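For reference, this is what the CBD variant of the example above looks like (still illustrative), with `null_resource.b` marked create-before-destroy:
```
resource "null_resource" "a" {
  count = "${var.count}"
}

resource "null_resource" "b" {
  triggers { key = "${join(",", null_resource.a.*.id)}" }

  lifecycle {
    create_before_destroy = true
  }
}
```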
Fixes #10439
When a CBD resource depends on a non-CBD resource, the non-CBD resource
is auto-promoted to CBD. This was done in
cf3a259. This PR makes it so that we also set CBD to true in the
config. This causes the proper runtime execution behavior to occur,
where we depose state and so on.
So, in addition to the simple graph edge tricks, we also treat the
non-CBD resources as CBD resources.
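As a hypothetical illustration (the resource types are only examples), a CBD resource depending on a non-CBD one now causes the dependency to be promoted both in the graph and in its config:
```
resource "aws_security_group" "web" {
  # No lifecycle block: not CBD in the config, but auto-promoted to CBD
  # because the launch configuration below depends on it.
  name_prefix = "web-"
}

resource "aws_launch_configuration" "web" {
  image_id        = "ami-123456"
  instance_type   = "t2.micro"
  security_groups = ["${aws_security_group.web.id}"]

  lifecycle {
    create_before_destroy = true
  }
}
```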
Fixes #10313
The new graph wasn't properly recording resource dependencies to a
specific index of itself. For example: `foo.bar.2` depending on
`foo.bar.0` wasn't shown in the state when it should've been.
This adds a test to verify this and fixes it.
People with `uuid()` usage in their configurations would receive shadow
errors every time on plan because the UUID would change.
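A hypothetical configuration that triggers this (the resource is just an example):
```
resource "null_resource" "example" {
  triggers {
    # uuid() yields a different value on every run, so the real and
    # shadow graphs can never agree on this attribute.
    run_id = "${uuid()}"
  }
}
```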
This is a hacky fix, but I also believe it is correct: if a shadow error
contains uuid() then we ignore the shadow error completely. This feels
wrong, but I'll explain why it is likely right:
The "right" feeling solution is to create deterministic random output
across graph runs. This would require using math/rand and seeding it
with the same value each run. However, this alone probably won't work
due to Terraform's parallelism and the potential to call uuid() in
different orders. In addition to this, you can't seed crypto/rand, and
it's unlikely that we'll NEVER use crypto/rand in the future even if we
switched uuid() to use math/rand.
Therefore, the solution is simple: if there is no shadow error, no
problem. If there is a shadow error and it contains uuid(), then ignore
it.