It appears data sources have always been coded to work during apply, as
can be verified with this test (no implementation changes were necessary
to make it pass).
This test should be added to ensure our apply graph always works with
data sources as well.
This adds the proper logic for "disabling" providers to the new apply
graph: interpolating and storing the config for inheritance, but not
actually initializing and configuring the provider.
This is important since parent modules will often contain incomplete
provider configurations intended only for inheritance; attempting to
actually configure them would error since they're incomplete. If the
provider is not used, it should be "disabled".
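A minimal sketch of the shape involved (provider and attributes are illustrative):
```
# Root module: this provider config is incomplete and exists only so
# child modules can inherit it. If nothing in this module uses the
# provider, it is "disabled" rather than configured, since configuring
# it here would error.
provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
}

module "child" {
  source = "./child"
}
```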
Related to #5254
If the count of a resource is interpolated (i.e. `${var.c}`), then it
must be interpolated before any splat variable using that resource can
be used (i.e. `type.name.*.attr`). The original fix for #5254 is to
always ensure that this is the case.
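Sketched concretely (resource type and attribute are illustrative):
```
variable "c" {}

# The count must be interpolated from var.c before the splat below
# (type.name.*.attr) can be expanded.
resource "aws_instance" "name" {
  count = "${var.c}"
}

output "attrs" {
  value = "${join(",", aws_instance.name.*.id)}"
}
```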
While working on a new apply builder based on the diff in
`f-apply-builder`, this truth no longer always holds. Rather than always
include such a resource, I believe the correct behavior instead is to
use the state as a source of truth during `walkApply` operations.
This change is specifically scoped to `walkApply` operation
interpolations, since we know the state of any multi-variable should be
available there. The behavior is less clear for other operations, so I
left the logic unchanged from prior versions.
A JSON object will be decoded as a list with a single map value. This
will be properly coerced later, so let it through the initial config
semantic checks.
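For example (keys hypothetical), this JSON-syntax form of a map argument:
```
{
  "somemap": {
    "somekey": "somevalue"
  }
}
```
decodes initially as a single-element list containing one map, and the later coercion turns it back into a plain map.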
When targeting, only Addressable untargeted nodes were being removed
from the graph. Variable nodes are not directly Addressable, so they
were hanging around. This caused problems with module variables that
referred to Resource nodes. The Resource node would be filtered out of
the graph, but the module Variable node would not, so it would try to
interpolate during the graph walk and be unable to find its referent.
This would present itself as strange "cannot find variable" errors for
variables that were uninvolved with the currently targeted set of
resources.
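A hypothetical configuration exhibiting the problem:
```
resource "aws_instance" "db" {}

# The module variable below references aws_instance.db. When targeting
# only aws_instance.web, the resource node was filtered out but this
# variable node was not, and its interpolation then failed.
module "app" {
  source  = "./app"
  db_host = "${aws_instance.db.private_ip}"
}

resource "aws_instance" "web" {}
```
Running something like `terraform plan -target=aws_instance.web` against this produced the spurious "cannot find variable" errors.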
Here, we introduce a new interface that can be implemented by graph
nodes to indicate they should be filtered out from targeting even though
they are not directly addressable themselves.
This PR fixes #7824, which crashed when applying a plan file. The bug is
that while a map coming from the HCL parser reifies as a
[]map[string]interface{}, the same variable saved in the plan file does
not. We now cover both cases.
Fixes #7824.
Terraform 0.7 introduces lists and maps as first-class values for
variables, in addition to string values which were previously available.
However, there was previously no way to override the default value of a
list or map, and the functionality for overriding specific map keys was
broken.
Using the environment variable method for setting variable values, there
was previously no way to give a variable a value of a list or map. These
now support HCL for individual values - specifying:
TF_VAR_test='["Hello", "World"]'
will set the variable `test` to a two-element list containing "Hello"
and "World". Specifying
TF_VAR_test_map='{"Hello" = "World", "Foo" = "bar"}'
will set the variable `test_map` to a two-element map with keys "Hello"
and "Foo", and values "World" and "bar" respectively.
The same logic is applied to `-var` flags, and the file parsed by
`-var-file` ("autoVariables").
Note that care must be taken to avoid shell expansion when passing
values via `-var` flags and environment variables.
We also merge map keys where appropriate. The override syntax has
changed (to be noted in CHANGELOG as a breaking change), so several
tests needed their syntax updated from the old `amis.us-east-1 =
"newValue"` style to the new `amis = { "us-east-1" = "newValue" }` style
as defined in TF-002.
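For example, an override in a tfvars file changes shape like this (values illustrative):
```
# Old style:
#   amis.us-east-1 = "newValue"
# New style:
amis = {
  "us-east-1" = "newValue"
}
```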
In order to continue supporting the `-var "foo=bar"` type of variable
flag (which is not valid HCL), a special case error is checked after HCL
parsing fails, and the old code path runs instead.
The report in #7378 led us into a deep rabbit hole that turned out to
expose a bug in the graph walk implementation being used by the
`NoopTransformer`. The problem ended up being when two nodes in a single
dependency chain both reported `Noop() -> true` and needed to be
removed. This was breaking the walk and preventing the second node from
ever being visited.
Fixes #7378
This set of changes addresses two bug scenarios:
(1) When an ignored change canceled a resource replacement, any
downstream resources referencing computed attributes on that resource
would get "diffs didn't match" errors. This happened because the
`EvalDiff` implementation was calling `state.MergeDiff(diff)` on the
unfiltered diff. Generally this is what you want, so that downstream
references catch the "incoming" values. When there's a potential for the
diff to change, though, this results in problems with references.
Here we solve this by doing away with the separate `EvalNode` for
`ignore_changes` processing and integrating it into `EvalDiff`. This
allows us to only call `MergeDiff` with the final, filtered diff.
(2) When a resource had an ignored change but was still being replaced
anyway, the diff was being improperly filtered. This would cause
problems during apply when not all attributes were available to perform
the replacement.
We solve that by deferring actual attribute removal until after we've
decided that we do not have to replace the resource.
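A hypothetical config for scenario (1), where the ignored attribute is the only thing forcing replacement:
```
resource "aws_instance" "web" {
  ami           = "ami-12345678"  # changing this normally forces replacement
  instance_type = "t2.micro"

  lifecycle {
    ignore_changes = ["ami"]
  }
}

# Downstream references such as "${aws_instance.web.id}" previously got
# "diffs didn't match" errors because the unfiltered diff was merged.
```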
The reproduction of issue #7421 involves a list of maps being passed to
a module, where one or more of the maps has a value which is computed
(for example, from another resource). There is a failure at the point of
use (via lookup interpolation) of the computed value of the form:
```
lookup: lookup failed to find 'elb' in:
${lookup(var.services[count.index], "elb")}
```
Where 'elb' is the key of the map.
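A sketch of the reproduction (resource and module names hypothetical):
```
# Root module: one map in the list has a computed value.
resource "aws_elb" "main" {
  # ...
}

module "app" {
  source = "./app"

  services = [
    {
      name = "web"
      elb  = "${aws_elb.main.name}"  # computed from another resource
    },
  ]
}

# Inside ./app:
#   variable "services" { type = "list" }
# and the failing use:
#   ${lookup(var.services[count.index], "elb")}
```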
* Fix nested module "unknown variable" during destroy
During a destroy with nested modules, accessing a variable between them
causes an "unknown variable accessed" error.
Passing a literal map to a module looks like this in HCL:
module "foo" {
source = "./foo"
somemap {
somekey = "somevalue"
}
}
The HCL parser always wraps an extra list around the map, so we need to
remove that extra list wrapper when the parameter is indeed of type "map".
Fixes #7140
In #7170 we found two scenarios where the type checking done during the
`context.Validate()` graph walk was circumvented, and the subsequent
assumption of type safety in the provider's `Diff()` implementation
caused panics.
Both scenarios have to do with interpolations that reference Computed
values. The sentinel we use to indicate that a value is Computed does
not carry any type information with it yet.
That means that an incorrect reference to a list or a map in a string
attribute can "sneak through" validation only to crop up...
1. ...during Plan for Data Source References
2. ...during Apply for Resource references
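A minimal sketch of scenario 2 (names hypothetical):
```
resource "aws_instance" "a" {
  count = 2
  # ...
}

resource "aws_instance" "b" {
  # aws_instance.a.*.id is a list; using it in a string attribute is a
  # type error, but while the list is still Computed the sentinel has no
  # type information, so validation misses it and the provider's Diff()
  # panicked later.
  ami           = "${aws_instance.a.*.id}"
  instance_type = "t2.micro"
}
```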
In order to address this, we:
* add high-level tests for each of these two scenarios in `provider/test`
* add context-level tests for the same two scenarios in `terraform`
(these tests proved _really_ tricky to write!)
* place an `EvalValidateResource` just before `EvalDiff` and `EvalApply` to
catch these errors
* add some plumbing to `Plan()` and `Apply()` to return validation
errors, which were previously only generated during `Validate()`
* wrap unit-tests around `EvalValidateResource`
* add an `IgnoreWarnings` option to `EvalValidateResource` to prevent
active warnings from halting execution on the second-pass validation
Eventually, we might be able to attach type information to Computed
values, which would allow for these errors to be caught earlier. For
now, this solution keeps us safe from panics and raises the proper
errors to the user.
Fixes #7170
This test illustrates a failure which occurs during the Input walk if
an interpolation is used as the input of a splat operation, resulting
in a multi-variable.
The bug was found during use of the RC2, but does not correspond to an
open issue at present.
Previously, interpolation of multi-variables was returning an empty
variable if the resource count was 0. The empty variable was defined as
TypeString, Value "". This means that empty resource counts fail type
checking for interpolation functions which operate on lists.
Instead, return an empty list if the count is 0. A context test tests
this against further regression. Also add a regression test covering the
case of a single count multi-variable.
In order to make the context testing framework deal with this change it
was necessary to special case empty lists in the test diff function.
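Illustrated with hypothetical names:
```
resource "aws_instance" "web" {
  count = 0
}

# The splat below previously interpolated as TypeString "" and failed
# type checking; it now yields an empty list.
output "ids" {
  value = "${join(",", aws_instance.web.*.id)}"
}
```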
Fixes #7002
For `terraform destroy`, we currently build up the same graph we do for
`plan` and `apply` and we do a walk with a special Diff that says
"destroy everything".
We have fought the interpolation subsystem time and again through this
code path. Beginning in #2775 we gained a new feature to selectively
prune out problematic graph nodes. The past chain of destroy fixes I
have been involved with (#6557, #6599, #6753) has attempted to massage
the "noop" definitions to properly handle the edge cases reported.
"Variable is depended on by provider config" is another edge case we add
here and try to fix.
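The new edge case, sketched (names illustrative):
```
variable "region" {
  default = "us-east-1"
}

# Pruning var.region from the destroy graph as a noop would break this
# provider configuration.
provider "aws" {
  region = "${var.region}"
}
```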
This dive only makes me more convinced that the whole `terraform
destroy` code path needs to be reworked.
For now, I went with a "surgical strike" approach to the problem
expressed in #7047. I found a couple of issues with the existing
Noop and DestroyEdgeInclude logic, especially with regards to
flattening, but I'm explicitly ignoring these for now so we can get this
particular bug fixed ahead of the 0.7 release. My hope is that we can
circle around with a fully specced initiative to refactor `terraform
destroy`'s graph to be more state-derived than config-derived.
Until then, this fixes #7407
Previously the plan phase would produce a data diff only if no state was
already present. However, this is a faulty approach because a state will
already be present in the case where the data resource depends on a
managed resource that existed in state during refresh but became
computed during plan, due to a "forces new resource" diff.
Now we will produce a data diff regardless of the presence of the state
when the configuration is computed during the plan phase.
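A hypothetical shape of the failing case:
```
resource "aws_instance" "web" {
  ami           = "ami-abcd1234"  # changing this forces a new resource
  instance_type = "t2.micro"
}

data "template_file" "config" {
  template = "host = $${host}"

  vars {
    # Known during refresh, but computed during plan once the instance
    # is being replaced; the data diff must still be produced even
    # though state for the data resource already exists.
    host = "${aws_instance.web.private_ip}"
  }
}
```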
This fixes #6824.
Earlier we had a bug where data resources were not removed from the
state during a destroy. This was fixed in cd0c452, and this test will
hopefully make sure it stays fixed.
Variables weren't being interpolated during the Input phase, causing a
syntax error on the interpolation string. Adding `walkInput` to the
EvalTree operations prevents skipping the interpolation step.
Apparently there's been a regression in the creation of data resource
diffs: they aren't showing up in the plan at all.
As a first step to fixing this, this is an intentionally-failing test
that proves it's broken.
Building on b10564a, this adds tweaks that allow the module var count
search to act recursively, ensuring that a situation where something
like var.top gets passed to module middle as var.middle, and then to
module bottom as var.bottom, where it is finally used in a resource
count, is handled correctly.
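The passing chain, sketched across hypothetical modules:
```
# root
variable "top" {}

module "middle" {
  source = "./middle"
  middle = "${var.top}"
}

# ./middle
variable "middle" {}

module "bottom" {
  source = "./bottom"
  bottom = "${var.middle}"
}

# ./bottom
variable "bottom" {}

resource "aws_instance" "example" {
  count = "${var.bottom}"
}
```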
A new problem was introduced by the prior fixes for destroy
interpolation messages: when resources depend on module variables with
a _count_ attribute, the variable is crucial for properly building the
graph - even in destroys. So removing all module variables from the
graph as noops was overzealous.
By borrowing the logic in `DestroyEdgeInclude` we are able to determine
if we need to keep a given module variable relatively easily.
I'd like to overhaul the `Destroy: true` implementation so that it does
not depend on config at all, but I want to continue for now with the
targeted fixes that we can backport into the 0.6 series.