Previously the "planDestroy" pass would correctly produce a destroy diff,
but the "apply" pass would just ignore it and make a fresh diff, turning
it back into a "create" because data resources are always eager to
refresh.
Now we consider the previous diff when re-diffing during apply, so we
can preserve the plan to destroy and then actually "destroy" the data
resource (remove it from the state) when we get to ReadDataApply.
This ensures that the state is left empty after "terraform destroy";
previously we would leave behind data resource states.
Building on b10564a, this adds tweaks that allow the module var count
search to act recursively, handling a situation where something
like var.top gets passed to module middle as var.middle, then to
module bottom as var.bottom, and is finally used in a resource count.
A new problem was introduced by the prior fixes for destroy
interpolation messages: when resources depend on module variables with
a _count_ attribute, the variable is crucial for properly building the
graph - even in destroys. So removing all module variables from the
graph as noops was overzealous.
By borrowing the logic in `DestroyEdgeInclude` we are able to determine
if we need to keep a given module variable relatively easily.
I'd like to overhaul the `Destroy: true` implementation so that it does
not depend on config at all, but I want to continue for now with the
targeted fixes that we can backport into the 0.6 series.
This commit forward ports the changes made for 0.6.17, in order to store
the type and sensitive flag against outputs.
It also refactors the logic of the import for V0 to V1 state, and
fixes up the call sites of the new format for outputs in V2 state.
Finally we fix up tests which did not previously set a state version
where one is required.
Provider nodes interpolate their config during the input walk, but this
is very early and so it's pretty likely that any resources referenced are
entirely absent from the state.
As a special case then, we tolerate the normally-fatal case of having
an entirely missing resource variable so that the input walk can complete,
albeit skipping the providers that have such interpolations.
If these interpolations end up still being unresolved during refresh
(e.g. because the config references a resource that hasn't been created
yet) then we will catch that error on the refresh pass, or indeed on the
plan pass if -refresh=false is used.
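As a hedged illustration (the provider and resource here are
hypothetical), this is the shape of config that previously failed the
input walk:
```
# This interpolation likely cannot be resolved during the input
# walk, because aws_db_instance.default may not be in the state
# yet; input is now skipped for this provider rather than failing.
provider "mysql" {
  endpoint = "${aws_db_instance.default.endpoint}"
}
```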
The ResourceAddress struct grows a new "Mode" field to match with
Resource, and its parser learns to recognize the "data." prefix so it
can set that field.
This allows -target to be applied to data sources, although that is
arguably not a very useful thing to do. Other future uses of resource
addressing, like the state plumbing commands, may make better use of this.
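For example (the data source name is hypothetical):
```
$ terraform plan -target=data.aws_ami.web
```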
Previously they would get left behind in the state because we had no
support for planning their destruction. Now we'll create a "destroy" plan
and act on it by just producing an empty state on apply, thus ensuring
that the data resources don't get left behind in the state after
everything else is gone.
The handling of data "orphans" is simpler than for managed resources
because the only thing we need to deal with is our own state, and the
validation pass guarantees that by the time we get to refresh or apply
the instance state is no longer needed by any other resources and so
we can safely drop it with no fanfare.
This implements the main behavior of data resources, including both the
early read in cases where the configuration is non-computed and the split
plan/apply read for cases where full configuration can't be known until
apply time.
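A hedged sketch of the two cases, using a hypothetical data source
"example_thing":
```
# Fully-known config: the read can happen early, during refresh:
data "example_thing" "early" {
  name = "static-value"
}

# Config depends on a computed attribute, so the read is split
# across plan and apply and deferred until apply time:
data "example_thing" "late" {
  name = "${aws_instance.web.id}"
}
```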
The key difference between data and managed resources is in their
respective lifecycles. Now the expanded resource EvalTree switches on
the resource mode, generating a different lifecycle for each mode.
For this initial change only managed resources are implemented, using the
same implementation as before; data resources are no-ops. The data
resource implementation will follow in a subsequent change.
Data resources are a separate namespace of resources from managed
resources, so we need to call a different provider method depending on
which mode of resource we're visiting.
Managed resources use ValidateResource, while data resources use
ValidateDataSource, since at the provider level of abstraction each
provider has separate sets of resources and data sources respectively.
Once a data resource gets into the state, the state system needs to be
able to parse its id to match it with resources in the configuration.
Since data resources live in a separate namespace from managed resources,
the extra "mode" discriminator is required to specify which namespace
we're talking about, just like we do in the resource configuration.
This is a breaking change to the ResourceProvider interface that adds the
new operations relating to data sources.
DataSources, ValidateDataSource, ReadDataDiff and ReadDataApply are the
data source equivalents of Resources, Validate, Diff and Apply (respectively)
for managed resources.
The diff/apply model seems at first glance a rather strange workflow for
read-only resources, but implementing data resources in this way allows them
to fit cleanly into the standard plan/apply lifecycle in cases where the
configuration contains computed arguments and thus the read must be deferred
until apply time.
Along with breaking the interface, we also fix up the plugin client/server
and helper/schema implementations of it, which are all of the callers
used when provider plugins use helper/schema. This would be a breaking
change for any provider plugin that directly implements the provider
interface, but no known plugins do this and it is not recommended.
At the helper/schema layer the implementer sees ReadDataApply as a "Read",
as opposed to "Create" or "Update" as in the managed resource Apply
implementation. The planning mechanics are handled entirely within
helper/schema, so that complexity is hidden from the provider implementation
itself.
The fix that landed in #6557 was unfortunately the wrong subset of the
work I had been doing locally, and users of the attached bugs are still
reporting problems with Terraform v0.6.16.
At the very last step, I attempted to scope down both the failing test
and the implementation to their bare essentials, but ended up with a
test that did not exercise the root of the problem and a subset of the
implementation that was insufficient for a full bugfix.
The key thing I removed from the test was a _referencing output_ for the
module, which is what breaks down the #6557 solution.
I've re-tested the examples in #5440 and #3268 to verify this solution
does indeed solve the problem.
- Fix sensitive outputs for lists and maps
- Fix test prelude which was missed during conflict resolution
- Fix `terraform output` to match old behaviour and not have outputs
header and colouring
- Bump timeout on TestAtlasClient_UnresolvableConflict
This adds a test and the support necessary to read from native maps
passed as variables via interpolation - for example:
```
resource ...... {
  mapValue = "${var.map}"
}
```
We also add support for interpolating maps from the flat-mapped resource
config, which is necessary to support assignment of computed maps, which
is now valid.
Unfortunately there is no good way to distinguish between a list and a
map in the flatmap. In lieu of changing that representation (which is
risky), we assume that if all the keys are numeric, this is intended to
be a list, and if not it is intended to be a map. This does preclude
maps which have purely numeric keys, which should be noted as a
backwards compatibility concern.
This commit adds support for native list variables and outputs, building
up on the previous change to state. Interpolation functions now return
native lists in preference to StringList.
List variables are defined like this:
variable "test" {
# This can also be inferred
type = "list"
default = ["Hello", "World"]
}
output "test_out" {
value = "${var.a_list}"
}
This results in the following state:
```
...
"outputs": {
"test_out": [
"hello",
"world"
]
},
...
```
And the result of terraform output is as follows:
```
$ terraform output
test_out = [
hello
world
]
```
Using the output name, an xargs-friendly representation is output:
```
$ terraform output test_out
hello
world
```
The output command also supports indexing into the list (with
appropriate range checking and no wrapping):
```
$ terraform output test_out 1
world
```
Along with maps, list outputs from one module may be passed as variables
into another (see the sketch below), removing the need for the
`join(",", var.list_as_string)` and `split(",", var.list_as_string)`
workarounds that were previously necessary in Terraform configuration.
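A hedged sketch of the direct pass-through (module layout hypothetical):
```
# in ./producer:
output "instance_ids" {
  value = ["${aws_instance.web.*.id}"]
}

# in the root module, no join/split round-trip is needed:
module "producer" {
  source = "./producer"
}

module "consumer" {
  source       = "./consumer"
  instance_ids = "${module.producer.instance_ids}"
}
```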
This commit also updates the tests and implementations of built-in
interpolation functions to take and return native lists where
appropriate.
A backwards compatibility note: previously the concat interpolation
function was capable of concatenating either strings or lists. The
strings use case was deprecated a long time ago but still remained.
Because we cannot return `ast.TypeAny` from an interpolation function,
this use case is no longer supported for strings - `concat` is only
capable of concatenating lists. This should not be a huge issue - the
type checker picks up incorrect parameters, and the native HIL string
concatenation - or the `join` function - can be used to replicate the
missing behaviour.
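A hedged sketch of the migration (variable names hypothetical):
```
# concat now only concatenates lists:
output "all_items" {
  value = "${concat(var.list_a, var.list_b)}"
}

# for strings, use native interpolation (or join) instead:
output "greeting" {
  value = "${var.part_one}${var.part_two}"
}
```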
This changes the representation of maps in the interpolator from the
dotted flatmap form of a string variable named "var.variablename.key"
per map element to use native HIL maps instead.
This involves porting some of the interpolation functions in order to
keep the tests green, and adding support for map outputs.
There is one backwards incompatibility: as a result of an implementation
detail of maps, one could previously access an indexed map variable using
the syntax "${var.variablename.key}".
This is no longer possible; instead the native HIL index syntax -
"${var.variablename["key"]}" - must be used. This was previously
documented (though not heavily used), so it must be noted as a backwards
compatibility issue for Terraform 0.7.
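A minimal example of the migration (the variable is hypothetical):
```
# No longer supported:
#   value = "${var.amis.us-east-1}"

# Use the index syntax instead:
output "ami" {
  value = "${var.amis["us-east-1"]}"
}
```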
This commit adds the groundwork for supporting module outputs of types
other than string. In order to do so, the state version is increased
from 1 to 2 (counting the original binary state file, these are really
the second and third versions).
Tests are added to ensure that V2 (JSON version 1) state is upgraded to
V3 (JSON version 2) state, though no separate read path is required
since the V2 JSON will unmarshal correctly into the V3 structure.
Outputs in a ModuleState are now of type map[string]interface{}, and a
test covers round-tripping string, []string and map[string]string, which
should cover all of the types in question.
Type switches have been added where necessary to deal with the
interface{} value, but they currently default to panicking when the input
is not a string.
This commit rectifies the fact that the original binary state is
referred to as V1 in the source code, but the first version of the JSON
state uses StateVersion: 1. We instead make the code refer to V0 as the
binary state, and V1 as the first version of JSON state.
This adds a field terraform_version to the state that represents the
Terraform version that wrote that state. If Terraform encounters a state
written by a future version, it will error. You must use at least the
version that wrote that state.
Internally we have fields to override this behavior (StateFutureAllowed),
but I chose not to expose them as CLI flags, since the user can just
modify the state directly. This is tricky, but it should be tricky, to
reflect the horrible disaster that can happen by enabling it.
We didn't have to bump the state format version since the absence of the
field means it was written by version "0.0.0", which will always be
older. In effect though this change will always apply to version 2 of
the state since it appears in 0.7 which bumped the version for other
purposes.
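A hedged sketch of how the field appears in the state (version number
illustrative):
```
{
  "version": 2,
  "terraform_version": "0.7.0",
  ...
}
```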
I decided to split this up from the terraform state rm command to make the diff easier to see. These changes will also be used for terraform state mv.
This adds a `Remove` method to the `*terraform.State` struct. It takes a list of addresses and removes the items matching that list. This leverages the `StateFilter` committed last week to make the view of the world consistent across address lookups.
There is a lot of test duplication here with StateFilter, but in Terraform style: we like it that way.
This introduces the terraform state list command to list the resources
within a state. This is the first of many state management commands to
come into 0.7.
This is the first command of many to come that is considered a
"plumbing" command within Terraform (see "plumbing vs porcelain":
http://git.661346.n2.nabble.com/what-are-plumbing-and-porcelain-td2190639.html).
As such, this PR also introduces a bunch of groundwork to support
plumbing commands.
The main changes:
- Main command output is changed to split "common" and "uncommon"
commands.
- mitchellh/cli is updated to support nested subcommands, since
terraform state list is a nested subcommand.
- terraform.StateFilter is introduced as a way in core to filter/search
the state files. This is very basic currently but I expect to make it
more advanced as time goes on.
- terraform state list command is introduced to list resources in a
  state. This can take a series of arguments to filter this down; a
  usage sketch follows below.
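A hedged usage sketch (resource names hypothetical):
```
$ terraform state list
aws_instance.web
module.elb.aws_elb.main
```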
Known issues, or things that aren't done in this PR on purpose:
- Unit tests for terraform state list are on the way. Unit tests for the
core changes are all there.
Wow this one was tricky!
This bug presents itself only when using planfiles, because when doing a
straight `terraform apply` the interpolations are left in place from the
Plan graph walk and paper over the issue. (This detail is what made it
so hard to reproduce initially.)
Basically, graph nodes for module variables are visited during the apply
walk and attempt to interpolate. During a destroy walk, no attributes
are interpolated from resource nodes, so these interpolations fail.
This scenario is supposed to be handled by the `PruneNoopTransformer` -
in fact it's described as the example use case in the comment above it!
So the bug had to do with the actual behavior of the Noop transformer.
The resource nodes were not properly reporting themselves as Noops
during a destroy, so they were being left in the graph.
This in turn triggered the module variable nodes to see that they had
another node depending on them, so they also reported that they could
not be pruned.
Therefore we had two nodes in the graph that were effectively noops but
were being visited anyways. The module variable nodes were already graph
leaves, which is why this error presented itself as just stray messages
instead of actual failure to destroy.
Fixes #5440 Fixes #5708 Fixes #4988 Fixes #3268
A consequence of the work done in #6185 was that variables which were in
a module but not set explicitly (i.e. the default value was relied upon)
were marked as type errors. This was reported in #6230.
This commit adds a test case for this and a patch which fixes the issue.
The flattening process was not properly drawing dependencies between provider
nodes in modules and their parent provider nodes.
Fixes #2832 Fixes #4443 Fixes #4865
These tests demonstrate a problem where the types of a module input are
not checked. For example, if a module - inner - defines a variable
"should_be_a_map" as a map, or with a default value of a map, we do not
fail if the user sets the variable value in the outer module to a string
value. This is also a problem in nested modules.
The implementation changes add a type checking step into the graph
evaluation process to ensure invalid types are not passed.
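A hedged sketch of the now-failing case (the module source path is
hypothetical):
```
# inside ./inner:
variable "should_be_a_map" {
  type = "map"
}

# in the outer module, passing a string now fails type checking
# instead of being silently accepted:
module "inner" {
  source          = "./inner"
  should_be_a_map = "not-a-map"
}
```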
The nodes it adds were immediately skipped by flattening and therefore
never had any effect. That makes the transformer effectively dead code
and removable. This was the only usage of FlattenSkip so we can remove
that as well.
The ContextGraphWalker struct includes a lock that's passed down to
BuiltinEvalContext and guards access to interpolation variables as
they're written using SetVariables.
The likely problem being expressed in #5733 is that the same map
reference is also passed down to the Interpolater.Variables field, which
is used for variable lookup.
Here, we plumb the same lock we're using to guard access for writes down
and acquire it before doing variable reads as well. It's not as fine
grained as perhaps it could be, but all the context tests pass and I
believe this should address #5733.
The ignore_changes diff filter was stripping out attributes on Create
but the diff was still making it down to the provider, so Create would
end up missing attributes, causing a full failure if any required
attributes were being ignored.
In addition, any changes that required a replacement of the resource
were causing problems with `ignore_changes`, which didn't properly filter
out the replacement when the triggering attributes were filtered out.
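A hedged sketch of a config exercising the fix (resource values
hypothetical):
```
resource "aws_instance" "example" {
  ami           = "ami-abc123"
  instance_type = "t2.micro"

  lifecycle {
    # "ami" is required at create time and also filtered by
    # ignore_changes; previously the Create diff could lose it.
    ignore_changes = ["ami"]
  }
}
```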
Refs #5627
When a user specifies `-target`s on a `terraform plan` and stores
the resulting diff in a plan file using `-out` - it usually works just
fine since the diff is scoped based on the targets.
When there are tainted resources in the state, however, graph nodes to
destroy them were popping back into the plan when it was being loaded
from a file. This was because Targets weren't being stored in the
Planfile, so Terraform didn't know to filter them out. (In the
non-Planfile scenario, we still had the Targets loaded directly from the
flags.)
By encoding Targets in with the Planfile we can ensure that the same
filters are always applied.
Backwards compatibility should be fine here, since we're just adding a
field. The gob encoder/decoder will just do the right thing (ignore/skip
the field) with planfiles stored w/ versions that don't know about
Targets.
Fixes #5183
Previously these details were relegated to the debug logs, which forces
the user to reproduce the error condition and then go digging for the
error message. Since we're asking users to report this error, let's give
them all the details they need right up front to make it a little easier
on them.
Context:
As part of building up a Plan, Terraform needs to detect "orphaned"
resources--resources which are present in the state but not in the
config. This happens when config for those resources is removed by the
user, making it Terraform's responsibility to destroy them.
Both state and config are organized by Module into a logical tree, so
the process of finding orphans involves checking for orphaned Resources
in the current module and for orphaned Modules, which themselves will
have all their Resources marked as orphans.
Bug:
In #3114 a problem was exposed where, given a module tree that looked
like this:
```
root
|
+-- parent (empty, except for sub-modules)
|
+-- child1 (1 resource)
|
+-- child2 (1 resource)
```
If `parent` was removed, a bunch of error messages would occur during
the plan. The root cause of this was duplicate orphans appearing for the
resources in child1 and child2.
Fix:
This turned out to be a bug in orphaned module detection. When looking
for deeply nested orphaned modules, root.parent was getting added twice
as an orphaned module to the graph.
Here, we add an additional check to prevent a double add, which
addresses this scenario properly.
Fixes #3114 (the Provisioner side of it was fixed in #4877)
Fixes an interpolation race that was occurring when a tainted destroy
node and a primary destroy node both tried to interpolate a computed
count in their config. Since they were sharing a pointer to the _same_
config, depending on how the race played out one of them could catch the
config uninterpolated and would then throw a syntax error.
The `Copy()` tree implemented for this fix can probably be used
elsewhere - basically we should copy the config whenever we drop nodes
into the graph - but for now I'm just applying it to the place that
fixes this bug.
Fixes #4982 - Includes a test covering that race condition.
References to computed list-ish attributes (set, list, map) were being
improperly resolved as an empty list `[]` during the plan phase (when
the value of the reference is not yet known) instead of as an
UnknownValue.
A "diffs didn't match" failure in an AWS DirectoryServices test led to
this discovery (and this commit fixes the failing test):
https://travis-ci.org/hashicorp/terraform/jobs/104812951
Refs #2157 which has the original work to support computed list
attributes at all. This is just a simple tweak to that work.
/cc @radeksimko
This commit adds support for declaring variable types in Terraform
configuration. Historically, the type has been inferred from the default
value, defaulting to string if no default was supplied. This has caused
users to devise workarounds if they wanted to declare a map but provide
values from a .tfvars file (for example).
The new syntax adds the "type" key to variable blocks:
```
variable "i_am_a_string" {
type = "string"
}
variable "i_am_a_map" {
type = "map"
}
```
This commit does _not_ extend the type system to include bools, integers
or floats - the only two types available are maps and strings.
Validation is performed if a default value is provided in order to
ensure that the default value type matches the declared type.
In the case that a type is not declared, the old logic is used for
determining the type. This allows backwards compatibility with previous
Terraform configuration.
Instead of trying to skip non-targeted orphans as they are added to
the graph in OrphanTransformer, remove knowledge of targeting from
OrphanTransformer and instead make the orphan resource nodes properly
addressable.
That allows us to use existing logic in TargetTransformer to filter out
the nodes appropriately. This does require adding TargetTransformer to the
list of transforms that run during DynamicExpand so that targeting can
be applied to nodes with expanded counts.
Fixes #4515 Fixes #2538 Fixes #4462