Commit Graph

440 Commits

Author SHA1 Message Date
Mitchell Hashimoto
0ba3fcdc63
terraform: test static var being passed into grandchild for count 2017-01-27 20:38:07 -08:00
Mitchell Hashimoto
2162d6cf3d
terraform: test a basic static var count passed into a module 2017-01-27 20:32:55 -08:00
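For illustration, a minimal sketch of the configuration shape these count tests cover (module path and variable name are hypothetical): a statically known variable passed into a module and used as a count.

```
# root main.tf
module "child" {
  source    = "./child"
  instances = "2"
}

# child/main.tf
variable "instances" {}

resource "null_resource" "cluster" {
  count = "${var.instances}"
}
```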
Mitchell Hashimoto
a8f64cbcee
terraform: make sure Stop blocks until full completion 2017-01-26 15:10:30 -08:00
Mitchell Hashimoto
f8c7b639c9
terraform: switch to Context for stop, Stoppable provisioners
This switches to the Go "context" package for cancellation and threads
the context through all the way to evaluation to allow behavior based on
stopping deep within graph execution.

This also adds the Stop API to provisioners so they can quickly exit
when stop is called.
2017-01-26 15:03:27 -08:00
Mitchell Hashimoto
b35b263015 Merge pull request #11329 from hashicorp/f-destroy-prov
Destroy Provisioners
2017-01-26 14:32:21 -08:00
Mitchell Hashimoto
0b0114c9bf Merge pull request #11426 from hashicorp/f-new-graph
core: Refresh, Validate, Input on new graph builders
2017-01-26 14:31:03 -08:00
Mitchell Hashimoto
b2fb3b1d0f
terraform: import graph should setup parent refs to providers
Fixes #11212

The import graph builder was missing the transform to set up links to
parent providers, so provider inheritance didn't work properly. This
adds that.

This also removes the `PruneProviderTransform` since that has no value
in this graph since we'll never add an unused provider.
2017-01-24 15:36:45 -08:00
Mitchell Hashimoto
290ad37489
terraform: test case for #11282 2017-01-24 12:56:13 -08:00
Mitchell Hashimoto
9c16489887
terraform: ConfigTransformer has Unique and mode filters 2017-01-22 12:58:18 -08:00
Mitchell Hashimoto
d0b7a4a072
terraform: StateFilter handles cases where ResourceState has no type
This was possible with test fixtures but it is also conceivably possible
with older states or corrupted states. We can also extract the type from
the key, so we do that now so that StateFilter is more robust.
2017-01-21 10:24:03 -08:00
Mitchell Hashimoto
b56ee1a169
terraform: test on_failure with non-destroy provisioners 2017-01-20 20:05:28 -08:00
Mitchell Hashimoto
4a8c2d0958
terraform: on_failure for provisioners 2017-01-20 19:55:32 -08:00
Mitchell Hashimoto
e9f6c9c429
terraform: run destroy provisioners on destroy 2017-01-20 18:07:51 -08:00
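A sketch of the provisioner configuration these commits build toward, assuming the 0.9-era `when` and `on_failure` attributes (resource name hypothetical):

```
resource "null_resource" "example" {
  # create-time provisioner; a failure is tolerated instead of aborting apply
  provisioner "local-exec" {
    command    = "echo created"
    on_failure = "continue"
  }

  # destroy-time provisioner; runs when the resource is destroyed
  provisioner "local-exec" {
    when    = "destroy"
    command = "echo destroyed"
  }
}
```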
James Bardin
61982be9c6 failing test with wrong interpolated list order
The plan diffs for aws_instance.a and aws_instance.b should be
identical.
2016-12-15 13:23:50 -05:00
Mitchell Hashimoto
817a593280
terraform: destroy edges should take into account module variables
Fixes #10729

Destruction ordering wasn't taking into account ordering implied through
variables across module boundaries.

This is because to build the destruction ordering we create a
non-destruction graph to determine the _creation_ ordering (to properly
flip edges). This creation graph we create wasn't including module
variables. This PR adds that transform to the graph.
2016-12-14 21:48:09 -08:00
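Roughly the configuration shape where this ordering matters (names hypothetical): a root resource feeds a module variable, so on destroy the module's resources must go first.

```
resource "null_resource" "base" {}

module "app" {
  source  = "./app"
  base_id = "${null_resource.base.id}"
}

# app/main.tf
variable "base_id" {}

resource "null_resource" "web" {
  triggers {
    base_id = "${var.base_id}"
  }
}
```

Here null_resource.web must be destroyed before null_resource.base, and that edge is only visible through the module variable.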
Mitchell Hashimoto
89f7e3b79f
terraform: add module vars after providers to see references
Fixes #10711

The `ModuleVariablesTransformer` only adds module variables in use. This
was missing module variables used by providers since we ran the provider
too late. This moves the transformer and adds a test for this.
2016-12-13 21:22:21 -08:00
Mitchell Hashimoto
5f1e6ad020
terraform: TargetsTransformer should preserve module variables
Fixes #10680

This moves TargetsTransformer to run after the transforms that add
module variables are run. This makes targeting work across modules (test
added).

This is a bug that only exists in the new graph, but was caught by a
shadow error in #10680. Tests were added to protect against regressions.
2016-12-12 20:59:14 -08:00
James Bardin
ed3517858f Merge pull request #10670 from hashicorp/jbardin/GH-10603
Prevent data sources from being applied early
2016-12-12 14:53:03 -05:00
Mitchell Hashimoto
80eab1d749 Merge pull request #10659 from hashicorp/b-multi-provider-dep
terraform: destroy resources in dependent providers first
2016-12-12 11:49:18 -08:00
Mitchell Hashimoto
8e19a8b79f Merge pull request #10657 from hashicorp/b-unknown-computed-list
terraform: allow indexing into a computed list for multi-count resources
2016-12-12 10:52:27 -08:00
James Bardin
d2c6f1b57f Prevent data sources from being applied early
If a data source has explicit dependencies in `depends_on`, we can
assume the user has added those because of a dependency not tracked
directly in the config. If there are any entries in `depends_on`, don't
apply the data source early during Refresh.
2016-12-12 10:35:10 -05:00
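For example (names hypothetical), a data source whose only dependency is declared through `depends_on` is now deferred to apply instead of being read during refresh:

```
resource "null_resource" "setup" {
  # produces side effects the data source relies on but never references
}

data "template_file" "config" {
  template = "port=8080"

  # the dependency isn't visible in any argument above, so it is declared
  # explicitly; with depends_on present this data source is not read early
  depends_on = ["null_resource.setup"]
}
```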
Mitchell Hashimoto
b346ba32d1
terraform: dependent provider resources are destroyed first in modules
This extends the prior commit to also verify (and fix) that resources of
dependent providers are destroyed first even when they're within
modules.
2016-12-10 20:22:12 -05:00
Mitchell Hashimoto
14d079f914
terraform: destroy resources in dependent providers first
Fixes #4645

This is something that never worked (even in legacy graphs), but as we
push forward towards encouraging multi-provider usage especially with
things like the Vault data source, I want to make sure we have this
right for 0.8.

When you have a config like this:

```
resource "foo_type" "name" {}
provider "bar" { attr = "${foo_type.name.value}" }
resource "bar_type" "name" {}
```

Then the destruction ordering MUST be:

  1. `bar_type`
  2. `foo_type`

Since configuring the client for `bar_type` requires accessing data from
`foo_type`. Prior to this PR, these two would be done in parallel. This
properly pushes forward the dependency.

There are more cases I want to test but this is a basic case that is
fixed.
2016-12-10 20:11:24 -05:00
Mitchell Hashimoto
cabcc4b5b8
terraform: allow indexing into a computed list for multi-count resources
Fixes #8695

When a list count was computed in a multi-resource access
(foo.bar.*.list), we were returning the value as an empty string. I don't
actually know the historical reasoning for this but it can't be correct:
we must return unknown.

When changing this to unknown, the new tests passed and none of the old
tests failed. This leads me to further believe that returning the empty
string is probably a holdover from long ago to just avoid crashes or
UUIDs in the plan output and not actually the correct behavior.
2016-12-10 19:17:29 -05:00
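A minimal sketch of the access pattern affected (names hypothetical): indexing into a splat list whose values, and therefore whose count, are computed at plan time.

```
resource "null_resource" "web" {
  count = 2
}

output "first_id" {
  # null_resource.web.*.id is computed during plan; indexing into it should
  # produce an unknown value rather than an empty string
  value = "${element(null_resource.web.*.id, 0)}"
}
```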
Mitchell Hashimoto
808f09f01f
terraform: user friendly error when using old map overrides
Related to #8036

We have had this behavior for a _long_ time now (since 0.7.0) but it
seems people are still periodically getting bit by it. This adds an
explicit error message that explains that this kind of override isn't
allowed anymore.
2016-12-09 15:58:24 -05:00
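The rejected override style versus the map syntax expected since 0.7 (variable name and values illustrative), e.g. in a terraform.tfvars file:

```
# no longer allowed: overriding a single key of a map variable
# amis.us-east-1 = "ami-123456"

# override the whole map instead
amis = {
  "us-east-1" = "ami-123456"
}
```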
Mitchell Hashimoto
0e4a6e3e89
terraform: apply resource must depend on destroy deps
Fixes #10440

This updates the behavior of "apply" resources to depend on the
destroy versions of their dependencies.

We make an exception to this behavior when the "apply" resource is CBD.
This is odd and not 100% correct, but it mimics the behavior of the
legacy graphs and avoids us having to do major core work to support the
100% correct solution.

I'll explain this in examples...

Given the following configuration:

    resource "null_resource" "a" {
       count = "${var.count}"
    }

    resource "null_resource" "b" {
      triggers { key = "${join(",", null_resource.a.*.id)}" }
    }

Assume we've successfully created this configuration with count = 2.
When going from count = 2 to count = 1, `null_resource.b` should wait
for `null_resource.a.1` to destroy.

If it doesn't, then it is a race: depending when we interpolate the
`triggers.key` attribute of `null_resource.b`, we may get 1 value or 2.
If `null_resource.a.1` is destroyed, we'll get 1. Otherwise, we'll get
2. This was the root cause of #10440

In the legacy graphs, `null_resource.b` would depend on the destruction
of any `null_resource.a` (orphans, tainted, anything!). This would
ensure proper ordering. We mimic that behavior here.

The difference is CBD. If `null_resource.b` has CBD enabled, then the
ordering **in the legacy graph** becomes:

  1. null_resource.b (create)
  2. null_resource.b (destroy)
  3. null_resource.a (destroy)

In this case, the update would always have 2 values for `triggers.key`,
even though we were destroying a resource later! This scenario required
two `terraform apply` operations.

This is what the CBD check is for in this PR. We do this to mimic the
behavior of the legacy graph.

The correct solution to implement one day is to allow splat references
(`null_resource.a.*.id`) to happen in parallel and only read up to
the `count` amount in the state. This requires some fairly significant
work close to the 0.8 release date, so we can defer this to later and
adopt the 0.7.x behavior for now.
2016-12-03 23:54:29 -08:00
Mitchell Hashimoto
08a56304bb Merge pull request #10455 from hashicorp/b-non-cbd-promote
terraform: when promoting non-CBD to CBD, mark the config as such
2016-12-02 09:51:27 -05:00
Mitchell Hashimoto
f3a62c694d
terraform: when promoting non-CBD to CBD, mark the config as such
Fixes #10439

When a CBD resource depends on a non-CBD resource, the non-CBD resource
is auto-promoted to CBD. This was done in
cf3a259. This PR makes it so that we
also set the config CBD to true. This causes the proper runtime
execution behavior to occur where we depose state and so on.

So in addition to simple graph edge tricks we also treat the non-CBD
resources as CBD resources.
2016-12-02 09:46:04 -05:00
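A minimal sketch of the promotion case (names hypothetical): `b` is create_before_destroy and depends on `a`, so `a` must be treated as create_before_destroy as well, including deposing its state.

```
resource "null_resource" "a" {}

resource "null_resource" "b" {
  triggers {
    a_id = "${null_resource.a.id}"
  }

  lifecycle {
    create_before_destroy = true
  }
}
```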
Mitchell Hashimoto
41c56ad002
terraform: new apply graph understands destroying deposed only 2016-11-28 14:34:24 -08:00
Mitchell Hashimoto
6ee3a08799
terraform: deposed shows up in plan, tests 2016-11-28 14:32:42 -08:00
Mitchell Hashimoto
3bf93501a1
terraform: test applying tainted + deposed (passes)
This was added while trying to reproduce a crash I saw. It passes, so
I'm adding it to the master tests.
2016-11-28 14:29:38 -08:00
Mitchell Hashimoto
f50c7acf95
terraform: record dependency to self (other index)
Fixes #10313

The new graph wasn't properly recording resource dependencies to a
specific index of itself. For example: `foo.bar.2` depending on
`foo.bar.0` wasn't shown in the state when it should've been.

This adds a test to verify this and fixes it.
2016-11-23 09:25:20 -08:00
Mitchell Hashimoto
f35d02cbee
terraform: test for alias inheritance [GH-4789] 2016-11-21 21:51:07 -08:00
Mitchell Hashimoto
e7d59ab245
terraform: test for interpolation escapes 2016-11-20 21:14:16 -08:00
Mitchell Hashimoto
25d19ef3d0 Merge pull request #10080 from hashicorp/f-tf-version
terraform: support version requirement in configuration
2016-11-14 11:53:30 -08:00
Mitchell Hashimoto
df34fa88ce Merge pull request #10076 from hashicorp/f-depend-module
terraform: depends_on can reference entire modules
2016-11-14 11:53:12 -08:00
Mitchell Hashimoto
e3a01ccfd8 Merge pull request #10072 from hashicorp/f-output-depends-on
terraform: output nodes can have `depends_on`
2016-11-14 11:52:18 -08:00
Mitchell Hashimoto
aa5d16be79
terraform: shadow errors with UUID() must be ignored
People with `uuid()` usage in their configurations would receive shadow
errors every time on plan because the UUID would change.

This is a hacky fix but I believe it is also correct: if a shadow error
contains uuid() then we ignore the shadow error completely. This feels
wrong but I'll explain why it is likely right:

The "right" feeling solution is to create deterministic random output
across graph runs. This would require using math/rand and seeding it
with the same value each run. However, this alone probably won't work
due to Terraform's parallelism and potential to call uuid() in different
orders. In addition to this, you can't seed crypto/rand, and it's unlikely
that we'll NEVER use crypto/rand in the future even if we switched
uuid() to use math/rand.

Therefore, the solution is simple: if there is no shadow error, no
problem. If there is a shadow error and it contains uuid(), then ignore
it.
2016-11-14 10:20:26 -08:00
Mitchell Hashimoto
2c467e0f74
terraform: verify version requirements from configuration 2016-11-12 16:50:26 -08:00
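The configuration-level requirement being verified looks like this (constraint string illustrative):

```
terraform {
  required_version = ">= 0.8.0"
}
```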
Mitchell Hashimoto
538302f143
terraform: resources nested within a module must also be depended on
For example: A => B => C (modules). If A depends on module B, then it
also must depend on everything in module C.
2016-11-12 15:38:28 -08:00
Mitchell Hashimoto
bcfec4e24e
terraform: test that dependencies in the state are enough to maintain order
2016-11-12 15:22:48 -08:00
Mitchell Hashimoto
0b87ef82c3
terraform: depends_on can reference entire modules 2016-11-12 08:07:45 -08:00
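For example (names hypothetical), a resource can now wait on everything inside a module rather than on individual resources:

```
module "network" {
  source = "./network"
}

resource "null_resource" "app" {
  # waits for every resource in the module, not just ones referenced directly
  depends_on = ["module.network"]
}
```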
Mitchell Hashimoto
70a41c5e15
terraform: output nodes reference depends_on values 2016-11-11 18:16:04 -08:00
Mitchell Hashimoto
f6161a7dc9
terraform: destroy edge must include resources through outputs
This fixes: `TestContext2Apply_moduleDestroyOrder`

The new destroy graph wasn't properly creating edges that happened
_through_ an output; it only created the edges for _direct_
dependents.

To fix this, the DestroyEdgeTransformer now creates the full transitive
list of destroy edges by walking all ancestors. This will create more
edges than are necessary but also will no longer miss resources through
an output.
2016-11-11 14:29:19 -08:00
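The dependency here passes through a module output rather than a direct reference (names hypothetical), which is the edge the transformer was missing on destroy:

```
# child/main.tf
resource "null_resource" "inner" {}

output "id" {
  value = "${null_resource.inner.id}"
}

# root main.tf
module "child" {
  source = "./child"
}

resource "null_resource" "outer" {
  triggers {
    child_id = "${module.child.id}"
  }
}
```

On destroy, null_resource.outer has to be destroyed before the module's inner resource even though the only link between them is the output.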
Mitchell Hashimoto
f5da7e85c8
terraform: detect computed counts and show a nicer error
This will detect computed counts (which we don't currently support) and
change the error to be more informative, stating that we don't allow
computed counts. Prior to this, the error would instead be something like
`strconv.ParseInt: "${var.foo}" cannot be parsed as int`.
2016-11-11 11:07:17 -08:00
Mitchell Hashimoto
b68b95dad0
terraform: destroy graph must connect edges for proper target ordering
This connects the destroy edges so that when a `-target` is specified on
a destroy, the proper dependencies get destroyed as well.
2016-11-10 21:14:44 -08:00
Mitchell Hashimoto
ec0ef95c6f
terraform: verify import providers only depend on vars 2016-11-09 15:09:13 -08:00
Mitchell Hashimoto
2b72882405
terraform: import loads the context module by default
This allows it to load provider configuration and interpolate it rather
than it coming purely from the environment.
2016-11-09 15:08:22 -08:00
James Bardin
f7d6fb368a
Add failing test for missing computed map entries
The map output from the module "mod" loses the computed value from the
template when we validate. If the "extra" field is removed from the map,
the validation fails earlier with map "does not have any elements so
cannot determine type".

Apply will work, because the computed value will exist in the map.
2016-11-09 14:28:15 -08:00
Mitchell Hashimoto
d29844969a
terraform: test fixture needs to use a variable so it's not pruned 2016-11-08 13:59:29 -08:00
Mitchell Hashimoto
e6be4fefe8
terraform: reference an output so it isn't pruned during plan 2016-11-08 13:59:28 -08:00
Mitchell Hashimoto
2608c5f282
terraform: transform for adding orphan resources + tests 2016-11-08 13:59:27 -08:00
Mitchell Hashimoto
dbac0785bc
terraform: tests for the plan graph builder 2016-11-08 13:59:26 -08:00
Mitchell Hashimoto
fb29b6a2dc
terraform: destroy edges should never point to self
Fixes #9920

This was an issue caught with the shadow graph. Self references in
provisioners were causing a self-edge on destroy apply graphs.

We need to explicitly check that we're not creating an edge to ourself.
This is also how the reference transformer works.
2016-11-08 12:27:33 -08:00
Mitchell Hashimoto
f580a8ed7b Merge pull request #9894 from hashicorp/b-provider-alias
terraform: configure provider aliases in the new apply graph
2016-11-07 07:59:33 -08:00
Mitchell Hashimoto
1beda4cd07 Merge pull request #9898 from hashicorp/b-new-ref-existing
terraform: remove pruning of module vars if they ref non-existing nodes
2016-11-07 07:59:25 -08:00
Mitchell Hashimoto
b7954a42fe
terraform: remove pruning of module vars if they ref non-existing nodes
Fixes a shadow graph error found during usage.

The new apply graph was only adding module variables that referenced
data that existed _in the graph_. This isn't a valid optimization since
the data it is referencing may be in the state with no diff, and
therefore available but not in the graph.

This just removes that optimization logic, which causes no failing
tests. It also adds a test that would expose the bug if we still had the
pruning logic.
2016-11-04 17:47:20 -07:00
Mitchell Hashimoto
d7ed6637c1
terraform: configure provider aliases in the new apply graph
Found via a shadow graph failure:

Provider aliases weren't being configured by the new apply graph.

This was caused by the transform that attaches configs to provider nodes
not being able to handle aliases and therefore not attaching a config.
Added a test for this and fixed it.
2016-11-04 16:51:52 -07:00
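The aliased-provider configuration involved looks like this (regions and AMI illustrative):

```
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

resource "aws_instance" "replica" {
  provider      = "aws.west"
  ami           = "ami-123456"
  instance_type = "t2.micro"
}
```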
Mitchell Hashimoto
944ffdb95b
terraform: multi-var ordering is by count
Fixes #9444

This appears to be a regression from 0.7.0, but there were no tests
covering it so we missed it and changed the behavior at some point! Oh
no.

This PR makes the ordering of multi-var access: `resource.name.*.attr`
consistent: it is the ordering of the count, not the lexical ordering of
the value. This allows behavior where two lists are indexed by count
index and can be assumed to be related (for example user data for an aws
instance, as shown in the above referenced issue).

Two new context tests added to cover this case.
2016-11-04 11:07:01 -07:00
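A sketch of the guarantee (names hypothetical, mirroring the user-data case from the issue): two lists indexed by count stay aligned because `resource.name.*.attr` is ordered by count index, not lexically.

```
variable "user_data" {
  type    = "list"
  default = ["server-0", "server-1"]
}

resource "null_resource" "web" {
  count = 2

  triggers {
    user_data = "${element(var.user_data, count.index)}"
  }
}

output "ids" {
  # ordered by count index, so ids[0] corresponds to var.user_data[0]
  value = ["${null_resource.web.*.id}"]
}
```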
James Bardin
07ec946e7e Merge pull request #9853 from hashicorp/jbardin/cbd-datasource
fix CreateBeforeDestroy with datasources
2016-11-04 09:44:42 -04:00
James Bardin
40886218d5 Add test fixture for new CBD ancestor fix
This test will fail with a cycle before we check ancestors for
CreateBeforeDestroy.
2016-11-03 18:31:25 -04:00
Mitchell Hashimoto
d2e9c35007
terraform: new apply graph creates provisioners in modules
Fixes #9840

The new apply graph wasn't properly nesting provisioners. This resulted
in the provisioners being read as nil on apply in the shadow graph, which
caused the crash in the above issue.

The actual cause of this is that the new graphs we're moving towards do
not have any "flattening" (they are flat to begin with): all modules are
in the root graph from the beginning of construction versus building a
number of different graphs and flattening them. The transform that adds
the provisioners wasn't modified to handle already-flat graphs and so
was only adding provisioners to the root module, not children.

The change modifies the `MissingProvisionerTransformer` (primarily) to
support already-flat graphs and add provisioners for all module levels.
Tests are there to cover this as well.

**NOTE:** This PR focuses on fixing that specific issue. I'm going to follow up
this PR with another PR that is more focused on being robust against
crashing (more nil checks, recover() for shadow graph, etc.). In the
interest of focus and keeping a PR reviewable this focuses only on the
issue itself.
2016-11-03 10:25:11 -07:00
Mitchell Hashimoto
9e5d1f10b0
terraform: add test to verify orphan outputs in modules are removed
For #7598

This doesn't work with the old graph, so we guard it as such.
2016-11-01 22:42:41 -07:00
Mitchell Hashimoto
86edaeda60 Merge pull request #9707 from hashicorp/b-prevent-destroy-count
terraform: prevent_destroy works for decreasing count
2016-10-31 13:24:16 -07:00
Mitchell Hashimoto
144f31b6f2 Merge pull request #9728 from hashicorp/b-prov-cycle
terraform: validate graph on resource expansion to catch cycles
2016-10-31 13:24:10 -07:00
Mitchell Hashimoto
bac66430cb
terraform: consistent variable values for booleans
Fixes #6447

This ensures that all variables of type string are consistently
converted to a string value upon running Terraform.

The place this is done is in the `Variables()` call within the
`terraform` package. This is the function responsible for loading and
merging the variables from the various sources and seems ideal for
proper conversion to consistent values for various types. We actually
already had tests to this effect.

This also adds docs that talk about the fake-ish boolean variables
Terraform currently has and about how in future versions we'll likely
support them properly, which can cause BC issues so beware.
2016-10-31 11:22:26 -07:00
Mitchell Hashimoto
1aed6f8abb
terraform: validate graph on resource expansion to catch cycles
Fixes #5342

The dynamically expanded subgraph wasn't being validated so cycles
weren't being caught here and Terraform would just hang. This fixes
that.

Note that it may make sense to validate at a higher level when the graph
is expanded, but there are certain cases where we actually expect the
graph to potentially be invalid, so this seems safer for now.
2016-10-30 14:27:08 -07:00
Mitchell Hashimoto
a332c121bc
terraform: prevent_destroy works for decreasing count
Fixes #5826

The `prevent_destroy` lifecycle configuration was not being checked when
the count was decreased for a resource with a count. It was only
checked when attributes changed on pre-existing resources.

This fixes that.
2016-10-28 21:31:47 -04:00
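The case being fixed (name hypothetical): shrinking a counted resource below its applied count now trips prevent_destroy instead of silently destroying the extra instance.

```
resource "null_resource" "cluster" {
  # previously applied with count = 3; lowering to 2 would destroy index 2
  count = 2

  lifecycle {
    prevent_destroy = true
  }
}
```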
Mitchell Hashimoto
f142978456
terraform: add test to verify tainted resources don't process
ignore_changes

For #7855
2016-10-27 08:44:59 -04:00
Mitchell Hashimoto
791a02e6e4
terraform: test that depends_on is used for destroy ordering 2016-10-25 11:05:48 -07:00
Mitchell Hashimoto
eb20db92cf
terraform: test that data sources can reference other data sources 2016-10-23 18:53:00 -07:00
Mitchell Hashimoto
1d27e554a5
terraform: test to ensure data sources work on Apply operation
It appears data sources have always been coded to work during apply, as
can be verified with this test (no impl. changes were necessary to make
it pass).

This test should be added to ensure our apply graph always works with
data sources as well.
2016-10-20 21:53:54 -07:00
Mitchell Hashimoto
e4ef1fe553
terraform: disable providers in new apply graph
This adds the proper logic for "disabling" providers to the new apply
graph: interpolating and storing the config for inheritance but not
actually initializing and configuring the provider.

This is important since parent modules will often contain incomplete
provider configurations for the purpose of inheritance that would error
if we actually attempted to configure them (since they're incomplete). If
the provider is not used, it should be "disabled".
2016-10-19 14:54:00 -07:00
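Roughly the inheritance situation handled here (names hypothetical): the parent declares a provider block only so children can inherit it; if nothing in the parent uses it, Terraform should not try to configure it.

```
# root main.tf: no aws resources here, only configuration to inherit
variable "region" {}

provider "aws" {
  region = "${var.region}"
}

module "app" {
  source = "./app"
}

# app/main.tf: inherits the aws provider configuration from the parent
resource "aws_instance" "web" {
  ami           = "ami-123456"
  instance_type = "t2.micro"
}
```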
Mitchell Hashimoto
924f7a49e0
terraform: module variable transform must do children later (tested) 2016-10-19 13:38:53 -07:00
Mitchell Hashimoto
3d4937b784
terraform: FlatConfigTransformer 2016-10-19 13:38:53 -07:00
Mitchell Hashimoto
08dade5475
terraform: more destroy edge tests 2016-10-19 13:38:52 -07:00
Mitchell Hashimoto
7b2bd93094
terraform: test the destroy edge transform 2016-10-19 13:38:52 -07:00
Mitchell Hashimoto
21888b1227
terraform: test for referencetransform for modules 2016-10-19 13:38:50 -07:00
Mitchell Hashimoto
994f5ce773
terraform: ReferenceTransform to connect references 2016-10-19 13:38:50 -07:00
Mitchell Hashimoto
0f0eecfee7
terraform: add provisioner nodes to the apply graph 2016-10-19 13:38:50 -07:00
Mitchell Hashimoto
9ea9e52185
terraform: rename Config to Module, tests for diff transform 2016-10-19 13:38:49 -07:00
Mitchell Hashimoto
728a1e5448
terraform: multi-var interpolation should use state for count
Related to #5254

If the count of a resource is interpolated (i.e. `${var.c}`), then it
must be interpolated before any splat variable using that resource can
be used (i.e. `type.name.*.attr`). The original fix for #5254 is to
always ensure that this is the case.

While working on a new apply builder based on the diff in
`f-apply-builder`, this truth no longer always holds. Rather than always
include such a resource, I believe the correct behavior instead is to
use the state as a source of truth during `walkApply` operations.

This change specifically is scoped to `walkApply` operation
interpolations since we know the state of any multi-variable should be
available. The behavior is less clear for other operations so I left the
logic unchanged from prior versions.
2016-10-13 17:57:11 -07:00
James Bardin
3297a460c7 Allow map variables from json
A JSON object will be decoded as a list with a single map value. This
will be properly coerced later, so let it through the initial config
semantic checks.
2016-09-27 13:29:14 -04:00
James Bardin
2623d84a41 Add a context test for a datasource with count 2016-09-03 13:08:41 -07:00
Sander van Harmelen
47dd1ad153 Add wildcard (match all) support to ignore_changes (#8599) 2016-09-02 15:44:35 +02:00
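The match-all form added here (resource and variable names hypothetical):

```
variable "app_version" {
  default = "1"
}

resource "null_resource" "stateful" {
  triggers {
    version = "${var.app_version}"
  }

  lifecycle {
    # ignore subsequent changes to every attribute
    ignore_changes = ["*"]
  }
}
```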
Mitchell Hashimoto
60f212b73e
terraform: test for referencing counts that are from vars 2016-08-31 11:54:14 -07:00
Mitchell Hashimoto
d2e15ab69a
terraform: add test explicitly for referencing count 2016-08-31 11:49:25 -07:00
Mitchell Hashimoto
08a9c8e2c2
terraform: self.count works in interpolations [GH-5283] 2016-08-31 11:36:51 -07:00
Mitchell Hashimoto
b9219103df
terraform: prefix destroy resources with module path [GH-2767] 2016-08-22 13:33:11 -07:00
Mitchell Hashimoto
f98459d683 Merge pull request #8304 from hashicorp/b-state-mv
Fix `state mv` with nested modules to work properly
2016-08-20 00:00:22 -04:00
Mitchell Hashimoto
05cbb5c0ea
terraform: add test for filtering nested modules 2016-08-17 18:54:47 -05:00
Mitchell Hashimoto
cc5abd0815
terraform: add tests for variables 2016-08-17 11:28:58 -07:00
Mitchell Hashimoto
f4faf2274b
config: count can't be a SimpleVariable 2016-08-16 13:48:12 -07:00
Mitchell Hashimoto
0a2fd1a5e5 terraform: filtering on name actually matches name 2016-08-15 14:36:23 -05:00
James Bardin
2e5791ab2b Allow the HCL input when prompted
We already accept HCL encoded input for -vars, and this expands that to
accept HCL when prompted for a value on the command line as well.
2016-08-10 11:14:31 -04:00
Paul Hinze
24c45fcd5d
terraform: Filter untargeted variable nodes
When targeting, only Addressable untargeted nodes were being removed
from the graph. Variable nodes are not directly Addressable, so they
were hanging around. This caused problems with module variables that
referred to Resource nodes. The Resource node would be filtered out of
the graph, but the module Variable node would not, so it would try to
interpolate during the graph walk and be unable to find its referent.

This would present itself as strange "cannot find variable" errors for
variables that were uninvolved with the currently targeted set of
resources.

Here, we introduce a new interface that can be implemented by graph
nodes to indicate they should be filtered out from targeting even though
they are not directly addressable themselves.
2016-07-29 16:55:30 -05:00
James Nugent
7af10adcbe core: Do not assume HCL parser has touched vars
This PR fixes #7824, which crashed when applying a plan file. The bug is
that while a map which has come from the HCL parser reifies as a
[]map[string]interface{}, the variable saved in the plan file did not.
We now cover both cases.

Fixes #7824.
2016-07-27 17:14:47 -05:00
James Nugent
681d94ae20 core: Allow lists and maps as variable overrides
Terraform 0.7 introduces lists and maps as first-class values for
variables, in addition to string values which were previously available.
However, there was previously no way to override the default value of a
list or map, and the functionality for overriding specific map keys was
broken.

Using the environment variable method for setting variable values, there
was previously no way to give a variable a value of a list or map. These
now support HCL for individual values - specifying:

    TF_VAR_test='["Hello", "World"]'

will set the variable `test` to a two-element list containing "Hello"
and "World". Specifying

    TF_VAR_test_map='{"Hello" = "World", "Foo" = "bar"}'

will set the variable `test_map` to a two-element map with keys "Hello"
and "Foo", and values "World" and "bar" respectively.

The same logic is applied to `-var` flags, and the file parsed by
`-var-files` ("autoVariables").

Note that care must be taken to not run into shell expansion for `-var-`
flags and environment variables.

We also merge map keys where appropriate. The override syntax has
changed (to be noted in CHANGELOG as a breaking change), so several
tests needed their syntax updating from the old `amis.us-east-1 =
"newValue"` style to `amis = "{ "us-east-1" = "newValue"}"` style as
defined in TF-002.

In order to continue supporting the `-var "foo=bar"` type of variable
flag (which is not valid HCL), a special case error is checked after HCL
parsing fails, and the old code path runs instead.
2016-07-26 15:27:29 -05:00
Paul Hinze
b45f53eef4
dag: fix ReverseDepthFirstWalk when nodes remove themselves
The report in #7378 led us into a deep rabbit hole that turned out to
expose a bug in the graph walk implementation being used by the
`NoopTransformer`. The problem ended up being when two nodes in a single
dependency chain both reported `Noop() -> true` and needed to be
removed. This was breaking the walk and preventing the second node from
ever being visited.

Fixes #7378
2016-07-15 13:43:28 -06:00
James Nugent
56aadab115 core: Add context test for module var from splat
This adds additional coverage of the situation reported in #7195 to
prevent against regression. The actual fix was in 2356afd, in response
to #7143.
2016-07-13 11:23:56 -06:00