Commit Graph

1221 Commits

Author SHA1 Message Date
James Bardin
2747d964cb Merge pull request #7877 from hashicorp/jbardin/races
core: Fix race conditions, mostly around diffs
2016-07-29 18:42:31 -04:00
Paul Hinze
46b78286bc Merge pull request #7865 from hashicorp/b-filter-untargeted-variables
terraform: Filter untargeted variable nodes
2016-07-29 17:08:08 -05:00
Paul Hinze
24c45fcd5d terraform: Filter untargeted variable nodes
When targeting, only Addressable untargeted nodes were being removed
from the graph. Variable nodes are not directly Addressable, so they
were hanging around. This caused problems with module variables that
referred to Resource nodes. The Resource node would be filtered out of
the graph, but the module Variable node would not, so it would try to
interpolate during the graph walk and be unable to find its referent.

This would present itself as strange "cannot find variable" errors for
variables that were uninvolved with the currently targeted set of
resources.

Here, we introduce a new interface that can be implemented by graph
nodes to indicate they should be filtered out from targeting even though
they are not directly addressable themselves.
2016-07-29 16:55:30 -05:00
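A minimal sketch of what such an interface could look like; the names below are illustrative, not necessarily the ones this commit added to the terraform package:

```
package graph

// RemovableIfNotTargeted is an illustrative name for the interface described
// above: graph nodes that implement it ask to be pruned when a -target run
// does not include them, even though they are not directly Addressable.
type RemovableIfNotTargeted interface {
	RemoveIfNotTargeted() bool
}

// filterUntargeted shows roughly how a targeting transform could use it.
func filterUntargeted(nodes []interface{}, targeted func(interface{}) bool) []interface{} {
	var keep []interface{}
	for _, n := range nodes {
		if r, ok := n.(RemovableIfNotTargeted); ok && r.RemoveIfNotTargeted() && !targeted(n) {
			continue // drop untargeted removable nodes, e.g. module variable nodes
		}
		keep = append(keep, n)
	}
	return keep
}
```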
James Nugent
9034196baf Merge pull request #7875 from hashicorp/b-outputs-by-module
core: Fix -module for terraform output command
2016-07-29 16:40:24 -05:00
James Nugent
0e4e94a86f core: Fix -module for terraform output command
The behaviour whereby outputs for a particular nested module can be
output was broken by the changes for lists and maps. This commit
restores the previous behaviour by passing the module path into the
outputsAsString function.

We also add a new test of this since the code path for individual output
vs. all outputs for a module has diverged.
2016-07-29 16:39:59 -05:00
James Bardin
5802f76eaa Make all terraform package tests pass under -race
This isn't a pretty refactor, but fixes the race issues in this package
for now.

Fix race on RawConfig.Config()

Fix command package races
2016-07-29 16:12:21 -04:00
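A common pattern for fixing a race like the one on RawConfig.Config() is to guard the lazily built config map with a mutex and hand out a copy; this is only a sketch with assumed field names, not the actual terraform/config change:

```
package config

import "sync"

// RawConfig is a simplified stand-in for the real type; the lock and the
// copy-on-read below illustrate the race fix, not the actual implementation.
type RawConfig struct {
	lock   sync.Mutex
	config map[string]interface{}
}

// Config returns a copy of the interpolated configuration so callers can
// read it without racing against concurrent re-interpolation.
func (r *RawConfig) Config() map[string]interface{} {
	r.lock.Lock()
	defer r.lock.Unlock()

	out := make(map[string]interface{}, len(r.config))
	for k, v := range r.config {
		out[k] = v
	}
	return out
}
```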
Clint
a9aaf44a87 fix make issues (supersedes #7868) (#7876)
* Fix the make errors caused by invalid data types passed to Errorf and Printf

* fix make errors
2016-07-29 21:05:57 +01:00
James Bardin
9dec28bccf Merge pull request #7803 from hashicorp/jbardin/tf_vars-push
Add tf_vars to the variables sent in push
2016-07-28 16:31:04 -04:00
James Bardin
341abd7956 limit input retries
Prevent going into a busy loop if the input fd closes early.
2016-07-28 08:49:09 -04:00
James Nugent
7af10adcbe core: Do not assume HCL parser has touched vars
This PR fixes #7824, in which Terraform crashed when applying a plan file.
The bug is that while a map which has come from the HCL parser reifies as a
[]map[string]interface{}, the variable saved in the plan file does not. We
now cover both cases.

Fixes #7824.
2016-07-27 17:14:47 -05:00
James Nugent
681d94ae20 core: Allow lists and maps as variable overrides
Terraform 0.7 introduces lists and maps as first-class values for
variables, in addition to string values which were previously available.
However, there was previously no way to override the default value of a
list or map, and the functionality for overriding specific map keys was
broken.

Using the environment variable method for setting variable values, there
was previously no way to give a variable a value of a list or map. These
now support HCL for individual values - specifying:

    TF_VAR_test='["Hello", "World"]'

will set the variable `test` to a two-element list containing "Hello"
and "World". Specifying

    TF_VAR_test_map='{"Hello" = "World", "Foo" = "bar"}'

will set the variable `test_map` to a two-element map with keys "Hello"
and "Foo", and values "World" and "bar" respectively.

The same logic is applied to `-var` flags, and the files parsed by
`-var-file` ("autoVariables").

Note that care must be taken to not run into shell expansion for `-var`
flags and environment variables.

We also merge map keys where appropriate. The override syntax has
changed (to be noted in CHANGELOG as a breaking change), so several
tests needed their syntax updated from the old `amis.us-east-1 =
"newValue"` style to the `amis = { "us-east-1" = "newValue" }` style as
defined in TF-002.

In order to continue supporting the `-var "foo=bar"` type of variable
flag (which is not valid HCL), a special case error is checked after HCL
parsing fails, and the old code path runs instead.
2016-07-26 15:27:29 -05:00
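A rough sketch of the parsing strategy described above, assuming the github.com/hashicorp/hcl package; the wrapper key and the exact fallback condition are illustrative rather than the real Terraform implementation:

```
package main

import (
	"fmt"

	"github.com/hashicorp/hcl"
)

// parseVarValue tries to decode a -var / TF_VAR_* value as HCL so lists and
// maps can be expressed; if HCL parsing fails, the raw string is kept,
// preserving the old `-var "foo=bar"` behaviour.
func parseVarValue(raw string) interface{} {
	var out map[string]interface{}
	// Wrap the value in a dummy key so scalar, list and map literals all
	// decode through the same code path.
	if err := hcl.Decode(&out, fmt.Sprintf("v = %s", raw)); err != nil {
		return raw // not valid HCL: fall back to a plain string value
	}
	return out["v"]
}

func main() {
	fmt.Println(parseVarValue(`["Hello", "World"]`)) // two-element list
	fmt.Println(parseVarValue(`not hcl`))            // falls back to the raw string
}
```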
James Bardin
f66d1a10a4 Add VersionString
We conditionally format version with VersionPrerelease in a number of
places. Add a package-level function where we can unify the version
format. Replace most of the version formatting in terraform, but leave the
few instances set from the top-level package to make sure we don't break
anything before release.
2016-07-21 16:43:49 -04:00
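A sketch of such a package-level helper; the Version and VersionPrerelease variables mirror the ones Terraform sets from its top-level package, but the values and exact format shown here are illustrative:

```
package terraform

import "fmt"

// These would normally be set in the top-level package / at build time.
var (
	Version           = "0.7.0"
	VersionPrerelease = "rc3"
)

// VersionString unifies the version formatting that was previously
// duplicated at each call site.
func VersionString() string {
	if VersionPrerelease != "" {
		return fmt.Sprintf("%s-%s", Version, VersionPrerelease)
	}
	return Version
}
```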
James Bardin
885935962c Add a terraform version header to all atlas calls
Using the DefaultHeader added to the atlas.Client
2016-07-21 11:04:27 -04:00
James Bardin
87a5ce8045 Add tests for maps with dots
This adds some unit tests for config maps with dots in the key values.
We check for maps with keys which have overlapping names. There are
however still issues with nested maps which create overlapping flattened
names, as well as nested lists with dots in the key.
2016-07-20 14:08:14 -04:00
James Nugent
5d18f41f04 core: Convert context vars to map[string]interface{}
This is the first step in allowing overrides of map and list variables.
We convert Context.variables to map[string]interface{} from
map[string]string and fix up all the call sites.
2016-07-18 13:02:54 -05:00
Paul Hinze
7c40c174ef clean up after v0.7.0-rc3 2016-07-15 18:33:02 -06:00
Paul Hinze
3f4857a07a v0.7.0-rc3 2016-07-15 22:29:21 +00:00
Paul Hinze
b45f53eef4 dag: fix ReverseDepthFirstWalk when nodes remove themselves
The report in #7378 led us into a deep rabbit hole that turned out to
expose a bug in the graph walk implementation being used by the
`NoopTransformer`. The problem ended up being when two nodes in a single
dependency chain both reported `Noop() -> true` and needed to be
removed. This was breaking the walk and preventing the second node from
ever being visited.

Fixes #7378
2016-07-15 13:43:28 -06:00
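A simplified sketch of the failure mode and fix: the walk must capture a vertex's dependents before visiting it, so a callback that removes the vertex (as the Noop pruning does) cannot cut off the rest of the chain. The types here are stand-ins, not the real dag package API:

```
package dag

// Vertex is a simplified stand-in for the dag package's vertex type.
type Vertex interface{}

// Graph is a minimal adjacency map used only to illustrate the walk.
type Graph struct {
	up map[Vertex][]Vertex // vertex -> vertices that depend on it
}

// ReverseDepthFirstWalk visits start and everything that depends on it. The
// important detail is that each frontier is captured up front, so callbacks
// that remove vertices from the graph do not break the walk.
func (g *Graph) ReverseDepthFirstWalk(start Vertex, cb func(Vertex) error) error {
	seen := map[Vertex]bool{}
	frontier := []Vertex{start}
	for len(frontier) > 0 {
		v := frontier[len(frontier)-1]
		frontier = frontier[:len(frontier)-1]
		if seen[v] {
			continue
		}
		seen[v] = true

		// Copy the dependents *before* the callback, which may mutate g.up.
		next := append([]Vertex(nil), g.up[v]...)
		if err := cb(v); err != nil {
			return err
		}
		frontier = append(frontier, next...)
	}
	return nil
}
```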
Paul Hinze
9fe916248c Merge pull request #7654 from hashicorp/zeroae-b-triton-dot-in-tags
Support "." in map keys
2016-07-15 09:48:19 -06:00
James Nugent
340655d56c core: Allow "." character in map keys
Fixes #2143 and fixes #7130.
2016-07-14 12:38:43 -06:00
James Nugent
56aadab115 core: Add context test for module var from splat
This adds additional coverage of the situation reported in #7195 to
prevent against regression. The actual fix was in 2356afd, in response
to #7143.
2016-07-13 11:23:56 -06:00
James Nugent
788bff46e2 Merge pull request #7563 from hashicorp/b-ignore-changes-dependency
terraform: another set of ignore_changes fixes
2016-07-13 11:06:49 -06:00
James Nugent
d955c5191c core: Fix interpolation tests with nested lists
Some of the tests for splat syntax were from the pre-list-and-map world,
and effectively flattened the values if interpolating a resource value
which was itself a list.

We now set the expected values correctly so that an interpolation like
`aws_instance.test.*.security_group_ids` now returns a list of lists.

We also fix the implementation to correctly deal with maps.
2016-07-11 17:02:12 -06:00
Paul Hinze
14cea95e86 terraform: another set of ignore_changes fixes
This set of changes addresses two bug scenarios:

(1) When an ignored change canceled a resource replacement, any
downstream resources referencing computed attributes on that resource
would get "diffs didn't match" errors. This happened because the
`EvalDiff` implementation was calling `state.MergeDiff(diff)` on the
unfiltered diff. Generally this is what you want, so that downstream
references catch the "incoming" values. When there's a potential for the
diff to change, though, this results in problems with references.

Here we solve this by doing away with the separate `EvalNode` for
`ignore_changes` processing and integrating it into `EvalDiff`. This
allows us to only call `MergeDiff` with the final, filtered diff.

(2) When a resource had an ignored change but was still being replaced
anyways, the diff was being improperly filtered. This would cause
problems during apply when not all attributes were available to perform
the replacement.

We solve that by deferring actual attribute removal until after we've
decided that we do not have to replace the resource.
2016-07-08 16:48:23 -05:00
James Nugent
b6fff854a6 core: Set all unknown keys to UnknownVariableValue
As part of evaluating a variable block, there is a pass made over unknown
keys, setting them to the config.DefaultVariableValue sentinel value.
Previously this only took into account one level of nesting and assumed
all values were strings.

This commit now traverses the unknown keys via lists and maps and sets
unknown map keys surgically.

Fixes #7241.
2016-07-08 16:44:40 +01:00
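A hedged sketch of the kind of recursive traversal this describes; the sentinel constant and the way unset leaves are detected are assumptions for illustration only:

```
package main

import "fmt"

// unknownValue stands in for the config.DefaultVariableValue /
// UnknownVariableValue sentinel mentioned above.
const unknownValue = "<computed>"

// setUnknowns walks nested maps and lists and surgically replaces unset
// (here: nil) leaves with the unknown sentinel, instead of assuming a single
// level of string values. The nil-leaf convention is an assumption made for
// this sketch.
func setUnknowns(v interface{}) interface{} {
	switch t := v.(type) {
	case map[string]interface{}:
		for k, e := range t {
			t[k] = setUnknowns(e)
		}
		return t
	case []interface{}:
		for i, e := range t {
			t[i] = setUnknowns(e)
		}
		return t
	case nil:
		return unknownValue
	default:
		return v
	}
}

func main() {
	v := map[string]interface{}{
		"elb":   nil,
		"ports": []interface{}{80, nil},
	}
	fmt.Println(setUnknowns(v))
}
```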
James Nugent
088feb933f terraform: Add test case reproducing #7241
The reproduction of issue #7241 involves a list of maps being passed to
a module, where one or more of the maps has a value which is computed
(for example, from another resource). There is a failure at the point of
use (via lookup interpolation) of the computed value of the form:

```
lookup: lookup failed to find 'elb' in:
${lookup(var.services[count.index], "elb")}
```

Where 'elb' is the key of the map.
2016-07-08 16:43:42 +01:00
James Nugent
1401a52a5c Merge pull request #7493 from hashicorp/b-pass-map-to-module
terraform: allow literal maps to be passed to modules
2016-07-08 11:50:29 +01:00
James Bardin
21e2173e0a Fix nested module "unknown variable" during destroy (#7496)
* Fix nested module "unknown variable" during destroy

During a destroy with nested modules, accessing a variable between them
causes an "unknown variable accessed" error during destroy.
2016-07-06 11:22:41 -04:00
Paul Hinze
559f14c3fa terraform: allow literal maps to be passed to modules
Passing a literal map to a module looks like this in HCL:

    module "foo" {
      source = "./foo"
      somemap {
        somekey = "somevalue"
      }
    }

The HCL parser always wraps an extra list around the map, so we need to
remove that extra list wrapper when the parameter is indeed of type "map".

Fixes #7140
2016-07-06 09:52:32 -05:00
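A hedged illustration of the unwrapping described above: HCL decodes block syntax like `somemap { ... }` as a single-element list of maps, so when the target variable is declared as type "map" the wrapper list is flattened away. The helper name is illustrative:

```
package main

import "fmt"

// unwrapHCLMap flattens the []map[string]interface{} wrapper that the HCL
// parser puts around block syntax when the target variable is a map.
func unwrapHCLMap(v interface{}) interface{} {
	list, ok := v.([]map[string]interface{})
	if !ok {
		return v // not the HCL list-of-maps shape; leave it alone
	}
	merged := map[string]interface{}{}
	for _, m := range list {
		for k, val := range m {
			merged[k] = val
		}
	}
	return merged
}

func main() {
	raw := []map[string]interface{}{{"somekey": "somevalue"}}
	fmt.Println(unwrapHCLMap(raw)) // map[somekey:somevalue]
}
```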
Paul Hinze
1a4bd24e1a terraform: add test helper for inline config loading
In scenarios with a lot of small configs, it's tedious to fan out actual
dir trees in a test-fixtures dir. It also spreads out the context of the
test - requiring the reader fetch a bunch of scattered 3 line files in
order to understand what is being tested.

Our config loading code still only reads from disk, but in
the `helper/resource` acc test framework we work around this by writing
inline config to temp files and loading it from there. This helper is
based on that strategy.

Eventually it'd be great to be able to build up a `module.Tree` from
config directly, but this gets us the functionality today.

Example Usage:

    testModuleInline(t, map[string]string{
      "top.tf": `
        module "middle" {
          source = "./middle"
        }
      `,
      "middle/mid.tf": `
        module "bottom" {
          source = "./bottom"
          amap {
            foo = "bar"
          }
        }
      `,
      "middle/bottom/bot.tf": `
        variable "amap" {
          type = "map"
        }
      `,
    }),
2016-07-06 09:12:19 -05:00
Paul Hinze
4a1b36ac0d core: rerun resource validation before plan and apply
In #7170 we found two scenarios where the type checking done during the
`context.Validate()` graph walk was circumvented, and the subsequent
assumption of type safety in the provider's `Diff()` implementation
caused panics.

Both scenarios have to do with interpolations that reference Computed
values. The sentinel we use to indicate that a value is Computed does
not carry any type information with it yet.

That means that an incorrect reference to a list or a map in a string
attribute can "sneak through" validation only to crop up...

 1. ...during Plan for Data Source References
 2. ...during Apply for Resource references

In order to address this, we:

 * add high-level tests for each of these two scenarios in `provider/test`
 * add context-level tests for the same two scenarios in `terraform`
   (these tests proved _really_ tricky to write!)
 * place an `EvalValidateResource` just before `EvalDiff` and `EvalApply` to
   catch these errors
 * add some plumbing to `Plan()` and `Apply()` to return validation
   errors, which were previously only generated during `Validate()`
 * wrap unit-tests around `EvalValidateResource`
 * add an `IgnoreWarnings` option to `EvalValidateResource` to prevent
   active warnings from halting execution on the second-pass validation

Eventually, we might be able to attach type information to Computed
values, which would allow for these errors to be caught earlier. For
now, this solution keeps us safe from panics and raises the proper
errors to the user.

Fixes #7170
2016-07-01 13:12:57 -05:00
Paul Hinze
40fbb8d2e8 Merge pull request #7370 from tpounds/show-tf-ver-on-state-mismatch
Show Terraform version on state version mismatch.
2016-06-29 10:48:13 -05:00
James Bardin
68010599b1 Merge pull request #7403 from hashicorp/jbardin/GH-7394
core: Don't set Modules to nil during state upgrade
2016-06-29 10:05:57 -04:00
James Bardin
1b75c51ed7 Don't nil module maps during state upgrade
The Outputs and Resources maps in the state modules are expected to be
non-nil, and initialized that way when a new module is added to the
state.  The V1->V2 upgrade was setting the maps to nil if the len == 0.
2016-06-29 09:17:25 -04:00
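A small sketch of the guard the upgrade needs; the field types here are simplified stand-ins for the real ModuleState:

```
package terraform

// moduleState is a cut-down stand-in for terraform's ModuleState.
type moduleState struct {
	Outputs   map[string]string
	Resources map[string]interface{}
}

// init ensures the maps are non-nil after an upgrade, matching what the
// rest of the codebase expects for freshly added modules.
func (m *moduleState) init() {
	if m.Outputs == nil {
		m.Outputs = map[string]string{}
	}
	if m.Resources == nil {
		m.Resources = map[string]interface{}{}
	}
}
```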
James Bardin
94f1899f4e increment the state serial whenever we upgrade
Always increment the state serial whenever we upgrade the state version.
This prevents possible version conflicts between local and remote state
when one has been upgraded, but the serial numbers match.
2016-06-28 16:32:36 -04:00
Trevor Pounds
c4423bba17 Fix state version error message check. 2016-06-27 13:42:44 -07:00
Trevor Pounds
5932214f0d Show Terraform version on state version mismatch. 2016-06-27 11:52:43 -07:00
James Bardin
0e507e7e5e Remove computed maps from the diff Same check
Just like computed sets, computed maps may have both different values
and different cardinality after they're computed. Remove the computed
maps and the values from the compared diffs.
2016-06-27 10:04:45 -04:00
James Nugent
1f9a2b241e Merge pull request #7207 from hashicorp/b-diff-maps
core/diff: Fix attribute mismatch with tags.%
2016-06-24 15:03:36 +03:00
James Nugent
cd354ed3c7 core: Use Terraform terms over HCL ones 2016-06-24 12:42:39 +01:00
James Bardin
416e875bff Output expected HCL types when evaluating config
Don't refer to Go types when an unexpected type is encountered in the
config.
2016-06-24 12:42:01 +01:00
James Nugent
2356afde84 core: Fix interpolation of unknown multi-variables
This commit fixes the "TestContext2Input_moduleComputedOutputElement"
test case by ensuring that we treat a count of zero and non-reified
resources independently rather than returning an empty list for both,
which results in an interpolation failure when using the element function
or indexing.
2016-06-23 21:15:33 +01:00
James Nugent
983e4f13c6 core: Add context test for empty lists as module outputs
This test illustrates a failure which occurs during the Input walk, if
an interpolation is used with the input of a splat operation resulting
in a multi-variable.

The bug was found during use of the RC2, but does not correspond to an
open issue at present.
2016-06-23 21:15:33 +01:00
James Nugent
17f4777039 core: Fix Stringer on OutputState for types
The implementation of Stringer on OutputState previously assumed outputs
may only be strings - we no longer cast to string, instead using the
built-in formatting directives.
2016-06-23 21:15:33 +01:00
James Nugent
d60365af02 core: Correctly ensure that State() is a copy
The previous mechanism for testing state threw away the mutation made on
the state by calling State() twice - this commit corrects the test to
match the comment.

In addition, we replace the custom copying logic with the copystructure
library to simplify the code.
2016-06-22 17:21:27 +03:00
James Nugent
b190aa05a5 core: Add missing OutputStates in synthetic state
In cases where we construct state directly rather than reading it via
the usual methods, we need to ensure that the necessary maps are
initialized correctly.
2016-06-22 17:06:41 +03:00
clint shryock
9967641c4b core/diff: Fix attribute mismatch with tags.% 2016-06-16 18:22:21 -05:00
James Bardin
f4ef16a84b Add a core test for InstanceDiff.Same 2016-06-16 18:43:15 -04:00
James Bardin
a5c1bf1b36 Don't check any parts of a computed hash in Same
When checking for "same" values in a computed hash, not only might some
of the values differ between versions (changing the hash), but there may be
fields not included at all in the original map, and the overall counts may
differ.

Instead of trying to match individual set fields with different hashes,
remove any hashed key longer than the computed key with the same base
name.
2016-06-16 18:42:48 -04:00
Paul Hinze
0f92161f82 release: clean up after v0.7.0-rc2 2016-06-12 19:21:46 +00:00
Paul Hinze
46a0709bba v0.7.0-rc2 2016-06-12 19:07:52 +00:00
James Nugent
052345abfe core: Fix empty multi-variable type
Previously, interpolation of multi-variables was returning an empty
variable if the resource count was 0. The empty variable was defined as
TypeString, Value "". This means that empty resource counts fail type
checking for interpolation functions which operate on lists.

Instead, return an empty list if the count is 0. A context test guards
against further regression. We also add a regression test covering the
case of a single-count multi-variable.

In order to make the context testing framework deal with this change it
was necessary to special case empty lists in the test diff function.

Fixes #7002
2016-06-12 14:00:16 +02:00
Paul Hinze
bf0e7705b1 core: Fix destroy when module vars used in provider config
For `terraform destroy`, we currently build up the same graph we do for
`plan` and `apply` and we do a walk with a special Diff that says
"destroy everything".

We have fought the interpolation subsystem time and again through this
code path. Beginning in #2775 we gained a new feature to selectively
prune out problematic graph nodes. The past chain of destroy fixes I
have been involved with (#6557, #6599, #6753) has attempted to massage
the "noop" definitions to properly handle the edge cases reported.

"Variable is depended on by provider config" is another edge case we add
here and try to fix.

This dive only makes me more convinced that the whole `terraform
destroy` code path needs to be reworked.

For now, I went with a "surgical strike" approach to the problem
expressed in #7047. I found a couple of issues with the existing
Noop and DestroyEdgeInclude logic, especially with regards to
flattening, but I'm explicitly ignoring these for now so we can get this
particular bug fixed ahead of the 0.7 release. My hope is that we can
circle around with a fully specced initiative to refactor `terraform
destroy`'s graph to be more state-derived than config-derived.

Until then, this fixes #7407
2016-06-11 21:21:08 -05:00
James Nugent
bdc6a49ae3 provider/terraform: Fix outputs from remote state
The work integrated in hashicorp/terraform#6322 silently broke the
ability to use remote state correctly. This commit adds a fix for that,
making use of the work integrated in hashicorp/terraform#7124.

In order to deal with outputs which are complex structures, we use a
forked version of the flatmap package - the difference in the version
this commit vs the github.com/hashicorp/terraform/flatmap package is
that we add in an additional key for map counts which state requires.
Because we bypass the normal helper/schema mechanism, this is not set
for us.

Because of the HIL type checking of maps, values must be of a homogeneous
type. This is unfortunate, as it means we can no longer refer to outputs
as:

    ${terraform_remote_state.foo.output.outputname}

Instead we had to bring them to the top level namespace:

    ${terraform_remote_state.foo.outputname}

This actually does lead to better overall usability - and the
backwards-compatibility breakage is softened by the fact that indexing
would have broken the original syntax anyway.

We also add a real-world test and assert against specific values. Tests
which were previously acceptance tests are now run as unit tests, so
regression should be identified at a much earlier stage.
2016-06-11 16:53:45 +01:00
James Nugent
f51c9d5efd core: Fix interpolation of complex structures
This commit makes two changes: map interpolation can now read flatmapped
structures, such as those present in remote state outputs, and lists are
sorted by the index instead of the value.
2016-06-11 16:53:45 +01:00
Paul Hinze
00d004394c Merge pull request #7109 from hashicorp/f-state-lineage
core: State "Lineage" concept
2016-06-10 16:54:31 -05:00
clint shryock
7d71b8cc3c helper and terraform interpolate test update 2016-06-10 10:07:17 -05:00
Martin Atkins
985fa371dc core: State "Lineage" concept
The lineage of a state is an identifier shared by a set of states whose
serials are meaningfully comparable because they are produced by
progressive Refresh/Apply operations from the same initial empty state.

This is initialized as a type-4 (random) UUID when a new state is
initialized and then preserved on all other changes.

Since states before this change will not have lineage but users may wish
to set a lineage for an existing state in order to get the safety
benefits it will grow to imply, an empty lineage is considered to be
compatible with all lineages.
2016-06-10 07:31:30 -07:00
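A sketch of how a lineage could be initialized with a random (version 4) UUID; the go-uuid helper is an assumption about the library used, and the struct is a cut-down stand-in:

```
package terraform

import uuid "github.com/hashicorp/go-uuid"

// state is a cut-down stand-in for terraform.State with just the fields
// needed to illustrate lineage initialization.
type state struct {
	Lineage string
	Serial  int64
}

// ensureLineage assigns a random UUID the first time a state is written and
// preserves any lineage that is already set; an empty lineage stays
// compatible with everything, per the commit message above.
func (s *state) ensureLineage() error {
	if s.Lineage != "" {
		return nil
	}
	l, err := uuid.GenerateUUID()
	if err != nil {
		return err
	}
	s.Lineage = l
	return nil
}
```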
James Nugent
9554d54116 core: Add test for V2->V3 state upgrade 2016-06-09 11:16:34 +01:00
James Nugent
706ccb7dfe core: Introduce state v3 and upgrade process
This commit makes the current Terraform state version 3 (previously 2),
and a migration process as part of reading v2 state. For the most part
this is unnecessary: helper/schema will deal with upgrading state for
providers written with that framework. However, for providers which
implemented the resource model directly, this gives a best-efforts
attempt at lossless upgrade.

The heuristics used to change the count of a map from the .# key to the
.% key are as follows:

    - if the flat map contains any non-numeric keys, we treat it as a
      map
    - if the map is empty it must be computed or optional, so we remove
      it from state

There is a known edge condition: maps with all-numeric keys are
indistinguishable from sets without access to the schema. They will need
manual conversion or may result in spurious diffs.
2016-06-09 10:49:49 +01:00
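A hedged sketch of the heuristic in the list above, applied to flatmapped attribute keys; the function is illustrative rather than the real upgrade code:

```
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// upgradeMapCounts applies the v2->v3 heuristic: for each "name.#" count key,
// if any sub-key is non-numeric we assume a map and rewrite the count to
// "name.%"; an empty collection is dropped from state entirely.
func upgradeMapCounts(attrs map[string]string) {
	for k, v := range attrs {
		if !strings.HasSuffix(k, ".#") {
			continue
		}
		base := strings.TrimSuffix(k, "#") // keeps the trailing "."
		if v == "0" {
			delete(attrs, k) // empty: must be computed or optional, so remove it
			continue
		}
		isMap := false
		for sub := range attrs {
			if sub == k || !strings.HasPrefix(sub, base) {
				continue
			}
			first := strings.SplitN(strings.TrimPrefix(sub, base), ".", 2)[0]
			if _, err := strconv.Atoi(first); err != nil {
				isMap = true // non-numeric key: treat the collection as a map
				break
			}
		}
		if isMap {
			attrs[base+"%"] = v
			delete(attrs, k)
		}
	}
}

func main() {
	attrs := map[string]string{
		"tags.#": "2", "tags.Name": "web", "tags.Env": "prod",
		"ports.#": "2", "ports.0": "80", "ports.1": "443",
	}
	upgradeMapCounts(attrs)
	fmt.Println(attrs) // tags.% replaces tags.#; ports.# is left alone
}
```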
James Nugent
074545e536 core: Use .% instead of .# for maps in state
The flatmapped representation of state prior to this commit encoded maps
and lists (and therefore by extension, sets) with a key corresponding to
the number of elements, or the unknown variable indicator under a .# key
and then individual items. For example, the list ["a", "b", "c"] would
have been encoded as:

    listname.# = 3
    listname.0 = "a"
    listname.1 = "b"
    listname.2 = "c"

And the map {"key1": "value1", "key2": "value2"} would have been encoded
as:

    mapname.# = 2
    mapname.key1 = "value1"
    mapname.key2 = "value2"

Sets use the hash code as the key - for example a set with a (fictional)
hashcode calculation may look like:

    setname.# = 2
    setname.12312512 = "value1"
    setname.56345233 = "value2"

Prior to the work done to extend the type system, this was sufficient
since the internal representation of these was effectively the same.
However, following the separation of maps and lists into distinct
first-class types, this encoding presents a problem: given a state file,
it is impossible to tell the encoding of an empty list and an empty map
apart. This presents problems for the type checker during interpolation,
as many interpolation functions will operate on only one of these two
structures.

This commit therefore changes the representation in state of maps to use
a "%" as the key for the number of elements. Consequently the map above
will now be encoded as:

    mapname.% = 2
    mapname.key1 = "value1"
    mapname.key2 = "value2"

This has the effect of an empty list (or set) now being encoded as:

    listname.# = 0

And an empty map now being encoded as:

    mapname.% = 0

Therefore we can eliminate some nasty guessing logic from the resource
variable supplier for interpolation, at the cost of having to migrate
state up front (to follow in a subsequent commit).

In order to reduce the number of potential situations in which resources
would be "forced new", we continue to accept "#" as the count key when
reading maps via helper/schema. There is no situation under which we can
allow "#" as an actual map key in any case, as it would not be
distinguishable from a list or set in state.
2016-06-09 10:49:42 +01:00
James Nugent
cb9ef298f3 core: Defeat backward compatibility in mapstructure
The mapstructure library has a regrettable backward compatibility
concern whereby a WeakDecode of []interface{}{} into a target of
map[string]interface{} yields an empty map rather than an error. One
possibility is to switch to using Decode instead of WeakDecode, but this
loses the nice handling of type conversion, requiring a large volume of
code to be added to Terraform or HIL in order to retain that behaviour.

Instead we add a DecodeHook to our usage of the mapstructure library
which checks for decoding []interface{}{} or []string{} into a map and
returns an error instead.

This has the effect of defeating the code added to retain backwards
compatibility in mapstructure, giving us the correct (for our
circumstances) behaviour of Decode for empty structures and the type
conversion of WeakDecode.

The code is identical to that in the HIL library, and packaged into a
helper.
2016-06-08 18:38:41 +01:00
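A sketch of the DecodeHook approach, assuming the mitchellh/mapstructure DecoderConfig API; the hook and wrapper names are illustrative:

```
package main

import (
	"errors"
	"fmt"
	"reflect"

	"github.com/mitchellh/mapstructure"
)

// errMapFromSlice mirrors the behaviour described above: decoding a slice
// (including []interface{}{}) into a map should be an error, not an empty map.
var errMapFromSlice = errors.New("cannot decode a list into a map")

// hook rejects slice -> map conversions before the weak decoding silently
// turns an empty list into an empty map.
func hook(from, to reflect.Kind, data interface{}) (interface{}, error) {
	if from == reflect.Slice && to == reflect.Map {
		return nil, errMapFromSlice
	}
	return data, nil
}

// weakDecodeWithHook keeps WeakDecode's type conversion behaviour while
// adding the hook, roughly as the helper described in the commit does.
func weakDecodeWithHook(input, output interface{}) error {
	d, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
		DecodeHook:       hook,
		WeaklyTypedInput: true,
		Result:           output,
	})
	if err != nil {
		return err
	}
	return d.Decode(input)
}

func main() {
	var m map[string]interface{}
	err := weakDecodeWithHook([]interface{}{}, &m)
	fmt.Println(err) // an error instead of a silently empty map
}
```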
James Nugent
49995428fd core: Remove support for V0 state
This removes support for the V0 binary state format which was present in
Terraform prior to 0.3. We still check for the file type and present an
error message explaining to the user that they can upgrade it using a
prior version of Terraform.
2016-06-08 18:38:41 +01:00
James Nugent
e97720f5e3 release: clean up after v0.7.0-rc1 2016-05-31 21:16:01 +00:00
James Nugent
301da85f30 v0.7.0-rc1 2016-05-31 20:49:07 +00:00
Chris Marchesi
9d7fb89114 core: Adding Sensitive attribute to resource schema
This is an effort to address hashicorp/terraform#516.

Adding the Sensitive attribute to the resource schema, opening up the
ability for resource maintainers to mark some fields as sensitive.
Sensitive fields are hidden in the output, and, possibly in the future,
could be encrypted.
2016-05-29 22:18:44 -07:00
James Nugent
8b252cfd2b Merge branch 'master' into b-data-diff-lag 2016-05-29 12:01:01 -07:00
Chris Marchesi
559799599e core: Ensure EvalReadDataApply is called on expanded destroy nodes
During acceptance tests of some of the first data sources (see
hashicorp/terraform#6881 and hashicorp/terraform#6911),
"unknown resource type" errors have been coming up. We traced them down to
the ResourceCountTransformer, which transforms destroy nodes to a
graphNodeExpandedResourceDestroy node. This node's EvalTree() was still
indiscriminately using EvalApply for all resource types, versus
EvalReadDataApply. This commit accounts for both cases via EvalIf.
2016-05-28 22:48:28 -07:00
Martin Atkins
f41fe4879e core: produce diff when data resource config becomes computed
Previously the plan phase would produce a data diff only if no state was
already present. However, this is a faulty approach because a state will
already be present in the case where the data resource depends on a
managed resource that existed in state during refresh but became
computed during plan, due to a "forces new resource" diff.

Now we will produce a data diff regardless of the presence of the state
when the configuration is computed during the plan phase.

This fixes #6824.
2016-05-28 12:39:36 -07:00
Sander van Harmelen
5af1afd64e Update newly added code to work with the updated taint semantics 2016-05-26 19:56:03 -05:00
Sander van Harmelen
d97b24e3c1 Add tests and fix last issues 2016-05-26 19:56:03 -05:00
Sander van Harmelen
8560f50cbc Change taint behaviour to act as a normal resource
This means it’s shown correctly in a plan and takes into account any
actions that are dependent on the tainted resource and, vice versa, any
actions that the tainted resource depends on.

So this changes the behaviour from saying "this resource is tainted so
just forget about it and make sure it gets deleted in the background"
to saying "I want that resource to be recreated (taking into account the
existing resource and its place in the graph)".
2016-05-26 19:55:26 -05:00
James Bardin
3f7197622a Merge pull request #6833 from hashicorp/jbardin/fix-interpolation
Interpolate variable during Input and Validate
2016-05-24 10:19:19 -04:00
James Nugent
bf91434576 Merge branch 'b-missing-data-diff' 2016-05-23 16:11:48 -05:00
Paul Hinze
b205ac2123 core: Another validate test for computed module var refs
I wanted to make sure this case was handled, and it is!

So here's an extra test for us.
2016-05-23 15:54:14 -05:00
Martin Atkins
ed9b8f91cf core: test that data sources are read during refresh
Data sources with non-computed configurations should be read during the
refresh walk. This test ensures that this remains true.
2016-05-23 15:21:00 -05:00
Martin Atkins
b832fb305b core: context test for destroying data resources
Earlier we had a bug where data resources would not get removed from the
state during a destroy. This was fixed in cd0c452, and this test will
hopefully make sure it stays fixed.
2016-05-23 15:21:00 -05:00
James Bardin
ed042ab067 Interpolation also skipped during Validate phase
Adding walkValidate to the EvalTree operations, and removing the
walkValidate guard from the Interpolater.valueModuleVar allows the
values to be interpolated for Validate.
2016-05-23 13:44:13 -04:00
James Bardin
fc4ac52014 Module variables not being interpolated
Variables weren't being interpolated during the Input phase, causing a
syntax error on the interpolation string. Adding `walkInput` to the
EvalTree operations prevents skipping the interpolation step.
2016-05-23 13:44:09 -04:00
Martin Atkins
b255c389e2 core: restore data resource creation diffs
cd0c452 contained a bug where the creation diff for a data resource was
put into a new local variable within the else block rather than into the
diff variable in the parent scope, causing a null diff to always be
produced.

This restores the expected behavior: a computed data resource appears in
the diff, so it can then be fetched during the apply walk.
2016-05-21 13:23:28 -07:00
Martin Atkins
d9c137555f core: test to prove that data diffs are broken
Apparently there's been a regression in the creation of data resource
diffs: they aren't showing up in the plan at all.

As a first step to fixing this, this is an intentionally-failing test
that proves it's broken.
2016-05-21 13:00:46 -07:00
Martin Atkins
cd0c4522ee core: honor "destroy" diffs for data resources
Previously the "planDestroy" pass would correctly produce a destroy diff,
but the "apply" pass would just ignore it and make a fresh diff, turning
it back into a "create" because data resources are always eager to
refresh.

Now we consider the previous diff when re-diffing during apply and so
we can preserve the plan to destroy and then ultimately actually "destroy"
the data resource (remove from the state) when we get to ReadDataApply.

This ensures that the state is left empty after "terraform destroy";
previously we would leave behind data resource states.
2016-05-20 15:07:23 -05:00
James Nugent
2abe8b6612 Merge pull request #6753 from hashicorp/b-output-destroy-fix
core: Fix destroy w/ module vars used in counts
2016-05-19 11:21:50 -05:00
Chris Marchesi
2a679edd25 core: Fix destroy on nested module vars for count
Building on b10564a, this adds tweaks that allow the module var count
search to act recursively, ensuring that a situation where something
like var.top gets passed to module middle as var.middle, and then to
module bottom as var.bottom, which is then used in a resource count, is
handled correctly.
2016-05-18 13:32:56 -05:00
Paul Hinze
f33ef43195 core: Fix destroy when modules vars are used in resource counts
A new problem was introduced by the prior fixes for destroy
interpolation messages: when resources depend on module variables with
a _count_ attribute, the variable becomes crucial for properly
building the graph - even in destroys. So removing all module variables
from the graph as noops was overzealous.

By borrowing the logic in `DestroyEdgeInclude` we are able to determine
if we need to keep a given module variable relatively easily.

I'd like to overhaul the `Destroy: true` implementation so that it does
not depend on config at all, but I want to continue for now with the
targeted fixes that we can backport into the 0.6 series.
2016-05-18 13:32:49 -05:00
James Nugent
3ea3c657b5 core: Use OutputState in JSON instead of map
This commit forward ports the changes made for 0.6.17, in order to store
the type and sensitive flag against outputs.

It also refactors the logic of the import for V0 to V1 state, and
fixes up the call sites of the new format for outputs in V2 state.

Finally we fix up tests which did not previously set a state version
where one is required.
2016-05-18 13:25:20 -05:00
Martin Atkins
453fc505f4 core: Tolerate missing resource variables during input walk
Provider nodes interpolate their config during the input walk, but this
is very early and so it's pretty likely that any resources referenced are
entirely absent from the state.

As a special case then, we tolerate the normally-fatal case of having
an entirely missing resource variable so that the input walk can complete,
albeit skipping the providers that have such interpolations.

If these interpolations end up still being unresolved during refresh
(e.g. because the config references a resource that hasn't been created
yet) then we will catch that error on the refresh pass, or indeed on the
plan pass if -refresh=false is used.
2016-05-14 09:25:03 -07:00
Martin Atkins
61ab8bf39a core: ResourceAddress supports data resources
The ResourceAddress struct grows a new "Mode" field to match with
Resource, and its parser learns to recognize the "data." prefix so it
can set that field.

Allows -target to be applied to data sources, although that is arguably
not a very useful thing to do. Other future uses of resource addressing,
like the state plumbing commands, may be better uses of this.
2016-05-14 08:26:36 -07:00
Martin Atkins
afc7ec5ac0 core: Destroy data resources with "terraform destroy"
Previously they would get left behind in the state because we had no
support for planning their destruction. Now we'll create a "destroy" plan
and act on it by just producing an empty state on apply, thus ensuring
that the data resources don't get left behind in the state after
everything else is gone.
2016-05-14 08:26:36 -07:00
Martin Atkins
4d50f22a23 core: separate lifecycle for data resource "orphans"
The handling of data "orphans" is simpler than for managed resources
because the only thing we need to deal with is our own state, and the
validation pass guarantees that, by the time we get to refresh or apply,
the instance state is no longer needed by any other resources, so we
can safely drop it with no fanfare.
2016-05-14 08:26:36 -07:00
Martin Atkins
36054470e4 core: lifecycle for data resources
This implements the main behavior of data resources, including both the
early read in cases where the configuration is non-computed and the split
plan/apply read for cases where full configuration can't be known until
apply time.
2016-05-14 08:26:36 -07:00
Martin Atkins
1da560b653 core: Separate resource lifecycle for data vs. managed resources
The key difference between data and managed resources is in their
respective lifecycles. Now the expanded resource EvalTree switches on
the resource mode, generating a different lifecycle for each mode.

For this initial change only managed resources are implemented, using the
same implementation as before; data resources are no-ops. The data
resource implementation will follow in a subsequent change.
2016-05-14 08:26:36 -07:00
Martin Atkins
c1315b3f09 core: EvalValidate calls appropriate validator for resource mode
Data resources are a separate namespace of resources from managed
resources, so we need to call a different provider method depending on
which mode of resource we're visiting.

Managed resources use ValidateResource, while data resources use
ValidateDataSource, since at the provider level of abstraction each
provider has separate sets of resources and data sources respectively.
2016-05-14 08:26:36 -07:00
Martin Atkins
844e1abdd3 core: ResourceStateKey understands resource modes
Once a data resource gets into the state, the state system needs to be
able to parse its id to match it with resources in the configuration.
Since data resources live in a separate namespace from managed resources,
the extra "mode" discriminator is required to specify which namespace
we're talking about, just like we do in the resource configuration.
2016-05-14 08:26:36 -07:00
Martin Atkins
0e0e3d73af core: New ResourceProvider methods for data resources
This is a breaking change to the ResourceProvider interface that adds the
new operations relating to data sources.

DataSources, ValidateDataSource, ReadDataDiff and ReadDataApply are the
data source equivalents of Resources, Validate, Diff and Apply (respectively)
for managed resources.

The diff/apply model seems at first glance a rather strange workflow for
read-only resources, but implementing data resources in this way allows them
to fit cleanly into the standard plan/apply lifecycle in cases where the
configuration contains computed arguments and thus the read must be deferred
until apply time.

Along with breaking the interface, we also fix up the plugin client/server
and helper/schema implementations of it, which are all of the callers
used when provider plugins use helper/schema. This would be a breaking
change for any provider plugin that directly implements the provider
interface, but no known plugins do this and it is not recommended.

At the helper/schema layer the implementer sees ReadDataApply as a "Read",
as opposed to "Create" or "Update" as in the managed resource Apply
implementation. The planning mechanics are handled entirely within
helper/schema, so that complexity is hidden from the provider implementation
itself.
2016-05-14 08:26:36 -07:00
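A hedged sketch of the shape of these interface additions; the four method names come from the commit message, while the parameter and return types are simplified stand-ins for the real terraform types:

```
package terraform

// The concrete argument and return types below (dataSource, resourceConfig,
// instanceState, instanceDiff) are simplified placeholders for the real
// terraform package types.
type (
	dataSource     struct{ Name string }
	resourceConfig struct{}
	instanceState  struct{}
	instanceDiff   struct{}
)

// resourceProviderWithData sketches the data-source half of the interface:
// DataSources, ValidateDataSource, ReadDataDiff and ReadDataApply mirror
// Resources, Validate, Diff and Apply for managed resources.
type resourceProviderWithData interface {
	DataSources() []dataSource
	ValidateDataSource(t string, c *resourceConfig) (warns []string, errs []error)
	ReadDataDiff(t string, c *resourceConfig) (*instanceDiff, error)
	ReadDataApply(t string, d *instanceDiff) (*instanceState, error)
}
```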
Mitchell Hashimoto
59a9bcf3dc terraform: fix compilation with new func call 2016-05-11 13:07:33 -07:00
Mitchell Hashimoto
27452f0043 terraform: Module option to Import to add module to graph 2016-05-11 13:02:37 -07:00
Mitchell Hashimoto
ec02c8c0e2 helper/resource: testing of almost all aspects of ImportState tests 2016-05-11 13:02:37 -07:00
Mitchell Hashimoto
0d1debc0ae terraform: import verifies the refresh results in non-nil state
/cc @jen20
2016-05-11 13:02:36 -07:00
Mitchell Hashimoto
6bdab07174 providers/aws: security group import imports rules 2016-05-11 13:02:36 -07:00