Prior to Terraform 0.7, lists in Terraform were just a shallow abstraction
on top of strings with a magic delimiter between items. Wrapping a single
string in brackets in the configuration was Terraform's prompt that it
needed to split the string on that delimiter during interpolation.
In 0.7, when first-class lists were added, this convention was preserved
by flattening lists-of-lists by one level when they were encountered in
configuration. However, that change had an oversight: it did not
correctly handle the case where the inner list was unknown.
In #14135 we removed some code that was flattening partially-unknown lists
into fully-unknown (untyped) values. This inadvertently exposed the missed
case from the previous paragraph, causing issues for list-wrapped splat
expressions with unknown members. While this worked fine for resources,
due to some fixup done inside helper/schema, this did not work for other
interpolation contexts such as module blocks.
Various attempts to fix this up and restore the flattening behavior
selectively were unsuccessful, due to a proliferation of assumptions all
over the core code that would be too risky to change just to fix this bug.
This change, then, takes the different approach of removing the
requirement that splats be presented inside list brackets. This
requirement didn't make much sense anymore anyway, since no other
list-returning expression had this constraint and so the rest of Terraform
was already successfully dealing with both cases.
This leaves us with two different scenarios:
- For resource arguments, existing normalization code in helper/schema
does its own flattening that preserves compatibility with the common
practice of using bracketed splats. This change proves this with a test
in the "test" provider that exercises the whole Terraform core and
helper/schema stack, assigning bracketed splats to list and set
attributes.
- For arguments in other blocks, such as in module callsites, the
interpolator's own flattening behavior applies to known lists,
preserving compatibility with configurations from before
partially-computed splats were possible, but those wishing to use
partially-computed splats are required to drop the surrounding brackets.
This is less concerning because this scenario was introduced only in
0.9.5, so the scope for breakage is limited to those who adopted this
new feature quickly after upgrading.
As of this commit, the recommendation is to stop using brackets around
splats but the old form continues to be supported for backward
compatibility. In a future _major_ version of Terraform we will probably
phase out this legacy form to improve consistency, but for now both
forms are acceptable at the expense of some (pre-existing) weird behavior
when _actual_ lists-of-lists are used.
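As a minimal illustration, here are the two forms as hypothetical
configuration fixtures (the resource and module names are invented, and
the Go-constant framing is just for presentation):

```go
// Legacy form: the splat is wrapped in list brackets.
const bracketedSplat = `
module "consumers" {
  source  = "./consumers"
  servers = ["${aws_instance.web.*.id}"]
}
`

// Recommended form as of this commit: no surrounding brackets.
const bareSplat = `
module "consumers" {
  source  = "./consumers"
  servers = "${aws_instance.web.*.id}"
}
`
```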
This addresses #14521 by officially adopting the suggested workaround of
dropping the brackets around the splat. However, it doesn't yet allow
passing of a partially-unknown list between modules: that still violates
assumptions in Terraform's core, so for the moment partially-unknown lists
work only within a _single_ interpolation expression, and cannot be
passed around between expressions. Until more holistic work is done to
improve Terraform's type handling, passing a partially-unknown splat
through to a module will result in a fully-unknown list emerging on
the other side, just as was the case before #14135; this change just
addresses the fact that this was failing with an error in 0.9.5.
Instead of using a hardcoded version prerelease string, which makes release automation difficult, set the version prerelease string from an environment variable via the Go linker at compile time.
The environment variable `TF_RELEASE` should only be set via the `make bin` target; when it is set, the version prerelease string is left empty. Otherwise, when running a local compile of terraform via the `make dev` makefile target, the version prerelease string is set to `"dev"`, as usual.
This also requires some changes to both the circonus and postgresql providers, as they directly used the `VersionPrerelease` constant. We now simply call the `VersionString()` function, which returns the proper interpolated version string with the prerelease string populated correctly.
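A rough sketch of the mechanism (the package placement and exact
variable layout here are assumptions; the real version files are the
authoritative form):

```go
package terraform // placement assumed; see version.go in the repo

import "fmt"

const Version = "0.9.6"

// VersionPrerelease is a variable rather than a constant so the Go
// linker can overwrite it at compile time, roughly:
//
//	go build -ldflags "-X <import path>.VersionPrerelease="
var VersionPrerelease = "dev"

// VersionString returns the full version, with the prerelease tag
// appended when one is set.
func VersionString() string {
	if VersionPrerelease != "" {
		return fmt.Sprintf("%s-%s", Version, VersionPrerelease)
	}
	return Version
}
```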
`TF_RELEASE` is unset:
```sh
$ make dev
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/05/22 10:38:19 Generated command/internal_plugin_list.go
==> Removing old directory...
==> Building...
Number of parallel builds: 3
--> linux/amd64: github.com/hashicorp/terraform
==> Results:
total 209M
-rwxr-xr-x 1 jake jake 209M May 22 10:39 terraform
$ terraform version
Terraform v0.9.6-dev (fd472e4a86500606b03c314f70d11f2bc4bc84e5+CHANGES)
```
`TF_RELEASE` is set (mimicking the `make bin` target):
```sh
$ TF_RELEASE=1 make dev
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/05/22 10:40:39 Generated command/internal_plugin_list.go
==> Removing old directory...
==> Building...
Number of parallel builds: 3
--> linux/amd64: github.com/hashicorp/terraform
==> Results:
total 121M
-rwxr-xr-x 1 jake jake 121M May 22 10:42 terraform
$ terraform version
Terraform v0.9.6
```
For child modules, a ModuleState isn't allocated until the first time a
module instance is inserted into the state under the module's path.
Normally interpolations of resource attributes are delayed until at least
one resource has been created due to the nature of the dependency graph,
but if the interpolation value is a multi-var (splat) then it is possible
that the referenced resource has count=0 and thus created _no_ resource
states when it was visited.
Previously we would crash when trying to access the resource map for the
nil module in order to count how many instances are present. Since we know
there can't be any instances present in a nil module, we now preempt
this crash by returning zero early.
This edge-case does not apply to the root module because its ModuleState
is allocated as part of initializing the main State instance.
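A sketch of the shape of this guard (the types are reduced to the
relevant field and the function name is invented, not the exact
internals):

```go
import "strings"

type ModuleState struct {
	Resources map[string]*ResourceState
}

type ResourceState struct{}

// multiVarCount counts the instances of a resource for splat expansion.
func multiVarCount(mod *ModuleState, name string) int {
	if mod == nil {
		// The child module's state hasn't been allocated yet, so it
		// cannot contain any instances; return zero rather than crash.
		return 0
	}
	n := 0
	for key := range mod.Resources {
		if key == name || strings.HasPrefix(key, name+".") {
			n++
		}
	}
	return n
}
```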
This fixes #14438.
Currently, the refresh graph uses the resources from state as a base,
with data sources then layered on. Config is not consulted for resources
and hence new resources that are added with count (or any new resource
from config, for that matter) do not get added to the graph during
refresh.
This is leading to issues with scale in and scale out when the same
value for count is used in both resources, and data sources that may
depend on that resource (and possibly vice versa). While the resources
exist in config and can be used, the fact that ConfigTransformer for
resources is missing means that they don't get added into the graph,
leading to "index out of range" errors and what not.
Further to that, if we add these new resources to the graph for scale
out, considerations need to be taken for scale in as well, which are not
being caught 100% by the current implementation of
NodeRefreshableDataResource. Scale-in resources should be treated as
orphans, which according to the instance-form NodeRefreshableResource
node, should be NodeDestroyableDataResource nodes, but this logic
is currently not rolled into NodeRefreshableDataResource. This causes
issues on scale-in in the form of race-ish "index out of range" errors
again.
This commit updates the refresh graph so that StateTransformer is no
longer used as the base of the graph. Instead, we add resources from the
state and config in a hybrid fashion:
* First off, resource nodes are added from config, but only if
resources currently exist in state. NodeRefreshableManagedResource
is a new expandable resource node that will expand count and add
orphans from state. Any count-expanded node that has config but no
state is also transformed into a plannable resource, via a new
ResourceRefreshPlannableTransformer.
* The NodeRefreshableDataResource node type will now add count orphans
as NodeDestroyableDataResource nodes. This achieves the same effect
as if the data sources were added by StateTransformer, but ensures
there are no races in the dependency chain, with the added benefit of
directing these nodes straight to the proper
NodeDestroyableDataResource node.
* Finally, config orphans (nodes that don't exist in config anymore
period) are then added, to complete the graph.
This should ensure as much as possible that there is a refresh graph
that best represents both the current state and config with updated
variables and counts.
The previous behavior of targets was that targeting a particular node
would implicitly target everything it depends on. This makes sense when
the dependencies in question are between resources, since we need to
make sure all of a resource's dependencies are in place before we can
create or update it.
However, it had the undesirable side-effect that targeting a resource
would _exclude_ any outputs referring to it, since the dependency edge
goes from output to resource. This then causes the output to be "stale",
which is problematic when outputs are being consumed by downstream
configs using terraform_remote_state.
GraphNodeTargetDownstream allows nodes to opt-in to a new behavior where
they can be targeted by _inverted_ dependency edges. That is, it allows
outputs to be considered targeted if anything they directly depend on
is targeted.
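A sketch of the opt-in interface (the exact signature is assumed from
the behavior described here; dag.Set stands in for the dependency sets):

```go
import "github.com/hashicorp/terraform/dag"

// GraphNodeTargetDownstream lets a node opt in to being targeted via
// inverted dependency edges.
type GraphNodeTargetDownstream interface {
	// TargetDownstream receives the sets of this node's direct
	// dependencies that are targeted and untargeted, and reports
	// whether the node itself should be considered targeted.
	TargetDownstream(targeted, untargeted *dag.Set) bool
}
```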
This is different than the implied targeting behavior in the other
direction because transitive dependencies are not considered unless the
intermediate nodes themselves have TargetDownstream. This means that
an output1→output2→resource chain can implicitly target both outputs, but
an output→resource1→resource2 chain _won't_ target the output if only
resource2 is targeted.
This behavior creates a scenario where an output can be visited before
all of its dependencies are ready, since it may have a mixture of both
targeted and untargeted dependencies. This is fine for outputs because
they silently ignore any errors encountered during interpolation anyway,
but other hypothetical future implementers of this interface may need to
be more careful.
This fixes #14186.
This was actually redundant anyway since HIL itself applied a similar
rule where any partially-unknown list would be automatically flattened
to a single unknown value.
However, now we're changing HIL to explicitly permit partially-unknown
lists so that we can allow the index operator [...] to succeed when
applied to one of the elements that _is_ known.
This, in conjunction with hashicorp/hil#51 and hashicorp/hil#52,
fixes #3449.
stringer has changed the boilerplate it generates in a recent version.
We'd previously updated to the new format but accidentally rolled back
to the old while merging a long-running feature branch.
This restores us back to the new format again.
Moving the transformer wholesale looks like it broke some tests, with
some actually doing legit work in normalizing singular resources from a
foo.0 notation to just foo.
Adjusted the TestPlanGraphBuilder to account for the extra
meta.count-boundary nodes in the graph output now, as well as added
another context test that tests this case. It appears the issue happens
during validate, as this is where the state can be altered to a broken
state if things are not properly transformed in the plan graph.
This fixes interpolation issues on grandchild data sources that have
multiple instances (i.e. counts). For example, baz depends on bar, which
depends on foo.
In this instance, after an initial TF run is done and state is saved,
the next refresh/plan is not properly transformed, and instead of the
graph/state coming through as data.x.bar.0, it comes through as
data.x.bar. This breaks interpolations that rely on splat operators,
i.e. data.x.bar.*.out.
* Revert #11245, #11321, #11498 and #11757
These PRs are all related to issue #11170, for which I would like to propose a different solution than the one currently implemented.
* A different approach to solve #11170
This approach has (IMHO) a few advantages over the solution currently implemented. I will elaborate on this in the PR.
The documentation for Refresh indicates that it will always return a
valid state, but that wasn't true in the case of a graph builder error.
While this same concept wasn't documented for Apply, it was still
assumed in the terraform apply code.
Since the helper testing framework relies on the absence of a state to
determine if it can call Destroy, the Context can't start
returning a state in all cases. Document this, and use the State method
to fetch the correct state value after Apply.
Add a nil check to the WriteState function, so that writing a nil state
is a noop.
Make sure to init before sorting the state, to make sure we're not
attempting to sort nil values. This isn't technically needed with the
current code, but it's just safer in general.
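A sketch of both guards (State, init, and sort refer to the existing
terraform package pieces; the encoding details are elided to their
essence):

```go
import (
	"encoding/json"
	"io"
)

func WriteState(d *State, dst io.Writer) error {
	if d == nil {
		// Writing a nil state is now a no-op instead of a crash.
		return nil
	}
	d.init() // allocate any nil maps/slices first...
	d.sort() // ...so sorting never compares nil values
	// ... encode as before ...
	return json.NewEncoder(dst).Encode(d)
}
```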
Make sure duplicate depends_on entries are pruned from existing states
on read.
Make sure new state built from configs with multiple references to the
same resource only add it once to the Dependencies.
Duplicate entries could end up in "depends_on" in the state, which
could possibly lead to erroneous state comparisons. Remove them when
walking
the graph, and remove existing duplicates when pruning the state.
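A minimal sketch of the pruning (the helper name is invented):

```go
// dedupDependsOn removes repeated depends_on entries, preserving order.
func dedupDependsOn(in []string) []string {
	seen := make(map[string]struct{}, len(in))
	out := make([]string, 0, len(in))
	for _, d := range in {
		if _, ok := seen[d]; ok {
			continue
		}
		seen[d] = struct{}{}
		out = append(out, d)
	}
	return out
}
```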
Previously this function was depending on the mapstructure behavior of
failing with an error when trying to decode a map into a list or
vice-versa, but mapstructure's WeakDecode behavior changed so that it
will go to greater lengths to coerce the given value to fit into the
target type, causing us to mis-handle certain ambiguous cases.
Here we exert a bit more control over what's going on by using 'reflect'
to first check whether we have a slice or map value and only then try
to decode into one with mapstructure. This allows us to still rely on
mapstructure's ability to decode nested structures but ensure that lists
and maps never get implicitly converted to each other.
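A sketch of the stricter decode (the function name is invented;
WeakDecode is the real mapstructure entry point):

```go
import (
	"fmt"
	"reflect"

	"github.com/mitchellh/mapstructure"
)

func decodeList(v interface{}) ([]interface{}, error) {
	// Check the dynamic kind ourselves instead of relying on
	// mapstructure to reject a map where a list was expected.
	if k := reflect.ValueOf(v).Kind(); k != reflect.Slice {
		return nil, fmt.Errorf("expected a list, got %s", k)
	}
	var out []interface{}
	// mapstructure still handles nested structures, but can no longer
	// silently coerce maps and lists into each other.
	if err := mapstructure.WeakDecode(v, &out); err != nil {
		return nil, err
	}
	return out, nil
}
```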
Since the validation of connection blocks is delegated to the communicator
selected by "type", we were not previously doing any validation of the
attribute names in these blocks until running provisioners during apply.
Proper validation here requires us to already have the instance state,
since the final connection info is a merge of values provided in config
with values assigned automatically by the resource. However, we can do
some basic name validation to catch typos during the validation pass, even
though semantic validation and checking for missing attributes will still
wait until the provisioner is instantiated.
This fixes#6582 as much as we reasonably can.
This previously lacked tests altogether. This new test verifies the
"happy path", ensuring that both literal and computed values pass through
correctly into the VariableValues map.
This crash resulted because the type switch checked for either of two
types but the type assertion within it assumed only one of them.
A straightforward (if inelegant) fix is to simply duplicate the relevant
case block and change the type assertion, thus allowing the types to match
up in all cases.
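The pattern, with placeholder types (the real node types differ):

```go
type nodeA struct{ name string }
type nodeB struct{ name string }

func describe(v interface{}) string {
	// Before (crashing shape): a combined case with one assertion.
	//
	//	switch v.(type) {
	//	case *nodeA, *nodeB:
	//		return v.(*nodeA).name // panics when v is a *nodeB
	//	}
	//
	// After: duplicate the case block so each assertion matches.
	switch n := v.(type) {
	case *nodeA:
		return n.name
	case *nodeB:
		return n.name
	}
	return ""
}
```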
This fixes #13297.
During the input walk we stash the values resulting from user input
(if any) in the eval context for use when later walks need to resolve
the provider config.
However, this repository of input results is only able to represent
literal values, since it does not retain the record of which of the keys
have values that are "computed".
Previously we were blindly stashing all of the results, failing to
consider that some of them might be computed. That resulted in the
UnknownValue placeholder being misinterpreted as a literal value when
the data is used later, which ultimately resulted in it clobbering the
actual expression evaluation result and thus causing the provider to
fail to configure itself.
Now we are careful to only retain in this repository the keys whose values
are known statically during the input phase. This eventually gets merged
with the dynamic evaluation results on subsequent walks, with the dynamic
keys left untouched due to their absence from the stored input map.
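A sketch of that filter (the sentinel and names here are illustrative
stand-ins, not Terraform's actual constants):

```go
// unknownValue stands in for the sentinel that marks computed values.
const unknownValue = "<computed>"

func storeKnownInputs(results, store map[string]interface{}) {
	for k, v := range results {
		if s, ok := v.(string); ok && s == unknownValue {
			continue // computed: leave for later walks to evaluate
		}
		store[k] = v // statically known at input time; safe to keep
	}
}
```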
This fixes #11264.
This method mirrors that of config.Backend, so we can compare the
configuration of a backend read from a config vs that of a backend read
from a state. This will prevent init from reinitializing when using
`-backend-config` options that match the existing state.
golang/tools commit 23ca8a263 changed the format of the leading comment
to comply with some new standards discussed here:
https://golang.org/issue/13560
This is the result of running generate with the latest version of
stringer. Everyone working on Terraform will need to update stringer
after this is merged, to avoid reverting this:
go get -u golang.org/x/tools/cmd/stringer
It appears there are no tests for this as far as I can find.
We change V1 states (very old) to assume a nil path is a root path.
State.Validate() later will catch any duplicate paths.
When transforming a diff from DestroyCreate to a simple Update,
ignore_changes can cause keys from flatmapped objects to be filtered
from the diff. We need to filter each flatmapped container as a whole to
ensure that unchanged keys aren't lost in the update.
ignore_changes is causing changes in other flatmapped sets to be
filtered out incorrectly.
This required fixing the testDiffFn to create diffs which include the
old value, breaking one other test.
Fixes #12836
Realistically, these should be caught during validation anyway. In this
case, this was causing #12836 because refresh with a data source will
attempt to use module variables. I don't see any clear logic for pruning
those module variables, or for not adding them, so it's easier to return
unknown to cause the data to be treated as computed and not run.
The terraform package doesn't know about TestProvider, so don't put the
hooks in terraform.MockResourceProvider. Wrap the mock in the test where
we need to check the TestProvider functionality.
Always wait for watchStop to return during context.walk.
Context.walk would often complete immediately after sending the close
signal to watchStop, which would in turn call the deferred releaseRun,
cancelling the runContext.
Without any synchronization points after the select statement in
watchStop, that goroutine was not guaranteed to be scheduled
immediately, and in fact it often didn't continue until after the
runContext was canceled. This in turn left the select statement with
multiple successful cases, and half the time it would choose to Stop the
providers.
Stopping the providers after the walk of course didn't cause any
immediate failures, but if there was another walk performed, the
provider StopContext would no longer be valid and could cause
cancellation errors in the provider.
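A sketch of the synchronization (function and channel names are
invented):

```go
func walk(watchStop func(stop <-chan struct{})) {
	stopCh := make(chan struct{})
	doneCh := make(chan struct{})

	go func() {
		defer close(doneCh)
		// Selects between stopCh and an external stop signal.
		watchStop(stopCh)
	}()

	// ... perform the graph walk ...

	// Signal that the walk finished normally, then block until
	// watchStop has actually returned: the deferred releaseRun can no
	// longer cancel the runContext while watchStop is mid-select.
	close(stopCh)
	<-doneCh
}
```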
Starting with Go 1.8 betas, we've periodically received SIGQUITs on our
tests in Travis. The stack trace looks like this:
https://gist.github.com/mitchellh/abf09b0980f8ea01269f8d9d6133884d
The tests are timing out! This is a test that hasn't been touched really
in a very long time and has always passed. I've **reproduced this
locally** by setting `GOMAXPROCS=1` and running the test. By yielding
the scheduler in the hot loop, it now passes almost instantly every
time.
Perhaps the test can be written in a different way, but this gets tests
passing and I think will fix our periodic errors.
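A reduced reproduction of the idea (not the actual test):

```go
package main

import (
	"runtime"
	"sync/atomic"
)

func main() {
	var done int32
	go func() { atomic.StoreInt32(&done, 1) }()

	// With GOMAXPROCS=1, a tight busy-wait can starve the goroutine
	// above; yielding lets the scheduler run it.
	for atomic.LoadInt32(&done) == 0 {
		runtime.Gosched()
	}
}
```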
A couple of interpolation tests were using invalid state that didn't
match the config. These will still pass but were flushed out by an
attempt to make this an error. The repl, however, still required
interpolation without a config, and tests there will provide an
indication if this behavior changes.
It turns out that a few use cases depend on being able to look up a
resource that doesn't exist without triggering an error.
The other code paths had sufficient nil checks for this, but there was
one place where we called Count() that needed to be checked. If the
existence of the resource matters, it would be caught at a higher level
and still return an "unknown resource" error to the user.
Module resources were being sorted lexically by name by the state filter.
If there are 10 or more resources, the order won't match the index
order, and resources will have different indexes in their new location.
Sort the FilterResults by index numerically when the names match.
Clean up the module String output for visual inspection by sorting
Resource name parts numerically when they are an integer value.
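A sketch of the comparison (helper names invented):

```go
import (
	"strconv"
	"strings"
)

// lessResourceKey orders "name.N" keys with the index compared
// numerically, so "web.10" sorts after "web.9" rather than "web.1".
func lessResourceKey(a, b string) bool {
	an, ai := splitIndex(a)
	bn, bi := splitIndex(b)
	if an != bn {
		return an < bn
	}
	return ai < bi
}

func splitIndex(key string) (string, int) {
	dot := strings.LastIndex(key, ".")
	if dot < 0 {
		return key, 0
	}
	n, err := strconv.Atoi(key[dot+1:])
	if err != nil {
		return key, 0 // no numeric suffix; treat as index 0
	}
	return key[:dot], n
}
```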
Due to the change to `interface{}` we need to use `reflect.DeepEqual`
here. With the restriction of primitive types this should always be
safe. We'll never get functions, channels, etc.
This changes the type of values in Meta for InstanceState to
`interface{}`. They were `string` before.
This will allow richer structures to be persisted to this without
flatmapping them (down with flatmap!). The documentation clearly states
that only primitives/collections are allowed here.
The only thing using this was helper/schema for schema versioning.
Appropriate type checking was added to make this change safe.
The timeout work @catsby is doing will use this for a richer structure.
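A sketch of the comparison after the type change (struct reduced to the
relevant field):

```go
import "reflect"

type InstanceState struct {
	Meta map[string]interface{} // previously map[string]string
}

func metaEqual(a, b *InstanceState) bool {
	// == no longer applies to interface{} values, but DeepEqual is
	// safe under the primitives/collections-only rule: no functions
	// or channels can appear in Meta.
	return reflect.DeepEqual(a.Meta, b.Meta)
}
```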
Fixes #12183
The fix for this is in flatmap, but the entire issue is a bit more
complex. Given a schema with a computed set, if you reference it like
this:
lookup(attr[0], "field")
And "attr" contains a computed set within it, it would panic even though
"field" is available. There were a couple avenues I could've taken to
fix this:
1.) Any complex value containing any unknown value at any point is
entirely unknown.
2.) Only the specific part of the complex value is unknown.
I took route 2 so that the above works without any computed values
(since "field" is not computed but something else is). This may actually
have an effect on other parts of Terraform configs; however, those
similar configs would've simply crashed previously, so it shouldn't
break any pre-existing configs.
Fixes #10911
Outputs that aren't targeted shouldn't be included in the graph.
This requires passing targets to the apply graph. This is unfortunate
but long term should be removable since I'd like to move output changes
to the diff as well.
During backend initialization, especially during a migration, there is a
chance that an existing state could be overwritten.
Attempt to get a lock when writing the new state. It would be nice to
always have a lock when reading the states, but the recursive structure
of the Meta.Backend config functions makes that quite complex.
Fixes #11749
I'm **really** surprised this didn't come up earlier.
When only the state is available for a node, the advertised
referenceable name (the name used for dependency connections) included
the module path. This module path is automatically prepended to the
name. This means that probably every non-root resource for state-only
operations (destroys) didn't order properly.
This fixes that by omitting the path properly.
Multiple tests added to verify graph correctness, as well as a
higher-level context test.
Will backport to 0.8.x
To avoid chasing down issues like #11635 I'm proposing we disable the
shadow graph for end users now that we have merged in all the new
graphs. I've kept it around and default-on for tests so that we can use
it to test new features as we build them. I think it'll still have value
going forward but I don't want to hold us for making it work 100% with
all of Terraform at all times.
I propose backporting this to 0-8-stable, too.
Fixes #11349
I tracked this bug back to the early 0.7 days so this has been around a
really long time. I wanted to confirm that this wasn't introduced by any
new graph changes and it appears to predate all of that. I couldn't find
a single 0.7.x release where this worked, and I didn't want to go back
to 0.6.x since it was pre-vendoring.
The test case shows the logic the best, but the basic idea is: for
collections that go to zero elements, the "RequiresNew" sameness check
should be ignored, since the new diff can choose to not have that at all
in the diff.
This adds a Meta field (similar to InstanceState.Meta) to InstanceDiff.
This allows providers to store arbitrary k/v data as part of a diff and
have it persist through to the Apply. This will be used by helper/schema
for timeout storage being done by @catsby.
The type here is `map[string]interface{}`. A couple of notes:
* **Not using `string`**: The Meta field of InstanceState is a string
value. We've learned that forcing things to strings is bad. Let's
just allow types.
* **Primitives only**: Even though it is type `interface{}`, it must
be able to cleanly pass the go-plugin RPC barrier as well as be
encoded to a file as Gob. Given these constraints, the value must
consist only of primitive types and collections. No structs,
functions, channels, etc.
ReadState would assume that having a reader meant there should be a
valid state. Check for an empty file and return ErrNoState to
differentiate a bad file from an empty one.
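A sketch of the check (State is the existing type; the decode step is
elided to its essence):

```go
import (
	"bufio"
	"encoding/json"
	"errors"
	"io"
)

var ErrNoState = errors.New("no state")

func ReadState(src io.Reader) (*State, error) {
	buf := bufio.NewReader(src)
	if _, err := buf.Peek(1); err == io.EOF {
		// An empty file means "no state yet", not a corrupt state.
		return nil, ErrNoState
	}
	var s State
	if err := json.NewDecoder(buf).Decode(&s); err != nil {
		return nil, err
	}
	return &s, nil
}
```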
This disables the computed value check for `count` during the validation
pass. This enables partial support for #3888 or #1497: as long as the
value is non-computed during the plan, complex values will work in
counts.
**Notably, this allows data source values to be present in counts!**
The "count" value can be disabled during validation safely because we
can treat it as if any field that uses `count.index` is computed for
validation. We then validate a single instance (as if `count = 1`) just
to make sure all required fields are set.
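A hypothetical configuration that this enables (shown as a Go string
fixture for consistency; the provider attributes are invented for the
example):

```go
const countFromDataSource = `
data "aws_availability_zones" "available" {}

resource "aws_instance" "server" {
  # Computed during validation, but known by plan time:
  count         = "${length(data.aws_availability_zones.available.names)}"
  ami           = "ami-12345678"
  instance_type = "t2.micro"
}
`
```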