Provider input is no longer handled with a graph walk, so the code
related to the input graph and walk is no longer needed.
For now the Input method is retained on the ResourceProvider interface,
but it will never be called. Subsequent work to revamp the provider API
will remove this method.
Due to how deeply the configuration types go into Terraform Core, there
isn't a great way to switch out to HCL2 gradually. As a consequence, this
huge commit gets us from the old state to a _compilable_ new state, but
does not yet attempt to fix any tests and has a number of known missing
parts and bugs. We will continue to iterate on this in forthcoming
commits, heading back towards passing tests and making Terraform
fully-functional again.
The three main goals here are:
- Use the configuration models from the "configs" package instead of the
older models in the "config" package, which is now deprecated and
preserved only to help us write our migration tool.
- Do expression inspection and evaluation using the functionality of the
new "lang" package, instead of the Interpolator type and related
functionality in the main "terraform" package.
- Represent addresses of various objects using types in the addrs package,
rather than hand-constructed strings. This is not critical to support
the above, but was a big help during the implementation of these other
points since it made it much more explicit what kind of address is
expected in each context.
Since our new packages are built to accommodate some future planned
features that are not yet implemented (e.g. the "for_each" argument on
resources, "count"/"for_each" on modules), and since there's still a fair
amount of functionality still using old-style APIs, there is a moderate
amount of shimming here to connect new assumptions with old, hopefully in
a way that makes it easier to find and eliminate these shims later.
I apologize in advance to the person who inevitably just found this huge
commit while spelunking through the commit history.
When computing the count value, make sure to include walkDestroy with
walkApply, as the former is only a special case of the latter.
When applying a saved plan, the computed count values are lost and we
can no longer query the state for those values. The apply walk was
already considered in the `resourceCountMax` function, but the destroy
walk was not. This worked when destroying in a single operation
("terraform destroy"), since the state would still be updated with the
latest counts from the plan.
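A minimal sketch of the idea, using an illustrative walkOperation type rather than the real one:
```
package sketch

// walkOperation stands in for Terraform's internal walk enum; the names
// here are illustrative, not the actual definitions.
type walkOperation int

const (
	walkPlan walkOperation = iota
	walkApply
	walkDestroy
)

// countFromState reports whether the resolved count should be looked up
// in the state for this walk. walkDestroy is just a special case of
// walkApply, so it must be included alongside it.
func countFromState(op walkOperation) bool {
	return op == walkApply || op == walkDestroy
}
```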
If an existing resource is scaled back to 0, locals and outputs will
still have a multi-variable reference to evaluate, which should return
an empty list. Due to how the resource is removed, the resource will
still exist in the state but with no primary instance, which needs to be
ignored in the instance count.
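A hedged sketch of that counting rule, with simplified stand-ins for the real state types:
```
package sketch

// InstanceState and ResourceState are simplified stand-ins for the real
// state types; only the fields needed for the example are shown.
type InstanceState struct {
	ID         string
	Attributes map[string]string
}

type ResourceState struct {
	Primary *InstanceState
}

// liveInstanceCount counts only resources that still have a primary
// instance. Entries left behind after scaling to 0 have no primary and
// must not contribute to the count seen by locals and outputs.
func liveInstanceCount(resources []*ResourceState) int {
	n := 0
	for _, rs := range resources {
		if rs == nil || rs.Primary == nil {
			continue // removed instance: still present in state, but empty
		}
		n++
	}
	return n
}
```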
The id attribute can be missing during the destroy operation.
While the new destroy-time ordering of outputs and locals should prevent
resources from having their id attributes set to an empty string,
there's no reason to error out if we have the canonical ID field
available.
This still interrogates the attributes map first to retain any previous
behavior, but in the future we should settle on a single ID location.
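A sketch of the lookup order described here, assuming a simplified InstanceState:
```
package sketch

// InstanceState is a simplified stand-in for the real state type.
type InstanceState struct {
	ID         string
	Attributes map[string]string
}

// instanceID prefers the "id" attribute to preserve previous behavior,
// but falls back to the canonical ID field instead of erroring when the
// attribute is missing (as it can be during destroy).
func instanceID(is *InstanceState) (string, bool) {
	if id, ok := is.Attributes["id"]; ok && id != "" {
		return id, true
	}
	if is.ID != "" {
		return is.ID, true
	}
	return "", false
}
```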
We stash the locals in the module state in a map that is ignored for JSON
serialization. We don't include locals in the persisted state because they
can be trivially recomputed and this allows us to assume that they will
pass through verbatim, without any normalization or other transforms
caused by the JSON serialization.
From a user standpoint a local is just a named alias for an expression,
so it's desirable that the result passes through here in as raw a form
as possible, so it behaves as closely as possible to simply using the
given expression directly.
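A minimal sketch of what that looks like on the module state struct (field names are illustrative):
```
package sketch

// ModuleState is a trimmed-down illustration; only the parts relevant to
// locals are shown.
type ModuleState struct {
	Path    []string               `json:"path"`
	Outputs map[string]interface{} `json:"outputs"`

	// Locals carries the evaluated local values verbatim. The `json:"-"`
	// tag keeps them out of the persisted state entirely, so they are
	// recomputed on each run rather than round-tripped through JSON.
	Locals map[string]interface{} `json:"-"`
}
```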
As part of our terminology shift, the interpolation variable for the name
of the current workspace changes to terraform.workspace. The old name
continues to be supported for compatibility.
We can't generate a deprecation warning from here so for now we'll just
silently accept terraform.env as an alias, but not mention it at all in
the error message in the hope that its use phases out over time before we
actually remove it.
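A sketch of the intended lookup behavior (the function and error text are illustrative):
```
package sketch

import "fmt"

// terraformAttr resolves attributes of the top-level "terraform" object.
// "env" is quietly accepted as an alias for "workspace", but the error
// message only mentions "workspace" so new configurations adopt it.
func terraformAttr(currentWorkspace, field string) (string, error) {
	switch field {
	case "workspace", "env":
		return currentWorkspace, nil
	default:
		return "", fmt.Errorf(
			"invalid attribute %q: the terraform object only exposes \"workspace\"", field)
	}
}
```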
For child modules, a ModuleState isn't allocated until the first time a
module instance is inserted into the state under the module's path.
Normally interpolations of resource attributes are delayed until at least
one resource has been created due to the nature of the dependency graph,
but if the interpolation value is a multi-var (splat) then it is possible
that the referenced resource has count=0 and thus created _no_ resource
states when it was visited.
Previously we would crash when trying to access the resource map for the
nil module in order to count how many instances are present. Since we know
there can't be any instances present in a nil module, we now preempt
this crash by returning zero early.
This edge-case does not apply to the root module because its ModuleState
is allocated as part of initializing the main State instance.
This fixes #14438.
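Roughly, the early return looks like this (types and names simplified):
```
package sketch

// ModuleState is a minimal stand-in; Resources maps state keys to values.
type ModuleState struct {
	Resources map[string]interface{}
}

// resourceCount returns how many instances a module holds. A child module
// whose ModuleState has not been allocated yet cannot contain instances,
// so we return zero instead of dereferencing the nil state.
func resourceCount(mod *ModuleState) int {
	if mod == nil {
		return 0
	}
	return len(mod.Resources)
}
```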
This was actually redundant anyway since HIL itself applied a similar
rule where any partially-unknown list would be automatically flattened
to a single unknown value.
However, now we're changing HIL to explicitly permit partially-unknown
lists so that we can allow the index operator [...] to succeed when
applied to one of the elements that _is_ known.
This, in conjunction with hashicorp/hil#51 and hashicorp/hil#52,
fixes #3449.
Fixes #12836
Realistically, these should be caught during validation anyway. In this
case, this was causing #12836 because a refresh with a data source will
attempt to use module variables. I don't see any clear logic for pruning
those module variables or not adding them, so it's easier to return
unknown and cause the data source to be treated as computed and not run.
It turns out that a few use cases depend on looking up a resource that
doesn't exist without that producing an error.
The other code paths had sufficient nil checks for this, but there was
one place where we called Count() that needed to be checked. If the
existence of the resource matters, it would be caught at a higher level
and still return an "unknown resource" error to the user.
Fixes #8695
When a list count was computed in a multi-resource access
(foo.bar.*.list), we were returning the value as an empty string. I don't
actually know the historical reasoning for this, but it can't be correct:
we must return unknown.
When changing this to unknown, the new tests passed and none of the old
tests failed. This leads me further to believe that returning an empty
string is probably a holdover from long ago to just avoid crashes or
UUIDs in the plan output, and not actually the correct behavior.
Because we now rely on HIL to do the computed calculation, we must make
sure the type is correct (TypeUnknown). Before, we'd just check for the
UUID in the string.
This changes all variable returns in the interpolater to run them through
`hil.InterfaceToVariable`, which handles this lookup for us.
This makes all the computed stuff "just work" since HIL uses the same
computed sentinel value (string UUID) and the type differentiates it
from a regular string.
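For illustration, the pattern is essentially a thin wrapper like this, so every value flows through the same conversion:
```
package sketch

import (
	"github.com/hashicorp/hil"
	"github.com/hashicorp/hil/ast"
)

// toVariable funnels every interpolater result through
// hil.InterfaceToVariable, which infers the appropriate ast.Variable type
// for strings, lists, and maps rather than leaving that to each call site.
func toVariable(raw interface{}) (ast.Variable, error) {
	return hil.InterfaceToVariable(raw)
}
```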
Fixes #9444
This appears to be a regression from 0.7.0, but there were no tests
covering it so we missed it and changed the behavior at some point! Oh
no.
This PR makes the ordering of multi-var access (`resource.name.*.attr`)
consistent: it is the ordering of the count, not the lexical ordering of
the value. This allows behavior where two lists are indexed by count
index and can be assumed to be related (for example, user data for an AWS
instance, as shown in the issue referenced above).
Two new context tests added to cover this case.
Fixes #5338 (and I'm sure many others)
There is no use case for "simple" variables in Terraform at all, so
anytime one is found it should be an error.
There is a _huge_ backwards incompatibility here that was not supposed
to be by design, but I'm sure a lot of people are relying on it: in the
`template_file` data source, this bug allowed you to not escape your
interpolations and have them work. For example:
```
data "template_file" "foo" {
template = "${a}"
vars { a = 12 }
}
```
The above would work, but it shouldn't. The template should have to be
`"$${a}"` (to escape the interpolation).
Because of this BC, I recommend holding this until Terraform 0.8.0 and
documenting it carefully. As part of this PR, I've added some special
error message notes.
Related to #5254
If the count of a resource is interpolated (i.e. `${var.c}`), then it
must be interpolated before any splat variable using that resource can
be used (i.e. `type.name.*.attr`). The original fix for #5254 is to
always ensure that this is the case.
While working on a new apply builder based on the diff in
`f-apply-builder`, this truth no longer always holds. Rather than always
include such a resource, I believe the correct behavior instead is to
use the state as a source of truth during `walkApply` operations.
This change specifically is scoped to `walkApply` operation
interpolations since we know the state of any multi-variable should be
available. The behavior is less clear for other operations so I left the
logic unchanged from prior versions.
Some of the tests for splat syntax were from the pre-list-and-map world,
and effectively flattened the values if interpolating a resource value
which was itself a list.
We now set the expected values correctly so that an interpolation like
`aws_instance.test.*.security_group_ids` now returns a list of lists.
We also fix the implementation to correctly deal with maps.
This commit fixes the test "TestContext2Input_moduleComputedOutputElement"
by ensuring that we treat a count of zero and non-reified resources
independently, rather than returning an empty list for both, which
results in an interpolation failure when using the element function or
indexing.
Previously, interpolation of multi-variables was returning an empty
variable if the resource count was 0. The empty variable was defined as
TypeString, Value "". This means that empty resource counts fail type
checking for interpolation functions which operate on lists.
Instead, return an empty list if the count is 0. A context test guards
against further regression, and another regression test covers the case
of a multi-variable with a count of one.
In order to make the context testing framework deal with this change, it
was necessary to special-case empty lists in the test diff function.
Fixes #7002
The work integrated in hashicorp/terraform#6322 silently broke the
ability to use remote state correctly. This commit adds a fix for that,
making use of the work integrated in hashicorp/terraform#7124.
In order to deal with outputs which are complex structures, we use a
forked version of the flatmap package - the difference between the
version in this commit and the github.com/hashicorp/terraform/flatmap
package is that we add an additional key for map counts, which the state
requires.
Because we bypass the normal helper/schema mechanism, this is not set
for us.
Because of the HIL type checking of maps, values must be of a homogeneous
type. This is unfortunate, as it means we can no longer refer to outputs
as:
${terraform_remote_state.foo.output.outputname}
Instead we had to bring them to the top level namespace:
${terraform_remote_state.foo.outputname}
This actually does lead to better overall usability - and the BC
breakage is made better by the fact that indexing would have broken the
original syntax anyway.
We also add a real-world test and assert against specific values. Tests
which were previously acceptance tests are now run as unit tests, so
regression should be identified at a much earlier stage.
This commit makes two changes: map interpolation can now read flatmapped
structures, such as those present in remote state outputs, and lists are
sorted by the index instead of the value.
The flatmapped representation of state prior to this commit encoded maps
and lists (and therefore by extension, sets) with a key corresponding to
the number of elements, or the unknown variable indicator under a .# key
and then individual items. For example, the list ["a", "b", "c"] would
have been encoded as:
listname.# = 3
listname.0 = "a"
listname.1 = "b"
listname.2 = "c"
And the map {"key1": "value1", "key2": "value2"} would have been encoded
as:
mapname.# = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
Sets use the hash code as the key - for example a set with a (fictional)
hashcode calculation may look like:
setname.# = 2
setname.12312512 = "value1"
setname.56345233 = "value2"
Prior to the work done to extend the type system, this was sufficient
since the internal representation of these was effectively the same.
However, following the separation of maps and lists into distinct
first-class types, this encoding presents a problem: given a state file,
it is impossible to tell the encoding of an empty list and an empty map
apart. This presents problems for the type checker during interpolation,
as many interpolation functions will operate on only one of these two
structures.
This commit therefore changes the representation in state of maps to use
a "%" as the key for the number of elements. Consequently the map above
will now be encoded as:
mapname.% = 2
mapname.key1 = "value1"
mapname.key2 = "value2"
This has the effect of an empty list (or set) now being encoded as:
listname.# = 0
And an empty map now being encoded as:
mapname.% = 0
Therefore we can eliminate some nasty guessing logic from the resource
variable supplier for interpolation, at the cost of having to migrate
state up front (to follow in a subsequent commit).
In order to reduce the number of potential situations in which resources
would be "forced new", we continue to accept "#" as the count key when
reading maps via helper/schema. There is no situation under which we can
allow "#" as an actual map key in any case, as it would not be
distinguishable from a list or set in state.
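An illustrative helper for the new encoding (not the actual flatmap code):
```
package sketch

// countKey returns the flatmap key that records how many elements a
// collection has. Maps now use "%" so that an empty map and an empty
// list (or set) remain distinguishable in state.
func countKey(name string, isMap bool) string {
	if isMap {
		return name + ".%" // e.g. mapname.% = 0
	}
	return name + ".#" // lists and sets, e.g. listname.# = 0
}
```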
Adding walkValidate to the EvalTree operations and removing the
walkValidate guard from Interpolater.valueModuleVar allows the values to
be interpolated for Validate.
Variables weren't being interpolated during the Input phase, causing a
syntax error on the interpolation string. Adding `walkInput` to the
EvalTree operations prevents skipping the interpolation step.
This commit forward ports the changes made for 0.6.17, in order to store
the type and sensitive flag against outputs.
It also refactors the logic of the import for V0 to V1 state, and
fixes up the call sites of the new format for outputs in V2 state.
Finally we fix up tests which did not previously set a state version
where one is required.
Provider nodes interpolate their config during the input walk, but this
is very early and so it's pretty likely that any resources referenced are
entirely absent from the state.
As a special case then, we tolerate the normally-fatal case of having
an entirely missing resource variable so that the input walk can complete,
albeit skipping the providers that have such interpolations.
If these interpolations end up still being unresolved during refresh
(e.g. because the config references a resource that hasn't been created
yet) then we will catch that error on the refresh pass, or indeed on the
plan pass if -refresh=false is used.
This adds a test and the support necessary to read from native maps
passed as variables via interpolation - for example:
```
resource ...... {
  mapValue = "${var.map}"
}
```
We also add support for interpolating maps from the flat-mapped resource
config, which is necessary to support assignment of computed maps, which
is now valid.
Unfortunately there is no good way to distinguish between a list and a
map in the flatmap. In lieu of changing that representation (which is
risky), we assume that if all the keys are numeric, this is intended to
be a list, and if not it is intended to be a map. This does preclude
maps which have purely numeric keys, which should be noted as a
backwards compatibility concern.
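A sketch of that heuristic; the helper name is illustrative:
```
package sketch

import "strconv"

// looksLikeList reports whether a flatmapped group of keys should be read
// back as a list. If every key parses as an integer we assume a list;
// otherwise we assume a map. Maps whose keys are all numeric are
// therefore misread as lists, which is the compatibility caveat noted
// above.
func looksLikeList(keys []string) bool {
	for _, k := range keys {
		if _, err := strconv.Atoi(k); err != nil {
			return false
		}
	}
	return true
}
```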
This commit adds support for native list variables and outputs, building
up on the previous change to state. Interpolation functions now return
native lists in preference to StringList.
List variables are defined like this:
variable "test" {
# This can also be inferred
type = "list"
default = ["Hello", "World"]
}
output "test_out" {
value = "${var.a_list}"
}
This results in the following state:
```
...
"outputs": {
"test_out": [
"hello",
"world"
]
},
...
```
And the result of terraform output is as follows:
```
$ terraform output
test_out = [
hello
world
]
```
Using the output name, an xargs-friendly representation is output:
```
$ terraform output test_out
hello
world
```
The output command also supports indexing into the list (with
appropriate range checking and no wrapping):
```
$ terraform output test_out 1
world
```
Along with maps, list outputs from one module may be passed as variables
into another, removing the need for the `join(",", var.list_as_string)`
and `split(",", var.list_as_string)` which was previously necessary in
Terraform configuration.
This commit also updates the tests and implementations of built-in
interpolation functions to take and return native lists where
appropriate.
A backwards compatibility note: previously the concat interpolation
function was capable of concatenating either strings or lists. The
strings use case was deprecated a long time ago but still remained.
Because we cannot return `ast.TypeAny` from an interpolation function,
this use case is no longer supported for strings - `concat` is only
capable of concatenating lists. This should not be a huge issue - the
type checker picks up incorrect parameters, and the native HIL string
concatenation - or the `join` function - can be used to replicate the
missing behaviour.
This changes the representation of maps in the interpolator from the
dotted flatmap form of a string variable named "var.variablename.key"
per map element to use native HIL maps instead.
This involves porting some of the interpolation functions in order to
keep the tests green, and adding support for map outputs.
There is one backwards incompatibility: as a result of an implementation
detail of maps, one could access an indexed map variable using the
syntax "${var.variablename.key}".
This is no longer possible - instead, the HIL native syntax
"${var.variablename["key"]}" must be used. This was previously
documented (though not heavily used), so it must be noted as a backward
compatibility issue for Terraform 0.7.
This commit adds the groundwork for supporting module outputs of types
other than string. In order to do so, the state version is increased
from 1 to 2 (though the internal version identifiers are V2 and V3, as
the first state file format was binary).
Tests are added to ensure that V2 (1) state is upgraded to V3 (2) state,
though no separate read path is required since the V2 JSON will
unmarshal correctly into the V3 structure.
Outputs in a ModuleState are now of type map[string]interface{}, and a
test covers round-tripping string, []string and map[string]string, which
should cover all of the types in question.
Type switches have been added where necessary to deal with the
interface{} value, but they currently default to panicking when the input
is not a string.
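The type switches are of roughly this shape (a sketch, not the exact code):
```
package sketch

import "fmt"

// outputString extracts the value of an output that is still expected to
// be a string. Anything else panics for now, until richer output types
// are handled throughout.
func outputString(v interface{}) string {
	switch tv := v.(type) {
	case string:
		return tv
	default:
		panic(fmt.Sprintf("unsupported output value of type %T", v))
	}
}
```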
The ContextGraphWalker struct includes a lock that's passed down to
BuiltinEvalContext and guards access to interpolation variables as
they're written using SetVariables.
The likely problem being expressed in #5733 is that the same map
reference is also passed down to the Interpolater.Variables field, which
is used for variable lookup.
Here, we plumb down the same lock we're using to guard writes and acquire
it before doing variable reads as well. It's not as fine-grained as
perhaps it could be, but all the context tests pass and I
believe this should address #5733.
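A sketch of the read path after the change (field names are illustrative, not necessarily the real ones):
```
package sketch

import "sync"

// Interpolater holds the shared variable map plus the lock that the graph
// walker already uses when writing via SetVariables.
type Interpolater struct {
	VariablesLock *sync.Mutex
	Variables     map[string]interface{}
}

// lookup acquires the same lock before reading, so concurrent writes from
// the walk cannot race with variable lookups.
func (i *Interpolater) lookup(name string) (interface{}, bool) {
	i.VariablesLock.Lock()
	defer i.VariablesLock.Unlock()
	v, ok := i.Variables[name]
	return v, ok
}
```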
References to computed list-ish attributes (set, list, map) were being
improperly resolved as an empty list `[]` during the plan phase (when
the value of the reference is not yet known) instead of as an
UnknownValue.
A "diffs didn't match" failure in an AWS DirectoryServices test led to
this discovery (and this commit fixes the failing test):
https://travis-ci.org/hashicorp/terraform/jobs/104812951
Refs #2157 which has the original work to support computed list
attributes at all. This is just a simple tweak to that work.
/cc @radeksimko
Building on the work of #3846, deprecate `filename` in favor of a
`template` attribute that accepts file contents instead of a path.
Required a bit of work in the interpolation code to prevent Terraform
from assuming that template interpolations were resource variables that
needed to be resolved. Leaving them as "Unknown Variables" prevents
interpolation from happening early and lets the `template_file` resource
do its thing.