Now that variables parse and retain a set of default values for
object attributes, we must apply the defaults during variable
evaluation. We do so immediately before type conversion, preprocessing
the given value so that conversion will receive the intended defaults as
appropriate.
The optional modifier previously accepted a single argument: the
attribute type. This commit adds an optional second argument, which
specifies a default value for the attribute.
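To illustrate the idea (not the actual implementation), here is a minimal
sketch of substituting a default for an omitted optional attribute before
conversion; the helper name and the use of plain cty values are
illustrative only:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// applyAttrDefault substitutes a default when the given object value omits
// the attribute or sets it to null, so that the later type conversion sees
// the intended value.
func applyAttrDefault(given cty.Value, attr string, def cty.Value) cty.Value {
	if !given.Type().IsObjectType() {
		return given
	}
	attrs := given.AsValueMap()
	if attrs == nil {
		attrs = make(map[string]cty.Value)
	}
	if v, ok := attrs[attr]; !ok || v.IsNull() {
		attrs[attr] = def
	}
	return cty.ObjectVal(attrs)
}

func main() {
	given := cty.ObjectVal(map[string]cty.Value{"name": cty.StringVal("web")})
	fmt.Printf("%#v\n", applyAttrDefault(given, "port", cty.NumberIntVal(8080)))
}
```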
To record the default values for a variable's type, we use a separate
parallel structure of `typeexpr.Defaults`, rather than extending
`cty.Type` to include a `cty.Value` of defaults (which may in turn
include a `cty.Type` with defaults, and so on, and so forth).
The new `typeexpr.TypeConstraintWithDefaults` returns a type constraint
and defaults value. Defaults will be `nil` unless there are default
values specified somewhere in the variable's type.
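A hedged sketch of how a caller might consume the new entry point follows;
the exact signature and surrounding package are assumptions based only on
the description above:

```go
package configs

import (
	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/terraform/internal/typeexpr"
	"github.com/zclconf/go-cty/cty"
)

// decodeVariableType shows one way a caller might use the new entry point.
// The signature of TypeConstraintWithDefaults is assumed here: a type
// constraint plus a (possibly nil) defaults value.
func decodeVariableType(expr hcl.Expression) (cty.Type, *typeexpr.Defaults, hcl.Diagnostics) {
	ty, defaults, diags := typeexpr.TypeConstraintWithDefaults(expr)
	if diags.HasErrors() {
		return cty.NilType, nil, diags
	}
	// defaults is nil unless optional(..., default) appears somewhere in
	// the type, so callers can cheaply skip the defaults-application step.
	return ty, defaults, diags
}
```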
In type constraints, object attributes may be marked as optional in
order to allow them to be omitted from input values. Doing so results in
filling the attribute value with a typed `null`.
This commit adds a new type `typeexpr.Defaults` which mirrors the
structure of a type constraint, storing default values for optional
attributes. This will allow specification of non-`null` default values
for attributes.
The `Defaults` type is a tree structure, each node containing a sub-tree
type, a map of children, and for object nodes, a map of defaults. The
keys in the children map depend on the type of the node:
- Object nodes have children for each attribute;
- Tuple nodes have children for each index, with indices converted to
string values;
- Collection nodes have a single child at the empty string key.
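A rough sketch of that shape, with illustrative field names rather than a
definitive rendering of the new type:

```go
package typeexpr

import "github.com/zclconf/go-cty/cty"

// Defaults is a tree mirroring the structure of a type constraint. Field
// names here are a sketch of the shape described above.
type Defaults struct {
	// Type is the type constraint corresponding to this node's sub-tree.
	Type cty.Type

	// DefaultValues holds defaults for optional attributes, and is only
	// populated for object nodes.
	DefaultValues map[string]cty.Value

	// Children holds defaults for nested types. Keys are attribute names
	// for object nodes, decimal string indices ("0", "1", ...) for tuple
	// nodes, and the empty string for collection nodes.
	Children map[string]*Defaults
}
```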
When traversing this tree we must take this structure into account, with
special cases for map input values which may later be converted to
objects.
The traversal defined in this commit uses a pre-order transformer in
order to pre-populate descendant nodes before their defaults are
applied. This allows nested type default values to be specified more
compactly.
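Continuing the sketch above, a simplified, hedged illustration of the
pre-order idea for object nodes only; the real traversal also handles tuple
and collection children and map-to-object conversion:

```go
// apply merges this node's defaults into v and then recurses into object
// attribute children. This is a simplification of the pre-order traversal
// described above, handling only object nodes.
func (d *Defaults) apply(v cty.Value) cty.Value {
	if v.IsNull() || !v.IsKnown() || !v.Type().IsObjectType() {
		return v
	}
	attrs := v.AsValueMap()
	if attrs == nil {
		attrs = make(map[string]cty.Value)
	}
	// Pre-order: fill in this node's defaults first, so any default objects
	// introduced here are themselves visited below.
	for name, def := range d.DefaultValues {
		if cur, ok := attrs[name]; !ok || cur.IsNull() {
			attrs[name] = def
		}
	}
	// Then descend into children, keyed by attribute name for object nodes.
	for name, child := range d.Children {
		if cur, ok := attrs[name]; ok {
			attrs[name] = child.apply(cur)
		}
	}
	return cty.ObjectVal(attrs)
}
```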
Rather than maintain a separate graph builder for destroy, use the
normal plan graph with some extra options. Utilize the same pattern as
the validate graph for now, where we take the normal plan graph builder
and inject a new concrete function for the destroy nodes.
All instances in state are being removed for destroy, so we can skip
checking for orphans. Because we want to reuse the normal plan graph,
that check would still be reached during destroy, so add a flag to
turn it off.
Destroy nodes should never participate in references. These edges didn't
come up before, because we weren't building a complete graph including
all temporary values.
This also sets an additional variable if it detects that this is an alpha
or development build, which currently does nothing but might eventually
turn on the ability to use experimental features, if we make that
something available only in prereleases.
When a user reports a "Configuration contains unknown value" error,
there is no information on what might have been unknown during apply.
Add unknown attribute paths to the diagnostic message to provide some
more information when a reproduction may not be possible. Since this is
one of those "should never happen" types of errors which will be
reported to the developers directly, we can leave the format as the raw
internal representation for simplicity.
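For illustration only, something along these lines could collect the
unknown paths; the helper is hypothetical rather than the actual code:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// unknownPaths walks a value and records the path of every unknown leaf so
// that the paths can be included in the diagnostic detail.
func unknownPaths(v cty.Value) []cty.Path {
	var paths []cty.Path
	cty.Walk(v, func(p cty.Path, v cty.Value) (bool, error) {
		if !v.IsKnown() {
			paths = append(paths, p.Copy())
		}
		return true, nil
	})
	return paths
}

func main() {
	v := cty.ObjectVal(map[string]cty.Value{
		"id":   cty.UnknownVal(cty.String),
		"name": cty.StringVal("example"),
	})
	// The raw internal representation is enough for a "should never happen"
	// diagnostic aimed at developers.
	fmt.Printf("%#v\n", unknownPaths(v))
}
```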
5417975946 addressed a regression in the
logic for catching when the newer module meta-arguments are used in
conjunction with the legacy practice of including provider configurations
inside modules. The check had started incorrectly catching situations
where the legacy nested provider configuration was in the same module
as the child call using one of those meta-arguments.
This is a regression test to catch if a similar bug arises in the future.
Since it's testing validation rules that apply to an entire configuration
tree, it ended up in a rather idiosyncratic location under the "configload"
package, rather than directly in "configs". The "configs" package only
knows how to load one module at a time, so it's harder to write a test
like this in that context. Because it is further removed from the code
it is testing, I also included a test for the correct error, to
increase the chance that we'll learn if future changes in the "configs"
package invalidate this regression test.
I've verified that this new test fails without the change made in the
earlier commit.
This fixes a bug introduced in 1879a39 in which initialising a module
would fail if that module contains both a provider block and a module
call using for_each.
In configurations which have already been initialized, updating the
source of a non-root module call to an invalid value could cause a nil
pointer panic. This commit fixes the bug and adds test coverage.
When a data resource is used for the purposes of verifying a condition
about an object managed elsewhere (e.g. if the managed resource doesn't
directly export all of the information required for the condition) it's
important that we defer the data resource read to the apply step if the
corresponding managed resource has any changes pending.
Typically we'd expect that to come "for free" but unfortunately we have
a pragmatic special case in our handling of data resources where we
normally defer to the apply step only if a _direct_ dependency of the data
resource has a change pending, and allow a plan-time read if there's
a pending change in an indirect dependency. This allowed us to preserve
some compatibility with the questionable historical behavior of always
reading data resources proactively unless the configuration contains
unknown values, since the arguably-more-correct behavior would've been a
regression for anyone who had been depending on that before.
Since preconditions and postconditions didn't exist until now, we are not
constrained in the same way by backward compatibility, and so we can adopt
the more correct behavior in the case where a data resource has conditions
specified. This does unfortunately make the handling of data resources
with conditions subtly inconsistent with those that don't, but this is
a better situation than the alternative where it would be easy to get into
a trapped situation where the remote system is invalid and it's impossible
to plan the change that would make it valid again because the conditions
evaluate too soon, prior to the fix being applied.
We have two different reasons why a data resource might be read only
during apply, rather than during planning as usual: the configuration
contains unknown values, or the data resource as a whole depends on a
managed resource which itself has a change pending.
However, we didn't previously distinguish these two in a way that allowed
the UI to describe the difference, and so we confusingly reported both
as "config refers to values not yet known", which in turn led to a number
of reasonable questions about why Terraform was claiming that while the
configuration shown immediately below appeared entirely known.
Now we'll use our existing "ActionReason" mechanism to tell the UI layer
which of the two reasons applies to a particular data resource instance.
The "dependency pending" situation tends to happen in conjunction with
"config unknown", so we'll prefer to refer that the configuration is
unknown if both are true.
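A hedged sketch of that preference; the reason constant names here are
assumptions rather than the definitive identifiers:

```go
package terraform

import "github.com/hashicorp/terraform/internal/plans"

// readDeferralReason picks which ActionReason to report to the UI when a
// data read is deferred to the apply step. The constant names are assumed
// here for illustration.
func readDeferralReason(configIsUnknown bool) plans.ResourceInstanceChangeActionReason {
	if configIsUnknown {
		// Prefer "config unknown" when both reasons apply.
		return plans.ResourceInstanceReadBecauseConfigUnknown
	}
	return plans.ResourceInstanceReadBecauseDependencyPending
}
```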
We can no longer be assured that the particular instance of a provider
we are using has had GetProviderSchema called. Always check the
diagnostics even if we're fetching a cached response.
When specifying variable values on the command line, name-value pairs
must be joined with an equals sign, without surrounding spaces.
Previously Terraform would interpret "foo = bar" as assigning the value
" bar" to the variable named "foo ". This is never valid, as variable
names may not include whitespace.
This commit looks for this specific error and returns a diagnostic with
a suggestion for correcting it. We cannot simply trim whitespace,
because it is valid to write "foo= bar" to assign the value " bar" to
the variable "foo", as unlikely as it seems.
When executing an apply with no plan, it's possible for a cancellation
to arrive during the final batch of provider operations, resulting in no
errors in the plan. The run context was next checked during the
confirmation for apply, but in the case of -auto-approve that
confirmation is skipped, resulting in the canceled plan being applied.
Make sure we directly check for cancellation before confirming the plan.
Instances of the same AbsResource may share the same Dependencies, which
could point to the same backing array of values. Since address values
are not pointers, and not meant to be shared, we must copy the slice
before sorting it in-place. Because individual instances of the
same resource may be encoded to state concurrently, failure to copy the
slice first can result in a data race.
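The pattern, sketched with plain strings standing in for the real address
values:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedCopy sorts a copy of deps rather than the shared backing array, so
// two goroutines encoding instances that share the same slice don't race.
func sortedCopy(deps []string) []string {
	sorted := make([]string, len(deps))
	copy(sorted, deps)
	sort.Strings(sorted)
	return sorted
}

func main() {
	shared := []string{"module.b.aws_instance.y", "aws_instance.x"}
	fmt.Println(sortedCopy(shared))
	fmt.Println(shared) // unchanged; the shared slice is never mutated
}
```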
Terraform's remote-exec provisioner hangs when it runs over an HTTP
proxy, because it doesn't support SSH over an HTTP proxy. This commit
enables Terraform's remote-exec provisioner to support SSH over an
HTTP proxy.
* adds `proxy_*` fields to `connection` which add configuration for a proxy host
* if `proxy_host` is set, connect to that proxy host via the CONNECT method, then make the SSH connection to `host` or `bastion_host`
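Roughly, the tunneled connection could be set up as below; this is a sketch
rather than the actual communicator code, and it omits proxy authentication
and any option names beyond `proxy_host`:

```go
package communicator

import (
	"bufio"
	"fmt"
	"net"
	"net/http"

	"golang.org/x/crypto/ssh"
)

// dialViaHTTPProxy opens a TCP connection to the proxy, issues a CONNECT
// request for the SSH target, and then runs the SSH handshake over the
// resulting tunnel.
func dialViaHTTPProxy(proxyAddr, sshAddr string, config *ssh.ClientConfig) (*ssh.Client, error) {
	conn, err := net.Dial("tcp", proxyAddr)
	if err != nil {
		return nil, err
	}
	// Ask the proxy to open a raw tunnel to the SSH host (or bastion host).
	fmt.Fprintf(conn, "CONNECT %s HTTP/1.1\r\nHost: %s\r\n\r\n", sshAddr, sshAddr)
	resp, err := http.ReadResponse(bufio.NewReader(conn), &http.Request{Method: http.MethodConnect})
	if err != nil {
		conn.Close()
		return nil, err
	}
	if resp.StatusCode != http.StatusOK {
		conn.Close()
		return nil, fmt.Errorf("proxy CONNECT failed: %s", resp.Status)
	}
	// The tunnel is established; the rest is a normal SSH connection.
	c, chans, reqs, err := ssh.NewClientConn(conn, sshAddr, config)
	if err != nil {
		conn.Close()
		return nil, err
	}
	return ssh.NewClient(c, chans, reqs), nil
}
```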
Using the any keyword with arguments (e.g. any(string, bool)) is
invalid, but any is not technically a "primitive type keyword". This
commit corrects the language in the diagnostic and updates the tests.