We originally introduced the idea of language experiments as a way to get
early feedback on not-yet-proven feature ideas, ideally as part of the
initial exploration of the solution space rather than only after a
solution has become relatively clear.
Unfortunately, our compromise of making experiments available in normal
releases behind an explicit opt-in, intended to lower the barrier to
participating in the feedback process, had the unintended side-effect of
making it feel okay to use experiments in production and simply endure
the warnings they generate.
This in turn has made us reluctant to make use of the experiments feature
lest experiments become de-facto production features which we then feel
compelled to preserve even though we aren't yet ready to graduate them
to stable features.
In an attempt to tweak that compromise, here we make the availability of
experiments _at all_ a build-time flag which will not be set by default,
and therefore experiments will not be available in most release builds.
The intent (not yet implemented in this PR) is for our release process to
set this flag only when it knows it's building an alpha release or a
development snapshot not destined for release at all, which will therefore
allow us to still use the alpha releases as a vehicle for giving feedback
participants access to a feature (without needing to install a Go
toolchain) but will not encourage pretending that these features are
production-ready before they graduate from experimental.
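One common way to implement such a build-time flag in Go is a
linker-settable variable; a minimal sketch follows, though the exact
mechanism we end up using may differ:

    package main

    // experimentsAllowed is set at build time by the release process,
    // e.g.:
    //     go build -ldflags="-X main.experimentsAllowed=yes"
    // and is left unset for normal release builds.
    var experimentsAllowed string

    // ExperimentsAllowed reports whether this build permits opting in
    // to experimental features at all.
    func ExperimentsAllowed() bool {
        return experimentsAllowed == "yes"
    }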
Only language experiments have an explicit framework for dealing with them
which outlives any particular experiment, so most of the changes here are
to that generalized mechanism. However, the intent is that non-language
experiments, such as experimental CLI commands, would also in future
check Meta.AllowExperimentalFeatures and gate the use of those experiments
too, so that we can consistently guarantee that experimental features will never
be available unless you explicitly choose to use an alpha release or
a custom build from source code.
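For example, an experimental CLI command might begin with a guard along
these lines (the command type and message wording here are hypothetical):

    func (c *ExperimentalCommand) Run(args []string) int {
        // Refuse to run at all in builds where experiments are
        // disallowed.
        if !c.Meta.AllowExperimentalFeatures {
            c.Ui.Error("This command is experimental and is available only in alpha releases and development builds of Terraform.")
            return 1
        }
        // ... the experimental behavior itself ...
        return 0
    }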
Since there are already some experiments active at the time of this commit
which were not previously subject to this restriction, we'll pragmatically
leave those as exceptions that will remain generally available for now,
and so this new approach will apply only to new experiments started in the
future. Once those experiments have all concluded, we will be left with
no more exceptions unless we explicitly choose to make an exception for
some reason we've not imagined yet.
It's important that we be able to write tests that rely on experiments
either being available or not being available, so here we're using our
typical approach of making "package main" deal with the global setting
that applies to Terraform CLI executables while making the layers below
all support fine-grained selection of this behavior so that tests with
different needs can run concurrently without trampling on one another.
As a compromise, the integration tests in the terraform package will
run with experiments enabled _by default_ since we commonly need to
exercise experiments in those tests, but they can selectively opt out
if they need to by overriding the loader setting back to false again.
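A test that needs experiments disabled could then opt back out, roughly
like this (the helper and method names here are illustrative rather than
the exact API):

    func TestSomethingWithoutExperiments(t *testing.T) {
        // The terraform package's test loader defaults to allowing
        // experiments; flip it back off for just this test so we can
        // assert on the error path.
        loader := testLoader(t) // hypothetical test helper
        loader.AllowLanguageExperiments(false)
        // ... load a fixture using an experiment and expect a
        // diagnostic ...
    }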
When attempting to lock a remote state backend, failure due to an
existing lock should return an instance of LockError. This allows the
wrapping code to retry until the specified timeout, instead of
immediately exiting.
This commit adds a test for this in the TestRemoteLocks test helper,
which is used in many of the remote state backend test suites.
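Concretely, a backend's lock implementation should wrap "already locked"
failures along these lines (a simplified sketch; tryAcquireLock stands in
for each backend's own locking logic):

    func (c *RemoteClient) Lock(info *statemgr.LockInfo) (string, error) {
        existing, err := c.tryAcquireLock(info) // hypothetical helper
        if err != nil {
            // Returning *statemgr.LockError rather than a plain error
            // allows the caller to keep retrying until -lock-timeout
            // expires.
            return "", &statemgr.LockError{
                Info: existing,
                Err:  err,
            }
        }
        return info.ID, nil
    }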
We introduced the addrs.UniqueKey and addrs.UniqueKeyer mechanics as part
of implementing the ValidateMoves and ApplyMoves functions, as a way to
better encapsulate the solution to the problem that lots of our address
types aren't comparable and so cannot be used directly as map keys.
However, exposing addrs.UniqueKey handling directly in the logic adds
noise to the algorithms and, in particular, obscures the fact that
MoveResults.Changes and MoveResults.Blocked have different map key
types.
Here then we'll use the new addrs.Map helper type, which encapsulates the
idea of a map from an addrs.UniqueKeyer type to an arbitrary value type,
using the unique keys as the map keys internally. This does unfortunately
mean that we lose the conventional Go map access syntax and have to use
a method-based API instead, but I (subjectively) think that's an okay
compromise in return for avoiding the need to keep track inline of which
addrs.UniqueKey values correspond with which real addresses.
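The method-based API reads roughly like this (a usage sketch; newAddr and
oldAddr are placeholders):

    // changes maps each new address to the address it was moved from,
    // with the UniqueKey bookkeeping hidden inside the Map type.
    changes := addrs.MakeMap[addrs.AbsResourceInstance, addrs.AbsResourceInstance]()
    changes.Put(newAddr, oldAddr)
    if prev, ok := changes.GetOk(newAddr); ok {
        // prev is a real address value again, not an opaque UniqueKey.
        log.Printf("%s was moved from %s", newAddr, prev)
    }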
This is intended as an entirely-mechanical change, with equivalent
behavior to what it replaced. If anything here is doing something
materially different than what it replaced then that's a mistake.
The addrs.Set type previously snuck in accidentally as part of the work
to add addrs.UniqueKey and addrs.UniqueKeyer. Without support for
generic types, addrs.Set was a bit of a safety hazard because it could
not enforce particular address types at compile time.
However, with Go 1.18 adding support for type parameters we can now turn
addrs.Set into a generic type over any specific addrs.UniqueKeyer type,
and complement it with an addrs.Map type which supports addrs.UniqueKeyer
keys as a way to encapsulate the handling of maps with UniqueKey keys that
we currently do inline in various other parts of Terraform.
This doesn't yet introduce any callers of these types, but we'll convert
existing users of addrs.UniqueKeyer gradually in subsequent commits.
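For example, usage looks roughly like this (a sketch):

    // A set constrained to module instance addresses; inserting any
    // other address type is now a compile-time error rather than a
    // runtime surprise.
    seen := addrs.MakeSet[addrs.ModuleInstance]()
    seen.Add(addrs.RootModuleInstance)
    if seen.Has(addrs.RootModuleInstance) {
        // ...
    }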
Expanded resource instances can initially share the same dependency
slice, so we must take care not to modify the shared backing array when
checking the dependencies.
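A defensive copy before any in-place mutation avoids the aliasing
problem, along these lines (deps here stands in for the shared slice):

    // deps may be shared between several expanded instances, so sort a
    // copy rather than mutating the shared backing array in place.
    sorted := make([]addrs.ConfigResource, len(deps))
    copy(sorted, deps)
    sort.Slice(sorted, func(i, j int) bool {
        return sorted[i].String() < sorted[j].String()
    })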
In the future we can convert these to a generic Set data type, as we
often need to compare for equality and take the union of multiple groups
of dependencies.
The JSON output for sequences previously omitted unknown values for
tuples and sets, which made it impossible to interpret the corresponding
unknown marks. For example, consider this resource:
resource "example_resource" "example" {
tags = toset(["alpha", timestamp(), "charlie"])
}
This would previously be encoded in JSON as:
"after": {
"tags": ["alpha", "charlie"]
},
"after_unknown": {
"id": true,
"tags": [false, true, false]
},
That is, the timestamp value would be omitted from the output
altogether, while the corresponding unknown marks would include a value
for each of the set members.
This commit changes the behaviour to:
"after": {
"tags": ["alpha", null, "charlie"]
},
"after_unknown": {
"id": true,
"tags": [false, true, false]
},
This aligns tuples and sets with the prior behaviour for lists, and
makes it clear which elements are known and which are unknown.
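The shape of the transformation is roughly: unknown leaves become true,
and known sequences become a parallel list of marks that lines up with
the null-padded "after" value. A simplified sketch of that walk, not the
exact implementation:

    func unknownAsBool(v cty.Value) interface{} {
        if !v.IsKnown() {
            return true
        }
        ty := v.Type()
        if ty.IsTupleType() || ty.IsListType() || ty.IsSetType() {
            marks := make([]interface{}, 0, v.LengthInt())
            for it := v.ElementIterator(); it.Next(); {
                _, ev := it.Element()
                marks = append(marks, unknownAsBool(ev))
            }
            return marks
        }
        return false
    }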
Planned output changes are represented in the JSON output format using
the same change object as planned resource changes. This structure
includes an `after` value and a parallel `after_unknown` value, which
can be combined to determine which specific parts of a value are known
only at apply time.
Previously, structured output values would be marked in the JSON plan as
wholly known or wholly unknown, even if only a subset of the structure
is actually known only at apply time. This coarseness was unnecessary,
and this commit reuses the logic already used for resource changes to
give more information to consumers of this format.
For example, consider this output:
output "bar" {
value = tolist([
"hello",
timestamp(),
"world",
])
}
The plan output for this output would be:
    + bar = [
        + "hello",
        + (known after apply),
        + "world",
      ]
For the same plan, the JSON output was previously:
"bar": {
"actions": [
"create"
],
"before": null,
"after_unknown": true,
"before_sensitive": false,
"after_sensitive": false
}
After this commit, the output is instead:
"bar": {
"actions": [
"create"
],
"before": null,
"after": [
"hello",
null,
"world"
],
"after_unknown": [
false,
true,
false
],
"before_sensitive": false,
"after_sensitive": false
}
Adding multiple local names for the same provider type in
required_providers was not prevented, which can lead to ambiguous
behavior in Terraform. Providers are always indexed by the provider's
fully qualified name, so duplicate local names cannot be differentiated.
Now that variables parse and retain a set of default values for
object attributes, we must apply the defaults during variable
evaluation. We do so immediately before type conversion, preprocessing
the given value so that conversion will receive the intended defaults as
appropriate.
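In the evaluation codepath that looks roughly like this (the variable
and field names here are illustrative):

    // val is the raw value given by the caller for this variable.
    if cfg.TypeDefaults != nil {
        // Apply declared defaults before conversion so that conversion
        // sees the intended values rather than nulls.
        val = cfg.TypeDefaults.Apply(val)
    }
    val, err := convert.Convert(val, cfg.Type)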
The optional modifier previously accepted a single argument: the
attribute type. This commit adds an optional second argument, which
specifies a default value for the attribute.
To record the default values for a variable's type, we use a separate
parallel structure of `typeexpr.Defaults`, rather than extending
`cty.Type` to include a `cty.Value` of defaults (which may in turn
include a `cty.Type` with defaults, and so on, and so forth).
The new `typeexpr.TypeConstraintWithDefaults` returns a type constraint
and defaults value. Defaults will be `nil` unless there are default
values specified somewhere in the variable's type.
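Callers that need default handling use the new entry point along these
lines (a sketch; the field names are illustrative):

    ty, defaults, diags := typeexpr.TypeConstraintWithDefaults(expr)
    if diags.HasErrors() {
        // ... report the invalid type constraint ...
    }
    // defaults is non-nil only when the constraint declares default
    // values somewhere, so most variables pay no extra cost.
    v.ConstraintType = ty
    v.TypeDefaults = defaults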
In type constraints, object attributes may be marked as optional in
order to allow them to be omitted from input values. Doing so results in
filling the attribute value with a typed `null`.
This commit adds a new type `typeexpr.Defaults` which mirrors the
structure of a type constraint, storing default values for optional
attributes. This will allow specification of non-`null` default values
for attributes.
The `Defaults` type is a tree structure, each node containing a sub-tree
type, a map of children, and for object nodes, a map of defaults. The
keys in the children map depend on the type of the node:
- Object nodes have children for each attribute;
- Tuple nodes have children for each index, with indices converted to
string values;
- Collection nodes have a single child at the empty string key.
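In Go terms, each node has roughly the following shape (a sketch
matching the description above):

    type Defaults struct {
        // Type is the type this node describes.
        Type cty.Type

        // DefaultValues is set only for object nodes, mapping optional
        // attribute names to their default values.
        DefaultValues map[string]cty.Value

        // Children maps keys to nested defaults, keyed as described
        // above: attribute names, stringified indices, or "".
        Children map[string]*Defaults
    }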
When traversing this tree we must take this structure into account, with
special cases for map input values which may later be converted to
objects.
The traversal defined in this commit uses a pre-order transformer in
order to pre-populate descendant nodes before their defaults are
applied. This allows nested type default values to be specified more
compactly.
Rather than maintain a separate graph builder for destroy, use the
normal plan graph with some extra options. Utilize the same pattern as
the validate graph for now, where we take the normal plan graph builder
and inject a new concrete function for the destroy nodes.
All instances in state are being removed for destroy, so we can skip
checking for orphans. Because we want to use the normal plan graph, we
still need to be able to call into this code during destroy, so we gate
the orphan check behind a flag.
Destroy nodes should never participate in references. These edges didn't
come up before, because we weren't building a complete graph including
all temporary values.
This also sets an additional variable if it detects that this is an alpha
or development build, which currently does nothing but might eventually
turn on the ability to use experimental features, if we make that
something available only in prereleases.
When a user reports a "Configuration contains unknown value" error,
there is no information on what might have been unknown during apply.
Add unknown attribute paths to the diagnostic message to provide some
more information when a reproduction may not be possible. Since this is
one of those "should never happen" types of errors which will be
reported to the developers directly, we can leave the format as the raw
internal representation for simplicity.
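Collecting those paths is a straightforward walk over the value, roughly
like this (a sketch using cty's Walk helper):

    // unknownValuePaths returns the path to every unknown leaf value,
    // using cty's raw path representation for simplicity.
    func unknownValuePaths(val cty.Value) []cty.Path {
        var paths []cty.Path
        cty.Walk(val, func(p cty.Path, v cty.Value) (bool, error) {
            if !v.IsKnown() {
                // Copy the path, since cty reuses its backing array
                // during the walk.
                paths = append(paths, p.Copy())
            }
            return true, nil
        })
        return paths
    }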