Now that variables parse and retain a set of default values for
object attributes, we must apply the defaults during variable
evaluation. We do so immediately before type conversion, preprocessing
the given value so that conversion will receive the intended defaults as
appropriate.
The optional modifier previously accepted a single argument: the
attribute type. This commit adds an optional second argument, which
specifies a default value for the attribute.
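For illustration, a variable declaration using the new second argument
might look like this (the attribute names and default values here are
arbitrary placeholders):

    variable "image" {
      type = object({
        name = string
        tag  = optional(string, "latest")
      })
    }

A caller may then omit `tag` entirely, and the resulting value carries
the default "latest" rather than a typed null.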
To record the default values for a variable's type, we use a separate
parallel structure of `typeexpr.Defaults`, rather than extending
`cty.Type` to include a `cty.Value` of defaults (which may in turn
include a `cty.Type` with defaults, and so on, and so forth).
The new `typeexpr.TypeConstraintWithDefaults` returns a type constraint
and defaults value. Defaults will be `nil` unless there are default
values specified somewhere in the variable's type.
In type constraints, object attributes may be marked as optional in
order to allow them to be omitted from input values. Doing so results in
filling the attribute value with a typed `null`.
This commit adds a new type `typeexpr.Defaults` which mirrors the
structure of a type constraint, storing default values for optional
attributes. This will allow specification of non-`null` default values
for attributes.
The `Defaults` type is a tree structure, each node containing a sub-tree
type, a map of children, and for object nodes, a map of defaults. The
keys in the children map depend on the type of the node:
- Object nodes have children for each attribute;
- Tuple nodes have children for each index, with indices converted to
string values;
- Collection nodes have a single child at the empty string key.
When traversing this tree we must take this structure into account, with
special cases for map input values which may later be converted to
objects.
The traversal defined in this commit uses a pre-order transformer in
order to pre-populate descendant nodes before their defaults are
applied. This allows nested type default values to be specified more
compactly.
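For example, a hypothetical variable with defaults nested inside a
collection might look like this; applying the defaults descends through
the map's single child node into each element:

    variable "buckets" {
      type = map(object({
        versioning = optional(bool, false)
        tags       = optional(map(string), {})
      }))
    }

Given the input `{ logs = {} }`, the element is populated with
`versioning = false` and `tags = {}` without the caller having to spell
those out.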
Rather than maintain a separate graph builder for destroy, use the
normal plan graph with some extra options. Utilize the same pattern as
the validate graph for now, where we take the normal plan graph builder
and inject a new concrete function for the destroy nodes.
All instances in state are being removed for destroy, so we can skip
checking for orphans. Because we want to use the normal plan graph, we
still need to be able to call this code during destroy, so the orphan
check is gated behind a flag.
Destroy nodes should never participate in references. These edges didn't
come up before, because we weren't building a complete graph including
all temporary values.
This also sets an additional variable if it detects that this is an alpha
or development build, which currently does nothing but might eventually
turn on the ability to use experimental features, if we make that
something available only in prereleases.
When a user reports a "Configuration contains unknown value" error,
there is no information on what might have been unknown during apply.
Add unknown attribute paths to the diagnostic message to provide some
more information when a reproduction may not be possible. Since this is
one of those "should never happen" types of errors which will be
reported to the developers directly, we can leave the format as the raw
internal representation for simplicity.
5417975946 addressed a regression in the
logic for catching when the newer module meta-arguments are used in
conjunction with the legacy practice of including provider configurations
inside modules, where the check started incorrectly catching situations
where the legacy nested provider configuration was in the same module
as the child call using one of those meta-arguments.
This is a regression test to catch if a similar bug arises in the future.
Since it's testing validation rules that apply to an entire configuration
tree, it ended up in a rather idiosyncratic location under the "configload"
package, rather than directly in "configs". The "configs" package only
knows how to load one module at a time, so it's harder to write a test
like this in that context. Due to it being further removed from the code
it is testing, I included a test for the correct error too in order to
increase the chance that we'll learn if future changes in the "configs"
package invalidate this regression test.
I've verified that this new test fails without the change made in the
earlier commit.
This fixes a bug introduced in 1879a39 in which initialising a module would fail
if that module contains both a provider block and a module call using for_each.
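A minimal sketch of the configuration shape that was being rejected
incorrectly (the provider and module names are placeholders): the
provider block and the module call using for_each live in the same
calling module, which is legal; only a provider block nested inside the
called child module makes for_each invalid.

    provider "aws" {
      region = "us-east-1"
    }

    module "instances" {
      source   = "./instance"
      for_each = toset(["a", "b"])
    }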
In configurations which have already been initialized, updating the
source of a non-root module call to an invalid value could cause a nil
pointer panic. This commit fixes the bug and adds test coverage.
When a data resource is used for the purposes of verifying a condition
about an object managed elsewhere (e.g. if the managed resource doesn't
directly export all of the information required for the condition) it's
important that we defer the data resource read to the apply step if the
corresponding managed resource has any changes pending.
Typically we'd expect that to come "for free" but unfortunately we have
a pragmatic special case in our handling of data resources where we
normally defer to the apply step only if a _direct_ dependency of the data
resource has a change pending, and allow a plan-time read if there's
a pending change in an indirect dependency. This allowed us to preserve
some compatibility with the questionable historical behavior of always
reading data resources proactively unless the configuration contains
unknown values, since the arguably-more-correct behavior would've been a
regression for anyone who had been depending on that before.
Since preconditions and postconditions didn't exist until now, we are not
constrained in the same way by backward compatibility, and so we can adopt
the more correct behavior in the case where a data resource has conditions
specified. This does unfortunately make the handling of data resources
with conditions subtly inconsistent with those that don't, but this is
a better situation than the alternative where it would be easy to get into
a trapped situation where the remote system is invalid and it's impossible
to plan the change that would make it valid again because the conditions
evaluate too soon, prior to the fix being applied.
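As an illustrative sketch (the provider, resource types, and attributes
are placeholders), a data resource like the one below now has its read
deferred to the apply step whenever the managed resource it checks has
a change pending:

    resource "aws_instance" "app" {
      # ...
    }

    data "aws_instance" "verify" {
      instance_id = aws_instance.app.id

      lifecycle {
        postcondition {
          condition     = self.instance_state == "running"
          error_message = "The application instance must be running."
        }
      }
    }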
We have two different reasons why a data resource might be read only
during apply, rather than during planning as usual: the configuration
contains unknown values, or the data resource as a whole depends on a
managed resource which itself has a change pending.
However, we didn't previously distinguish these two in a way that allowed
the UI to describe the difference, and so we confusingly reported both
as "config refers to values not yet known", which in turn led to a number
of reasonable questions about why Terraform was claiming that but then
immediately below showing the configuration entirely known.
Now we'll use our existing "ActionReason" mechanism to tell the UI layer
which of the two reasons applies to a particular data resource instance.
The "dependency pending" situation tends to happen in conjunction with
"config unknown", so we'll prefer to refer that the configuration is
unknown if both are true.
We can no longer be assured that the particular instance of a provider
we are using has had GetProviderSchema called. Always check the
diagnostics even if we're fetching a cached response.
When specifying variable values on the command line, name-value pairs
must be joined with an equals sign, without surrounding spaces.
Previously Terraform would interpret "foo = bar" as assigning the value
" bar" to the variable named "foo ". This is never valid, as variable
names may not include whitespace.
This commit looks for this specific error and returns a diagnostic with
a suggestion for correcting it. We cannot simply trim whitespace,
because it is valid to write "foo= bar" to assign the value " bar" to
the variable "foo", as unlikely as it seems.
When executing an apply with no plan, it's possible for a cancellation
to arrive during the final batch of provider operations, resulting in no
errors in the plan. The run context was next checked during the
confirmation for apply, but in the case of -auto-approve that
confirmation is skipped, resulting in the canceled plan being applied.
Make sure we directly check for cancellation before confirming the plan.
Instances of the same AbsResource may share the same Dependencies, which
could point to the same backing array of values. Since address values
are not pointers, and not meant to be shared, we must copy the value
before sorting the slice in-place. Because individual instances of the
same resource may be encoded to state concurrently, failure to copy the
slice first can result in a data race.
Terraform's remote-exec provisioner hangs when connecting through an HTTP proxy because it does not support SSH over an HTTP proxy. This commit enables Terraform's remote-exec provisioner to support SSH over an HTTP proxy.
* adds `proxy_*` fields to `connection` which add configuration for a proxy host
* if `proxy_host` is set, connect to that proxy host via the CONNECT method, then make the SSH connection to `host` or `bastion_host`
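A rough sketch of such a connection block (the exact names of the
`proxy_*` fields other than `proxy_host` are assumptions here, and all
values are placeholders):

    connection {
      type       = "ssh"
      host       = "10.0.0.5"
      user       = "ubuntu"
      proxy_host = "proxy.example.com"
      proxy_port = 3128
    }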
Using the any keyword with arguments (e.g. any(string, bool)) is
invalid, but any is not technically a "primitive type keyword". This
commit corrects the language in the diagnostic and updates the tests.
Previously the supported JSON plan and state formats included only
serialized output values, which was a lossy serialization of the
Terraform type system. This commit adds a type field in the usual cty
JSON format, which allows reconstitution of the original value.
For example, previously a list(string) and a set(string) containing the
same values were indistinguishable. This change serializes these as
follows:
    {
      "value": ["a","b","c"],
      "type": ["list","string"]
    }
and:
    {
      "value": ["a","b","c"],
      "type": ["set","string"]
    }
For non-interactive contexts, Terraform is typically executed with the flag -input=false.
However, for runs that are not set to auto-approve, the cloud integration will prompt a user for
approval input even with input set to false. This commit enables the cloud integration to know
the value of the input flag and use it to determine whether or not to ask the user for input.
If -input is set to false and the run cannot be auto-approved, the cloud integration returns an error
stating that run confirmation can no longer be handled in the CLI and that the user must confirm the run through the browser.
The plan graph does not contain all the information necessary to detect
cycles which may happen when building the apply graph. Once we have more
information from the plan we can build the complete apply graph with all
individual instances to verify that the apply can begin without errors.
The raw plan output changes were stored in the output exec node, when
they should have instead been fetched lazily through the context via the
synchronized ChangesSync value.
We had intended these functions to attempt to convert any given value, but
there is a special behavior in the function system where functions must
opt in to being able to handle dynamically-typed arguments so that we
don't need to repeat the special case for that inside every function
implementation.
In this case we _do_ want to specially handle dynamically-typed values,
because the keyword "null" in HCL produces
cty.NullVal(cty.DynamicPseudoType) and we want the conversion function
to convert it to a null of a more specific type.
These conversion functions are already just a thin wrapper around the
underlying type conversion functionality anyway, and that already supports
converting dynamic-typed values in the expected way, so we can just opt
in to allowing dynamically-typed values and let the conversion
functionality do the expected work.
Fixing this allows module authors to use type conversion functions to
give additional type information to Terraform in situations that are too
ambiguous to be handled automatically by the type inference/unification
process. Previously tostring(null) was effectively a no-op, totally
ignoring the author's request to treat the null as a string.
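For example (a minimal sketch):

    output "name" {
      # Previously this conversion was effectively ignored; now it
      # yields a null value of type string.
      value = tostring(null)
    }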
replace_triggered_by references are scoped to the current module, so we
need to filter changes for the current module instance. Rather than
creating a ConfigResource and filtering the result, make a
Changes.InstancesForAbsResource method to get only the AbsResource
changes.
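For context, a sketch of the configuration feature being handled here
(the resource types and names are placeholders); the references are
resolved against resources in the same module:

    resource "example_thing" "a" {
    }

    resource "example_thing" "b" {
      lifecycle {
        # Plan a replacement of "b" whenever "a" has a change planned.
        replace_triggered_by = [example_thing.a]
      }
    }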
Check for triggered resource replacement in the plan. While the
functionality of the feature works here, we will want to follow up with
a way to indicate in the plan _why_ the resource was replaced.
The EvalContext is the only place with all the information to be able to
complete the evaluation of the replace_triggered_by expressions. These
need to be evaluated into a reference, which is then looked up in the
pending changes, which the context has access to. On top of needing the
plan changes, we also need access to all providers and schemas to decode
the changes if we need to traverse the resource values for individual
attributes.
* internal/getproviders: Add URL to error message for clarity
Occasionally `terraform init` on some providers may return the following error message:
    Error while installing citrix/citrixadc v1.13.0: could not query provider
    registry for registry.terraform.io/citrix/citrixadc: failed to retrieve
    authentication checksums for provider: 403 Forbidden
The 403 is most often returned from GitHub (rather than Registry API)
and this change makes it more obvious.
* Use Host instead of full URL
Hyphen characters are allowed in environment variable names, but are not valid in POSIX variable names. Usually, it's still possible to set variable names with hyphens using utilities like env or docker. But, as a fallback, host names may encode their hyphens as double underscores in the variable name. For the example "café.fr", the variable name "TF_TOKEN_xn____caf__dma_fr" or "TF_TOKEN_xn--caf-dma_fr"
may be used.
Introduces a new method of configuring token service credentials using a host-specific environment variable. This configuration was previously possible using the [terraform-credentials-env](https://github.com/apparentlymart/terraform-credentials-env) credentials helper.
This new method is consulted first, as it is the most proximate source of credentials, taking precedence over the CLI configuration while still falling back to the credentials helper.
A previous change added missing quoting around object keys which do not
parse as barewords. At the same time we introduced a bug where map keys
could be double-quoted, due to calling the `displayAttributeName` helper
function (to quote non-bareword keys) then using the `writeValue` method
(which quotes all strings).
This commit fixes this and adds test coverage for map keys which require
quoting.
Data sources should not require reading their prior state. While we
previously skipped the decoding if it were to fail, this change removes
the need for any prior state at all.
The only place where the prior state was functionally used was in the
destroy path. Because a data source destroy is only for cleanup purposes
to clean out the state using the same code paths as a managed resource,
we can substitute the prior state in the change with a null value
to maintain the same behavior.
After data source handling was moved from a separate refresh phase into
the planning phase, reading the existing state was only used for
informational purposes. This had been reduced to reporting warnings when
the provider returned an unexpected value to try and help locate legacy
provider bugs, but any actual issues located from those warnings were
very few and far between.
Because the prior state cannot be reliably decoded when faced with
incompatible provider schema upgrades, and there is no longer any
significant reason to try and get the prior state at all, we can skip
the process entirely.
Data sources do not have state migrations, so there may be no way to
decode the prior state when faced with incompatible type changes.
Because prior state is only informational to the plan, and its existence
should not affect the planning process, we can skip decoding when faced
with errors.
When rendering diffs for resources which use nested attribute types, we
must cope with collections backing those attributes which are entirely
sensitive. The most common way this will be seen is through sensitive
values being present in sets, which will result in the entire set being
marked sensitive.
This commit replaces `ioutil.TempDir` with `t.TempDir` in tests. The
directory created by `t.TempDir` is automatically removed when the test
and all its subtests complete.
Prior to this commit, a temporary directory created using
`ioutil.TempDir` needed to be removed manually by calling
`os.RemoveAll`, which was omitted in some tests. The error handling
boilerplate, e.g.
    defer func() {
        if err := os.RemoveAll(dir); err != nil {
            t.Fatal(err)
        }
    }()
is also tedious, but `t.TempDir` handles this for us nicely.
Reference: https://pkg.go.dev/testing#T.TempDir
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
TF_WORKSPACE can now be used for your cloud configuration, effectively serving as an alternative
to setting the name attribute in your workspaces configuration.
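A sketch of how this fits together (the organization, tag, and
workspace names are placeholders):

    terraform {
      cloud {
        organization = "example-org"

        workspaces {
          # Instead of:  name = "networking-prod"
          # the workspace can now be selected by setting
          # TF_WORKSPACE=networking-prod in the environment.
          tags = ["networking"]
        }
      }
    }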
In order to include condition block results in the JSON plan output, we
must store them in the plan and its serialization.
Terraform can evaluate condition blocks multiple times, so we must be
able to update the result. Accordingly, the plan.Conditions object is a
map with keys representing the condition block's address. Condition
blocks are not referenceable in any other context, so this address form
cannot be used anywhere in the configuration.
The commit includes a new test case for the JSON output of a
refresh-only plan, which is currently the only way for a failing
condition result to be rendered through this path.