Make DAG walks testable, and add tests for more complex graph ordering.
We also add breadth-first for comparison, though it's not used currently
in Terraform.
These tests were originally written long before Go supported subtests
explicitly, but now that we have t.Run we can avoid the prior problem
that one test failing would mask all of the others that followed it.
Now we'll always run all of them, potentially collecting more errors in
a single run so we can have more context to debug with and potentially
fix them all in a single step rather than one by one.
* terraform init: add suggested fix for when a checksum is missing from the lock file
* improve error message
* add link to the documentation
* cleanup leftovers from previous attempt
* fix tests
* s/,/;
* fix imports
* Add golden JSON test for Terraform plan
* Add data source to golden JSON plan
* Move output comparison code into shared helper function
* Add note for maintainer to contact TFC when UI changes
UI changes may potentially impact the behavior of structured run output
on TFC.
* Add test_data_source to other mock providers
* refactor: Use tfaddr for provider address parsing
* refactor: Use tfaddr for module address parsing
* deps: introduce hashicorp/terraform-registry-address
So far we've only ever needed to re-parse address strings that happen not
to contain instance keys and so we've gotten away with our serialization
of these not being quite right, but given how liberally we've expected to
be able to use address strings from this package for wire format
interchange it seems likely that this is going to surprise us eventually.
Now we'll use an escaping scheme compatible with HCL's parser rather
than Go's parser, and so we can safely rely on hclsyntax.ParseTraversal
as part of reversing this operation to transform an address string back
into an address equivalent to the value it was created from.
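A hedged sketch of the reverse step, assuming hclsyntax.ParseTraversalAbs (the absolute-traversal parser) and an example address of my own:

package main

import (
    "fmt"

    "github.com/hashicorp/hcl/v2"
    "github.com/hashicorp/hcl/v2/hclsyntax"
)

func main() {
    // Instance keys that aren't valid identifiers are quoted and escaped
    // using HCL's own string rules, so HCL's parser can reverse the
    // serialization exactly.
    addr := `module.child["a.b"].aws_instance.example[0]`
    traversal, diags := hclsyntax.ParseTraversalAbs([]byte(addr), "<addr>", hcl.Pos{Line: 1, Column: 1})
    if diags.HasErrors() {
        panic(diags.Error())
    }
    for _, step := range traversal {
        fmt.Printf("%T\n", step) // hcl.TraverseRoot, hcl.TraverseAttr, hcl.TraverseIndex, ...
    }
}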
This is most easily handled in the plugin code, without involving
Terraform core.
The biggest change here other than checking the PlanDestroy capability,
is the removal of the schema helper methods in the plugins. With the
addition of the capabilities field, combined with the necessity of
checking diagnostics from the schema, the helpers have outlived their
usefulness. Perhaps there's a better pattern for these repetitive calls,
but for now there isn't too much extra verbosity involved.
Call PlanResourceDestroy during a destroy plan.
This allows providers two new abilities:
- They can evaluate if the plan is valid, notifying users of any
potential errors before an apply is started, which may not be able to
complete.
- They can inspect and modify their private data during a destroy plan
just like they can with any other plan operation.
Since we've both concluded the module_variables_optional_attrs experiment
and made experiments available only in alpha releases in the same minor
release, we accidentally made the more general message about experiments
not being available mask the specific message about the experiment being
concluded.
In order to give better feedback to those who were participating in the
experiment in earlier Terraform releases, we'll retain a minimal exception
to our checks to allow the "experiment has concluded" error message to
shine through if and only if that is the only selected experiment.
* Fail global required_version check if it contains any prerelease fields
* go mod tidy
* Improve required_version prerelease not supported error string
* Add prerelease version constraint unit tests
* Fix side-effects caused by populating global diags too soon
There are no good options for inserting diagnostics into the backend
lookup, or creating a backend which reports its removal, because none of
the init or GetSchema functions return any errors.
Keep a registry of the removed backend so that we can at least notify
users that a backend was removed vs an invalid name.
This allows us to remove the manual replace directives
github.com/dgrijalva/jwt-go and google.golang.org/grpc, so that we can
remove the CVE warnings and update the grpc packages.
While the etcdv3 backend is also marked as deprecated, the changes here
are done in a manner to keep that backend working for the time being.
By observing the sorts of questions people ask in the community, and the
ways they ask them, we've inferred that various people have been
confused by Terraform reporting that a value won't be known until apply
or that a value is sensitive as part of an error message when that message
doesn't actually relate to the known-ness and sensitivity of any value.
Quite reasonably, someone who sees Terraform discussing an unfamiliar
concept like unknown values can assume that it must be somehow relevant to
the problem being discussed, and so in that sense Terraform's current
error messages are giving "too much information": information that isn't
actually helpful in understanding the problem being described, and in the
worst case is a distraction from understanding the problem being described.
With that in mind then, here we introduce an explicit annotation on
diagnostic objects that are directly talking about unknown values or
sensitive values, and then the diagnostic renderer will react to that to
avoid using the terminology "known only after apply" or "sensitive" in the
generated diagnostic annotations unless we're rendering a message that is
explicitly related to one of those topics.
This ends up being a bit of a cross-cutting concern because the code that
generates these diagnostics and the code that renders them are in separate
packages and are not directly aware of each other. With that in mind, the
logic for actually deciding for a particular diagnostic whether it's
flagged in one of these special ways lives inside the tfdiags package as
an intermediation point, which both the diagnostic generator (in the core
package) and the diagnostic renderer can depend on.
When an error occurs in a function call, the error message text often
includes references to particular parameters in the function signature.
This commit improves that reporting by also including a summary of the
full function signature as part of the diagnostic context in that case,
so a reader can see which parameter is which given that function
arguments are always assigned positionally and so the parameter names
do not appear in the caller's source code.
HCL's diagnostic model now includes the idea of "extra information" which
works by attaching an initially-opaque interface value to each diagnostic
and then asking callers to type-assert against that value to sniff for
particular interfaces in order to discover additional machine-readable
context about a certain diagnostic message.
This commit echoes that idea into our tfdiags API, for now only for
diagnostics that are backed by an hcl.Diagnostic. All other implementations
of the diagnostic interface just always return nil, which means they never
carry any "extra information".
As is typical for our wrapping abstraction, we have here also a modified
copy of HCL's helper function for conveniently probing a diagnostic for
information of a particular type, designed to work with our diagnostic
interface instead of HCL's concrete diagnostic type.
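A minimal sketch of the probing pattern (the interface and helper names here are illustrative rather than the exact tfdiags API):

package main

import "fmt"

// Diagnostic stands in for the diagnostic interface; the relevant addition
// is an ExtraInfo method returning an initially-opaque value.
type Diagnostic interface {
    Description() string
    ExtraInfo() interface{}
}

// FunctionCallExtra is a hypothetical "extra information" interface that a
// diagnostic about a function call error might implement.
type FunctionCallExtra interface {
    CalledFunctionName() string
}

// ExtraInfo probes a diagnostic for extra information of a particular
// interface type, in the spirit of HCL's helper but against our interface.
func ExtraInfo[T any](diag Diagnostic) (T, bool) {
    extra, ok := diag.ExtraInfo().(T)
    return extra, ok
}

type exampleDiag struct{}

func (exampleDiag) Description() string    { return "Invalid function argument" }
func (exampleDiag) ExtraInfo() interface{} { return funcExtra{} }

type funcExtra struct{}

func (funcExtra) CalledFunctionName() string { return "format" }

func main() {
    var diag Diagnostic = exampleDiag{}
    if extra, ok := ExtraInfo[FunctionCallExtra](diag); ok {
        fmt.Println("error relates to function", extra.CalledFunctionName())
    }
}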
When validating self-references for resource and data source
preconditions and postconditions, we previously did not nil-check the
block's condition field, which caused a panic when the block had no
condition.
While fixing this I noticed that we were not validating that there are
no self-references in the error message, so fixed that.
Combine all plan-time graphs into a single graph builder, because
_everything is a plan_!
Convert the import graph to a plan graph. This should resolve a few edge
cases about things not being properly evaluated during import, and takes
a step towards being able to _plan_ an import.
We originally introduced the idea of language experiments as a way to get
early feedback on not-yet-proven feature ideas, ideally as part of the
initial exploration of the solution space rather than only after a
solution has become relatively clear.
Unfortunately, our tradeoff of making them available in normal releases
behind an explicit opt-in in order to make it easier to participate in the
feedback process had the unintended side-effect of making it feel okay
to use experiments in production and endure the warnings they generate.
This in turn has made us reluctant to make use of the experiments feature
lest experiments become de-facto production features which we then feel
compelled to preserve even though we aren't yet ready to graduate them
to stable features.
In an attempt to tweak that compromise, here we make the availability of
experiments _at all_ a build-time flag which will not be set by default,
and therefore experiments will not be available in most release builds.
The intent (not yet implemented in this PR) is for our release process to
set this flag only when it knows it's building an alpha release or a
development snapshot not destined for release at all, which will therefore
allow us to still use the alpha releases as a vehicle for giving feedback
participants access to a feature (without needing to install a Go
toolchain) but will not encourage pretending that these features are
production-ready before they graduate from experimental.
Only language experiments have an explicit framework for dealing with them
which outlives any particular experiment, so most of the changes here are
to that generalized mechanism. However, the intent is that non-language
experiments, such as experimental CLI commands, would also in future
check Meta.AllowExperimentalFeatures and gate the use of those experiments
too, so that we can be consistent that experimental features will never
be available unless you explicitly choose to use an alpha release or
a custom build from source code.
Since there are already some experiments active at the time of this commit
which were not previously subject to this restriction, we'll pragmatically
leave those as exceptions that will remain generally available for now,
and so this new approach will apply only to new experiments started in the
future. Once those experiments have all concluded, we will be left with
no more exceptions unless we explicitly choose to make an exception for
some reason we've not imagined yet.
It's important that we be able to write tests that rely on experiments
either being available or not being available, so here we're using our
typical approach of making "package main" deal with the global setting
that applies to Terraform CLI executables while making the layers below
all support fine-grain selection of this behavior so that tests with
different needs can run concurrently without trampling on one another.
As a compromise, the integration tests in the terraform package will
run with experiments enabled _by default_ since we commonly need to
exercise experiments in those tests, but they can selectively opt-out
if they need to by overriding the loader setting back to false again.
When attempting to lock a remote state backend, failure due to an
existing lock should return an instance of LockError. This allows the
wrapping code to retry until the specified timeout, instead of
immediately exiting.
This commit adds a test for this in the TestRemoteLocks test helper,
which is used in many of the remote state backend test suites.
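A hedged sketch of what a backend's Lock implementation needs to do (the statemgr field names here reflect my understanding and may not match exactly; the client type is a stand-in, not any real backend):

package remote

import (
    "fmt"

    "github.com/hashicorp/terraform/internal/states/statemgr"
)

// RemoteClient is a hypothetical stand-in for a remote state backend client.
type RemoteClient struct {
    current *statemgr.LockInfo // lock currently held remotely, if any
}

// Lock returns a *statemgr.LockError when the state is already locked, so
// callers can keep retrying until -lock-timeout expires instead of failing
// immediately.
func (c *RemoteClient) Lock(info *statemgr.LockInfo) (string, error) {
    if c.current != nil {
        return "", &statemgr.LockError{
            Info: c.current,
            Err:  fmt.Errorf("state is already locked by %s", c.current.Who),
        }
    }
    c.current = info
    return info.ID, nil
}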
We introduced the addrs.UniqueKey and addrs.UniqueKeyer mechanics as part
of implementing the ValidateMoves and ApplyMoves functions, as a way to
better encapsulate the solution to the problem that lots of our address
types aren't comparable and so cannot be used directly as map keys.
However, exposing addrs.UniqueKey handling directly in the logic adds
noise to the algorithms and, in particular, obscures the fact that
MoveResults.Changes and MoveResult.Blocked have different map key
types.
Here then we'll use the new addrs.Map helper type, which encapsulates the
idea of a map from an addrs.UniqueKeyer type to an arbitrary value type,
using the unique keys as the map keys internally. This does unfortunately
mean that we lose the conventional Go map access syntax and have to use
a method-based API instead, but I (subjectively) think that's an okay
compromise in return for avoiding the need to keep track inline of which
addrs.UniqueKey values correspond with which real addresses.
This is intended as an entirely-mechanical change, with equivalent
behavior to what it replaced. If anything here is doing something
materially different than what it replaced then that's a mistake.
The addrs.Set type previously snuck in accidentally as part of the work
to add addrs.UniqueKey and addrs.UniqueKeyer, because without support for
generic types the addrs.Set type was a bit of a safety hazard due to not
being able to enforce particular address types at compile time.
However, with Go 1.18 adding support for type parameters we can now turn
addrs.Set into a generic type over any specific addrs.UniqueKeyer type,
and complement it with an addrs.Map type which supports addrs.UniqueKeyer
keys as a way to encapsulate the handling of maps with UniqueKey keys that
we currently do inline in various other parts of Terraform.
This doesn't yet introduce any callers of these types, but we'll convert
existing users of addrs.UniqueKeyer gradually in subsequent commits.
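A rough sketch of the shape of such a generic map (simplified relative to the real addrs package):

package addrs

// UniqueKey and UniqueKeyer are simplified stand-ins for the real types: a
// UniqueKeyer can produce a comparable key that uniquely represents it.
type UniqueKey interface {
    uniqueKeySigil()
}

type UniqueKeyer interface {
    UniqueKey() UniqueKey
}

// Map associates values of type V with addresses of type K, using each
// address's UniqueKey as the internal map key so that address types that
// are not themselves comparable can still be used as keys.
type Map[K UniqueKeyer, V any] struct {
    elems map[UniqueKey]mapElem[K, V]
}

type mapElem[K UniqueKeyer, V any] struct {
    key   K
    value V
}

func MakeMap[K UniqueKeyer, V any]() Map[K, V] {
    return Map[K, V]{elems: make(map[UniqueKey]mapElem[K, V])}
}

// Put and Get replace the usual m[k] = v and m[k] syntax; that loss of
// convenience is the compromise mentioned above.
func (m Map[K, V]) Put(key K, value V) {
    m.elems[key.UniqueKey()] = mapElem[K, V]{key, value}
}

func (m Map[K, V]) Get(key K) (V, bool) {
    elem, ok := m.elems[key.UniqueKey()]
    return elem.value, ok
}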
Expanded resource instances can initially share the same dependency
slice, so we must take care to not modify the array values when
checking the dependencies.
In the future we can convert these to a generic Set data type, as we
often need to compare for equality and take the union of multiple groups
of dependencies.
The JSON output for sequences previously omitted unknown values for
tuples and sets, which made it impossible to interpret the corresponding
unknown marks. For example, consider this resource:
resource "example_resource" "example" {
tags = toset(["alpha", timestamp(), "charlie"])
}
This would previously be encoded in JSON as:
"after": {
"tags": ["alpha", "charlie"]
},
"after_unknown": {
"id": true,
"tags": [false, true, false]
},
That is, the timestamp value would be omitted from the output
altogether, while the corresponding unknown marks would include a value
for each of the set members.
This commit changes the behaviour to:
"after": {
"tags": ["alpha", null, "charlie"]
},
"after_unknown": {
"id": true,
"tags": [false, true, false]
},
This aligns tuples and sets with the prior behaviour for lists, and
makes it clear which elements are known and which are unknown.
Planned output changes are represented in the JSON output format using
the same change object as planned resource changes. This structure
includes an `after` value and a parallel `after_unknown` value, which
can be combined to determine which specific parts of a value are known
only at apply time.
Previously, structured output values would be marked in the JSON plan as
coarsely known or unknown, even if only some subset of the structure
will be known only at apply time. This simplification was unnecessary,
and this commit reuses the same logic for resource changes to give more
information to consumers of this format.
For example, consider this output:
output "bar" {
value = tolist([
"hello",
timestamp(),
"world",
])
}
The plan output for this output would be:
  + bar = [
      + "hello",
      + (known after apply),
      + "world",
    ]
For the same plan, the JSON output was previously:
"bar": {
"actions": [
"create"
],
"before": null,
"after_unknown": true,
"before_sensitive": false,
"after_sensitive": false
}
After this commit, the output is instead:
"bar": {
"actions": [
"create"
],
"before": null,
"after": [
"hello",
null,
"world"
],
"after_unknown": [
false,
true,
false
],
"before_sensitive": false,
"after_sensitive": false
}
Adding multiple local names for the same provider type in
required_providers was not prevented, which can lead to ambiguous
behavior in Terraform. Providers are always indexed by the provider's
fully qualified name, so duplicate local names cannot be differentiated.
Now that variables parse and retain a set of default values for
object attributes, we must apply the defaults during variable
evaluation. We do so immediately before type conversion, preprocessing
the given value so that conversion will receive the intended defaults as
appropriate.
The optional modifier previously accepted a single argument: the
attribute type. This commit adds an optional second argument, which
specifies a default value for the attribute.
To record the default values for a variable's type, we use a separate
parallel structure of `typeexpr.Defaults`, rather than extending
`cty.Type` to include a `cty.Value` of defaults (which may in turn
include a `cty.Type` with defaults, and so on, and so forth).
The new `typeexpr.TypeConstraintWithDefaults` returns a type constraint
and defaults value. Defaults will be `nil` unless there are default
values specified somewhere in the variable's type.
In type constraints, object attributes may be marked as optional in
order to allow them to be omitted from input values. Doing so results in
filling the attribute value with a typed `null`.
This commit adds a new type `typeexpr.Defaults` which mirrors the
structure of a type constraint, storing default values for optional
attributes. This will allow specification of non-`null` default values
for attributes.
The `Defaults` type is a tree structure, each node containing a sub-tree
type, a map of children, and for object nodes, a map of defaults. The
keys in the children map depend on the type of the node:
- Object nodes have children for each attribute;
- Tuple nodes have children for each index, with indices converted to
string values;
- Collection nodes have a single child at the empty string key.
When traversing this tree we must take this structure into account, with
special cases for map input values which may later be converted to
objects.
The traversal defined in this commit uses a pre-order transformer in
order to pre-populate descendent nodes before their defaults are
applied. This allows nested type default values to be specified
more compactly.
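A hedged sketch of the tree structure described above (the field names approximate typeexpr.Defaults rather than reproducing it exactly):

package typeexpr

import "github.com/zclconf/go-cty/cty"

// Defaults mirrors the structure of a type constraint, recording default
// values for optional object attributes.
type Defaults struct {
    // Type is the type constraint that this node's defaults apply to.
    Type cty.Type

    // DefaultValues contains defaults for the optional attributes of an
    // object node, keyed by attribute name.
    DefaultValues map[string]cty.Value

    // Children holds defaults for nested nodes. The keys depend on the
    // node type: attribute names for objects, indices converted to strings
    // for tuples, and the empty string for collection element types.
    Children map[string]*Defaults
}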
Rather than maintain a separate graph builder for destroy, use the
normal plan graph with some extra options. Utilize the same pattern as
the validate graph for now, where we take the normal plan graph builder
and inject a new concrete function for the destroy nodes.
All instances in state are being removed for destroy, so we can skip
checking for orphans. Because we want to use the normal plan graph, we
need to be able to still call this during destroy, so flag it off.
Destroy nodes should never participate in references. These edges didn't
come up before, because we weren't building a complete graph including
all temporary values.
This also sets an additional variable if it detects that this is an alpha
or development build, which currently does nothing but might eventually
turn on the ability to use experimental features, if we make that
something available only in prereleases.
When a user reports a "Configuration contains unknown value" error,
there is no information on what might have been unknown during apply.
Add unknown attribute paths to the diagnostic message to provide some
more information when a reproduction may not be possible. Since this is
one of those "should never happen" types of errors which will be
reported to the developers directly, we can leave the format as the raw
internal representation for simplicity.
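A minimal sketch of collecting those unknown attribute paths (illustrative; the real implementation may differ):

package main

import (
    "fmt"

    "github.com/zclconf/go-cty/cty"
)

// unknownPaths walks a value and returns the paths of everything that is
// not wholly known, for inclusion in the diagnostic text.
func unknownPaths(val cty.Value) []cty.Path {
    var paths []cty.Path
    cty.Walk(val, func(p cty.Path, v cty.Value) (bool, error) {
        if !v.IsKnown() {
            // Copy the path, since the walk may reuse its backing array.
            paths = append(paths, append(cty.Path(nil), p...))
            return false, nil // no need to descend into an unknown value
        }
        return true, nil
    })
    return paths
}

func main() {
    v := cty.ObjectVal(map[string]cty.Value{
        "id":   cty.UnknownVal(cty.String),
        "name": cty.StringVal("example"),
    })
    for _, p := range unknownPaths(v) {
        fmt.Printf("%#v\n", p)
    }
}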
5417975946 addressed a regression in the
logic for catching when the newer module meta-arguments are used in
conjunction with the legacy practice of including provider configurations
inside modules, where the check started incorrectly catching situations
where the legacy nested provider configuration was in the same module
as the child call using one of those meta-arguments.
This is a regression test to catch if a similar bug arises in the future.
Since it's testing validation rules that apply to an entire configuration
tree, it ended up in a rather idiosyncratic location under the "configload"
package, rather than directly in "configs". The "configs" package only
knows how to load one module at a time, so it's harder to write a test
like this in that context. Due to it being further removed from the code
it is testing, I included a test for the correct error too in order to
increase the chance that we'll learn if future changes in the "configs"
package invalidate this regression test.
I've verified that this new test fails without the change made in the
earlier commit.
This fixes a bug introduced in 1879a39 in which initialising a module will fail
if that module contains both a provider block and a module call using for_each.
In configurations which have already been initialized, updating the
source of a non-root module call to an invalid value could cause a nil
pointer panic. This commit fixes the bug and adds test coverage.
When a data resource is used for the purposes of verifying a condition
about an object managed elsewhere (e.g. if the managed resource doesn't
directly export all of the information required for the condition) it's
important that we defer the data resource read to the apply step if the
corresponding managed resource has any changes pending.
Typically we'd expect that to come "for free" but unfortunately we have
a pragmatic special case in our handling of data resources where we
normally defer to the apply step only if a _direct_ dependency of the data
resource has a change pending, and allow a plan-time read if there's
a pending change in an indirect dependency. This allowed us to preserve
some compatibility with the questionable historical behavior of always
reading data resources proactively unless the configuration contains
unknown values, since the arguably-more-correct behavior would've been a
regression for anyone who had been depending on that before.
Since preconditions and postconditions didn't exist until now, we are not
constrained in the same way by backward compatibility, and so we can adopt
the more correct behavior in the case where a data resource has conditions
specified. This does unfortunately make the handling of data resources
with conditions subtly inconsistent with those that don't, but this is
a better situation than the alternative where it would be easy to get into
a trapped situation where the remote system is invalid and it's impossible
to plan the change that would make it valid again because the conditions
evaluate too soon, prior to the fix being applied.
We have two different reasons why a data resource might be read only
during apply, rather than during planning as usual: the configuration
contains unknown values, or the data resource as a whole depends on a
managed resource which itself has a change pending.
However, we didn't previously distinguish these two in a way that allowed
the UI to describe the difference, and so we confusingly reported both
as "config refers to values not yet known", which in turn led to a number
of reasonable questions about why Terraform was claiming that but then
immediately below showing the configuration entirely known.
Now we'll use our existing "ActionReason" mechanism to tell the UI layer
which of the two reasons applies to a particular data resource instance.
The "dependency pending" situation tends to happen in conjunction with
"config unknown", so we'll prefer to refer that the configuration is
unknown if both are true.
We can no longer be assured that the particular instance of a provider
we are using has had GetProviderSchema called. Always check the
diagnostics even if we're fetching a cached response.
When specifying variable values on the command line, name-value pairs
must be joined with an equals sign, without surrounding spaces.
Previously Terraform would interpret "foo = bar" as assigning the value
" bar" to the variable named "foo ". This is never valid, as variable
names may not include whitespace.
This commit looks for this specific error and returns a diagnostic with
a suggestion for correcting it. We cannot simply trim whitespace,
because it is valid to write "foo= bar" to assign the value " bar" to
the variable "foo", as unlikely as it seems.
When executing an apply with no plan, it's possible for a cancellation
to arrive during the final batch of provider operations, resulting in no
errors in the plan. The run context was next checked during the
confirmation for apply, but in the case of -auto-approve that
confirmation is skipped, resulting in the canceled plan being applied.
Make sure we directly check for cancellation before confirming the plan.
Instances of the same AbsResource may share the same Dependencies, which
could point to the same backing array of values. Since address values
are not pointers, and not meant to be shared, we must copy the value
before sorting the slice in-place. Because individual instances of the
same resource may be encoded to state concurrently, failure to copy the
slice first can result in a data race.
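A sketch of the defensive copy (illustrative; the real code deals in address values rather than strings):

package main

import (
    "fmt"
    "sort"
)

// sortedDependencies returns a sorted copy of deps. Sorting a copy matters
// because several instances of the same resource may share the same backing
// array and may be encoded to state concurrently; sorting in place would be
// a data race and would silently reorder the other instances' slices too.
func sortedDependencies(deps []string) []string {
    sorted := make([]string, len(deps))
    copy(sorted, deps)
    sort.Strings(sorted)
    return sorted
}

func main() {
    shared := []string{"module.b.null_resource.y", "module.a.null_resource.x"}
    fmt.Println(sortedDependencies(shared))
    fmt.Println(shared) // the shared slice is left unchanged
}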
Terraform's remote-exec provisioner hangs when it runs over an HTTP proxy because it doesn't support SSH over HTTP proxy. This commit enables Terraform's remote-exec to support SSH over an HTTP proxy.
* adds `proxy_*` fields to `connection` which add configuration for a proxy host
* if `proxy_host` set, connect to that proxy host via CONNECT method, then make the SSH connection to `host` or `bastion_host`
Using the any keyword with arguments (e.g. any(string, bool)) is
invalid, but any is not technically a "primitive type keyword". This
commit corrects the language in the diagnostic and updates the tests.
Previously the supported JSON plan and state formats included only
serialized output values, which was a lossy serialization of the
Terraform type system. This commit adds a type field in the usual cty
JSON format, which allows reconstitution of the original value.
For example, previously a list(string) and a set(string) containing the
same values were indistinguishable. This change serializes these as
follows:
{
  "value": ["a","b","c"],
  "type": ["list","string"]
}
and:
{
  "value": ["a","b","c"],
  "type": ["set","string"]
}
For non-interactive contexts, Terraform is typically executed with the flag -input=false.
However for runs that are not set to auto approve, the cloud integration will prompt a user for
approval input even with input being set to false. This commit enables the cloud integration to know
the value of the input flag and use it to determine whether or not to ask the user for input.
If -input is set to false and the run cannot be auto-approved, the cloud integration will return an error
stating that run confirmation can no longer be handled in the CLI and must instead be completed in the browser.
The plan graph does not contain all the information necessary to detect
cycles which may happen when building the apply graph. Once we have more
information from the plan we can build the complete apply graph with all
individual instances to verify that the apply can begin without errors.
The raw plan output changes were stored in the output exec node, when
they should have instead been fetched lazily through the context via the
synchronized ChangesSync value.
We had intended these functions to attempt to convert any given value, but
there is a special behavior in the function system where functions must
opt in to being able to handle dynamically-typed arguments so that we
don't need to repeat the special case for that inside every function
implementation.
In this case we _do_ want to specially handle dynamically-typed values,
because the keyword "null" in HCL produces
cty.NullVal(cty.DynamicPseudoType) and we want the conversion function
to convert it to a null of a more specific type.
These conversion functions are already just a thin wrapper around the
underlying type conversion functionality anyway, and that already supports
converting dynamic-typed values in the expected way, so we can just opt
in to allowing dynamically-typed values and let the conversion
functionality do the expected work.
Fixing this allows module authors to use type conversion functions to
give additional type information to Terraform in situations that are too
ambiguous to be handled automatically by the type inference/unification
process. Previously tostring(null) was effectively a no-op, totally
ignoring the author's request to treat the null as a string.
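A hedged sketch of that opt-in using cty's function system (a simplified stand-in for a tostring-style conversion function, not the real funcs package code):

package main

import (
    "fmt"

    "github.com/zclconf/go-cty/cty"
    "github.com/zclconf/go-cty/cty/convert"
    "github.com/zclconf/go-cty/cty/function"
)

// toStringFn opts in to dynamically-typed and null arguments so that a call
// like tostring(null) reaches the implementation and yields a null string,
// rather than being short-circuited by the function system.
var toStringFn = function.New(&function.Spec{
    Params: []function.Parameter{
        {
            Name:             "v",
            Type:             cty.DynamicPseudoType,
            AllowNull:        true,
            AllowDynamicType: true,
        },
    },
    Type: function.StaticReturnType(cty.String),
    Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
        // The underlying conversion already knows how to handle dynamic
        // and null values in the expected way.
        return convert.Convert(args[0], cty.String)
    },
})

func main() {
    got, err := toStringFn.Call([]cty.Value{cty.NullVal(cty.DynamicPseudoType)})
    fmt.Printf("%#v %v\n", got, err) // expect a null value of type string
}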
replace_triggered_by references are scoped to the current module, so we
need to filter changes for the current module instance. Rather than
creating a ConfigResource and filtering the result, make a
Changes.InstancesForAbsResource method to get only the AbsResource
changes.
Check for triggered resource replacement in the plan. While the
functionality of the feature works here, we will want to follow up with a
way to indicate in the plan _why_ the resource was replaced.
The EvalContext is the only place with all the information to be able to
complete the evaluation of the replace_triggered_by expressions. These
need to be evaluated into a reference, which is then looked up in the
pending changes which the context has access to. On top of needing the
plan changes, we also need access to all providers and schemas to decode
the changes if we need to traverse the resource values for individual
attributes.
* internal/getproviders: Add URL to error message for clarity
Occasionally `terraform init` on some providers may return the following error message:
Error while installing citrix/citrixadc v1.13.0: could not query provider
registry for registry.terraform.io/citrix/citrixadc: failed to retrieve
authentication checksums for provider: 403 Forbidden
The 403 is most often returned from GitHub (rather than Registry API)
and this change makes it more obvious.
* Use Host instead of full URL
Hyphen characters are allowed in environment variable names, but are not valid POSIX variable names. Usually, it's still possible to set variable names with hyphens using utilities like env or docker. But, as a fallback, host names may encode their hyphens as double underscores in the variable name. For the example "café.fr", the variable name "TF_TOKEN_xn____caf__dma_fr" or "TF_TOKEN_xn--caf-dma_fr"
may be used.
Introduces a new method of configuring token service credentials using a host-specific environment variable. This configuration was previously possible using the [terraform-credentials-env](https://github.com/apparentlymart/terraform-credentials-env) credentials helper.
This new method is now consulted first, as it is seen to be the most proximate source of credentials before CLI configuration while still falling back to the credentials helper.
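A sketch of the host-to-variable-name mapping described above (illustrative; the real lookup lives in the credentials source implementation):

package main

import (
    "fmt"
    "strings"
)

// hostVarNames returns the candidate environment variable names for a
// hostname already in its punycode ("xn--") form. Periods become
// underscores; hyphens may stay as-is (settable via tools like env or
// docker) or be encoded as double underscores as the POSIX-friendly
// fallback.
func hostVarNames(punycodeHost string) []string {
    base := strings.ReplaceAll(punycodeHost, ".", "_")
    return []string{
        "TF_TOKEN_" + base,
        "TF_TOKEN_" + strings.ReplaceAll(base, "-", "__"),
    }
}

func main() {
    // café.fr in punycode form:
    fmt.Println(hostVarNames("xn--caf-dma.fr"))
    // [TF_TOKEN_xn--caf-dma_fr TF_TOKEN_xn____caf__dma_fr]
}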
A previous change added missing quoting around object keys which do not
parse as barewords. At the same time we introduced a bug where map keys
could be double-quoted, due to calling the `displayAttributeName` helper
function (to quote non-bareword keys) then using the `writeValue` method
(which quotes all strings).
This commit fixes this and adds test coverage for map keys which require
quoting.
Data sources should not require reading the previous versions. While we
previously skipped the decoding if it were to fail, this removes the
need for any prior state at all.
The only place where the prior state was functionally used was in the
destroy path. Because a data source destroy is only for cleanup purposes
to clean out the state using the same code paths as a managed resource,
we can substitute the prior state in the change with a null value
to maintain the same behavior.
After data source handling was moved from a separate refresh phase into
the planning phase, reading the existing state was only used for
informational purposes. This had been reduced to reporting warnings when
the provider returned an unexpected value to try and help locate legacy
provider bugs, but any actual issues located from those warnings were
very few and far between.
Because the prior state cannot be reliably decoded when faced with
incompatible provider schema upgrades, and there is no longer any
significant reason to try and get the prior state at all, we can skip
the process entirely.
Data sources do not have state migrations, so there may be no way to
decode the prior state when faced with incompatible type changes.
Because prior state is only informational to the plan, and its existence
should not affect the planning process, we can skip decoding when faced
with errors.
When rendering diffs for resources which use nested attribute types, we
must cope with collections backing those attributes which are entirely
sensitive. The most common way this will be seen is through sensitive
values being present in sets, which will result in the entire set being
marked sensitive.
This commit replaces `ioutil.TempDir` with `t.TempDir` in tests. The
directory created by `t.TempDir` is automatically removed when the test
and all its subtests complete.
Prior to this commit, temporary directory created using `ioutil.TempDir`
needs to be removed manually by calling `os.RemoveAll`, which is omitted
in some tests. The error handling boilerplate e.g.
defer func() {
    if err := os.RemoveAll(dir); err != nil {
        t.Fatal(err)
    }
}()
is also tedious, but `t.TempDir` handles this for us nicely.
Reference: https://pkg.go.dev/testing#T.TempDir
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
TF_WORKSPACE can now be used for your cloud configuration, effectively serving as an alternative
to setting the name attribute in your workspaces configuration.
In order to include condition block results in the JSON plan output, we
must store them in the plan and its serialization.
Terraform can evaluate condition blocks multiple times, so we must be
able to update the result. Accordingly, the plan.Conditions object is a
map with keys representing the condition block's address. Condition
blocks are not referenceable in any other context, so this address form
cannot be used anywhere in the configuration.
The commit includes a new test case for the JSON output of a
refresh-only plan, which is currently the only way for a failing
condition result to be rendered through this path.
When rendering a diff, we should quote object attribute names if the
string representation is not a valid identifier. While this is not
strictly necessary, it makes the diff output more closely resemble the
configuration language, which is less confusing.
This commit applies to both top-level schema attributes and any object
value attributes. We use a simplistic "%q" Go format string to quote the
strings, which is not strictly identical to HCL's quoting requirements,
but is the pattern used elsewhere in HCL and Terraform.
Co-Authored-By: Katy Moe <katy@katy.moe>
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
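A small sketch of the quoting check described above (hclsyntax.ValidIdentifier is the bareword test I'd expect here; the helper below is illustrative rather than the real diff renderer code):

package main

import (
    "fmt"

    "github.com/hashicorp/hcl/v2/hclsyntax"
)

// displayAttributeName quotes an attribute name when it would not parse as
// a bareword identifier, so the rendered diff resembles configuration.
func displayAttributeName(name string) string {
    if !hclsyntax.ValidIdentifier(name) {
        // %q is not exactly HCL's quoting, but it is close enough and is
        // the pattern already used elsewhere in HCL and Terraform.
        return fmt.Sprintf("%q", name)
    }
    return name
}

func main() {
    fmt.Println(displayAttributeName("name"))       // name
    fmt.Println(displayAttributeName("has spaces")) // "has spaces"
    fmt.Println(displayAttributeName("a.b"))        // "a.b"
}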
We previously threw an error denoting where in the configuration the attribute was missing or invalid.
Considering that organization can now be omitted from the configuration, our previous error message will be
improperly formatted. This commit also updates the message to mention `TF_ORGANIZATION` as a valid substitute if
organization is missing or invalid in the configuration.
The initial rough implementation contained a bug where it would
incorrectly return a NilVal in some cases.
Improve the heuristics here to insert null values more precisely when
parent objects change to or from null. We also check for dynamic types
changing, in which case the entire object must be taken when we can't
match the individual attribute values.
TF_ORGANIZATION will serve as a fallback for configuring the organization in the `cloud`
block. This is the first step to make it easier for users wanting to configure Terraform
programmatically.
Add the resource instances and individual attributes which may have
contributed to the planned changes to the json format of the plan. We
use the existing path encoding for individual attributes, which is
already used in the replace_paths change field.
Track individual instance drift rather than whole resources which
contributed to the plan. This will allow the output to be more precise,
and we can still use NoKey instances as a proxy for containing resources
when needed.
Filter the refresh changes from the normal plan UI at the attribute
level. We do this by constructing fake plans.Change records for diff
generation, reverting all attribute changes that do not match any of the
plan's ContributingResourceReferences.
Convert a global reference to a specific AbsResource and attribute pair.
The hcl.Traversal is converted to a cty.Path at this point because plan
rendering is based on cty values.
When rendering a diff for an object value within a resource, Terraform
should always display the value of attributes which may be identifying.
At present, this is a simple rule: render attributes named "id", "name",
or "tags".
Prior to this commit, Terraform would only apply this rule to top-level
resource attributes and those inside nested blocks. Here we extend the
implementation to include object values in other contexts as well.
The sum() function accepts a collection of values which must all convert
to numbers. It is valid for this to be a collection of string values
representing numbers.
Previously the function would panic if the first element of a collection
was a non-number type, as we didn't attempt to convert it to a number
before calling the cty `Add` method.
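A hedged sketch of converting each element before adding (simplified relative to the real sum() implementation):

package main

import (
    "fmt"

    "github.com/zclconf/go-cty/cty"
    "github.com/zclconf/go-cty/cty/convert"
)

// sumValues adds a collection of values after converting each one to a
// number, so numeric strings work and a non-numeric element produces an
// error instead of a panic on Add.
func sumValues(vals []cty.Value) (cty.Value, error) {
    total := cty.Zero
    for i, v := range vals {
        num, err := convert.Convert(v, cty.Number)
        if err != nil {
            return cty.NilVal, fmt.Errorf("element %d: %s", i, err)
        }
        total = total.Add(num)
    }
    return total, nil
}

func main() {
    got, err := sumValues([]cty.Value{
        cty.StringVal("1"),
        cty.StringVal("2.5"),
        cty.NumberIntVal(3),
    })
    fmt.Printf("%#v %v\n", got, err)
}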
* fix: local variables should not be overridden by remote variables during `terraform import`
* chore: applied the same fix in the 'internal/cloud' package
* backport changes from cloud package to remote package
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
Co-authored-by: uturunku1 <luces.huayhuaca@gmail.com>
Variable validation error message expressions which generated sensitive
values would previously crash. This commit updates the logic to align
with preconditions and postconditions, eliding sensitive error message
values and adding a separate diagnostic explaining why.
Precondition and postcondition blocks which evaluated expressions
resulting in sensitive values would previously crash. This commit fixes
the crashes, and adds an additional diagnostic if the error message
expression produces a sensitive value (which we also elide).
Evaluate precondition and postcondition blocks in refresh-only mode, but
report any failures as warnings instead of errors. This ensures that any
deviation from the contract defined by condition blocks is reported as
early as possible, without preventing the completion of a state refresh
operation.
Prior to this commit, Terraform evaluated output preconditions and data
source pre/postconditions as normal in refresh-only mode, while managed
resource pre/postconditions were not evaluated at all. This omission
could lead to confusing partial condition errors, or failure to detect
undesired changes which would otherwise cause resources to become
invalid.
Reporting the failures as errors also meant that changes retrieved
during refresh could cause the refresh operation to fail. This is also
undesirable, as the primary purpose of the operation is to update local
state. Precondition/postcondition checks are still valuable here, but
should be informative rather than blocking.
The graphs used for the CBD tests wouldn't validate because they skipped
adding the root module node. Re-add the root module transformer and
transitive reduction transformer to the build steps, and match the new
reduced output in the test fixtures.
Complete the removal of the Validate option for graph building. There is
no case where we want to allow an invalid graph, as the primary reason
for validation is to ensure we have no cycles, and we can't walk a graph
with cycles. The only code which specifically relied on there being no
validation was a test to ensure the Validate flag prevented it.
The previous precondition/postcondition block validation implementation
failed if the enclosing resource was expanded. This commit fixes this by
generating appropriate placeholder instance data for the resource,
depending on whether `count` or `for_each` is used.
This set of diagnostic messages is under a number of unusual constraints
that make them tough to get right:
- They are discussing a couple finicky concepts which authors are
likely to be encountering for the first time in these error messages:
the idea of "local names" for providers, the relationship between those
and provider source addresses, and additional ("aliased") provider
configurations.
- They are reporting concerns that span across a module call boundary,
and so need to take care to be clear about whether they are talking
about a problem in the caller or a problem in the callee.
- Some of them are effectively deprecation warnings for features that
might be in use by a third-party module that the user doesn't control,
in which case they have no recourse to address them aside from opening
a feature request with the upstream module maintainer.
- Terraform has, for backward-compatibility reasons, a lot of implied
default behaviors regarding providers and provider configurations,
and these errors can arise in situations where Terraform's assumptions
don't match the author's intent, and so we need to be careful to
explain what Terraform assumed in order to make the messages
understandable.
After seeing some confusion with these messages in the community, and
being somewhat confused by some of them myself, I decided to try to edit
them a bit for consistency of terminology (both between the messages and
with terminology in our docs), being explicit about caller vs. callee
by naming them in the messages, and making explicit what would otherwise
be implicit with regard to the correspondences between provider source
addresses and local names.
My assumed audience for all of these messages is the author of the caller
module, because it's the caller who is responsible for creating the
relationship between caller and callee. As much as possible I tried to
make the messages include specific actions for that author to take to
quiet the warning or fix the error, but some of the warnings are only
fixable by the callee's maintainer and so those messages are, in effect,
a suggestion to send a request to the author to stop using a deprecated
feature.
I think these new messages are also not ideal by any means, because it's
just tough to pack so much information into concise messages while being
clear and consistent, but I hope at least this will give users seeing
these messages enough context to infer what's going on, possibly with the
help of our documentation.
I intentionally didn't change which cases Terraform will return warnings
or errors -- only the message texts -- although I did highlight in a
comment in one of the tests that what it is asserting seems a bit
suspicious to me. I don't intend to address that here; instead, I intend
that note to be something to refer to if we later see a bug report that
calls that behavior into question.
This does actually silence some _unrelated_ warnings and errors in cases
where a provider block has an invalid provider local name as its label,
because our other functions for dealing with provider addresses are
written to panic if given invalid addresses under the assumption that
earlier code will have guarded against that. Doing this allowed for the
provider configuration validation logic to safely include more information
about the configuration as helpful context, without risking tripping over
known-invalid configuration and panicking in the process.
PreDiff and PostDiff hooks were designed to be called immediately before
and after the PlanResourceChange calls to the provider. Probably due to
the confusing legacy naming of the hooks, these were scattered about the
nodes involved with planning, causing the hooks to be called in a number
of places where they were never designed to be used, including data sources
and destroy plans. Since these hooks are not used at all any longer anyway,
we can remove the extra calls with no effect.
If we choose in the future to call PlanResourceChange for resource
destroy plans, the hooks can be re-inserted (even though they currently
are unused) into the new code path which must diverge from the current
combined path of managed and data sources.
The UI hooks for data source reads were missed during planning. Move the
hook calls to immediately before and after the ReadDataSource calls to
ensure they are called during both plan and apply.
Our existing functionality for dealing with references generally only has
to concern itself with one level of references at a time, and only within
one module, because we use it to draw a dependency graph which then ends
up reflecting the broader context.
However, there are some situations where it's handy to be able to ask
questions about the indirect contributions to a particular expression in
the configuration, particularly for additional hints in the user interface
where we're just providing some extra context rather than changing
behavior.
This new "globalref" package therefore aims to be the home for algorithms
for use-cases like this. It introduces its own special "Reference" type
that wraps addrs.Reference to annotate it also with the usually-implied
context about where the references would be evaluated.
With that building block we can therefore ask questions whose answers
might involve discussing references in multiple packages at once, such as
"which resources directly or indirectly contribute to this expression?",
including indirect hops through input variables or output values which
would therefore change the evaluation context.
The current implementations of this are around mapping references onto the
static configuration expressions that they refer to, which is a pretty
broad and conservative approach that unfortunately therefore loses
accuracy when confronted with complex expressions that might take dynamic
actions on the contents of an object. My hunch is that this'll be good
enough to get some initial small use-cases solved, though there's plenty
room for improvement in accuracy.
It's somewhat ironic that this sort of "what is this value built from?"
question is the use-case I had in mind when I designed the "marks" feature
in cty, yet we've ended up putting it to an unexpected but still valid
use in Terraform for sensitivity analysis and our current handling of
that isn't really tight enough to permit other concurrent uses of marks
for other use-cases. I expect we can address that later and so maybe we'll
try for a more accurate version of these analyses at a later date, but my
hunch is that this'll be good enough for us to still get some good use out
of it in the near future, particular related to helping understand where
unknown values came from and in tailoring our refresh results in plan
output to deemphasize detected changes that couldn't possibly have
contributed to the proposed plan.
Previously the "providers" package contained only a type for representing
the schema of a particular object within a provider, and the terraform
package had the responsibility of aggregating many of those together to
describe the entire surface area of a provider.
Here we move what was previously terraform.ProviderSchema to instead be
providers.Schemas, retaining its existing API otherwise, and leave behind
a type alias to allow us to gradually update other references over time.
We've gradually been shrinking down the responsibilities of the
"terraform" package to just representing the graph components and
behaviors anyway, but the specific motivation for doing this _now_ is to
allow for other packages to both be called by the terraform package _and_
work with provider schemas at the same time, without creating a package
dependency cycle: instead, these other packages can just import the
"providers" package and not need to import the "terraform" package at all.
For now this does still leave the responsibility for _building_ a
providers.Schemas object over in the "terraform" package, because it's
currently doing that as part of some larger work that isn't easily
separable, and so reorganizing that would be a more involved and riskier
change than just moving the existing type elsewhere.
We've ended up implementing something approximately like this in a few
places now, so this is a centralized version that we can consolidate on
moving forward, gradually removing that duplication.
Custom variable validations specified using JSON syntax would always
parse error messages as string literals, even if they included template
expressions. We need to be as backwards compatible with this behaviour
as possible, which results in this complex fallback logic. More detail
about this in the extensive code comments.
During the validation walk, we attempt to proactively evaluate check
rule condition and error message expressions. This will help catch some
errors as early as possible.
At present, resource values in the validation walk are of dynamic type.
This means that any references to resources will cause validation to be
delayed, rather than presenting useful errors. Validation may still
catch other errors, and any future changes which cause better type
propagation will result in better validation too.
Error messages for preconditions, postconditions, and custom variable
validations have until now been string literals. This commit changes
this to treat the field as an HCL expression, which must evaluate to a
string. Most commonly this will either be a string literal or a template
expression.
When the check rule condition is evaluated, we also evaluate the error
message. This means that the error message should always evaluate to a
string value, even if the condition passes. If it does not, this will
result in an error diagnostic.
If the condition fails, and the error message also fails to evaluate, we
fall back to a default error message. This means that the check rule
failure will still be reported, alongside diagnostics explaining why the
custom error message failed to render.
As part of this change, we also necessarily remove the heuristic about
the error message format. This guidance can be readded in future as part
of a configuration hint system.
This commit stems from the change to make post-plan the default run task stage at the
time of this commit's writing. Since pre-apply is under internal revision, we have removed
the block that polls the pre-apply stage until the team decides to re-add support for
pre-apply run tasks.