Rather than maintain a separate graph builder for destroy, use the
normal plan graph with some extra options. Utilize the same pattern as
the validate graph for now, where we take the normal plan graph builder
and inject a new concrete function for the destroy nodes.
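A sketch of that pattern, with names modeled on the existing validate wrapper (treat the exact field and node names as approximations, not the actual implementation):

    // DestroyPlanGraphBuilder reuses the normal plan graph builder but
    // swaps in destroy-specific concrete nodes, mirroring how the
    // validate graph wraps the plan graph builder.
    func DestroyPlanGraphBuilder(p *PlanGraphBuilder) GraphBuilder {
        p.ConcreteResourceInstance = func(a *NodeAbstractResourceInstance) dag.Vertex {
            // Plan a destroy for each instance instead of a normal change.
            return &NodePlanDestroyableResourceInstance{
                NodeAbstractResourceInstance: a,
            }
        }
        return p
    }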
All instances in state are being removed for destroy, so we can skip
checking for orphans. Because we want to use the normal plan graph, the
orphan check would still be reached during destroy, so add a flag to
disable it in that case.
Destroy nodes should never participate in references. These edges didn't
come up before, because we weren't building a complete graph including
all temporary values.
This also sets an additional variable if it detects that this is an alpha
or development build, which currently does nothing but might eventually
turn on the ability to use experimental features, if we make that
something available only in prereleases.
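For illustration only, since the variable currently has no effect, the detection could look roughly like this (the prerelease naming convention here is an assumption):

    // experimentsAllowed reports whether this build may one day enable
    // experimental features: only alpha and development prereleases.
    func experimentsAllowed(prerelease string) bool {
        return prerelease == "dev" || strings.HasPrefix(prerelease, "alpha")
    }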
When a user reports a "Configuration contains unknown value" error,
there is no information on what might have been unknown during apply.
Add unknown attribute paths to the diagnostic message to provide some
more information when a reproduction may not be possible. Since this is
one of those "should never happen" types of errors which will be
reported to the developers directly, we can leave the format as the raw
internal representation for simplicity.
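A sketch of collecting those paths with the go-cty API Terraform builds on (the helper name is ours; the real implementation may differ):

    import "github.com/zclconf/go-cty/cty"

    // unknownPaths walks a value and records the path of every unknown
    // leaf so the paths can be included in the diagnostic message.
    func unknownPaths(val cty.Value) []cty.Path {
        var paths []cty.Path
        cty.Walk(val, func(p cty.Path, v cty.Value) (bool, error) {
            if !v.IsKnown() {
                // Copy the path, since Walk may reuse its backing array.
                paths = append(paths, append(cty.Path(nil), p...))
                return false, nil // no need to descend further
            }
            return true, nil
        })
        return paths
    }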
5417975946 addressed a regression in the
logic for catching when the newer module meta-arguments are used in
conjunction with the legacy practice of including provider configurations
inside modules: the check had started incorrectly flagging situations
where the legacy nested provider configuration was in the same module
as the child call using one of those meta-arguments.
This is a regression test to catch if a similar bug arises in the future.
Since it's testing validation rules that apply to an entire configuration
tree, it ended up in a rather idiosyncratic location under the "configload"
package, rather than directly in "configs". The "configs" package only
knows how to load one module at a time, so it's harder to write a test
like this in that context. Because it's further removed from the code it
is testing, I also included a test for the case where the error is
correctly reported, to increase the chance that we'll learn if future
changes in the "configs" package invalidate this regression test.
I've verified that this new test fails without the change made in the
earlier commit.
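For reference, a hypothetical fixture (in the inline style Terraform's own tests use) for the case that must be accepted, where the provider configuration lives in the calling module rather than inside ./child:

    // Accepted: the legacy provider configuration is in the same module
    // as the child call that uses for_each, not inside the child itself.
    const fixtureSameModule = `
    provider "test" {
    }

    module "child" {
      source   = "./child"
      for_each = toset(["a"])
    }
    `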
This fixes a bug introduced in 1879a39 in which initialising a module will fail
if that module contains both a provider block and a module call using for_each.
In configurations which have already been initialized, updating the
source of a non-root module call to an invalid value could cause a nil
pointer panic. This commit fixes the bug and adds test coverage.
When a data resource is used for the purposes of verifying a condition
about an object managed elsewhere (e.g. if the managed resource doesn't
directly export all of the information required for the condition) it's
important that we defer the data resource read to the apply step if the
corresponding managed resource has any changes pending.
Typically we'd expect that to come "for free" but unfortunately we have
a pragmatic special case in our handling of data resources where we
normally defer to the apply step only if a _direct_ dependency of the data
resource has a change pending, and allow a plan-time read if there's
a pending change in an indirect dependency. This allowed us to preserve
some compatibility with the questionable historical behavior of always
reading data resources proactively unless the configuration contains
unknown values, since the arguably-more-correct behavior would've been a
regression for anyone who had been depending on that before.
Since preconditions and postconditions didn't exist until now, we are not
constrained in the same way by backward compatibility, and so we can adopt
the more correct behavior in the case where a data resource has conditions
specified. This does unfortunately make the handling of data resources
with conditions subtly inconsistent with those that don't, but this is
a better situation than the alternative where it would be easy to get into
a trapped situation where the remote system is invalid and it's impossible
to plan the change that would make it valid again because the conditions
evaluate too soon, prior to the fix being applied.
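An illustrative configuration for that pattern (hypothetical resource types), again in the inline-fixture style, where the data read must wait until the managed resource's pending change has been applied:

    const conditionExample = `
    resource "example_volume" "main" {
      size = 100
    }

    # The managed resource doesn't directly export everything the
    # condition needs, so a data resource re-reads the object and
    # asserts the condition on the result.
    data "example_volume_details" "main" {
      id = example_volume.main.id

      lifecycle {
        postcondition {
          condition     = self.encrypted
          error_message = "The volume must be encrypted."
        }
      }
    }
    `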
We have two different reasons why a data resource might be read only
during apply, rather than during planning as usual: the configuration
contains unknown values, or the data resource as a whole depends on a
managed resource which itself has a change pending.
However, we didn't previously distinguish these two in a way that allowed
the UI to describe the difference, and so we confusingly reported both
as "config refers to values not yet known", which in turn led to a number
of reasonable questions about why Terraform was claiming that but then
immediately below showing the configuration entirely known.
Now we'll use our existing "ActionReason" mechanism to tell the UI layer
which of the two reasons applies to a particular data resource instance.
The "dependency pending" situation tends to happen in conjunction with
"config unknown", so we'll prefer to refer that the configuration is
unknown if both are true.
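The preference rule might look roughly like this (the constant names follow the style of the plans package's ActionReason values, but treat them as assumptions):

    // readDeferredReason picks the single ActionReason reported to the
    // UI when a data read is deferred to apply, preferring "config
    // unknown" when both situations hold at once.
    func readDeferredReason(configKnown, depHasChanges bool) plans.ResourceInstanceChangeActionReason {
        switch {
        case !configKnown:
            return plans.ResourceInstanceReadBecauseConfigUnknown
        case depHasChanges:
            return plans.ResourceInstanceReadBecauseDependencyPending
        default:
            return plans.ResourceInstanceChangeNoReason
        }
    }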
We can no longer be assured that the particular instance of a provider
we are using has had GetProviderSchema called. Always check the
diagnostics even if we're fetching a cached response.
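In sketch form (hypothetical receiver and field names), the cached path must hand back the recorded diagnostics so callers can re-check them:

    // getSchema returns a possibly cached schema response. Callers must
    // check resp.Diagnostics every time, because the cached response may
    // have been recorded from a failed GetProviderSchema call.
    func (p *cachingProvider) getSchema() providers.GetProviderSchemaResponse {
        if p.schemaCache != nil {
            return *p.schemaCache
        }
        resp := p.client.GetProviderSchema()
        p.schemaCache = &resp
        return resp
    }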
When specifying variable values on the command line, name-value pairs
must be joined with an equals sign, without surrounding spaces.
Previously Terraform would interpret "foo = bar" as assigning the value
" bar" to the variable named "foo ". This is never valid, as variable
names may not include whitespace.
This commit looks for this specific error and returns a diagnostic with
a suggestion for correcting it. We cannot simply trim whitespace,
because it is valid to write "foo= bar" to assign the value " bar" to
the variable "foo", as unlikely as it seems.
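A sketch of the check (the helper name and message wording are ours):

    import (
        "fmt"
        "strings"
    )

    // parseVarFlag splits a -var argument into a name and a value,
    // catching the common "foo = bar" mistake with a suggestion.
    func parseVarFlag(raw string) (name, value string, err error) {
        eq := strings.Index(raw, "=")
        if eq == -1 {
            return "", "", fmt.Errorf("invalid -var option %q: must use the NAME=VALUE format", raw)
        }
        name, value = raw[:eq], raw[eq+1:]
        if strings.TrimSpace(name) != name {
            // "foo = bar" would define a variable named "foo ", which can
            // never be valid since names may not contain whitespace.
            suggestion := strings.TrimSpace(name) + "=" + strings.TrimPrefix(value, " ")
            return "", "", fmt.Errorf("invalid variable name %q: did you mean -var=%q?", name, suggestion)
        }
        return name, value, nil
    }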
When executing an apply with no plan, it's possible for a cancellation
to arrive during the final batch of provider operations, resulting in no
errors in the plan. The run context was next checked during the
confirmation for apply, but in the case of -auto-approve that
confirmation is skipped, resulting in the canceled plan being applied.
Make sure we directly check for cancellation before confirming the plan.
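The guard itself is small; in sketch form (the helper is hypothetical):

    // cancelled reports whether the run context was cancelled, so the
    // apply path can stop even when -auto-approve skips the confirmation
    // prompt that would otherwise have caught it.
    func cancelled(ctx context.Context) bool {
        select {
        case <-ctx.Done():
            return true
        default:
            return false
        }
    }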
Instances of the same AbsResource may share the same Dependencies, which
could point to the same backing array of values. Since address values
are not pointers, and not meant to be shared, we must copy the value
before sorting the slice in-place. Because individual instances of the
same resource may be encoded to state concurrently, failure to copy the
slice first can result in a data race.
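The fix in sketch form (the function name is ours; addrs.ConfigResource stands in for the dependency address type):

    import "sort"

    // sortedDependencies returns a sorted copy of the dependency slice.
    // The input's backing array may be shared between instances of the
    // same resource, and sort.Slice mutates in place, so sorting the
    // original while another instance encodes would be a data race.
    func sortedDependencies(src []addrs.ConfigResource) []addrs.ConfigResource {
        deps := make([]addrs.ConfigResource, len(src))
        copy(deps, src)
        sort.Slice(deps, func(i, j int) bool {
            return deps[i].String() < deps[j].String()
        })
        return deps
    }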
Terraform's remote-exec provisioner hangs when it runs through an HTTP
proxy because it doesn't support SSH over an HTTP proxy. This commit
enables Terraform's remote-exec provisioner to support SSH over an HTTP
proxy:
* adds `proxy_*` fields to `connection` which add configuration for a proxy host
* if `proxy_host` is set, connect to that proxy host via the CONNECT method, then make the SSH connection to `host` or `bastion_host` through the tunnel, as sketched below
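A rough sketch of the tunnel setup (the helper name is ours; error handling trimmed):

    import (
        "bufio"
        "fmt"
        "net"
        "net/http"

        "golang.org/x/crypto/ssh"
    )

    // dialSSHViaHTTPProxy dials the proxy, asks it to tunnel to the SSH
    // host with a CONNECT request, then runs the SSH handshake over the
    // established tunnel.
    func dialSSHViaHTTPProxy(proxyAddr, sshAddr string, config *ssh.ClientConfig) (*ssh.Client, error) {
        conn, err := net.Dial("tcp", proxyAddr)
        if err != nil {
            return nil, err
        }
        fmt.Fprintf(conn, "CONNECT %s HTTP/1.1\r\nHost: %s\r\n\r\n", sshAddr, sshAddr)
        br := bufio.NewReader(conn)
        resp, err := http.ReadResponse(br, &http.Request{Method: "CONNECT"})
        if err != nil {
            conn.Close()
            return nil, err
        }
        if resp.StatusCode != http.StatusOK {
            conn.Close()
            return nil, fmt.Errorf("proxy refused CONNECT: %s", resp.Status)
        }
        // Note: a robust implementation must replay any bytes br has
        // already buffered past the response headers into the SSH
        // transport; that bookkeeping is omitted here.
        c, chans, reqs, err := ssh.NewClientConn(conn, sshAddr, config)
        if err != nil {
            return nil, err
        }
        return ssh.NewClient(c, chans, reqs), nil
    }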
Using the any keyword with arguments (e.g. any(string, bool)) is
invalid, but any is not technically a "primitive type keyword". This
commit corrects the language in the diagnostic and updates the tests.
Previously the supported JSON plan and state formats included only
serialized output values, which was a lossy serialization of the
Terraform type system. This commit adds a type field in the usual cty
JSON format, which allows reconstitution of the original value.
For example, previously a list(string) and a set(string) containing the
same values were indistinguishable. This change serializes these as
follows:
    {
      "value": ["a","b","c"],
      "type": ["list","string"]
    }

and:

    {
      "value": ["a","b","c"],
      "type": ["set","string"]
    }
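These pairs round-trip through the go-cty JSON helpers Terraform already depends on; a minimal sketch:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
        ctyjson "github.com/zclconf/go-cty/cty/json"
    )

    func main() {
        val := cty.SetVal([]cty.Value{
            cty.StringVal("a"), cty.StringVal("b"), cty.StringVal("c"),
        })
        valJSON, _ := ctyjson.Marshal(val, val.Type()) // ["a","b","c"]
        typeJSON, _ := ctyjson.MarshalType(val.Type()) // ["set","string"]
        fmt.Printf("{\"value\":%s,\"type\":%s}\n", valJSON, typeJSON)

        // Decoding reverses the process, reconstituting the set(string).
        ty, _ := ctyjson.UnmarshalType(typeJSON)
        restored, _ := ctyjson.Unmarshal(valJSON, ty)
        fmt.Println(restored.Type().FriendlyName()) // "set of string"
    }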
In non-interactive contexts, Terraform is typically executed with the flag -input=false.
However, for runs that are not set to auto-approve, the cloud integration would prompt the
user for approval input even with input set to false. This commit enables the cloud
integration to know the value of the input flag and use it to determine whether to ask the
user for input. If -input is set to false and the run cannot be auto-approved, the cloud
integration returns an error stating that run confirmation can no longer be handled in the
CLI and must instead be done through the browser.
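In sketch form (the field and wording are ours):

    // Hypothetical check in the cloud backend's operation path:
    if !b.CLIInput && !run.AutoApprove {
        return errors.New("run confirmation can no longer be handled in " +
            "the CLI when -input=false; approve the run in the browser instead")
    }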
The plan graph does not contain all the information necessary to detect
cycles which may happen when building the apply graph. Once we have more
information from the plan we can build the complete apply graph with all
individual instances to verify that the apply can begin without errors.
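Conceptually (the builder and field names approximate the real ones):

    // After planning, build the full apply graph purely to surface
    // ordering problems such as cycles at plan time, instead of
    // surprising the user when they later run apply.
    func validateApplyGraph(plan *plans.Plan, config *configs.Config) tfdiags.Diagnostics {
        _, diags := (&ApplyGraphBuilder{
            Config:  config,
            Changes: plan.Changes,
        }).Build(addrs.RootModuleInstance)
        return diags
    }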