If a resource is only destroying instances, there is no reason to
prepare the state and we can remove the Resource (prepare state) nodes.
They normally pose no issue, but if the instances are being
destroyed along with their dependencies, the resource node may fail to
evaluate due to the missing dependencies (since destroy happens in the
reverse order).
These failures were previously prevented by the cycle that existed when
the destroy nodes were directly attached to the resource nodes.
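For illustration, here's a minimal sketch of that pruning in Go, with
hypothetical node types standing in for Terraform's real graph vertices:

```go
package main

import "fmt"

type node interface{ Name() string }

type prepareStateNode struct{ resource string }
type createNode struct{ resource string }
type destroyNode struct{ resource string }

func (n *prepareStateNode) Name() string { return n.resource + " (prepare state)" }
func (n *createNode) Name() string       { return n.resource }
func (n *destroyNode) Name() string      { return n.resource + " (destroy)" }

// prunePrepareState drops the prepare-state node for any resource that
// has no create node left in the graph, i.e. one that is only being
// destroyed. Such a resource never needs its state prepared, and keeping
// the node risks evaluation failures once its dependencies are destroyed.
func prunePrepareState(nodes []node) []node {
	creating := map[string]bool{}
	for _, n := range nodes {
		if c, ok := n.(*createNode); ok {
			creating[c.resource] = true
		}
	}
	var kept []node
	for _, n := range nodes {
		if p, ok := n.(*prepareStateNode); ok && !creating[p.resource] {
			continue // destroy-only resource: remove the node
		}
		kept = append(kept, n)
	}
	return kept
}

func main() {
	graph := []node{
		&prepareStateNode{"aws_instance.old"}, // destroy-only: pruned
		&destroyNode{"aws_instance.old"},
		&prepareStateNode{"aws_instance.new"}, // still created: kept
		&createNode{"aws_instance.new"},
	}
	for _, n := range prunePrepareState(graph) {
		fmt.Println(n.Name())
	}
}
```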
Destroy nodes do not need to be connected to the resource (prepare
state) node when adding them to the graph. Destroy nodes already have a
complete state in the graph (which is being destroyed), any references
will be added in the ReferenceTransformer, and the proper
connection to the create node will be added in the
DestroyEdgeTransformer.
Under normal circumstances this makes no difference, as the create and
destroy nodes always have a dependency between them, so having the
prepare state handled before both only linearizes the operation slightly
in the normal destroy-then-create scenario.
However, if there is a dependency on a resource being replaced in another
module, there will be a dependency between the destroy nodes in each
module (to complete the destroy ordering), while the resource node will
depend on the variable->output->resource chain. If both the destroy and
create nodes depend on the resource node, there will be a cycle.
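A simplified toy model of that cycle (hypothetical node names; each node
lists the nodes it depends on) shows how dropping the destroy node's
edge to the prepare-state node breaks it:

```go
package main

import "fmt"

// deps is a toy dependency map. Resource A is being replaced; resource B
// references it through a variable -> output chain, so B is destroyed
// before A (destroys run in reverse dependency order).
var deps = map[string][]string{
	"A.create":  {"A.destroy", "A.prepare"}, // destroy-then-create replace
	"A.destroy": {"B.destroy"},              // B is destroyed first
	"B.destroy": {"B.prepare"},              // the edge this change removes
	"B.prepare": {"var.x"},
	"var.x":     {"output.y"},
	"output.y":  {"A.create"}, // the variable -> output -> resource chain
}

// hasCycle runs a colored depth-first search over deps.
func hasCycle(deps map[string][]string) bool {
	const (
		white = iota // unvisited
		gray         // on the current DFS path
		black        // fully explored
	)
	color := map[string]int{}
	var visit func(string) bool
	visit = func(n string) bool {
		color[n] = gray
		for _, d := range deps[n] {
			if color[d] == gray || (color[d] == white && visit(d)) {
				return true // back edge: cycle found
			}
		}
		color[n] = black
		return false
	}
	for n := range deps {
		if color[n] == white && visit(n) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println("with destroy->prepare edge:", hasCycle(deps)) // true
	delete(deps, "B.destroy")                                  // detach destroy from prepare state
	fmt.Println("without it:", hasCycle(deps))                 // false
}
```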
The CBDEdgeTransformer tests worked on fake data structures, with a
synthetic graph and configs that didn't match it. Update them to generate
a more complete graph, with real node implementations, from real
configs.
The output graph is filtered down to instances, and the results still
functionally match the original expected test results, with some minor
additions due to using the real implementation.
When looking for dependencies to fix when handling
create_before_destroy, we need to look more than one edge away, as
dependencies may appear transitively through outputs and variables. Use
Descendants rather than UpEdges.
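As a sketch of the difference, using a plain adjacency map rather than
Terraform's real dag package (whose edge-direction conventions differ):
a single-edge lookup misses a dependency that passes through an output
and a variable, while a transitive walk finds it.

```go
package main

import "fmt"

// edges maps each vertex to its direct dependencies; the chain below
// mirrors a resource that depends on another resource only through an
// output and a variable.
var edges = map[string][]string{
	"resource.app": {"output.addr"},
	"output.addr":  {"var.addr"},
	"var.addr":     {"resource.db"},
}

// direct is a single-edge lookup: it sees only output.addr.
func direct(v string) []string { return edges[v] }

// transitive walks the whole chain, so the dependency on resource.db
// that appears only through the output and variable is still found.
func transitive(v string) map[string]bool {
	seen := map[string]bool{}
	stack := []string{v}
	for len(stack) > 0 {
		n := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		for _, d := range edges[n] {
			if !seen[d] {
				seen[d] = true
				stack = append(stack, d)
			}
		}
	}
	return seen
}

func main() {
	fmt.Println(direct("resource.app"))     // [output.addr]
	fmt.Println(transitive("resource.app")) // includes resource.db
}
```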
We have the full graph to use for the CBD transformation, so there's no
longer any need to create a temporary graph, which may differ from the
original.
Some commands don't use variables at all or use them in a way that doesn't
require them to all be fully valid and consistent. For those, we don't
want to fetch variable values from the remote system and try to validate
them because that's wasteful and likely to cause unnecessary error
messages.
Furthermore, the variables endpoint in Terraform Cloud and Enterprise only
works for personal access tokens, so it's important that we don't assume
we can _always_ use it. If we do, then we'll see problems when commands
are run inside Terraform Cloud and Enterprise remote execution contexts,
where the variables map always comes back as empty.
The remote backend uses backend.ParseVariableValues only to decide
whether the user seems to be trying to use the -var or -var-file options
locally, since those are not supported for the remote backend.
Other than detecting those, we don't actually need the results of
backend.ParseVariableValues, so it's better to ignore any errors it
produces and instead send a potentially-invalid request to the remote
system, letting the remote system be responsible for validating it.
This then avoids issues caused by the fact that when remote operations are
in use the local system does not have all of the required context: it
can't see which environment variables will be set in the remote execution
context nor which variables the remote system will set using its own
generated -var-file based on the workspace's stored variables.
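Here's a minimal sketch of that detection-only pattern, with
hypothetical stand-ins for the source metadata attached to parsed values
(the real backend types differ):

```go
package main

import (
	"errors"
	"fmt"
)

// sourceType is a hypothetical stand-in for where a variable value came
// from during argument parsing.
type sourceType int

const (
	fromCLIArg sourceType = iota
	fromNamedFile
	fromEnv
)

type inputValue struct {
	name   string
	source sourceType
}

// checkForLocalVariables mirrors the detection-only use described above:
// parse diagnostics are ignored, and we only look at where each value
// came from, rejecting -var and -var-file since the remote backend does
// not support them. Validating the values themselves is left entirely to
// the remote system.
func checkForLocalVariables(values []inputValue) error {
	for _, v := range values {
		switch v.source {
		case fromCLIArg, fromNamedFile:
			return errors.New("run variables are not supported by the remote backend: remove any -var or -var-file options")
		}
	}
	return nil
}

func main() {
	err := checkForLocalVariables([]inputValue{{name: "region", source: fromCLIArg}})
	fmt.Println(err)
}
```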
This tool is intended for analysis of Terraform's AWS provider, but that
provider is no longer developed in this repository and so this tool is
no longer functional.
Before this, the Terraform Puppet provisioner would error out in a
confusing way if the type attribute in a connection block was not given.
Apparently an omitted type leads to type having the value "", which must
then be assumed to mean "ssh".
Fixes #23004
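A minimal sketch of the fix, assuming a small helper that normalizes the
decoded connection type (hypothetical name; the real provisioner code
differs):

```go
package main

import "fmt"

// connectionType returns the effective connection type for a connection
// block. An omitted type decodes as the empty string, which Terraform
// treats as "ssh", so the provisioner does the same instead of erroring.
func connectionType(raw string) (string, error) {
	switch raw {
	case "", "ssh":
		return "ssh", nil
	case "winrm":
		return "winrm", nil
	default:
		return "", fmt.Errorf("unsupported connection type: %q", raw)
	}
}

func main() {
	t, _ := connectionType("") // type omitted in the connection block
	fmt.Println(t)             // ssh
}
```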
`marshalPlannedValues` builds a map of modules to their children in
order to output the resource changes in a tree. The map was built from
the list of resource changes. However, if a module had no resources
itself and only called another module (a very normal case), that module
would not get added to the map, causing none of its children to be
output in `planned_values`.
This PR adds a walk up through a given module's ancestors to ensure that
each module, even one without resources of its own, is added.
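A minimal sketch of that walk, using simple string addresses like
`module.a.module.b` in place of the real addrs types:

```go
package main

import (
	"fmt"
	"strings"
)

// parentOf returns the parent module address of addr, or "" for a
// module called directly from the root.
func parentOf(addr string) string {
	i := strings.LastIndex(addr, ".module.")
	if i < 0 {
		return "" // parent is the root module
	}
	return addr[:i]
}

// ensureAncestors walks from addr up to the root, registering each
// module as a child of its parent. This guarantees that modules with no
// resources of their own (which only call other modules) still appear
// in the parent/child map used to build the planned_values tree.
func ensureAncestors(children map[string][]string, addr string) {
	for addr != "" {
		parent := parentOf(addr)
		if !contains(children[parent], addr) {
			children[parent] = append(children[parent], addr)
		}
		addr = parent
	}
}

func contains(s []string, v string) bool {
	for _, x := range s {
		if x == v {
			return true
		}
	}
	return false
}

func main() {
	children := map[string][]string{}
	// Only the deeply nested module has resources, yet its resource-less
	// ancestor module.a is registered too.
	ensureAncestors(children, "module.a.module.b")
	fmt.Println(children)
}
```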
Our documentation for how to contribute was in quite a state of disrepair,
with some documents still describing things as they were before moving
providers into separate repositories, others making assumptions about
Go development that are no longer true in modules mode, and so forth.
This is an attempt at a reset to a good state that should work with the
codebase as it currently stands, and should hopefully serve as a basis
for iterative improvement from here.
These new instructions lean primarily on standard Go toolchain usage and
recommend using the Makefile only for some Terraform-specific situations
that the Go toolchain does not automatically handle. The idea here is that
this direct usage of primary commands in the Go toolchain is less likely
to be broken by changes in future Go releases, and should be immediately
familiar to anyone who has experience with Go development.
* command/validate: output a warning if unused flags are set
The -var and -var-file command line flags are accepted, but not used,
in `terraform validate`. This PR adds a warning for users who set either
of those flags, so they know that setting them has no effect.
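A sketch of the check, assuming the flag values were collected during
argument parsing (hypothetical names; the real command routes the
warning through its diagnostics):

```go
package main

import "fmt"

// warnUnusedVariables returns the warning text when -var or -var-file
// was set, since terraform validate accepts but ignores both flags.
func warnUnusedVariables(varArgs, varFileArgs []string) string {
	if len(varArgs) == 0 && len(varFileArgs) == 0 {
		return ""
	}
	return "Warning: -var and -var-file are not used by terraform validate; " +
		"setting them has no effect."
}

func main() {
	fmt.Println(warnUnusedVariables([]string{"foo=bar"}, nil))
}
```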