Add `host_key` and `bastion_host_key` fields to the ssh communicator
config for strict host key checking.
Both fields expect the contents of an OpenSSH-formatted public key. This
key can either be the remote host's public key, or the public key of the
CA which signed the remote host's certificate.
Support for signed certificates is limited, because the provisioner
usually connects to a remote host by IP address rather than hostname, so
the certificate would need to be signed appropriately. Connecting via
a hostname currently needs to be done through a secondary provisioner,
such as one attached to a null_resource.
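As a rough sketch (addresses, user names and key paths here are
placeholders), the new fields would be set inside a connection block:

```hcl
resource "null_resource" "example" {
  provisioner "remote-exec" {
    inline = ["echo connected"]

    connection {
      type        = "ssh"
      host        = "203.0.113.10"
      user        = "ubuntu"
      private_key = "${file("~/.ssh/id_rsa")}"

      # Public key (or signing CA public key) of the remote host.
      host_key = "${file("./host_key.pub")}"

      bastion_host     = "198.51.100.1"
      bastion_host_key = "${file("./bastion_host_key.pub")}"
    }
  }
}
```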
Similar to NodeApplyableOutput, NodeDestroyableOutputs also need to stay
in the graph if any ancestor nodes are targeted.
Use the same GraphNodeTargetDownstream method to keep them from being
pruned, since they are dependent on the output node and all its
descendants.
The id attribute can be missing during the destroy operation.
While the new destroy-time ordering of outputs and locals should prevent
resources from having their id attributes set to an empty string,
there's no reason to error out if we have the canonical ID field
available.
This still interrogates the attributes map first to retain any previous
behavior, but in the future we should settle on a single ID location.
Since outputs and local nodes are always evaluated, if they reference a
resource from the configuration that isn't in the state, the
interpolation could fail.
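For example, an output along these lines (the resource name is
illustrative) could fail to interpolate when aws_instance.web is in the
configuration but not yet, or no longer, in the state:

```hcl
output "web_private_ip" {
  value = "${aws_instance.web.private_ip}"
}
```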
Prune any local or output values that have no references in the graph.
Now that outputs are always evaluated, we still need a way to remove
them from state when they are destroyed.
Previously, outputs were removed during destroy from the same
"Applyable" node type that evaluates them. Now that we need to possibly
both evaluate and remove output during an apply, we add a new node -
NodeDestroyableOutput.
This new node is added to the graph by the DestroyOutputTransformer,
which makes the new destroy node depend on all descendants of the output
node. This ensures that the output remains in the state as long as
everything which may interpolate the output still exists.
Always evaluate outputs during destroy, just like we did for locals.
This breaks existing tests, which we will handle separately.
Don't reverse output/local node evaluation order during destroy, as they
are both being evaluated.
Add a complex destroy provisioner testcase using locals, outputs and
variables.
Add that pesky "id" attribute to the instance states for interpolation.
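A sketch of the kind of configuration such a testcase exercises (names
and commands are illustrative): a destroy-time provisioner that
interpolates a local derived from a variable, alongside an output.

```hcl
variable "command" {
  default = "echo destroying"
}

locals {
  destroy_command = "${var.command}"
}

resource "null_resource" "example" {
  provisioner "local-exec" {
    when    = "destroy"
    command = "${local.destroy_command} ${self.id}"
  }
}

output "example_id" {
  value = "${null_resource.example.id}"
}
```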
Destroy-time provisioners require us to re-evaluate during destroy.
Rather than destroying local values, which doesn't do much since they
aren't persisted to state, we always evaluate them regardless of the
type of apply. Since the destroy-time local node is no longer a
"destroy" operation, the order of evaluation needs to be reversed. Take
the existing DestroyValueReferenceTransformer and change it to reverse
the outgoing edges, rather than the incoming edges. This makes it so that
any dependencies of a local or output node are destroyed after
evaluation.
Having locals evaluated during destroy failed one other test, but that
was the odd case where we need `id` to exist as an attribute as well as
a field.
Previously we would interpolate the count config (ResourceConfig.RawCount)
only while preparing to dynamic-expand aggregate resource nodes. This is
problematic because we do not dynamic-expand any resource nodes during the
apply walk, and so previously the count value was not available for
interpolation during apply and would result in an error.
Now we interpolate RawCount once for each resource we visit during the
apply walk -- even though that redundantly interpolates the same config
multiple times when count > 1 -- to ensure that it's available by the
time we interpolate any remaining expressions in the config and any
expressions within "connection" and "provisioner" blocks.
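For illustration, a configuration like the following (resource type,
AMI, and variable are placeholders) relies on the count value being
interpolated at apply time so that the provisioner and connection blocks
can be evaluated for each instance:

```hcl
variable "instance_count" {
  default = 2
}

resource "aws_instance" "web" {
  count         = "${var.instance_count}"
  ami           = "ami-0123456"
  instance_type = "t2.micro"

  provisioner "remote-exec" {
    inline = ["echo instance ${count.index}"]

    connection {
      type = "ssh"
      host = "${self.public_ip}"
      user = "ubuntu"
    }
  }
}
```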
This error was masked by us sharing a single RawConfig instance between
the plan and apply walks when "terraform apply" is run with no explicit
plan file argument, but was exposed by the workflow where the plan is
written first to disk since in that case the interpolation result from
during the plan phase is not present in the deflated plan object. For
this reason, the new context test serializes the plan into an in-memory
buffer and reloads it in order to simulate the effect of the two-step
workflow.
Containers (maps, lists, sets) in an InstanceDiff need to be handled in
their entirety. Unchanged values cannot be filtered out from diffs, as
providers expect attribute containers to be complete.
If a value in ignore_changes maps to a single key in an attribute
container, and there are other changes present, that ignored value must
be included in the diff as well.
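As an illustration (resource and tag names are placeholders), ignoring a
single key of a map while other changes are present still requires the
whole map to be sent to the provider in the diff:

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0123456"
  instance_type = "t2.micro"

  tags {
    Name        = "example"
    Environment = "prod"
  }

  lifecycle {
    # Ignores only one key within the tags container.
    ignore_changes = ["tags.Name"]
  }
}
```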
There was no test checking that Close was called on the mock provider.
This fails now since the CloseProviderTransformer isn't using the fully
resolved provider name.
The interrupt tests for providers no longer check for the condition
during the diff operation. Defer the lock so other tests' DiffFns don't
need to be as careful locking themselves.
The init command needs to parse the state to resolve providers, but
changes to the state format can cause that to fail with
difficult-to-understand errors. Check the terraform version during init
and provide the same error that would be returned by plan or apply.
The bounds checking in ResourceConfig.get() was insufficient: it detected when the index was greater than or equal to cv.Len() but not when the index was less than zero. If the user provided an (invalid) configuration that referenced "foo.-1.bar", the provider would panic.
Now it behaves the same way as if the index were too high.
Validation is the best time to return detailed diagnostics
to the user since we're much more likely to have source
location information, etc. than we are in later operations.
This change doesn't actually add any detail to the messages
yet, but it changes the interface so that we can gradually
introduce more detailed diagnostics over time.
While here there are some minor adjustments to some of the
messages to improve their consistency with terminology we
use elsewhere.
The provider name coming from ProvidedBy may be resolved if it only
exists in the state. Make sure to strip the module and provider
prefixes for the provider name when adding missing providers.
It's become apparent that passing in a provider config for an implicitly
used provider would be very useful. While the ProviderConfigTransformer
efficiently added providers to the graph, the algorithm was reversed
from what would be needed to allow overriding implicit providers.
Change the ProviderConfigTransformer to first add all configured
providers, even if they are empty stubs. Then run through all providers
being passed in from the parent, and replace the provider nodes we
created with proxies, and add implicit proxies where none existed. The
extra nodes will then be pruned later.
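The user-facing capability this supports looks roughly like passing an
aliased provider configuration down to a module that would otherwise use
the provider implicitly (module name and region are placeholders):

```hcl
provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

module "network" {
  source = "./network"

  # Override the module's implicit "aws" provider with the alias.
  providers = {
    "aws" = "aws.west"
  }
}
```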
There was a bug where all references would be discarded in the case when
a self-reference was encountered. Since a module references all
descendants by its own path, it returns a self-reference by definition.
Our new resource-to-provider matching is stricter about explicitly
matching aliases when config is present (no longer automatically
inherited) and with locating providers to destroy removed resources.
With this in mind, this is an attempt to expand slightly on this error
message now that users are more likely to see it.
In the future it would be nice to do some explicit validation of this a bit
closer to the UI, so we can have room for more explanatory text, but this
additional messaging is intended to help users understand why they might
be seeing this message after removing a provider configuration block from
configuration, whether directly or as a side-effect of removing a module.
Remove the module entry from the state if a module is no longer in the
configuration. Modules are not removed if there are any existing
resources with the module path as a prefix. The only time this should be
the case is if a module was removed from the config, but the apply didn't
target that module.
Create a NodeModuleRemoved and an associated EvalDeleteModule to track
the module in the graph and then remove it from the state. The
NodeModuleRemoved dependencies are simply any other node which contains
the module path as a prefix in its path.
This could have probably been done much easier as a step in pruning the
state, but modules are going to have to be promoted to full graph nodes
anyway in order to support count.
You can't find orphans by walking the config, because by definition
orphans aren't in the config.
Leaving the broken test for when empty modules are removed from the
state as well.