This is mostly unused now, since we no longer need to interrupt a
series of eval node executions.
The exception is the stopHook, which is still used to halt execution
when there's an interrupt. Since an interrupted run should not
complete successfully, we use a normal opaque error to halt everything
and return it to the UI.
We can work on coalescing or hiding these if necessary in a separate PR.
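As a rough sketch of the pattern (simplified; the `hook` method and the
atomic flag here are stand-ins, not the actual hook implementation):

```go
package sketch

import (
	"errors"
	"sync/atomic"
)

// stopHook is a simplified stand-in: once Stop is called, every
// subsequent hook call returns an opaque error so the walk halts
// rather than completing successfully.
type stopHook struct {
	stop uint32
}

func (h *stopHook) Stop() { atomic.StoreUint32(&h.stop, 1) }

func (h *stopHook) hook() error {
	if atomic.LoadUint32(&h.stop) == 1 {
		return errors.New("execution halted")
	}
	return nil
}
```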
Some tests could not handle reading orphaned resources. It also turns
out the ReadResource mock never returned the correct state in the
default case at all.
This forces orphaned resources to be re-read during planning, removing
them from the state if they no longer exist.
This needs to be done for a bare `refresh` execution, since Terraform
should remove from the state any instances that no longer exist and are
not in the configuration. They should also be removed from state so
there is no Delete change planned, as not all providers will gracefully
handle a delete operation on a resource that does not exist.
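A minimal sketch of that rule, assuming the refreshed value comes back
as a cty.Value (the function and callback names are placeholders for
the real state operations):

```go
package sketch

import "github.com/zclconf/go-cty/cty"

// refreshOrphan applies the behaviour described above: a null read
// result means the remote object no longer exists, so the instance is
// removed from state and no Delete change will be planned for it.
func refreshOrphan(readResult cty.Value, removeFromState func(), updateState func(cty.Value)) {
	if readResult.IsNull() {
		removeFromState()
		return
	}
	updateState(readResult)
}
```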
If a change exists for a resource instance, the After value is
returned; however, this value will not have its marks, as it has been
encoded. This marks the return value so the marks follow that resource
reference.
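A sketch of the fix using cty's path-based marks (the function name is
a placeholder; the marks are whatever was recorded alongside the
change):

```go
package sketch

import "github.com/zclconf/go-cty/cty"

// reMarkAfter re-applies the recorded sensitivity marks to the decoded
// After value so they follow the resource reference.
func reMarkAfter(after cty.Value, marks []cty.PathValueMarks) cty.Value {
	if len(marks) == 0 {
		return after
	}
	return after.MarkWithPaths(marks)
}
```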
Because ignore_changes configuration can refer to resource arguments
which are assigned sensitive values, we need to unmark the resource
object before processing.
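Roughly (the process callback stands in for the actual ignore_changes
handling; this is a sketch, not the exact code):

```go
package sketch

import "github.com/zclconf/go-cty/cty"

// processIgnoreChanges unmarks the object so ignore_changes can walk
// sensitive attributes, then restores the marks on the result.
func processIgnoreChanges(obj cty.Value, process func(cty.Value) cty.Value) cty.Value {
	unmarked, pvm := obj.UnmarkDeepWithPaths()
	return process(unmarked).MarkWithPaths(pvm)
}
```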
Use a single log writer instance for all std library logging.
Set up the std log writer in the logging package, and remove boilerplate
from test packages.
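Something along these lines, assuming go-hclog as the backing logger (a
sketch of the setup, not the exact code in the logging package):

```go
package logging

import (
	"log"

	"github.com/hashicorp/go-hclog"
)

func init() {
	logger := hclog.New(&hclog.LoggerOptions{Name: "terraform"})
	// Route all std library log output through one shared writer.
	log.SetOutput(logger.StandardWriter(&hclog.StandardLoggerOptions{
		InferLevels: true, // map "[DEBUG] ..."-style prefixes to levels
	}))
	log.SetFlags(0) // hclog adds its own timestamps
}
```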
If the provisioner configuration includes sensitive values, it's a
reasonable assumption that we should suppress its log output. Obvious
examples where this makes sense include echoing a secret to a file using
local-exec or remote-exec.
This commit adds tests for both logging output from provisioners with
non-sensitive configuration, and suppressing logs for provisioners with
sensitive values in configuration.
Note that we do not suppress logs if connection info contains sensitive
information, as provisioners should not be logging connection
information under any circumstances.
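The decision itself amounts to checking for marks on the configuration
only (a simplified sketch; the ui parameter is a placeholder for the
real output hook):

```go
package sketch

import "github.com/zclconf/go-cty/cty"

// provisionerOutput returns a no-op writer when the provisioner
// configuration contains sensitive (marked) values; connection info is
// deliberately not checked, since it should never be logged anyway.
func provisionerOutput(config cty.Value, ui func(string)) func(string) {
	if config.ContainsMarked() {
		return func(string) {} // suppress all provisioner log output
	}
	return ui
}
```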
If provisioner configuration or connection info includes sensitive
values, we need to unmark them before calling the provisioner. Failing
to do so causes serialization to error.
Unlike resources, we do not need to capture marked paths here, so we
just discard the marks.
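In sketch form (a placeholder function, not the real eval node code):

```go
package sketch

import "github.com/zclconf/go-cty/cty"

// prepareProvisionerValues strips marks before the values are passed to
// the provisioner; the marks are discarded rather than captured, since
// we never need to re-apply them.
func prepareProvisionerValues(config, connInfo cty.Value) (cty.Value, cty.Value) {
	unmarkedConfig, _ := config.UnmarkDeep()
	unmarkedConn, _ := connInfo.UnmarkDeep()
	return unmarkedConfig, unmarkedConn
}
```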
A few tests were inadvertently renamed, causing them to be skipped.
For some reason this is not caught by the `vet` pass that happens during
normal testing.
The ProviderConfigTransformer was using only the provider FQN to attach
a provider configuration to the provider, but what it needs to do is
find the local name for the given provider FQN (which may not match the
type name) and use that when searching for a matching provider
configuration.
Fixes #26556
This will also be backported to the v0.13 branch.
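The lookup order is essentially this (placeholder maps and types; the
real code works on the module's config structures):

```go
package sketch

// ProviderConfig is a stand-in for the real provider configuration type.
type ProviderConfig struct {
	LocalName string
	Alias     string
}

// findProviderConfig resolves the module's local name for a provider
// FQN first, and only then looks for a matching provider configuration,
// rather than assuming the local name equals the provider type name.
func findProviderConfig(fqn string, localNames map[string]string, configs map[string]ProviderConfig) (ProviderConfig, bool) {
	localName, ok := localNames[fqn]
	if !ok {
		return ProviderConfig{}, false
	}
	cfg, ok := configs[localName]
	return cfg, ok
}
```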
The state is not loaded here with any marks, so we cannot rely on marks
alone for equality comparison. Compare both the state and the
configuration sensitivity before creating the OutputChange.
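The comparison is roughly (names are placeholders; sensitivity comes
from the output configuration and the stored state rather than from
marks):

```go
package sketch

import "github.com/zclconf/go-cty/cty"

// outputChanged compares both the stored value and the recorded
// sensitivity flag, since state values carry no marks to compare.
func outputChanged(before, after cty.Value, beforeSensitive, afterSensitive bool) bool {
	return beforeSensitive != afterSensitive || !before.RawEquals(after)
}
```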
Since root outputs can now use the planned changes, we can directly
insert the correct applyable or destroyable node into the graph during
plan and apply, and it will remove the outputs if they are being
destroyed.
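The node selection reduces to something like this (node types here are
placeholders for the real applyable and destroyable output nodes):

```go
package sketch

// Placeholder node types standing in for the applyable and destroyable
// root output nodes in the graph.
type graphNode interface{ Name() string }

type nodeApplyableOutput struct{ addr string }
type nodeDestroyableOutput struct{ addr string }

func (n *nodeApplyableOutput) Name() string   { return "output." + n.addr }
func (n *nodeDestroyableOutput) Name() string { return "output." + n.addr + " (destroy)" }

// outputNode picks which node to add for a root output: during plan and
// apply for a destroy, the destroyable node removes the output from state.
func outputNode(addr string, destroying bool) graphNode {
	if destroying {
		return &nodeDestroyableOutput{addr: addr}
	}
	return &nodeApplyableOutput{addr: addr}
}
```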
This builds on an experimental feature in the underlying cty library which
allows marking specific attributes of an object type constraint as
optional, which in turn modifies how the cty conversion package handles
missing attributes in a source value: it will silently substitute a null
value of the appropriate type rather than returning an error.
In order to implement the experiment this commit temporarily forks the
HCL typeexpr extension package into a local internal/typeexpr package,
where I've extended the type constraint syntax to allow annotating object
type attributes as being optional using the HCL function call syntax.
If the experiment is successful -- both at the Terraform layer and in
the underlying cty library -- we'll likely send these modifications to
upstream HCL so that other HCL-based languages can potentially benefit
from this new capability.
Because it's experimental, the optional attribute modifier is allowed only
with an explicit opt-in to the module_variable_optional_attrs experiment.
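For illustration, the underlying cty behaviour looks like this (using
cty's ObjectWithOptionalAttrs, which was experimental at the time; the
attribute names are made up):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

func main() {
	// Target type with an optional "tag" attribute.
	want := cty.ObjectWithOptionalAttrs(map[string]cty.Type{
		"name": cty.String,
		"tag":  cty.String,
	}, []string{"tag"})

	// The source value omits "tag" entirely; conversion substitutes a
	// typed null instead of returning a missing-attribute error.
	got, err := convert.Convert(cty.ObjectVal(map[string]cty.Value{
		"name": cty.StringVal("example"),
	}), want)
	if err != nil {
		panic(err)
	}
	fmt.Println(got.GetAttr("tag").IsNull()) // true
}
```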
We record output changes in the plan, but don't currently use them for
anything other than display. If we have a wholly known output value
stored in the plan, we should prefer that for apply in order to ensure
consistency with the planned values. This also avoids cases where
evaluation during apply cannot happen correctly, like when all resources
are being removed or we are executing a destroy.
We also need to record output Delete changes when the plan is for a
destroy operation. Otherwise, without a change, the apply step will
attempt to evaluate the outputs, causing errors or leaving them in the
state with stale values.
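At apply time the preference described above is roughly (a placeholder
function; the evaluate callback stands in for normal output
evaluation):

```go
package sketch

import "github.com/zclconf/go-cty/cty"

// outputValueForApply prefers a wholly known value recorded in the plan
// over re-evaluating the output expression during apply.
func outputValueForApply(hasChange bool, planned cty.Value, evaluate func() cty.Value) cty.Value {
	if hasChange && planned.IsWhollyKnown() {
		return planned
	}
	return evaluate()
}
```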