Prior to Terraform 0.12, these two functions were the only way to construct
literal lists and maps (respectively) in HIL expressions. Terraform 0.12,
by switching to HCL 2, introduced first-class syntax for constructing
tuple and object values, which can then be converted into list and map
values using the `tolist` and `tomap` type conversion functions.
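For example, expressions that previously relied on these functions translate
roughly as follows (in many contexts the explicit conversions are
unnecessary, since tuple and object values convert automatically where a
list or map type is required):

```hcl
# Terraform 0.11 and earlier (HIL):
#   list("a", "b", "c")
#   map("name", "example", "env", "prod")

# Terraform 0.12 and later, using first-class syntax with explicit
# type conversion functions:
locals {
  example_list = tolist(["a", "b", "c"])
  example_map = tomap({
    name = "example"
    env  = "prod"
  })
}
```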
We marked both of these functions as deprecated in the Terraform v0.12
release and have since noted in the docs that they will be removed in a
future Terraform version. The `terraform 0.12upgrade` tool from Terraform
v0.12 also included a rule to automatically rewrite uses of these functions
into the equivalent new syntax.
The main motivation for removing these now is to get this change made
prior to Terraform 1.0, as we'll be doing with various other deprecations.
However, a specific reason for these two functions in particular is that
their existence is what caused us to invent the idea of a "type expression"
as a distinct kind of expression in Terraform v0.12, so removing them now
leaves open the possibility of unifying type expressions with value
expressions in a future release.
We have no specific plans to make that change at present, but one
potential motivation for doing so would be to take another attempt at a
generalized `convert` function which takes a type as one of its arguments.
Our previous attempt to implement such a function was foiled by the fact
that Terraform's expression validator has no way to treat one argument of
a particular function as special, and so it generated incorrect error
messages. We won't necessarily do that, but having the `list` and `map`
functions out of the way leaves the option open.
Core is only using the PrepareProviderConfig call for the validation
part of the method, but we should be re-validating the final config
immediately before Configure.
This change elects not to start using the PreparedConfig here, since
there is no useful reason for it at this point, and it would
introduce a functional difference between Terraform releases that can be
avoided.
The validation for `ignore_changes` was too broad, making it difficult
to ignore changes to an attribute that the user does not want to set.
While the goal of `ignore_changes` is to prevent changes arising from the
configuration alone, we don't intend to break the use-case of ignoring
drift from the provider. Since we cannot easily narrow the validation to
detect only computed attributes at the moment, we can drop this error
altogether for now.
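As a sketch of the use-case being preserved (the resource type and
attribute here are illustrative), a user might want to ignore provider-side
drift in an attribute they never set in configuration:

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"

  lifecycle {
    # "tags" is never set in this configuration; the intent is only to
    # ignore drift written by an external process, which the overly
    # broad validation previously made difficult.
    ignore_changes = [tags]
  }
}
```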
Our diagnostics model allows for optionally annotating an error or warning
with information about the expression and eval context it was generated
from, which the diagnostic renderer for the UI will then use to give the
user some additional hints about what values may have contributed to the
error.
Previously, however, we didn't have those annotations on the results of
evaluating `for_each` expressions, because in that case we were using a
helper function to evaluate the expression in one shot and so never had a
reference to the EvalContext to include in the diagnostic values.
Now, at the expense of handling the evaluation at a slightly lower
level of abstraction, we'll annotate all of the `for_each` error messages
with source expression information. This is valuable because we often see
users confused as to how their complex `for_each` expressions ended up
being invalid, and hopefully giving some information about what the inputs
were will allow more users to self-solve.
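For example, a common source of confusion is a `for_each` argument that
evaluates to a list rather than a map or set of strings; with the
annotations in place, the diagnostic can hint at what the input evaluated
to (the names here are illustrative):

```hcl
variable "vpc_id" {
  type = string
}

variable "subnet_cidrs" {
  type = list(string)
}

resource "aws_subnet" "example" {
  # for_each requires a map or a set of strings, so a list fails here,
  # and the error can now report what var.subnet_cidrs evaluated to.
  # (Wrapping the value in toset() would make this valid.)
  for_each = var.subnet_cidrs

  vpc_id     = var.vpc_id
  cidr_block = each.value
}
```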
This was mostly unused now that we no longer need to interrupt a
series of eval node executions.
The exception was the stopHook, which is still used to halt execution
when there's an interrupt. Since an interrupted execution should not
complete successfully, we use a normal opaque error to halt everything
and return it to the UI.
We can work on coalescing or hiding these if necessary in a separate PR.
Some tests could not handle reading orphaned resources. It also turns
out the ReadResource mock never returned the correct state in the
default case at all.
This forces orphaned resources to be re-read during planning, removing
them from the state if they no longer exist.
This needs to be done for a bare `refresh` execution, since Terraform
should remove from the state any instances that no longer exist and are
not in the configuration. They should also be removed from state so that
no Delete change is planned, as not all providers will gracefully handle
a delete operation on a resource that does not exist.
If a change exists for a resource instance, the After value is returned;
however, this value will not have its marks, as it has been encoded.
This change marks the return value so that the marks follow that resource
reference.
Because `ignore_changes` configuration can refer to resource arguments
which are assigned sensitive values, we need to unmark the resource
object before processing.
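A sketch of a configuration that exercises this path (the resource type
is hypothetical):

```hcl
variable "admin_password" {
  type      = string
  sensitive = true
}

resource "example_database" "main" {
  # The sensitive (marked) value flows into the resource object, so
  # ignore_changes processing must unmark the object before comparing.
  admin_password = var.admin_password

  lifecycle {
    ignore_changes = [admin_password]
  }
}
```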
Use a single log writer instance for all std library logging.
Set up the std log writer in the logging package, and remove boilerplate
from test packages.
If the provisioner configuration includes sensitive values, it's a
reasonable assumption that we should suppress its log output. Obvious
examples where this makes sense include echoing a secret to a file using
`local-exec` or `remote-exec`.
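A sketch of the sort of configuration this protects (the variable name is
illustrative):

```hcl
variable "db_password" {
  type      = string
  sensitive = true
}

resource "null_resource" "write_secret" {
  provisioner "local-exec" {
    # Because the configuration interpolates a sensitive value, all of
    # this provisioner's log output is suppressed rather than risking
    # the secret appearing in the logs.
    command = "echo '${var.db_password}' > /tmp/db_password"
  }
}
```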
This commit adds tests for both logging output from provisioners with
non-sensitive configuration, and suppressing logs for provisioners with
sensitive values in configuration.
Note that we do not suppress logs if connection info contains sensitive
information, as provisioners should not be logging connection
information under any circumstances.
If provisioner configuration or connection info includes sensitive
values, we need to unmark them before calling the provisioner. Failing
to do so causes serialization to error.
Unlike resources, we do not need to capture marked paths here, so we
just discard the marks.
A few tests were inadvertently renamed, causing them to be skipped.
For some reason this is not caught by the `vet` pass that happens during
normal testing.
The ProviderConfigTransformer was using only the provider FQN to attach
a provider configuration to the provider, but what it needs to do is
find the local name for the given provider FQN (which may not match the
type name) and use that when searching for matching provider
configuration.
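For example, a configuration can declare a local name that differs from
the provider's type name, and the provider configuration block must be
found by that local name:

```hcl
terraform {
  required_providers {
    # The local name "goog" differs from the type name "google";
    # the FQN is registry.terraform.io/hashicorp/google.
    goog = {
      source = "hashicorp/google"
    }
  }
}

# This block must be matched by the local name "goog", not by the
# provider type name.
provider "goog" {
  project = "example-project"
}
```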
Fixes #26556
This will also be backported to the v0.13 branch.
The state is not loaded here with any marks, so we cannot rely on marks
alone for equality comparison. Compare both the state and the
configuration sensitivity before creating the OutputChange.
Since root outputs can now use the planned changes, we can directly
insert the correct applyable or destroyable node into the graph during
plan and apply, and it will remove the outputs if they are being
destroyed.
This builds on an experimental feature in the underlying cty library which
allows marking specific attributes of an object type constraint as
optional, which in turn modifies how the cty conversion package handles
missing attributes in a source value: it will silently substitute a null
value of the appropriate type rather than returning an error.
In order to implement the experiment this commit temporarily forks the
HCL typeexpr extension package into a local internal/typeexpr package,
where I've extended the type constraint syntax to allow annotating object
type attributes as being optional using the HCL function call syntax.
If the experiment is successful -- both at the Terraform layer and in
the underlying cty library -- we'll likely send these modifications to
upstream HCL so that other HCL-based languages can potentially benefit
from this new capability.
Because it's experimental, the optional attribute modifier is allowed only
with an explicit opt-in to the module_variable_optional_attrs experiment.
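A minimal example of the opt-in and the new syntax:

```hcl
terraform {
  experiments = [module_variable_optional_attrs]
}

variable "endpoint" {
  type = object({
    hostname = string
    # Callers may omit "port"; conversion substitutes a null number
    # rather than returning a missing-attribute error.
    port = optional(number)
  })
}
```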
We record output changes in the plan, but don't currently use them for
anything other than display. If we have a wholly known output value
stored in the plan, we should prefer that for apply in order to ensure
consistency with the planned values. This also avoids cases where
evaluation during apply cannot happen correctly, like when all resources
are being removed or we are executing a destroy.
We also need to record output Delete changes when the plan is for a
destroy operation. Otherwise, without a change, the apply step will
attempt to evaluate the outputs, causing errors or leaving them in the
state with stale values.
Replace the old mock provider test functions with modern equivalents.
There were a lot of inconsistencies in how they were used, so we needed
to update a lot of tests to match the correct behavior.
If sensitivity changes, we have an update plan, but we should avoid
communicating with the provider during apply, as the values are equal
(and the plan would otherwise be a NoOp).
* remove unused code
I've removed the provider-specific code under registry and an unused nil
backend, and replaced a call to a helper from backend/oss (the other
callers of that func are provisioners scheduled to be deprecated).
I also removed the Dockerfile, as our build process uses a different
file.
Finally I removed the examples directory, which had outdated examples
and links. There are better, actively maintained examples available.
* command: remove various unused bits
* test wasn't running
* backend: remove unused err
When the sensitivity has changed, we want to write to state and also
display to the user that the sensitivity has changed, so make this an
update action. Also clean up some assignments; since Contains traverses
the value anyway, this is a little tighter.
Audit the graph builder to remove unused transformers (planning does
not need to close provisioners, for example) and re-order them. While
many of the transformations are commutative, using the same order
ensures the same behavior between operations when the commutative
property is lost or changed.
Outputs were not being evaluated during import, because they were not
added to the walk filter.
Remove any unnecessary walk filters from all the Execute nodes.
We do not currently need to evaluate module variables in order to
import a resource.
This will likely change once we can select the import provider
automatically, and have a more dynamic method for dispatching providers
to module instances. In the meantime we can avoid the evaluation for now
and prevent a certain class of import errors.
The change was passed into the provisioner node because the normal
NodeApplyableResourceInstance overwrites the prior state with the new
state. However, this doesn't matter here, because the resource destroy
node does not do this. Also, even if the updated state were to be used
for some reason with a create provisioner, it would be the correct state
to use at that point.
This new-ish package ended up under "helper" during the 0.12 cycle for
want of some other place to put it, but in retrospect that was an odd
choice because the "helper/" tree is otherwise a bunch of legacy code from
when the SDK lived in this repository.
Here we move it over into the "internal" directory just to distance it
from the guidance of not using "helper/" packages in new projects;
didyoumean is a package we actively use as part of error message hints.
Remove marks for object compatibility tests to allow apply to continue.
Add a block to the test provider for use in testing, and extend the
sensitivity apply test to include a block.
The test for this behavior did not work, because the old mock diff
function does not work correctly. Write a PlanResourceChange function to
return a correct plan.
Allow the evaluation of resources pending deletion only during a full
destroy. With this change we can ensure deposed instances are not
evaluated under normal circumstances, but can be referenced when needed.
This also allows us to remove the fixup transformer that added
connections so temporary values would evaluate in the correct order when
referencing destroy nodes.
In the majority of cases, we do not want to evaluate resources that are
pending deletion, since configuration references can only refer to
resources that are intended to be managed by the configuration. An
exception to that rule is when Terraform is performing a full `destroy`
operation, and providers need to evaluate existing resources for their
configuration.
The loading of the initial instance state was inadvertently skipped when
-refresh=false, causing all resources to appear to be missing from the
state during plan.
If a data source refers to an indexed managed resource, we need to
re-target that reference to the containing resource for planning. Since
data sources use the same mechanism as depends_on for managed resource
references, they can only refer to resources as a whole.
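For example (the names here are illustrative), the following reference is
treated during planning as a dependency on aws_instance.cluster as a
whole, rather than only on index 0:

```hcl
resource "aws_instance" "cluster" {
  count         = 3
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"
}

data "aws_instance" "primary" {
  # The expression names a single instance, but data source dependency
  # tracking uses the same mechanism as depends_on, which operates on
  # whole resources, so planning re-targets this to aws_instance.cluster.
  instance_id = aws_instance.cluster[0].id
}
```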
There are situations where a user may want to keep or exclude a map key
using `ignore_changes` even though the key is not listed directly in the
configuration. This didn't work previously because the transformation
always started from the configuration, and would never encounter a key
that was only present in the prior value.
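For example (the key is illustrative), this now works even though the key
never appears in the configured tags value:

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"

  tags = {
    Name = "example"
  }

  lifecycle {
    # This key is written by an external controller and exists only in
    # the prior state; previously the transformation started from the
    # configuration value and so never encountered it.
    ignore_changes = [tags["kubernetes.io/cluster/main"]]
  }
}
```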
* Split node_resource_abstract.go into two files, putting
NodeAbstractResourceInstance methods in their own file - it was getting
large enough to be tricky for (my) human eyeballs.
* un-exported the functions that were created as part of the EvalTree()
refactor; they did not need to be public.