A few tests were inadvertently renamed, causing them to be skipped.
For some reason this is not caught by the `vet` pass that happens during
normal testing.
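One plausible way for this to happen (hypothetical names below): losing the
exported `Test` prefix turns a test into an ordinary unexported function,
which `go test` never discovers and `vet` has no reason to flag:

    package terraform

    import "testing"

    // TestPlanMarkedValues is discovered and run by `go test`.
    func TestPlanMarkedValues(t *testing.T) {}

    // testPlanMarkedValues has lost its exported Test prefix, so `go test`
    // silently skips it, and to `vet` it is just an unexported function.
    func testPlanMarkedValues(t *testing.T) {}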
The ProviderConfigTransformer was using only the provider FQN to attach
a provider configuration to the provider, but it needs to find the
local name for the given provider FQN (which may not match the type
name) and use that when searching for a matching provider
configuration.
Fixes #26556
This will also be backported to the v0.13 branch.
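A minimal sketch of the lookup, assuming the configs package's Module type
(the surrounding ProviderConfigTransformer wiring is elided, and
providerConfigKey is an illustrative name):

    package terraform

    import (
        "github.com/hashicorp/terraform/addrs"
        "github.com/hashicorp/terraform/configs"
    )

    // providerConfigKey resolves the module-local name for a provider FQN
    // and builds the key used when searching for a matching provider
    // configuration block.
    func providerConfigKey(mod *configs.Module, fqn addrs.Provider, alias string) string {
        localName := mod.LocalNameForProvider(fqn) // may differ from the type name
        if alias != "" {
            return localName + "." + alias
        }
        return localName
    }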
The state is not loaded here with any marks, so we cannot rely on marks
alone for equality comparison. Compare both the state and the
configuration sensitivity before creating the OutputChange.
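A sketch of that comparison using cty marks (function and parameter names
are illustrative, not the actual implementation):

    package terraform

    import "github.com/zclconf/go-cty/cty"

    // outputChanged compares a state value (loaded without marks, with
    // sensitivity recorded separately) against a freshly evaluated value
    // that may carry marks.
    func outputChanged(stateVal cty.Value, stateSensitive bool, configVal cty.Value) bool {
        unmarked, marks := configVal.UnmarkDeep()
        configSensitive := len(marks) > 0
        return !stateVal.RawEquals(unmarked) || stateSensitive != configSensitive
    }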
Since root outputs can now use the planned changes, we can directly
insert the correct applyable or destroyable node into the graph during
plan and apply, and it will remove the outputs if they are being
destroyed.
This builds on an experimental feature in the underlying cty library which
allows marking specific attributes of an object type constraint as
optional, which in turn modifies how the cty conversion package handles
missing attributes in a source value: it will silently substitute a null
value of the appropriate type rather than returning an error.
In order to implement the experiment this commit temporarily forks the
HCL typeexpr extension package into a local internal/typeexpr package,
where I've extended the type constraint syntax to allow annotating object
type attributes as being optional using the HCL function call syntax.
If the experiment is successful -- both at the Terraform layer and in
the underlying cty library -- we'll likely send these modifications to
upstream HCL so that other HCL-based languages can potentially benefit
from this new capability.
Because it's experimental, the optional attribute modifier is allowed only
with an explicit opt-in to the module_variable_optional_attrs experiment.
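For illustration, this is roughly how the underlying cty behavior works
when used directly (outside Terraform):

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/convert"
    )

    func main() {
        // An object type constraint with "b" marked as optional.
        ty := cty.ObjectWithOptionalAttrs(map[string]cty.Type{
            "a": cty.String,
            "b": cty.String,
        }, []string{"b"})

        // A source value that omits "b" entirely.
        src := cty.ObjectVal(map[string]cty.Value{
            "a": cty.StringVal("hello"),
        })

        // Conversion succeeds, silently substituting a typed null for "b"
        // instead of returning a missing-attribute error.
        got, err := convert.Convert(src, ty)
        if err != nil {
            panic(err)
        }
        fmt.Println(got.GetAttr("b").IsNull()) // true
    }

In module source code under the experiment, the equivalent constraint is
written with the function call syntax mentioned above, e.g.
`object({ a = string, b = optional(string) })`.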
We record output changes in the plan, but don't currently use them for
anything other than display. If we have a wholly known output value
stored in the plan, we should prefer that for apply in order to ensure
consistency with the planned values. This also avoids cases where
evaluation during apply cannot happen correctly, like when all resources
are being removed or we are executing a destroy.
We also need to record output Delete changes when the plan is for a
destroy operation. Otherwise, without a change, the apply step will
attempt to evaluate the outputs, which can cause errors or leave them
in the state with stale values.
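A hedged sketch of that preference (names illustrative):

    package terraform

    import "github.com/zclconf/go-cty/cty"

    // outputValueForApply prefers a wholly known value recorded in the
    // plan, falling back to re-evaluation only when no planned value is
    // available or it still contains unknowns.
    func outputValueForApply(planned cty.Value, hasPlanned bool, evaluated cty.Value) cty.Value {
        if hasPlanned && planned.IsWhollyKnown() {
            return planned
        }
        return evaluated
    }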
Replace the old mock provider test functions with modern equivalents.
There were a lot of inconsistencies in how they were used, so we needed
to update a lot of tests to match the correct behavior.
If only the sensitivity changes, we have an update plan, but we should
avoid communicating with the provider during the apply, as the
underlying values are equal (and the plan would otherwise be a NoOp).
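Roughly, the decision looks like this (the action type stands in for
Terraform's plans.Action; names are illustrative):

    package terraform

    import (
        "reflect"

        "github.com/zclconf/go-cty/cty"
    )

    type action string

    const (
        noOp   action = "no-op"
        update action = "update"
    )

    // sensitivityAction plans an update when only the marks differ, so the
    // new sensitivity reaches state and the UI, while signaling the caller
    // to skip the provider call because the underlying values are equal.
    func sensitivityAction(before, after cty.Value) (act action, skipProvider bool) {
        b, bMarks := before.UnmarkDeep()
        a, aMarks := after.UnmarkDeep()
        switch {
        case !b.RawEquals(a):
            return update, false // real value change; the provider applies it
        case !reflect.DeepEqual(bMarks, aMarks):
            return update, true // sensitivity-only change
        default:
            return noOp, true
        }
    }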
* remove unused code
I've removed the provider-specific code under registry and an unused
nil backend, and replaced a call into helper from backend/oss (the
other callers of that func are provisioners scheduled to be
deprecated).
I also removed the Dockerfile, as our build process uses a different
file.
Finally I removed the examples directory, which had outdated examples
and links. There are better, actively maintained examples available.
* command: remove various unused bits
* test wasn't running
* backend: remove unused err
When the sensitivity has changed, we want to write to state and also
display to the user that the sensitivity has changed, so make this an
update action. Also clean up some assignments: since Contains traverses
the value anyway, this is a little tighter.
Audit the graph builder to remove unused transformers (planning does
not need to close provisioners, for example), and re-order them. While
many of the transformations are commutative, using the same order
ensures the same behavior between operations when the commutative
property is lost or changed.
Outputs were not being evaluated during import, because they were not
added to the walk filter.
Remove any unnecessary walk filters from all the Execute nodes.
We do not currently need to evaluate module variables in order to
import a resource.
This will likely change once we can select the import provider
automatically, and have a more dynamic method for dispatching providers
to module instances. In the meantime we can avoid the evaluation for now
and prevent a certain class of import errors.
The change was passed into the provisioner node because the normal
NodeApplyableResourceInstance overwrites the prior state with the new
state. This however doesn't matter here, because the resource destroy
node does not do this. Also, even if the updated state were to be used
for some reason with a create provisioner, it would be the correct state
to use at that point.
This new-ish package ended up under "helper" during the 0.12 cycle for
want of some other place to put it, but in retrospect that was an odd
choice because the "helper/" tree is otherwise a bunch of legacy code from
when the SDK lived in this repository.
Here we move it over into the "internal" directory just to distance it
from the guidance of not using "helper/" packages in new projects;
didyoumean is a package we actively use as part of error message hints.
Remove marks for object compatibility tests to allow apply to continue.
Adds a block to the test provider to use in testing, and extends the
sensitivity apply test to include a block.
The test for this behavior did not work, because the old mock diff
function does not work correctly. Write a PlanResourceChange function to
return a correct plan.
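A minimal modern plan function in that spirit, using the providers
package's request/response types (assigning it to a mock provider's
PlanResourceChangeFn is elided):

    package terraform

    import "github.com/hashicorp/terraform/providers"

    // planResourceChange echoes the proposed new state back as the planned
    // state, rather than emulating the old, incorrect diff logic.
    func planResourceChange(req providers.PlanResourceChangeRequest) providers.PlanResourceChangeResponse {
        return providers.PlanResourceChangeResponse{
            PlannedState: req.ProposedNewState,
        }
    }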
Allow the evaluation of resources pending deletion only during a full
destroy. With this change we can ensure deposed instances are not
evaluated under normal circumstances, but can be referenced when
needed.
This also allows us to remove the fixup transformer that added
connections so temporary values would evaluate in the correct order when
referencing destroy nodes.
In the majority of cases, we do not want to evaluate resources that
are pending deletion, since configuration references can only refer to
resources that are intended to be managed by the configuration. An
exception to that rule is when Terraform is performing a full `destroy`
operation, and providers need to evaluate existing resources for their
configuration.
The loading of the initial instance state was inadvertently skipped when
-refresh=false, causing all resources to appear to be missing from the
state during plan.
If a data source refers to an indexed managed resource, we need to
re-target that reference to the containing resource for planning. Since
data sources use the same mechanism as depends_on for managed resource
references, they can only refer to resources as a whole.
There are situations where a user may want to keep or exclude a map
key using `ignore_changes` even though that key is not listed directly
in the configuration. This didn't work previously because the
transformation
always started off with the configuration, and would never encounter a
key if it was only present in the prior value.
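A sketch of the key-union idea for cty maps (not the actual
transformation code):

    package terraform

    import "github.com/zclconf/go-cty/cty"

    // mapKeys collects keys from both the configuration and the prior
    // value, so ignore_changes can match a key that exists only in state.
    func mapKeys(config, prior cty.Value) map[string]struct{} {
        keys := make(map[string]struct{})
        for _, v := range []cty.Value{config, prior} {
            if v.IsNull() || !v.IsKnown() {
                continue
            }
            for it := v.ElementIterator(); it.Next(); {
                k, _ := it.Element()
                keys[k.AsString()] = struct{}{}
            }
        }
        return keys
    }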
* Split node_resource_abstract.go into two files, putting
NodeAbstractResourceInstance methods in their own file - it was getting
large enough to be tricky for (my) human eyeballs.
* un-exported the functions that were created as part of the EvalTree()
refactor; they did not need to be public.
The original EvalReadState node is used only by `NodeAbstractResource`s,
so I've created a new method on NodeAbstractResource which does the same
thing as EvalReadState. When the EvalNode refactor project is complete,
EvalReadState will be removed entirely.
In order to ensure all the starting values agree, and since
ignore_changes is only meant to apply to the configuration, we need to
process the ignore_changes values on the config itself rather than the
proposed value.
This ensures the proposed new value and the config value seen by
providers are coordinated, and still allows us to use the rules laid out
by objchange.AssertPlanValid to compare the result to the configuration.
ignore_changes should only exclude changes to the resource arguments,
and not alter the returned value from PlanResourceChange. This would
affect very few providers, since most current providers don't actively
create their plan, and those that do should be generating computed
values here rather than modifying existing ones.
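Sketched, the ordering looks like this; the ignoreChanges parameter is a
hypothetical stand-in for the real processing, while objchange.ProposedNew
is the actual helper from the plans/objchange package:

    package terraform

    import (
        "github.com/hashicorp/terraform/configs/configschema"
        "github.com/hashicorp/terraform/plans/objchange"
        "github.com/zclconf/go-cty/cty"
    )

    // proposedNewWithIgnores applies ignore_changes to the config value
    // first and then derives the proposed new value, keeping the config
    // seen by the provider and by objchange.AssertPlanValid coordinated.
    func proposedNewWithIgnores(schema *configschema.Block, prior, config cty.Value,
        ignoreChanges func(prior, config cty.Value) cty.Value) cty.Value {
        config = ignoreChanges(prior, config) // restore ignored attrs from prior
        return objchange.ProposedNew(schema, prior, config)
    }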
Sensitive values may not be used in outputs which are not also marked
as sensitive. This includes values nested within complex structures.
Note that sensitive values are unmarked before writing to state. This
means that sensitive values used in module outputs will have the
sensitive mark removed. At the moment, we have not implemented
sensitivity propagation from module outputs back to value marks.
This commit also reworks the tests for NodeApplyableOutput to cover
more existing behaviour, as well as this change.
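A sketch of the validation rule using cty marks (illustrative names):

    package terraform

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    // checkOutputSensitivity rejects a value carrying sensitive marks
    // anywhere inside it unless the output is declared sensitive; callers
    // then unmark the value before it is written to state.
    func checkOutputSensitivity(name string, val cty.Value, declaredSensitive bool) error {
        if val.ContainsMarked() && !declaredSensitive {
            return fmt.Errorf("output %q has a sensitive value but is not marked sensitive", name)
        }
        return nil
    }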
A data source referencing another data source through depends_on should
not be forced to defer until apply. Data sources have no side effects,
so nothing should need to be applied. If the dependency has a
planned change due to a managed resource, the original data source will
also encounter that further down the list of dependencies.
This prevents a data source being read during plan for any reason from
causing other data sources to be deferred until apply. It does not
change the behavior noticeably in 0.14, but because 0.13 still had
separate refresh and plan phases which could read the data source, the
deferral could cause many things downstream to become unexpectedly
unknown until apply.
When a sensitive variable has a complex type, any traversal of the
variable should still result in a sensitive value. This test uses a
sensitive `map(string)` and verifies that both plan and state output
include the appropriate sensitive marks for the resource attribute.
NodePlannableResource and NodeApplyableResource EvalTree()s have been
replaced with Execute() nodes and straight-through code. Both called
EvalWriteResourceState and were the only functions to use it, so I chose
to replace EvalWriteResourceState entirely with straight-through code
(by copying the contents into the two locations).
* Add creation test and simplify in-place test
* Add deletion test
* Start adding marking from state
Start storing paths that should be marked
when pulled out of state. Implements deep
copy for attr paths. This commit also includes some
comment noise from investigations, and fixes the diff test.
* Fix apply stripping marks
* Expand diff tests
* Basic apply test
* Update comments on equality checks to clarify current understanding
* Add JSON serialization for sensitive paths
We need to serialize a slice of cty.Path values to be used to re-mark
the sensitive values of a resource instance when loading the state file.
Paths consist of a list of steps, each of which may be either getting an
attribute value by name, or indexing into a collection by string or
number.
To serialize these without building a complex parser for a compact
string form, we render a nested array of small objects, like so:
    [
      [
        { "type": "get_attr", "value": "foo" },
        { "type": "index", "value": { "type": "number", "value": 2 } }
      ]
    ]
The above example is equivalent to a path `foo[2]`.
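An encoder producing that shape might look like the following sketch
(type and function names are illustrative; the real encoder lives with
the state serialization code):

    package main

    import (
        "encoding/json"
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    type pathStep struct {
        Type  string      `json:"type"`
        Value interface{} `json:"value"`
    }

    // marshalPath renders a cty.Path as the nested-array form shown above.
    func marshalPath(path cty.Path) ([]pathStep, error) {
        steps := make([]pathStep, 0, len(path))
        for _, raw := range path {
            switch s := raw.(type) {
            case cty.GetAttrStep:
                steps = append(steps, pathStep{Type: "get_attr", Value: s.Name})
            case cty.IndexStep:
                switch {
                case s.Key.Type().Equals(cty.String):
                    steps = append(steps, pathStep{Type: "index",
                        Value: pathStep{Type: "string", Value: s.Key.AsString()}})
                case s.Key.Type().Equals(cty.Number):
                    f, _ := s.Key.AsBigFloat().Float64()
                    steps = append(steps, pathStep{Type: "index",
                        Value: pathStep{Type: "number", Value: f}})
                default:
                    return nil, fmt.Errorf("unsupported index key type %#v", s.Key.Type())
                }
            default:
                return nil, fmt.Errorf("unsupported path step %#v", raw)
            }
        }
        return steps, nil
    }

    func main() {
        p := cty.GetAttrPath("foo").Index(cty.NumberIntVal(2)) // the path foo[2]
        steps, err := marshalPath(p)
        if err != nil {
            panic(err)
        }
        out, _ := json.MarshalIndent([][]pathStep{steps}, "", "  ")
        fmt.Println(string(out))
    }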
* Format diffs with map types
Comparisons need unmarked values to operate on,
so create unmarked values for those operations. Additionally,
change diff to cover map types
* Remove debugging printing
* Fix bug with marking non-sensitive values
When pulling a sensitive value from state,
we were previously using those marks to remark
the planned new value, but that new value
might *not* be sensitive, so let's not do that
* Fix apply test
Apply was not passing the second state
through to the third pass at apply
* Consistency in checking for length of paths vs inspecting into value
* In apply, don't mark with before paths
* AttrPaths test coverage for DeepCopy
* Revert format changes
Reverts format changes in format/diff for this
branch so those changes can be discussed on a separate PR
* Refactor name of AttrPaths to AttrSensitivePaths
* Rename AttributePaths/attributePaths for naming consistency
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
In order to save any changes to lifecycle options, we need to record
those changes during refresh, otherwise they would only be updated when
there is a change in the resource to be applied.
This evaluation was required when refresh ran in a separate walk and
managed resources were only partly handled by configuration. Now that we
have the correct dependency information available when refreshing
configured resources, we can update their state accordingly. Since
orphaned resources are not refreshed, they can retain their stored
dependencies for correct ordering.
This also prevents users from introducing cycles with nodes they can't
"see", since only orphaned nodes will retain their stored dependencies,
and the remaining nodes will be updated according to the configuration.
This adds a test for GetInputVariable, and includes
a variable with a "sensitive" attribute in the configuration, to
verify that the value is marked as sensitive.
Now that we don't have to handle data sources that may or may not have
been updated during a refresh phase, and the plan phase can save the
data source to the refreshed state, we can remove a lot of the logic
involved in detecting whether the data source needs to be planned or
not.
When there is no separate refresh phase, we always must attempt to read
the data source during planning, and the only conditions are based on
having a known configuration, and not having any dependencies on which
we're waiting. If the data source is read during plan, we can now save
that directly to the refreshed state, and don't need to smuggle the
value as a change to be saved during apply.
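The plan-time read condition, sketched with illustrative names:

    package terraform

    import "github.com/zclconf/go-cty/cty"

    // shouldReadDuringPlan reports whether a data source can be read at
    // plan time: its configuration must be wholly known, and none of its
    // dependencies may have pending changes we are still waiting on.
    func shouldReadDuringPlan(configVal cty.Value, dependsOnPendingChanges bool) bool {
        return configVal.IsWhollyKnown() && !dependsOnPendingChanges
    }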
Now that the planning process generates a refreshed state, and can
handle changes between the configuration and the state which the refresh
process cannot, we can use the plan for the refresh command.