The first step toward using only the required provider nodes in a graph
is to be able to add them explicitly from the configuration.
The MissingProviderTransformer was previously responsible for adding all
providers. Now it only adds those that are missing from the config.
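As a rough sketch of the "only add what's missing" behavior (the
providerVertex and addMissingProviders names below are invented for
illustration, not the transformer's real internals):

```go
package sketch

// providerVertex stands in for a provider node in the graph.
type providerVertex struct {
	Name string
}

// addMissingProviders adds a provider vertex only when the configuration
// did not already add one for that provider.
func addMissingProviders(vertices []interface{}, required []string) []interface{} {
	existing := map[string]bool{}
	for _, v := range vertices {
		if p, ok := v.(*providerVertex); ok {
			existing[p.Name] = true
		}
	}
	for _, name := range required {
		if !existing[name] {
			vertices = append(vertices, &providerVertex{Name: name})
		}
	}
	return vertices
}
```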
More cases were needed to support the value comparison exceptions that
the rest of Terraform expects to work (this fixes tests in various
places). The checks were also moved into a switch block to make them a
little more compact.
A diff now needs to pass basic value checks to be considered the
"same". Several provisions have been added to ensure that the list, set,
and RequiresNew behaviours that have needed some exceptions in the past
are preserved in this new logic.
This ensures that we are checking for value equality as much as
possible, which will be more important when we transition to the
possibility of diffs being sourced from external data.
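A minimal sketch of the comparison, assuming a hypothetical
attributesSame helper and unknownValue marker rather than the real diff
internals:

```go
package sketch

import "strings"

// unknownValue is a stand-in marker for a "computed" value.
const unknownValue = "<computed>"

// attributesSame: plain value equality plus the list/set-count and
// RequiresNew exceptions, written as a switch for compactness.
func attributesSame(key, oldV, newV string, requiresNew bool) bool {
	switch {
	case oldV == newV:
		return true
	case strings.HasSuffix(key, ".#") && (oldV == unknownValue || newV == unknownValue):
		// set and list counts may legitimately be unknown on one side
		return true
	case requiresNew && newV == "":
		// forced-new attributes may lose their planned value entirely
		return true
	default:
		return false
	}
}
```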
While merging the cached Input configs in the correct order prevents
overwriting existing config values, it doesn't prevent an earlier
provider from inserting unwanted values into later provider
configurations.
Diff the key-values returned by Input with the pre-input config, and
store only the "answers" that were added during the Input call.
Always call Input, even if we already have some values, since a
previously cached config may not be complete.
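The diff-and-store step amounts to something like this (newAnswers is a
hypothetical helper, not the actual code):

```go
package sketch

// newAnswers returns only the keys that Input added on top of the
// configuration it was given, so cached answers from one provider do
// not leak into another provider's config.
func newAnswers(pre, post map[string]interface{}) map[string]interface{} {
	added := map[string]interface{}{}
	for k, v := range post {
		if _, ok := pre[k]; !ok {
			added[k] = v
		}
	}
	return added
}
```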
Previously when looking up cached provider input, the Input was taken in
its entirety, and only provider configuration fields that weren't in the
saved input were added. This would cause providers in modules to use the
entire configuration from parent modules, even if they themselves had
entirely different configs.
Note: this is only marginally better than the old behavior. It may be
slightly more correct, but still can't account for the user's intent, and
may be adding configured values from one provider into another.
Change the PathCacheKey to just join the path on a non-path character
(|), which makes for easier debugging.
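Sketched out, the simplified key is just a join (assuming the function
keeps its PathCacheKey name):

```go
package sketch

import "strings"

// PathCacheKey joins the module path with a character that cannot appear
// in a path element, so the key is unique and easy to read in debug output.
func PathCacheKey(path []string) string {
	return strings.Join(path, "|")
}
```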
Use the configured providers directly, rather than looking for inherited
provider configuration during graph evaluation.
First remove the provider config cache, and the associated
SetProviderConfig and ParentProviderConfig methods on the eval context.
Every provider must be configured, so there's no need to look for
configuration from other provider instances.
The config.ProviderConfig struct now has a Scope field which stores the
proper path for the interpolation scope. To get this metadata to the
interpolator, we add an EvalInterpolateProvider node which can carry the
ProviderConfig, and an InterpolateProvider context method to carry the
ProviderConfig.Scope into the InterpolationScope.
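Roughly, the new pieces fit together as below; the field and method
shapes are illustrative, not the exact signatures:

```go
package sketch

// ResourceConfig stands in for the interpolated provider configuration.
type ResourceConfig struct{}

// ProviderConfig carries the raw provider block plus the module path to
// use as the interpolation scope.
type ProviderConfig struct {
	Name  string
	Scope []string
}

// EvalContext is reduced here to the one method this sketch needs.
type EvalContext interface {
	InterpolateProvider(pc *ProviderConfig) (*ResourceConfig, error)
}

// EvalInterpolateProvider hands the whole ProviderConfig to the context
// so that ProviderConfig.Scope reaches the interpolator.
type EvalInterpolateProvider struct {
	Config *ProviderConfig
	Output **ResourceConfig
}

func (n *EvalInterpolateProvider) Eval(ctx EvalContext) (interface{}, error) {
	rc, err := ctx.InterpolateProvider(n.Config)
	if err != nil {
		return nil, err
	}
	if n.Output != nil {
		*n.Output = rc
	}
	return nil, nil
}
```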
Some of the tests could be adjusted to account for the new inheritance
behavior, and some were simply no longer valid and will be removed.
The remaining tests have questions on how they should work in practice.
This mostly concerns orphaned modules where there is no longer a way to
obtain a provider. In some cases we may require that a minimal provider
config be present to handle the destroy process, but we need further
testing.
All disabled code was commented out in this commit to record any
additional comments. The following commit will be a cleanup pass.
Update all references to the version values to use the new package.
The VersionString function was left in the terraform package
specifically for the aws provider, which is vendored. We can remove that
last call once the provider is updated.
In order to parse provider, resource and data source configuration from
HCL2 config files, we need to know the relevant configuration schema.
This new method allows Terraform Core to request these from a provider.
This is a breaking change to this interface, so all of its implementers
in this package are updated too. This includes concrete implementations
of the new method in helper/schema that use the schema conversion code
added in an earlier commit to produce a configschema.Block automatically.
Plugins compiled against prior versions of helper/schema will not have
support for this method, and so calls to them will fail. Callers of
this new method will therefore need to sniff for support using the
SchemaAvailable field added to both ResourceType and DataSource.
This careful handling will need to persist until next time we increment
the plugin protocol version, at which point we can make the breaking
change of requiring this information to be available.
DestroyValueReferenceTransformer is used during destroy to reverse the
edges for output and local values. Because destruction is going to
remove these from the state, nodes that depend on their value need to be
visited first.
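Illustratively (edge and reverseValueEdges are invented names, not the
transformer's real internals), the reversal is just:

```go
package sketch

type edge struct{ from, to string }

// reverseValueEdges flips any edge that points into an output or local
// value node, so dependents of the value are visited before the value
// itself is removed from the state.
func reverseValueEdges(edges []edge, isValueNode func(string) bool) []edge {
	out := make([]edge, 0, len(edges))
	for _, e := range edges {
		if isValueNode(e.to) {
			out = append(out, edge{from: e.to, to: e.from})
			continue
		}
		out = append(out, e)
	}
	return out
}
```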
When working on an existing plan, the context always used walkApply,
even if the plan was for a full destroy. Mark in the plan if it was
created for a destroy, and transfer that to the context when reading
the plan.
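Conceptually (the types below are placeholders for the real Plan and
context options):

```go
package sketch

// Plan records whether it was created for a full destroy.
type Plan struct {
	Destroy bool
}

// contextOptions is a stand-in for the options used to build a context
// from an existing plan.
type contextOptions struct {
	Destroy bool
}

// optsFromPlan transfers the destroy marker so the context walks with
// the right operation instead of always using walkApply.
func optsFromPlan(p *Plan) contextOptions {
	return contextOptions{Destroy: p.Destroy}
}
```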
A Targeted graph may include outputs that were transitively included,
but if they are missing any dependencies they will fail to interpolate
later on.
Prune any outputs in the TargetsTransformer that have missing
dependencies and are not depended on by any resource. This will maintain
the existing behavior of outputs failing silently in most cases, but
allow errors to be surfaced where the output value is required.
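A sketch of the pruning rule, using invented node shapes:

```go
package sketch

// outputNode is a stand-in for a targeted output vertex.
type outputNode struct {
	Name       string
	Deps       []string // what the output references
	Dependents []string // what references the output
}

// pruneOutput reports whether a targeted output should be dropped: it is
// pruned only when a dependency is missing from the graph and nothing
// else depends on its value, so required outputs still surface errors.
func pruneOutput(n outputNode, inGraph map[string]bool) bool {
	if len(n.Dependents) > 0 {
		return false
	}
	for _, d := range n.Deps {
		if !inGraph[d] {
			return true
		}
	}
	return false
}
```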
Module outputs may not have complete information during Input, because
it happens before refresh. Continue processing on output interpolation
errors during the Input walk.
Remove the Input flag threaded through the input graph creation process
to prevent interpolation failures on module variables.
Use an EvalOpFilter instead to insert the correct EvalNode during
walkInput. Remove the EvalTryInterpolate type, and use the same
ContinueOnErr flag as the output node for consistency and to keep the
number of possible eval node types down.
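Roughly, using simplified types rather than the real eval package:

```go
package sketch

type walkOperation int

const (
	walkInput walkOperation = iota
	walkPlan
	walkApply
)

type EvalNode interface{ Eval() error }

// EvalOpFilter evaluates its inner node only during the listed walks.
type EvalOpFilter struct {
	Ops  []walkOperation
	Node EvalNode
}

func (n *EvalOpFilter) Eval() error { return n.Node.Eval() }

// EvalInterpolate stands in for the interpolation step; ContinueOnErr
// makes it swallow errors, the same flag the output node already uses.
type EvalInterpolate struct {
	ContinueOnErr bool
}

func (n *EvalInterpolate) Eval() error { return nil }

// inputTolerantInterpolate pairs a tolerant node (walkInput only) with a
// strict one for the other operations, instead of threading an Input
// flag through graph construction.
func inputTolerantInterpolate() []EvalNode {
	return []EvalNode{
		&EvalOpFilter{Ops: []walkOperation{walkInput}, Node: &EvalInterpolate{ContinueOnErr: true}},
		&EvalOpFilter{Ops: []walkOperation{walkPlan, walkApply}, Node: &EvalInterpolate{}},
	}
}
```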
Locals don't need to be evaluated during destroy. Rather than simply
skipping them, remove them from the state as they are encountered. Even
though they are not persisted in the state, it keeps the state up to
date as the destroy happens, and we reduce the chance of other
inconsistencies later on.
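In sketch form (moduleState here is a stand-in for the real module
state):

```go
package sketch

type moduleState struct {
	Locals map[string]interface{}
}

// evalDeleteLocal drops a local value from the module state as the
// destroy walk reaches it, keeping the in-memory state consistent with
// what remains.
func evalDeleteLocal(ms *moduleState, name string) {
	if ms == nil || ms.Locals == nil {
		return
	}
	delete(ms.Locals, name)
}
```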
The fact that we clean up data source state by applying a "destroy" action
for them is an implementation detail, and so should not be visible to
outside callers or to the user.
Signalling these as real destroys creates confusion for users because
they see Terraform say things like:
    data.template_file.foo: Refreshing state...
...which, to an understandably-nervous sysadmin, might make them suspect
that the underlying object was deleted, rather than just Terraform's
record of it.
Previously the rendered plan output was constructed directly from the
core plan and then annotated with counts derived from the count hook.
At various places we applied little adjustments to deal with the fact that
the user-facing diff model is not identical to the internal diff model,
including the special handling of data source reads and destroys. Since
this logic was just muddled into the rendering code, it behaved
inconsistently with the tally of adds, updates and deletes.
This change reworks the plan formatter so that it happens in two stages:
- First, we produce a specialized Plan object that is tailored for use
in the UI. This applies all the relevant logic to transform the
physical model into the user model.
- Second, we do a straightforward visual rendering of the display-oriented
plan object.
For the moment this is slightly overkill since there's only one rendering
path, but it does give us the benefit of letting the counts be derived
from the same data as the full detailed diff, ensuring that they'll stay
consistent.
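As a sketch of the two-stage idea (the Display* names are invented for
illustration):

```go
package sketch

// DisplayAction is the user-facing action, which need not match the
// internal diff model.
type DisplayAction int

const (
	ActionCreate DisplayAction = iota
	ActionUpdate
	ActionDestroy
	ActionRead // data source reads, shown as a refresh rather than destroy/create
)

type DisplayResource struct {
	Addr   string
	Action DisplayAction
}

// DisplayPlan is the UI-oriented plan produced in the first stage.
type DisplayPlan struct {
	Resources []DisplayResource
}

// Counts derives the add/change/destroy tallies from the same objects
// that the detailed diff is rendered from, so they cannot disagree.
func (p *DisplayPlan) Counts() (add, change, destroy int) {
	for _, r := range p.Resources {
		switch r.Action {
		case ActionCreate:
			add++
		case ActionUpdate:
			change++
		case ActionDestroy:
			destroy++
		}
	}
	return add, change, destroy
}
```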
Later we may choose to have other UIs for plans, such as a
machine-readable output intended to drive a web UI. In that case, we'd
want the web UI to consume a serialization of the _display-oriented_ plan
so that it doesn't need to re-implement all of these UI special cases.
This introduces a new diff action type for "refresh" to core. Currently
this is used _only_ in the UI layer, to represent data source reads.
Later it would be good to use this type for the core diff as well, to
improve consistency, but that is left for another day to keep this change
focused on the UI.
The implementation of ResourceAddress.Less was flawed because it was only
testing each field in the "less than" direction, and falling through in
cases where an earlier field compared greater than a later one.
Now we test for inequality first as the selector, and only fall through
if the two values for a given field are equal.
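The corrected comparison pattern, shown on an invented address type:

```go
package sketch

type addr struct {
	Path  string
	Type  string
	Name  string
	Index int
}

// Less uses each field as the selector only when the two values differ,
// and falls through to the next field only on equality, so a greater
// earlier field can never be masked by a lesser later one.
func (a addr) Less(b addr) bool {
	switch {
	case a.Path != b.Path:
		return a.Path < b.Path
	case a.Type != b.Type:
		return a.Type < b.Type
	case a.Name != b.Name:
		return a.Name < b.Name
	case a.Index != b.Index:
		return a.Index < b.Index
	default:
		return false
	}
}
```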
There is some additional, early validation on the "count" meta-argument
that verifies that only suitable variable types are used, and adding local
values to this whitelist was missed in the initial implementation.
It seems that this somehow got lost in the commit/rebase shuffle and
wasn't caught by the tests that _did_ make it because they were all using
just one file.
As a result of this bug, locals would fail to work correctly in any
configuration with more than one .tf file.
Along with restoring the append/merge behavior, this also reworks some of
the tests to exercise the multi-file case as better insurance against
regressions of this sort in future.
This fixes #15969.