Resources that are not yet created will not be in the graph during
refresh, and therefore cannot be attached to the data source nodes. In
this case we still need to indicate if there are depends_on entries
inherited from the module call, which we can do with the forceDependsOn
field.
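As a rough illustration (the node type and helper below are hypothetical, not the real Terraform implementation), the field amounts to a flag the module-call transformer can set when there is no resource node to draw an edge to:

```go
package main

import "fmt"

// nodePlannableDataSource is a hypothetical stand-in for a data source
// graph node. forceDependsOn records that the enclosing module call had
// depends_on entries, even when those dependencies have no node in the
// refresh graph to connect an edge to.
type nodePlannableDataSource struct {
	addr           string
	forceDependsOn bool
}

// markInheritedDependsOn sketches the transformer step: flag the node so
// refresh and plan know to defer reading it.
func markInheritedDependsOn(n *nodePlannableDataSource, moduleHasDependsOn bool) {
	if moduleHasDependsOn {
		n.forceDependsOn = true
	}
}

func main() {
	n := &nodePlannableDataSource{addr: "module.a.data.external.x"}
	markInheritedDependsOn(n, true)
	fmt.Println(n.addr, "deferred:", n.forceDependsOn)
}
```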
We'll need this again when collecting the transitive depends_on
references from parent module calls, which inform how data sources are
handled during refresh and plan.
Resource destroy nodes can only depend on other resources. Connecting
them to their module expander can introduce cycles when the module
expander depends on resources in the destroyer's subgraph.
We don't need another node type for orphaned outputs; they are just
outputs being removed for a reason other than destroy. Use the
NodeDestroyableOutput implementation.
Destroy outputs also don't need to be referencers, since they are being
removed.
Rename DestroyOutputTransformer to destroyRootOutputTransformer, and add
an explanation as to why it is the only transformer that requires an
exception to know when it's invoked from the destroy command.
This simplification allows us to settle on a single interface,
graphNodeExpandsInstances, for all types of instance expanders. The only
other specific class of resource we need to detect during pruning is the
nodeExpandApplyableResource node, which is already classified under the
GraphNodeResourceInstance interface.
ModulePath was incorrectly returning the parent module, because it did
not implement ReferenceOutside. With ReferenceOutside working correctly,
we can have ModulePath return the real path and remove the special case
for this during pruning.
Create a single transformer to remove all unused nodes from the apply
graph. This is similar to the combination of the resource pruning done
in the destroy edge transformer and the unused values transformer. In
addition to resources, variables, locals, and outputs, we now need to
remove unused module expansion nodes as well. Since these can all be
interdependent, we need to process them as a whole in a single
transformation.
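A minimal sketch of that fixed-point approach, with hypothetical node types rather than the real transformer: keep removing temporary nodes that nothing depends on until a pass removes nothing, so pruning an unused output can in turn free the locals and module expansion nodes it was holding alive.

```go
package main

import "fmt"

// prunable is a hypothetical graph node: temporary nodes (variables,
// locals, outputs, module expanders) may be removed once no remaining
// node depends on them.
type prunable struct {
	name      string
	temporary bool
	deps      []*prunable // nodes this node depends on
}

// pruneUnused removes temporary nodes with no dependents, looping until
// a pass removes nothing, since removing one node can orphan another.
func pruneUnused(nodes []*prunable) []*prunable {
	for {
		dependents := map[*prunable]int{}
		for _, n := range nodes {
			for _, d := range n.deps {
				dependents[d]++
			}
		}
		kept := nodes[:0]
		for _, n := range nodes {
			if n.temporary && dependents[n] == 0 {
				continue
			}
			kept = append(kept, n)
		}
		if len(kept) == len(nodes) {
			return kept
		}
		nodes = kept
	}
}

func main() {
	loc := &prunable{name: "local.x", temporary: true}
	out := &prunable{name: "output.unused", temporary: true,
		deps: []*prunable{loc}} // the local is used only by the unused output
	res := &prunable{name: "aws_instance.a"}
	for _, n := range pruneUnused([]*prunable{out, loc, res}) {
		fmt.Println("kept:", n.name) // only aws_instance.a survives
	}
}
```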
In order for depends_on to work, modules need to implicitly depend on
their child modules. This will have little effect on Terraform's
concurrency, as configuration trees are always much wider than they are
deep.
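A sketch of the implicit edges, again with hypothetical types: every module node gains an edge to each of its child module nodes, so anything waiting on the parent via depends_on implicitly waits on the whole subtree.

```go
package sketch

// moduleNode is a hypothetical module expansion node.
type moduleNode struct {
	path     string
	children []*moduleNode
}

// connectImplicitDeps gives every module an edge to each of its child
// modules, so that depends_on against the parent holds back the whole
// subtree. Configuration trees are wide rather than deep, so the extra
// serialization this introduces is small.
func connectImplicitDeps(edges map[string][]string, m *moduleNode) {
	for _, child := range m.children {
		edges[m.path] = append(edges[m.path], child.path)
		connectImplicitDeps(edges, child)
	}
}
```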
Create interfaces that nodes can implement to declare whether they
expand into instances of some sort using the instances.Expander, and/or
whether they use the instances.Expander to find instances.
Included is a rough transformer implementation to remove these nodes
from the apply graph.
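The two interfaces might look roughly like this (the second name and both signatures are assumptions for illustration; only graphNodeExpandsInstances is named in this change):

```go
package sketch

// Expander is a stand-in for instances.Expander, which tracks module
// and resource expansion from count and for_each.
type Expander struct{}

// graphNodeExpandsInstances marks nodes that register new instances in
// the Expander during the walk (module and resource expansion nodes).
type graphNodeExpandsInstances interface {
	expandsInstances(*Expander)
}

// graphNodeUsesExpansion is a hypothetical counterpart for nodes that
// only consult the Expander to locate instances that already exist.
type graphNodeUsesExpansion interface {
	usesExpansion(*Expander)
}
```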
All of the feedback from the experiment described enhancements that can
potentially be added later without breaking changes, so this change simply
removes the experiment gate from the feature as originally implemented
with no changes to its functionality.
Further enhancements may follow in later releases, but the goal of this
change is just to ship the feature exactly as it was under the experiment.
Most of the changes here are cleaning up the experiment opt-ins from our
test cases. The most important parts are in configs/experiments.go and in
experiments/experiment.go.
Connect references from depends_on in module calls. This will "just
work" for a lot of cases, but data sources will be read too early in the
case where they require the dependencies to be created. While
data sources will be properly ordered behind the module head node, there
is nothing preventing them from being evaluated during refresh.
The resource apply nodes need to be GraphNodeDestroyerCBD in order to
correctly inherit create_before_destroy. While the plan will have
recorded this to create the correct deposed nodes, the edges still need
to be transformed correctly.
We also need create_before_destroy to be saved to state for nodes that
inherited it, so that if they are removed from state the destroy will
happen in the correct order.
We need to run the force CBD transformer during apply too, both to
ensure we can rely on the `CreateBeforeDestroy()` status for dependents
during apply, and to ensure that the correct status is stored into
state.
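A rough sketch of the forcing pass, with hypothetical types: every resource a create_before_destroy node depends on must itself become create_before_destroy, and the forced status is recorded so it can be written to state.

```go
package sketch

// cbdNode is a hypothetical resource node carrying its
// create_before_destroy status.
type cbdNode struct {
	name string
	cbd  bool
	deps []*cbdNode // resources this resource depends on
}

// forceCBD flips the transitive dependencies of every
// create_before_destroy resource; the forced status on each flipped
// node is what later gets saved into state.
func forceCBD(all []*cbdNode) {
	for _, n := range all {
		if n.cbd {
			for _, d := range n.deps {
				flip(d)
			}
		}
	}
}

func flip(n *cbdNode) {
	if n.cbd {
		return // already flipped; its deps were handled at that point
	}
	n.cbd = true
	for _, d := range n.deps {
		flip(d)
	}
}
```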
* addrs: replace NewLegacyProvider with NewDefaultProvider in ParseProviderSourceString
ParseProviderSourceString was still defaulting to NewLegacyProvider when
encountering single-part strings. This has been fixed.
This commit also adds a new function, IsProviderPartNormalized, which
returns a bool indicating whether the given string matches its
normalized form (as normalized by ParseProviderPart), or an error.
This is intended for use by the configs package when decoding provider
configurations.
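The check then looks roughly like this; as a self-contained sketch, strings.ToLower stands in for the real ParseProviderPart normalization, and the lowercase function name marks it as an illustration rather than the actual addrs helper:

```go
package main

import (
	"fmt"
	"strings"
)

// isProviderPartNormalized is a sketch of the new helper, using
// strings.ToLower as a stand-in for the real normalization rules.
func isProviderPartNormalized(s string) bool {
	return s == strings.ToLower(s)
}

func main() {
	for _, name := range []string{"aws", "AWS"} {
		if !isProviderPartNormalized(name) {
			// the real error also tells the user how to fix the name
			fmt.Printf("provider local name %q must be written as %q\n",
				name, strings.ToLower(name))
			continue
		}
		fmt.Println("ok:", name)
	}
}
```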
* terraform: fix provider local names in tests
* configs: validate that all provider names are normalized
The addrs package normalizes all source strings, but not the local
names. This caused very odd behavior if for e.g. a provider local name
was capitalized in one place and not another. We considered enabling
case-sensitivity for provider local names, but decided that since this
was not something that worked in previous versions of terraform (and we
have yet to encounter any use cases for this feature) we could generate
an error if the provider local name is not normalized. This error also
provides instructions on how to fix it.
* configs: refactor decodeProviderRequirements to consistently not set an FQN when there are errors
The new data source planning logic no longer needs a separate action,
and the apply status can be determined from whether the After value is
complete or not.
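For example (a sketch using go-cty directly; the surrounding plan machinery is elided): a data source read is finished when its planned After value contains no unknown values, which cty answers with IsWhollyKnown:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	complete := cty.ObjectVal(map[string]cty.Value{
		"id": cty.StringVal("i-123"),
	})
	pending := cty.ObjectVal(map[string]cty.Value{
		"id": cty.UnknownVal(cty.String),
	})

	// If After is wholly known, the read already happened during plan
	// and there is nothing left to do at apply; otherwise the data
	// source must be read during apply.
	fmt.Println("apply needed:", !complete.IsWhollyKnown()) // false
	fmt.Println("apply needed:", !pending.IsWhollyKnown())  // true
}
```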
Ensure that a data source with depends_on not only plans to update
during refresh, but also evaluates correctly in the plan, ensuring
dependencies are planned accordingly.
The state was not being set, so the change was not evaluated correctly
for dependent resources.
Remove the use of cty.NilVal in readDataSource; only one place was
using it, so the code could just be moved out.
Fix a bunch of places where warnings would be lost.
Rather than re-read the data source during every plan cycle, apply the
config to the prior state, and skip reading if there is no change.
Remove the TODOs, as we're going to accept that data-only changes will
still not be plannable for the time being.
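A sketch of that short-circuit, with a hypothetical proposedNew helper standing in for Terraform's real merge logic: apply the configuration to the prior state and only re-read when the result differs.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// proposedNew is a hypothetical stand-in for applying the configuration
// to the prior state: configured attributes win, everything else is
// carried over from prior.
func proposedNew(prior, config cty.Value) cty.Value {
	merged := map[string]cty.Value{}
	for name, v := range prior.AsValueMap() {
		merged[name] = v
	}
	for name, v := range config.AsValueMap() {
		if !v.IsNull() {
			merged[name] = v
		}
	}
	return cty.ObjectVal(merged)
}

func main() {
	prior := cty.ObjectVal(map[string]cty.Value{
		"filename": cty.StringVal("a.txt"),
		"content":  cty.StringVal("hello"),
	})
	config := cty.ObjectVal(map[string]cty.Value{
		"filename": cty.StringVal("a.txt"),
		"content":  cty.NullVal(cty.String), // computed, not set in config
	})

	// If merging the config into the prior state changes nothing, the
	// prior result is still valid and the read can be skipped.
	if proposedNew(prior, config).RawEquals(prior) {
		fmt.Println("no change: skip reading the data source")
	}
}
```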
Fix the null data source test resource, as it had no computed fields at
all, not even the id.
The refresh, plan, and apply logic are all subtly different, so rather
than trying to manage that complex flow through a giant 300-line
method, break it up into three types that can share some common types
and a few helpers.
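In outline, the split might look like this (all names hypothetical): one shared base holding the common helpers, embedded by three small types, one per walk.

```go
package sketch

// nodeDataSourceBase is a hypothetical shared core holding the fields
// and helpers common to all three walks.
type nodeDataSourceBase struct {
	addr string
}

func (n *nodeDataSourceBase) readDataSource() error {
	// shared read/validate logic lives here
	return nil
}

// One small type per walk, each embedding the base and keeping only the
// logic that is genuinely different.
type nodeDataSourceRefresh struct{ nodeDataSourceBase }
type nodeDataSourcePlan struct{ nodeDataSourceBase }
type nodeDataSourceApply struct{ nodeDataSourceBase }

func (n *nodeDataSourceRefresh) Execute() error {
	// refresh: only read when the full configuration is known
	return n.readDataSource()
}

func (n *nodeDataSourcePlan) Execute() error {
	// plan: apply config to prior state and skip the read when nothing
	// changed (see the earlier sketch)
	return n.readDataSource()
}

func (n *nodeDataSourceApply) Execute() error {
	// apply: complete any read that was deferred during plan
	return n.readDataSource()
}
```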