EvalRefreshDependencies is used to update resource dependencies when
they don't exist, allowing broken or old states to be updated. While it
is tempting to append any newly found dependencies so that we have the
largest possible set available, changes to the config could conflict
with the prior dependencies and cause cycles.
Refresh should load any new dependencies found because of configuration
or state changes, but retain any dependencies already in the state.
Orphaned resources would not be in config, but we do not want to lose
the destroy ordering for the later apply.
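A minimal sketch of that merge rule in Go, using made-up state types rather than the real ones, to show that stored dependencies are only filled in when the state has none:
```go
package main

import "fmt"

// Hypothetical stand-ins for the real address and state types.
type absResource string

type resourceInstanceObject struct {
	Dependencies []absResource
}

// refreshDependencies fills in dependencies only when the stored object
// has none, so an old or broken state gets repaired without overwriting
// the ordering recorded by the last apply.
func refreshDependencies(obj *resourceInstanceObject, found []absResource) {
	if len(obj.Dependencies) > 0 {
		// Appending here could conflict with the prior dependencies and
		// introduce cycles, so keep what the state already records.
		return
	}
	obj.Dependencies = found
}

func main() {
	obj := &resourceInstanceObject{}
	refreshDependencies(obj, []absResource{"module.net.aws_vpc.main"})
	fmt.Println(obj.Dependencies)
}
```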
Make use of the new Dependencies field in the instance state.
The inter-instance dependencies will be determined from the complete
reference graph, so that absolute addresses can be stored, rather than
just references within a module. The Dependencies are added to the node
in the same manner as state, i.e. via an "attacher" interface and
transformer. This is because dependencies are calculated from the graph
itself, and not from the config.
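A rough sketch of that attacher pattern with hypothetical node and transformer types (the real Terraform interfaces and graph types differ):
```go
package main

import "fmt"

// Hypothetical stand-in for an absolute resource address.
type absResource string

// graphNodeAttachDependencies is the "attacher" side: nodes that want
// inter-instance dependencies implement it.
type graphNodeAttachDependencies interface {
	attachDependencies(deps []absResource)
}

type resourceNode struct {
	addr string
	deps []absResource
}

func (n *resourceNode) attachDependencies(deps []absResource) { n.deps = deps }

// attachDependenciesTransformer attaches the dependency set computed for
// each node; the data comes from the reference graph itself, not from the
// config, which is why this mirrors the state attacher pattern.
func attachDependenciesTransformer(nodes map[string]interface{}, depMap map[string][]absResource) {
	for addr, v := range nodes {
		if attacher, ok := v.(graphNodeAttachDependencies); ok {
			attacher.attachDependencies(depMap[addr])
		}
	}
}

func main() {
	n := &resourceNode{addr: "aws_instance.a"}
	attachDependenciesTransformer(
		map[string]interface{}{n.addr: n},
		map[string][]absResource{"aws_instance.a": {"module.net.aws_vpc.main"}},
	)
	fmt.Println(n.deps)
}
```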
The old logic for `depends_on` was to short-circuit evaluation of the
data source, but that prevented a plan and state from being recorded.
Use the (currently unused) ForcePlanRead to ensure that the plan is
recorded when the config contains `depends_on`.
This does not fix the fact that `depends_on` does not work with data
sources, and it will still produce a perpetual diff. This only fixes
evaluation errors when an indexed data source is evaluated during
refresh.
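A sketch of the intended control flow using stand-in names only; the real logic lives in the data-source eval nodes:
```go
package main

import "fmt"

// A sketch, not the real EvalReadData: configKnown reports whether the
// data source's configuration is fully known, and forcePlanRead stands in
// for the flag set when the config carries depends_on.
func planDataRead(configKnown, forcePlanRead bool) string {
	if !configKnown || forcePlanRead {
		// Previously a depends_on short-circuit returned early here and
		// nothing was recorded; instead we record a planned "read" so the
		// data source is refreshed at apply time.
		return "plan deferred read"
	}
	return "read now"
}

func main() {
	fmt.Println(planDataRead(true, true))  // depends_on present: defer, but record a plan
	fmt.Println(planDataRead(true, false)) // ordinary case: read immediately
}
```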
The count for a data resource can potentially depend on a managed resource
that isn't recorded in the state yet, in which case references to it will
always return unknown.
Ideally we'd do the data refreshes during the plan phase as discussed in
#17034, which would avoid this problem by planning the managed resources
in the same walk, but for now we'll just skip refreshing any data
resources with an unknown count during refresh and defer that work to the
apply phase, just as we'd do if there were unknown values in the main
configuration for the data resource.
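A sketch of that skip check using go-cty directly, with the surrounding eval machinery omitted:
```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// shouldRefreshData reports whether a data resource can be refreshed now:
// if its count value isn't known (e.g. it refers to a managed resource
// that isn't in state yet), the read is deferred to the apply phase.
func shouldRefreshData(count cty.Value) bool {
	if !count.IsKnown() {
		return false // defer to the apply phase
	}
	return true
}

func main() {
	fmt.Println(shouldRefreshData(cty.UnknownVal(cty.Number))) // false: defer
	fmt.Println(shouldRefreshData(cty.NumberIntVal(2)))        // true: refresh now
}
```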
Our post-refresh safety check had the constraint and real type inverted,
so previously any refresh of a resource type with a dynamically-typed
attribute would fail this type check.
Also includes a small tweak to the error message from this check since the
old one was a little awkward to read in practice when the error is a
cty.PathError rendered with an attribute path prefix.
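A minimal illustration of the corrected orientation using go-cty; the real check also decorates the resulting errors, which is omitted here:
```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// checkResult puts the real object type on the receiver side and the
// schema-derived constraint on the argument side, so an attribute declared
// as cty.DynamicPseudoType accepts whatever concrete type came back.
func checkResult(newVal cty.Value, constraint cty.Type) []error {
	return newVal.Type().TestConformance(constraint)
}

func main() {
	constraint := cty.Object(map[string]cty.Type{"meta": cty.DynamicPseudoType})
	val := cty.ObjectVal(map[string]cty.Value{"meta": cty.StringVal("ok")})
	fmt.Println(checkResult(val, constraint)) // empty: conforms
	// With the arguments inverted, the dynamically-typed attribute would
	// have failed, which is the bug described above.
}
```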
If an instance object in state has an earlier schema version number then
it is likely that the schema we're holding won't be able to decode the
raw data that is stored. Instead, we must ask the provider to upgrade it
for us first, which might also include translating it from flatmap form
if it was last updated with a Terraform version earlier than v0.12.
This ends up being a "seam" between our use of int64 for schema versions
in the providers package and uint64 everywhere else. We intend to
standardize on int64 everywhere eventually, but for now this remains
consistent with existing usage in each layer to keep the type conversion
noise contained here and avoid mass-updates to other Terraform components
at this time.
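A sketch of the version check and the uint64/int64 seam; the upgrade callback stands in for the provider call and is not a real API:
```go
package main

import (
	"fmt"
	"math"
)

// maybeUpgrade keeps the uint64 (state) to int64 (providers) conversion in
// one place and guards against an impossible downgrade. The upgradeFn
// parameter is a stand-in for asking the provider to upgrade the raw data,
// which may also translate legacy flatmap data from before v0.12.
func maybeUpgrade(stateVersion, currentVersion uint64, raw []byte,
	upgradeFn func(version int64, raw []byte) ([]byte, error)) ([]byte, error) {

	if stateVersion == currentVersion {
		return raw, nil // nothing to do
	}
	if stateVersion > currentVersion {
		return nil, fmt.Errorf("state schema version %d is newer than supported version %d", stateVersion, currentVersion)
	}
	if stateVersion > math.MaxInt64 {
		return nil, fmt.Errorf("schema version %d out of range", stateVersion)
	}
	return upgradeFn(int64(stateVersion), raw)
}

func main() {
	out, err := maybeUpgrade(0, 1, []byte(`{"id":"a"}`), func(v int64, raw []byte) ([]byte, error) {
		return raw, nil // pretend the provider upgraded it
	})
	fmt.Println(string(out), err)
}
```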
This also includes a minor change to the test helpers for the
backend/local package, which were inexplicably setting a SchemaVersion of
1 on the basic test state but setting the mock schema version to zero,
creating an invalid situation where the state would need to be downgraded.
Since the refresh walk creates a partial plan to account for objects that
are yet to be created, we need to provide at least a basic mock of the
PlanResourceChange provider method.
For now we're using the old-style "DiffFn" shim interface since that's
already available for use in other tests.
Significant changes to the provider interface left a lot of the
tests in a non-buildable state. This set of changes gets the
tests building again but does not attempt to make them run to
completion or pass.
After this commit, it is possible to build a test program for
the ./terraform package but it will panic during its run. That
will be addressed in subsequent commits.
Due to how often the state and plan types are referenced throughout
Terraform, there isn't a great way to switch them out gradually. As a
consequence, this huge commit gets us from the old world to a _compilable_
new world, but still has a large number of known test failures due to
key functionality being stubbed out.
The stubs here are for anything that interacts with providers, since we
now need to do the follow-up work to similarly replace the old
terraform.ResourceProvider interface with its replacement in the new
"providers" package. That work, along with work to fix the remaining
failing tests, will follow in subsequent commits.
The aim here was to replace all references to terraform.State and its
downstream types with states.State, terraform.Plan with plans.Plan,
state.State with statemgr.State, and switch to the new implementations of
the state and plan file formats. However, due to the number of times those
types are used, this also ended up affecting numerous other parts of core
such as terraform.Hook, the backend.Backend interface, and most of the CLI
commands.
Just as with 5861dbf3fc49b19587a31816eb06f511ab861bb4 before, I apologize
in advance to the person who inevitably just found this huge commit while
spelunking through the commit history.
The "config" package is no longer used and will be removed as part
of the 0.12 release cleanup. Since configschema is part of the
"new world" of configuration modelling, it makes more sense for
it to live as a subdirectory of the newer "configs" package.
We now fetch all of the necessary schemas during context creation, so we
can just thread that repository of schemas through into EvalContext and
Evaluator and access the schemas as needed without any further fetching.
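A sketch of that lookup shape with hypothetical types; the real repository stores decoded schema blocks rather than strings:
```go
package main

import "fmt"

// providerSchema and schemas are stand-ins: everything is fetched once at
// context creation, after which evaluation only does map lookups keyed by
// provider address and resource type.
type providerSchema struct {
	resourceTypes map[string]string // resource type -> schema placeholder
}

type schemas struct {
	providers map[string]providerSchema // keyed by provider address
}

func (s *schemas) resourceTypeSchema(providerAddr, resourceType string) (string, bool) {
	ps, ok := s.providers[providerAddr]
	if !ok {
		return "", false
	}
	schema, ok := ps.resourceTypes[resourceType]
	return schema, ok
}

func main() {
	repo := &schemas{providers: map[string]providerSchema{
		"provider.aws": {resourceTypes: map[string]string{"aws_instance": "schema placeholder"}},
	}}
	schema, ok := repo.resourceTypeSchema("provider.aws", "aws_instance")
	fmt.Println(schema, ok)
}
```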
This requires updating a few tests to have a valid Provider address in
their state objects, because we need that in order to trigger the loading
of the relevant schema.
There are still some other issues with some of these tests right now, but
all the ones that need to have schema should now have it.
It seems that there is a bug with the evaluation of child module input
variables where they can't find their schema even when a mock is provided.
Will attack this in a subsequent commit.
After the refactoring to integrate HCL2 many of the tests were no longer
using correct types, attribute names, etc.
This is a bulk update of all of the tests to make them compile again, with
minimal changes otherwise. Although the tests now compile, many of them
do not yet pass. The tests will be gradually repaired in subsequent
commits, as we continue to complete the refactoring and retrofit work.
Validation is the best time to return detailed diagnostics
to the user, since we're much more likely to have source
location information, etc. than we are in later operations.
This change doesn't actually add any detail to the messages
yet, but it changes the interface so that we can gradually
introduce more detailed diagnostics over time.
While here, there are some minor adjustments to some of the
messages to improve their consistency with terminology we
use elsewhere.
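A sketch of the interface shape using a toy diagnostics type rather than the real one:
```go
package main

import "fmt"

// diagnostic is a toy stand-in, just to show the interface change:
// validation returns structured diagnostics (with room for source
// locations) instead of bare warning/error string slices.
type diagnostic struct {
	Severity string
	Summary  string
	Detail   string // a source range would live alongside this in the real type
}

type diagnostics []diagnostic

func (d diagnostics) HasErrors() bool {
	for _, diag := range d {
		if diag.Severity == "error" {
			return true
		}
	}
	return false
}

// Validate returns one diagnostics value instead of ([]string, []error),
// so detail can be added incrementally later.
func Validate(countIsNegative bool) diagnostics {
	var diags diagnostics
	if countIsNegative {
		diags = append(diags, diagnostic{
			Severity: "error",
			Summary:  "Invalid count argument",
			Detail:   "The \"count\" value must be zero or greater.",
		})
	}
	return diags
}

func main() {
	diags := Validate(true)
	fmt.Println(diags.HasErrors(), diags[0].Summary)
}
```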
Added a new test that ensures that pre/post-diff hooks are not called
when EvalDiff is run with Stub set, tested through a full refresh run.
This helps test the expected behaviour of EvalDiff itself, versus the
end result of the diff being counted in a plan, which is what the
TestLocal_planScaleOutNoDupeCount test in backend/local checks.
Rather than providing an already-resolved map of plugins to core, we now
provide a "provider resolver" which knows how to resolve a set of provider
dependencies, to be determined later, and produce that map.
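A sketch of the resolver idea with hypothetical types; the real resolver works against plugin discovery requirements rather than plain names:
```go
package main

import "fmt"

// providerFactory and providerResolver are stand-ins: instead of handing
// core a finished map of provider factories, we hand it something that can
// build that map once the set of required providers is known.
type providerFactory func() (interface{}, error)

type providerResolver interface {
	resolveProviders(required []string) (map[string]providerFactory, []error)
}

// fixedResolver resolves from a fixed map, which is what tests typically use.
type fixedResolver map[string]providerFactory

func (r fixedResolver) resolveProviders(required []string) (map[string]providerFactory, []error) {
	out := make(map[string]providerFactory, len(required))
	var errs []error
	for _, name := range required {
		f, ok := r[name]
		if !ok {
			errs = append(errs, fmt.Errorf("provider %q is not available", name))
			continue
		}
		out[name] = f
	}
	return out, errs
}

func main() {
	resolver := fixedResolver{"aws": func() (interface{}, error) { return "mock aws provider", nil }}
	factories, errs := resolver.resolveProviders([]string{"aws"})
	fmt.Println(len(factories), errs)
}
```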
This requires the context to be instantiated in a different way, so this
very noisy diff is a mostly-mechanical update of all of the existing
places where contexts get created for testing, using some adapted versions
of the pre-existing utilities for passing in mock providers.
Previously the Type of a ResourceState was generally ignored, but we're
now starting to use it to figure out which providers are needed to
support the resources in state so our tests need to set it accurately
in order to get the expected result.
Fixes #12836
Realistically, these should be caught during validation anyway. In this
case, this was causing #12836 because refresh with a data source will
attempt to use module variables. I don't see any clear logic for pruning
those module variables or for not adding them, so it's easier to return
unknown to cause the data to be computed and not run.
In cases where we construct state directly rather than reading it via
the usual methods, we need to ensure that the necessary maps are
initialized correctly.
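A sketch of that defensive initialization with a stand-in state type:
```go
package main

import "fmt"

// moduleState is a stand-in: state constructed by hand (rather than read
// via the usual methods) must leave its maps non-nil so later writes don't
// panic.
type moduleState struct {
	Outputs   map[string]string
	Resources map[string]string
}

func (m *moduleState) init() {
	if m.Outputs == nil {
		m.Outputs = make(map[string]string)
	}
	if m.Resources == nil {
		m.Resources = make(map[string]string)
	}
}

func main() {
	m := &moduleState{} // constructed directly, not read from disk
	m.init()
	m.Outputs["ip"] = "10.0.0.1" // safe: the map is initialized
	fmt.Println(m.Outputs)
}
```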
This commit forward ports the changes made for 0.6.17, in order to store
the type and sensitive flag against outputs.
It also refactors the logic of the import for V0 to V1 state, and
fixes up the call sites of the new format for outputs in V2 state.
Finally we fix up tests which did not previously set a state version
where one is required.
This commit adds the groundwork for supporting module outputs of types
other than string. In order to do so, the state version is increased
from 1 to 2 (though the "public-facing" state version is actually one
higher, as the first state file was binary).
Tests are added to ensure that V2 (1) state is upgraded to V3 (2) state,
though no separate read path is required since the V2 JSON will
unmarshal correctly into the V3 structure.
Outputs in a ModuleState are now of type map[string]interface{}, and a
test covers round-tripping string, []string and map[string]string, which
should cover all of the types in question.
Type switches have been added where necessary to deal with the
interface{} value, but they currently default to panicking when the input
is not a string.
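A sketch of that type-switch pattern:
```go
package main

import "fmt"

// outputAsString shows the interim behaviour: outputs are now
// map[string]interface{}, and callers that still only understand strings
// panic on anything else for the time being.
func outputAsString(v interface{}) string {
	switch typed := v.(type) {
	case string:
		return typed
	default:
		panic(fmt.Sprintf("unsupported output type %T", v))
	}
}

func main() {
	outputs := map[string]interface{}{
		"address": "10.0.0.1",
		"names":   []string{"a", "b"},
	}
	fmt.Println(outputAsString(outputs["address"]))
	// outputAsString(outputs["names"]) would panic, as described above.
}
```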
In #2884, Terraform would hang on graphs with an orphaned resource that
depended on an orphaned module.
This is because orphan module nodes (which are dependable) were getting
expanded (replaced) with GraphNodeBasicSubgraph nodes (which are _not_
dependable).
The old `graph.Replace()` code resulted in GraphNodeBasicSubgraph being
entered into the lookaside table, even though it is not dependable.
This resulted in an untraversable edge in the graph, so the graph would
hang and wait forever.
Now, we remove entries from the lookaside table when a dependable node
is being replaced with a non-dependable node. This means we lose an
edge, but we can move forward. It's ~probably~ never correct to be
replacing dependable nodes with non-dependable ones, but this tweak
seemed preferable to tossing a panic in there.
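A sketch of the fix with toy graph types; the real change is in the graph's Replace handling:
```go
package main

import "fmt"

// dependable marks vertices that other nodes may depend on by name.
type dependable interface{ dependableName() string }

// lookaside is a toy name -> vertex table.
type lookaside map[string]interface{}

// replace updates the table for a vertex replacement: if a dependable node
// is replaced by a non-dependable one, its entry is dropped rather than
// left pointing at a vertex that can't be depended on.
func (l lookaside) replace(original, replacement interface{}) {
	if d, ok := original.(dependable); ok {
		if _, stillOK := replacement.(dependable); !stillOK {
			// Losing the edge is preferable to leaving an untraversable
			// one that makes the walk hang.
			delete(l, d.dependableName())
			return
		}
	}
	if d, ok := replacement.(dependable); ok {
		l[d.dependableName()] = replacement
	}
}

type orphanModule struct{ name string }

func (o *orphanModule) dependableName() string { return o.name }

type basicSubgraph struct{} // not dependable

func main() {
	table := lookaside{"module.orphan": &orphanModule{name: "module.orphan"}}
	table.replace(table["module.orphan"], &basicSubgraph{})
	fmt.Println(len(table)) // 0: entry removed instead of left dangling
}
```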
The context_test file has gotten pretty unruly. Let's split it up into
a few files so we can be nicer to our editors and our own sanity.
Definitely lots more we can do to clean up, but with changes like this
I'd rather do small, focused, clear steps instead of one big "cleaned up
lots of stuff" PR.