The loading of the initial instance state was inadvertently skipped when
-refresh=false was set, causing all resources to appear to be missing
from the state during plan.
In order to save any changes to lifecycle options, we need to record
those changes during refresh; otherwise they would only be updated when
there is a change in the resource to be applied.
This evaluation was required when refresh ran in a separate walk and
managed resources were only partly handled by configuration. Now that we
have the correct dependency information available when refreshing
configured resources, we can update their state accordingly. Since
orphaned resources are not refreshed, they can retain their stored
dependencies for correct ordering.
This also prevents users from introducing cycles with nodes they can't
"see", since only orphaned nodes will retain their stored dependencies,
and the remaining nodes will be updated according to the configuration.
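As a rough sketch of that distinction (the helper below is hypothetical
and not the actual code path):

    package terraform

    import "github.com/hashicorp/terraform/addrs"

    // Hypothetical helper: orphaned instances are not refreshed, so they
    // keep the dependencies already stored in state; instances still in
    // configuration have theirs recomputed from the current graph.
    func refreshedDependencies(orphaned bool, stored, fromGraph []addrs.AbsResource) []addrs.AbsResource {
        if orphaned {
            return stored
        }
        return fromGraph
    }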
This only changes the refreshed state stored in the plan file. Since the
change is stored in the plan, the applied result would be the same, but
we should still store the refreshed data in the plan file for tools that
consume the plan file.
This will also be needed in order to implement a new refresh command
based on the plan itself.
All resources use EvalWriteState to store their state, so we need a way
to switch which state is written to when a resource is refreshing vs
when it is planning. (This will likely change once we factor out the
EvalNode pattern.)
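A minimal sketch of the shape this takes, with made-up names standing in
for the real node and field names:

    package terraform

    // A phase selector on the write-state node decides whether the
    // evaluated object lands in the refresh state or in the plan's
    // working state.
    type phaseState int

    const (
        workingState phaseState = iota // in-progress state built during plan/apply
        refreshState                   // state produced while refreshing
    )

    type evalWriteStateSketch struct {
        targetState phaseState // selects which state receives the write
    }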
The logic for refresh, plan, and apply is subtly different in each case,
so rather than trying to manage that complex flow through a giant
300-line method, break it up somewhat into 3 different types that can
share common types and a few helpers.
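Illustratively, something like the following layout (type names here are
invented):

    package terraform

    // Shared helpers live in one embedded struct; refresh, plan, and apply
    // each get a small type that owns only its own flow.
    type nodeResourceShared struct {
        // common address, schema, and state helpers
    }

    type nodeResourceRefresh struct{ nodeResourceShared }
    type nodeResourcePlan struct{ nodeResourceShared }
    type nodeResourceApply struct{ nodeResourceShared }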
In order to update data sources correctly when their configuration
changes, they need to be evaluated during plan. Since the plan working
state isn't saved, store any data source reads as plan changes to be
applied later. This is currently abusing the Update plan action to
indicate that the data source was read and needs to be applied to state.
We can possibly add a Store action for data sources if this approach
works out. The Read action still indicates that the data source was
deferred to the Apply phase.
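A sketch of how such a change might be recorded; the helper name is
invented, though the plan actions are the real ones:

    package terraform

    import (
        "github.com/hashicorp/terraform/plans"
        "github.com/zclconf/go-cty/cty"
    )

    // A data source read performed during plan is recorded as a change so
    // it can be written to state at apply. Update is borrowed to mean
    // "already read, just store the result"; Read still means the read
    // was deferred to apply.
    func planDataSourceChange(alreadyRead bool, prior, planned cty.Value) *plans.ResourceInstanceChange {
        action := plans.Read // deferred: will be read during apply
        if alreadyRead {
            action = plans.Update // read during plan: apply only stores it
        }
        return &plans.ResourceInstanceChange{
            Change: plans.Change{
                Action: action,
                Before: prior,
                After:  planned,
            },
        }
    }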
We also fully handle any data source depends_on changes. Now that all
the transitive resource dependencies are known at the time of
evaluation, we can check the plan to determine if there are any changes
in the dependencies and selectively defer reading the data source.
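Roughly, the check looks like this sketch (helper name invented, address
comparison simplified to string equality):

    package terraform

    import (
        "github.com/hashicorp/terraform/addrs"
        "github.com/hashicorp/terraform/plans"
    )

    // If any of the data source's transitive dependencies has a non-NoOp
    // change planned, the read is deferred to apply.
    func depsHavePlannedChanges(deps []addrs.AbsResource, changes *plans.Changes) bool {
        for _, dep := range deps {
            for _, rc := range changes.Resources {
                if rc.Action == plans.NoOp {
                    continue
                }
                if rc.Addr.ContainingResource().String() == dep.String() {
                    return true
                }
            }
        }
        return false
    }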
Make the interface name reflect the new return type of the method.
Remove the confusingly named and unused ResourceAddress method from the
resource nodes as well.
Implement a new provider_meta block in the terraform block of modules,
allowing provider-keyed metadata to be communicated from HCL to provider
binaries.
Bundled in this change for minimal protocol version bumping is the
addition of markdown support for attribute descriptions and the ability
to indicate when an attribute is deprecated, so this information can be
shown in the schema dump.
Co-authored-by: Paul Tyng <paul@paultyng.net>
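For the schema additions, the Go-side shape is roughly the following;
the field names are written from memory and should be treated as
illustrative rather than as the exact diff:

    package terraform

    import (
        "github.com/hashicorp/terraform/configs/configschema"
        "github.com/zclconf/go-cty/cty"
    )

    // An attribute whose description is markdown and which is flagged as
    // deprecated, so the schema dump can surface both.
    var exampleAttr = &configschema.Attribute{
        Type:            cty.String,
        Optional:        true,
        Description:     "Use `new_thing` instead.",
        DescriptionKind: configschema.StringMarkdown,
        Deprecated:      true,
    }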
Make use of the new Dependencies field in the instance state.
The inter-instance dependencies will be determined from the complete
reference graph, so that absolute addresses can be stored, rather than
just references within a module. The Dependencies are added to the node
in the same manner as state, i.e. via an "attacher" interface and
transformer. This is because dependencies are calculated from the graph
itself, and not from the config.
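A sketch of the attacher interface involved; the name and signature here
are assumptions rather than the exact ones used:

    package terraform

    import "github.com/hashicorp/terraform/addrs"

    // A transformer computes each node's dependencies from the reference
    // graph as absolute addresses and hands them to any node that accepts
    // them, mirroring how state is attached.
    type graphNodeAttachDependencies interface {
        AttachDependencies([]addrs.AbsResource)
    }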
If a datasource's dependencies have planned changes, then we need to
plan a read for the data source, because the config may change once the
dependencies have been applied.
In the reorganization of the data source read code we missed the fact that
the plan phase must _always_ generate a read data diff, never directly
read, because the state generated during plan is throwaway.
This only matters in the -refresh=false case, since normally refresh has
already taken care of this, but that is still an important case, covered
by the TestContext2Apply_dataBasic test.
Previously we used a single plan action "Replace" to represent both the
destroy-before-create and the create-before-destroy variants of replacing.
However, this forces the apply graph builder to jump through a lot of
hoops to figure out which nodes need it forced on and rebuild parts of
the graph to represent that.
If we instead decide between these two cases at plan time, the actual
determination of it is more straightforward because each resource is
represented by only one node in the plan graph, and then we can ensure
we put the right nodes in the graph during DiffTransformer and thus avoid
the logic for dealing with deposed instances being spread across various
different transformers and node types.
As a nice side-effect, this also allows us to show the difference between
destroy-then-create and create-then-destroy in the rendered diff in the
CLI, although this change doesn't fully implement that yet.
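The plan-time decision then reduces to something like this sketch (the
helper name is invented; the two actions are the real plans package
values):

    package terraform

    import "github.com/hashicorp/terraform/plans"

    // The replacement ordering is decided at plan time and recorded as one
    // of two distinct actions, instead of a single generic Replace that
    // apply must reinterpret.
    func replaceAction(createBeforeDestroy bool) plans.Action {
        if createBeforeDestroy {
            return plans.CreateThenDelete
        }
        return plans.DeleteThenCreate
    }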
We no longer use strings to represent addresses, so this method was a
leftover outlier from previous refactoring efforts.
At this time the result is not actually being used due to the state type
refactoring, which is a bug we'll address in a subsequent commit.
This is no longer a call into the provider, since all of the data diff
logic is standard for all data sources anyway. Instead, we just compute
the planned new value and construct a planned change from that as-is.
Previously the provider could, in principle, customize the read diff. In
practice there is no real reason to do that and the existing SDK didn't
pass that possibility through to provider code, so we can safely change
this without impacting provider compatibility.
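A sketch of that construction; the objchange helper name is written from
memory and may not match exactly:

    package terraform

    import (
        "github.com/hashicorp/terraform/configs/configschema"
        "github.com/hashicorp/terraform/plans"
        "github.com/hashicorp/terraform/plans/objchange"
        "github.com/zclconf/go-cty/cty"
    )

    // The planned new value is derived from the decoded config, with
    // computed attributes left unknown, and the change is built from it
    // as-is rather than being returned by the provider.
    func plannedDataChange(schema *configschema.Block, configVal cty.Value) plans.Change {
        planned := objchange.PlannedDataResourceObject(schema, configVal)
        return plans.Change{
            Action: plans.Read,
            Before: cty.NullVal(schema.ImpliedType()),
            After:  planned,
        }
    }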
Change ResourceProvider to providers.Interface starting from the
context, and fix all type errors.
This only replaced some of the method calls directly applicable to the
providers themselves. The resource methods will follow.
Due to how often the state and plan types are referenced throughout
Terraform, there isn't a great way to switch them out gradually. As a
consequence, this huge commit gets us from the old world to a _compilable_
new world, but still has a large number of known test failures due to
key functionality being stubbed out.
The stubs here are for anything that interacts with providers, since we
now need to do the follow-up work to similarly replace the old
terraform.ResourceProvider interface with its replacement in the new
"providers" package. That work, along with work to fix the remaining
failing tests, will follow in subsequent commits.
The aim here was to replace all references to terraform.State and its
downstream types with states.State, terraform.Plan with plans.Plan,
state.State with statemgr.State, and switch to the new implementations of
the state and plan file formats. However, due to the number of times those
types are used, this also ended up affecting numerous other parts of core
such as terraform.Hook, the backend.Backend interface, and most of the CLI
commands.
Just as with 5861dbf3fc49b19587a31816eb06f511ab861bb4 before, I apologize
in advance to the person who inevitably just found this huge commit while
spelunking through the commit history.
Previously we would attempt to DynamicExpand during the validate walk and
then validate each expanded instance separately. However, this meant that
we would not be able to validate the contents of a block where count = 0
or if count is not yet known.
Here we instead do a more static validation pass against the resource
configuration itself, setting count.index to cty.UnknownVal(cty.Number) so
we can type-check everything inside the block as being correct regardless
of the final count.
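For illustration, the evaluation scope during validation can be thought
of as exposing count.index like this (a simplified sketch, not the
actual evaluator code):

    package terraform

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/zclconf/go-cty/cty"
    )

    // Expressions inside the block are type-checked against an unknown
    // count.index, so no instance expansion is needed first.
    func validateEvalContext() *hcl.EvalContext {
        return &hcl.EvalContext{
            Variables: map[string]cty.Value{
                "count": cty.ObjectVal(map[string]cty.Value{
                    "index": cty.UnknownVal(cty.Number),
                }),
            },
        }
    }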
This is another step towards repairing the "validate" command for our
changed assumptions in a world where we have a more sophisticated type
checker.
This doesn't yet address the remaining problem that the expression
evaluator can't, with the current state structures, distinguish between
a completed resource with count = 0 and a resource that doesn't exist
at all (during validate), and so we'll still get errors if an expression
elsewhere in configuration refers to a dynamic index of a resource with
"count" set. That's a pre-existing condition that's no longer being masked
by _this_ problem, but can't be addressed until we've introduced the new
state types (states.State, etc) and thus we _can_ distinguish these two
situations. That will therefore be addressed in a later commit.
The state after EvalReadDataDiff is no longer nil during plan, which
means that we can't use that as a proxy for requiring the diff.
Rather than exiting early to save the EvalWriteState and EvalWriteDiff
evaluations, continue normally regardless to ensure we have the latest
diff and state after the plan. This also aligns the data source
handling with that of the managed resource.
Due to how deeply the configuration types go into Terraform Core, there
isn't a great way to switch out to HCL2 gradually. As a consequence, this
huge commit gets us from the old state to a _compilable_ new state, but
does not yet attempt to fix any tests and has a number of known missing
parts and bugs. We will continue to iterate on this in forthcoming
commits, heading back towards passing tests and making Terraform
fully-functional again.
The three main goals here are:
- Use the configuration models from the "configs" package instead of the
  older models in the "config" package, which is now deprecated and
  preserved only to help us write our migration tool.
- Do expression inspection and evaluation using the functionality of the
  new "lang" package, instead of the Interpolator type and related
  functionality in the main "terraform" package.
- Represent addresses of various objects using types in the addrs package,
  rather than hand-constructed strings. This is not critical to support
  the above, but was a big help during the implementation of these other
  points since it made it much more explicit what kind of address is
  expected in each context.
Since our new packages are built to accommodate some future planned
features that are not yet implemented (e.g. the "for_each" argument on
resources, "count"/"for_each" on modules), and since there's still a fair
amount of functionality still using old-style APIs, there is a moderate
amount of shimming here to connect new assumptions with old, hopefully in
a way that makes it easier to find and eliminate these shims later.
I apologize in advance to the person who inevitably just found this huge
commit while spelunking through the commit history.
Use the ResourceState.Provider field to store the full name of the
provider used during apply. This field is only used when a resource is
removed from the config, and will allow that resource to be removed by
the exact same provider with which it was created.
Modify the locations which might accept the value of the
ResourceState.Provider field to detect that the name is resolved.
This turned out to be a big, messy commit, since the way providers are
referenced is tightly coupled throughout the code. This starts to unify
how providers are referenced, using the format output by the node Name
method.
Add a new field to the internal resource data types called
ResolvedProvider. This is set by a new setter method SetProvider when a
resource is connected to a provider during graph creation. This allows
us to later lookup the provider instance a resource is connected to,
without requiring it to have the same module path.
The InitProvider context method now takes 2 arguments: one is the
provider type and the second is the full name of the provider. While the
provider type could still be parsed from the full name, this makes it
more explicit, and changes to the name format won't affect this code.
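A sketch of the node-side piece, with simplified surrounding types and
an invented interface name for the two-argument InitProvider form:

    package terraform

    // The resolved provider name is recorded on the node when it is
    // connected during graph construction.
    type resourceNodeSketch struct {
        ResolvedProvider string
    }

    // SetProvider records which provider instance this resource resolved to.
    func (n *resourceNodeSketch) SetProvider(fullName string) {
        n.ResolvedProvider = fullName
    }

    // The real method returns the provider interface; interface{} keeps
    // this sketch self-contained.
    type providerIniter interface {
        InitProvider(providerType, fullName string) (interface{}, error)
    }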
Fixes #10338
The destruction step for a resource included the deposed resources for
_all_ resources with that name (ignoring the index). For example:
destroying `aws_instance.foo.0` also included destroying the deposed
resources for `aws_instance.foo.1`.
This changes the config passed to the deposed transformer to properly
include that index.
This required a larger change: `stateId` now includes the index. That
affected more parts of the code, but was ultimately the issue in
question.