Resource timeouts were a separate config block, but did not exist in the
resource schema. Insert any defined timeouts when generating the
configschema.Block so that the fields can be accepted and validated by
core.
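As a rough sketch only (the import path and the exact timeout keys here
are assumptions for illustration, not the real helper/schema code), the
idea is to graft a nested "timeouts" block onto the generated schema:

    import (
        "github.com/hashicorp/terraform/configs/configschema" // path may differ by version
        "github.com/zclconf/go-cty/cty"
    )

    // Hypothetical helper: attach a single nested "timeouts" block whose
    // attributes are optional duration strings like "10m".
    func addTimeoutsToSchema(block *configschema.Block, keys []string) {
        if len(keys) == 0 {
            return
        }
        attrs := map[string]*configschema.Attribute{}
        for _, k := range keys { // e.g. "create", "update", "delete"
            attrs[k] = &configschema.Attribute{
                Type:     cty.String,
                Optional: true,
            }
        }
        if block.BlockTypes == nil {
            block.BlockTypes = map[string]*configschema.NestedBlock{}
        }
        block.BlockTypes["timeouts"] = &configschema.NestedBlock{
            Nesting: configschema.NestingSingle,
            Block:   configschema.Block{Attributes: attrs},
        }
    }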
This ensures more HCL1/HIL-like behaviors when dealing with nested blocks
that are not set in the configuration, which is important for
compatibility with helper/schema's validation logic.
There are several steps here, and a number of them can involve reaching
out to remote servers or executing local processes, so it's helpful to
have some trace logs to narrow down the causes of errors and hangs
during this step.
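As an example of the kind of trace logging meant here (the helper and
its messages are illustrative, not the actual log sites), Terraform's
log filtering keys off the bracketed level prefix:

    import (
        "log"
        "time"
    )

    // traceStep is a hypothetical wrapper that brackets a potentially
    // slow sub-step (remote call, local process) with trace-level lines
    // so hangs and failures can be localized from the logs.
    func traceStep(name string, fn func() error) error {
        start := time.Now()
        log.Printf("[TRACE] init: beginning %s", name)
        err := fn()
        log.Printf("[TRACE] init: %s finished after %s (err=%v)", name, time.Since(start), err)
        return err
    }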
The legacy test wasn't really testing anything useful: it only verified
that planning with no diff in a module returned a module with no diff.
There's no reason to convert this to the new plans, since there's no
legacy behavior to match.
In the reorganization of the data source read code we missed the fact that
the plan phase must _always_ generate a read data diff, never directly
read, because the state generated during plan is throwaway.
This only matters in the -refresh=false case, since normally refresh has
already taken care of this, but that is still an important case, covered
by the TestContext2Apply_dataBasic test.
When we're being asked to destroy everything, we ideally want to end up
with a totally empty state. Normally we will conservatively keep around
the "husks" of resources (what's left after all of the instances have been
destroyed) unless they are configured without count or for_each, but in
this special case we'll prune those out.
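A minimal sketch of the pruning idea, using simplified stand-in types
rather than the real states package:

    // Stand-in types for illustration; the real state model is richer.
    type Resource struct {
        Instances map[string]struct{} // instance key -> instance object
    }

    type Module struct {
        IsRoot    bool
        Resources map[string]*Resource
    }

    // pruneAfterFullDestroy drops resource husks that have no instances
    // left and then removes any non-root modules that end up empty, so
    // a whole-configuration destroy leaves a totally empty state.
    func pruneAfterFullDestroy(modules map[string]*Module) {
        for key, mod := range modules {
            for name, rs := range mod.Resources {
                if len(rs.Instances) == 0 {
                    delete(mod.Resources, name)
                }
            }
            if !mod.IsRoot && len(mod.Resources) == 0 {
                delete(modules, key)
            }
        }
    }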
The implication of this is that in "weird" expression contexts that happen
before the next "terraform plan", such as evaluation in
"terraform console" or expressions in data resources and provider blocks
that get evaluated during the refresh walk, we will see these results
as unknown rather than as empty lists of objects. We accept that weirdness
for now because in a future release we are likely to remove "refresh" as
a separate walk anyway, doing all of that work during the plan walk where
we can ensure that these values are properly re-populated before trying
to use them.
Some mock objects will not have any mock behavior configured for the
GetSchema method, so we should just return a valid-but-empty schema in
that case, rather than panicking as we did before.
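Roughly like this (field names such as GetSchemaReturn and the exact
response shape are assumptions about the mock, not a verbatim copy):

    func (p *MockProvider) GetSchema() providers.GetSchemaResponse {
        resp := providers.GetSchemaResponse{
            Provider:      providers.Schema{Block: &configschema.Block{}},
            ResourceTypes: map[string]providers.Schema{},
            DataSources:   map[string]providers.Schema{},
        }
        if p.GetSchemaReturn == nil {
            // No mock behavior configured: return the valid-but-empty
            // response above instead of panicking on a nil dereference.
            return resp
        }

        // Otherwise copy the configured mock schemas into the response.
        resp.Provider = providers.Schema{Block: p.GetSchemaReturn.Provider}
        for name, block := range p.GetSchemaReturn.ResourceTypes {
            resp.ResourceTypes[name] = providers.Schema{Block: block}
        }
        for name, block := range p.GetSchemaReturn.DataSources {
            resp.DataSources[name] = providers.Schema{Block: block}
        }
        return resp
    }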
These are similar to the symbols of the same name in package "providers".
terraform.ProvisionerFactory is now an alias for provisioners.Factory, so
we can defer updating all of the existing users of it.
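The deferral works because it's a plain Go type alias, roughly as
below (the constructor signature mentioned in the comment is an
assumption):

    // ProvisionerFactory is kept as an alias so existing references in
    // package terraform keep compiling while callers migrate to the new
    // provisioners package; Factory is a constructor function along the
    // lines of func() (provisioners.Interface, error).
    type ProvisionerFactory = provisioners.Factory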
There are still some errors left, because our expression evaluator now
does more validation than before, so we'll need to (in a subsequent
commit) actually use a fixture configuration for these tests so that
the expressions can pass validation.
This test was occasionally failing due to a missing graph edge causing it
to be non-deterministic.
The graph edge was missing because our standard schema doesn't quite match
the config fixture and so the reference checker was not finding the "blah"
argument on aws_instance.a.
This change also includes a 100ms pause for the b node, just to make
this potential race more likely to hit the "wrong" ordering when the
graph is not complete.
The weird special-case behaviors of testDiffFn were interfering with the
outcome of this test, but we don't actually need any of those special
behaviors here so we'll use a very simple PlanResourceChangeFn
implementation instead, just letting the built-in merge behavior in core
take care of it.
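Something along these lines, where p is the test's mock provider (a
sketch; the request/response field names are assumed from the
providers package):

    p.PlanResourceChangeFn = func(req providers.PlanResourceChangeRequest) providers.PlanResourceChangeResponse {
        // No provider-specific plan logic: just accept the proposed new
        // state that core produced by merging prior state and config.
        return providers.PlanResourceChangeResponse{
            PlannedState: req.ProposedNewState,
        }
    }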
Since we started using experimental Go Modules, our editor tooling
hasn't been fully functional, apparently including format-on-save
support. This is a catch-up to get everything formatted consistently
again.
One of the assumptions this test was checking no longer holds: we don't
retain outputs for non-root modules in persistent state, because we can
always re-populate these on a future run by evaluating the configuration.
The old-fashioned formatting of "Provider" on the shimmed state here was
causing the state upgrade code to treat it like a module-relative address,
rather than an absolute address as we now expect.
This error message appears in a situation that is often confusing for
users, since the connection between resources and their providers in the
state is not something we draw attention to in the user experience of
Terraform.
This new error message tries to be a bit clearer about what the user
must do to resolve it. It's still not perfect, since it doesn't cover
the variant of this problem where an entire module containing a
provider block and resources has been removed at the same time, but
since there isn't an easily-summarizable way to continue from that
state this will have to do for the moment, until we find a way to file
off that rough edge in the workflow.
Configuration blocks can contain sensitive information, so better to just
talk about them by reference (in this case, source location) rather than
embedding them directly, to reduce the risk of accidental information
leakage through sharing logs for debug purposes.
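For instance, a sketch of referring to a block only by its source
range (the helper name and import path are illustrative):

    import (
        "log"

        "github.com/hashicorp/hcl2/hcl" // path may differ by version
    )

    // logBlockRef records where a configuration block came from without
    // embedding its (possibly sensitive) body in the trace logs.
    func logBlockRef(context string, rng hcl.Range) {
        log.Printf("[TRACE] %s: configuration block at %s", context, rng)
    }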
This was already updated for the new state types earlier, but since then
we adjusted how deposed instances are written out in the old string
representation of state, and so this regressed.
These tests show that we're still not fully pruning modules from the
state in all cases, because we can't yet fully prune modules that
contain resources with count set after a destroy, but this is no worse
than before so we'll accept it for now and address it separately later.
A module heading without "<no state>" but also without any instances
listed is the rendering for a module containing a resource that has no
instances, since our old string rendering of state doesn't represent
resources themselves.
We previously had mechanisms to clean up only individual instance states,
leaving behind empty resource husks in the state after they were all
destroyed.
This takes care of it in the "orphan" case. It does not yet do it in the
"terraform destroy" or "terraform plan -destroy" cases because we don't
have anywhere to record in the plan that we're actually destroying and so
the resource configurations should be ignored and _everything_ should be
cleaned. We'll let the state be not-quite-empty in that case for now,
since it doesn't really hurt; cleaning up orphans is the main case because
the state will live on afterwards and so leftover cruft will accumulate
over the course of many changes.
Since this one has a situation where there are two deposed objects for
the same instance at once, we can't rely on comparing state strings: they
are not deterministic when multiple deposed objects are present.
Instead, we do more surgical comparisons directly on the state model
objects, which is not quite as robust but still gets us the main stuff we
care about here, to be followed up by another checkStateString further
down for the final state.
Tainted objects now also remember which provider they belong to (via the
resource state they are attached to) and so the stringified state output
here is slightly different.
This was relying on a no-longer-valid mechanism for accessing the "count"
value from a resource block. The original issue this test was written to
cover is not really such a sharp edge anymore, since the length is taken
from the state during apply rather than from configuration, but it's still
a good case to cover.
This test was incorrectly amended on the first pass to create a
configuration snapshot from the step zero configuration, rather than
the step one configuration that the saved plan is built from.
Along with that, it needed various other minor updates to match details
that have shifted:
- "id" and "type" attributes must be explicitly declared in schema
- template_file.parent has count = 1, which now causes it to get an index
and be a list where before it did not.