Commit Graph

186 Commits

Author SHA1 Message Date
James Bardin
e2fc9a19f5 use ResourceInstanceReplaceByTriggers
Set ResourceInstanceReplaceByTriggers in the change.
2022-04-20 09:17:10 -04:00
James Bardin
91121aa856 limit replace_triggered_by to same module instance
replace_triggered_by references are scoped to the current module, so we
need to filter changes for the current module instance. Rather than
creating a ConfigResource and filtering the result, make a
Changes.InstancesForAbsResource method to get only the AbsResource
changes.
2022-04-20 09:17:10 -04:00
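For context, a minimal sketch of the configuration shape these commits implement, using hypothetical null_resource names: the reference inside replace_triggered_by must resolve within the same module instance, which is what the filtering above enforces.

	variable "revision" {
	  type = string
	}

	resource "null_resource" "base" {
	  triggers = {
	    revision = var.revision
	  }
	}

	resource "null_resource" "dependent" {
	  lifecycle {
	    # Scoped to the current module instance; a planned replacement
	    # of "base" triggers replacement of "dependent".
	    replace_triggered_by = [null_resource.base]
	  }
	}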
James Bardin
fb6fcf783b Fix replace_triggered_by criteria
Only immediate changes to the resource are considered.
2022-04-20 09:17:10 -04:00
James Bardin
6670b71a2e context test demonstrating replace_triggered_by 2022-04-20 09:17:10 -04:00
James Bardin
7598665c90 check for replacement via replace_triggered_by
Check for triggered resource replacement in the plan. While the
functionality of the feature works here, we will want to follow up with a
way to indicate in the plan _why_ the resource was replaced.
2022-04-20 09:17:10 -04:00
James Bardin
4f2195af2b collect references from replace_triggered_by
The replace_triggered_by expressions create edges in the graph, so must
be returned in the References method.
2022-04-20 09:17:10 -04:00
James Bardin
4d43d6f699 Use the EvalContext to lookup trigger changes
The EvalContext is the only place with all the information to be able to
complete the evaluation of the replace_triggered_by expressions. These
need to be evaluated into a reference, which is then looked up in the
pending changes to which the context has access. On top of needing the
plan changes, we also need access to all providers and schemas to decode
the changes if we need to traverse the resource values for individual
attributes.
2022-04-20 09:17:10 -04:00
James Bardin
8b4c89bdaf evaluate replace_triggered_by expressions
Evaluate the expressions stored in replace_triggered_by into the
*addrs.Reference needed to lookup changes in the plan.
2022-04-20 09:17:10 -04:00
James Bardin
74885b1108 remove data sources from state read and upgrade
Data sources should not require reading their previous state versions. While
we previously skipped the decoding if it were to fail, this removes the
need for any prior state at all.

The only place where the prior state was functionally used was in the
destroy path. Because a data source destroy exists only to clean the entry
out of the state, using the same code paths as a managed resource, we can
substitute a null value for the prior state in the change to maintain the
same behavior.
2022-04-11 11:55:53 -04:00
James Bardin
29ecac0808 remove the use of data source prior state entirely
After data source handling was moved from a separate refresh phase into
the planning phase, reading the existing state was only used for
informational purposes. This had been reduced to reporting warnings when
the provider returned an unexpected value to try and help locate legacy
provider bugs, but any actual issues located from those warnings were
very few and far between.

Because the prior state cannot be reliably decoded when faced with
incompatible provider schema upgrades, and there is no longer any
significant reason to try and get the prior state at all, we can skip
the process entirely.
2022-04-11 10:19:45 -04:00
James Bardin
a7987dec9f remove redundant readResourceInstanceState 2022-04-11 10:12:44 -04:00
James Bardin
01628f0d50 data schema changes may prevent state decoding
Data sources do not have state migrations, so there may be no way to
decode the prior state when faced with incompatible type changes.

Because prior state is only informational to the plan, and its existence
should not affect the planning process, we can skip decoding when faced
with errors.
2022-04-11 09:45:10 -04:00
Eng Zer Jun
fedd315275
test: use T.TempDir to create temporary test directory (#30803)
This commit replaces `ioutil.TempDir` with `t.TempDir` in tests. The
directory created by `t.TempDir` is automatically removed when the test
and all its subtests complete.

Prior to this commit, a temporary directory created using `ioutil.TempDir`
needed to be removed manually by calling `os.RemoveAll`, a step omitted in
some tests. The error handling boilerplate, e.g.
	defer func() {
		if err := os.RemoveAll(dir); err != nil {
			t.Fatal(err)
		}
	}()
is also tedious, but `t.TempDir` handles this for us nicely.

Reference: https://pkg.go.dev/testing#T.TempDir
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2022-04-08 17:34:16 +01:00
Alisdair McDiarmid
bb35f02c95 Conclude preconditions/postconditions experiment 2022-04-04 15:54:40 -04:00
Alisdair McDiarmid
c5d10bdef1 core: Store condition block results in plan
In order to include condition block results in the JSON plan output, we
must store them in the plan and its serialization.

Terraform can evaluate condition blocks multiple times, so we must be
able to update the result. Accordingly, the plan.Conditions object is a
map with keys representing the condition block's address. Condition
blocks are not referenceable in any other context, so this address form
cannot be used anywhere in the configuration.

The commit includes a new test case for the JSON output of a
refresh-only plan, which is currently the only way for a failing
condition result to be rendered through this path.
2022-04-04 15:36:29 -04:00
Martin Atkins
49d7c879ac Fix problems caught by staticcheck v0.3.0
This will allow us to upgrade to this version in a later commit without
causing our build checks to fail.
2022-04-04 08:12:44 -07:00
Anna Winkler
4ca508294c
Update comment for this transformer
Remove extra word and add link to Wikipedia article
2022-03-22 17:17:56 -06:00
James Bardin
fef66f9a60
Merge pull request #30486 from hashicorp/jbardin/drift
Only show external changes which contributed to the plan
2022-03-18 14:19:46 -04:00
James Bardin
c02e8bc5b3 change plan to store individual relevant attrs
Storing individual contributing attributes will allow finer tuning of
the plan rendering.

Also add contributing attributes to outputs.
2022-03-17 09:35:36 -04:00
Alisdair McDiarmid
b5cfc0bb8b core: Fix sensitive variable validation errors
Variable validation error message expressions which generated sensitive
values would previously crash. This commit updates the logic to align
with preconditions and postconditions, eliding sensitive error message
values and adding a separate diagnostic explaining why.
2022-03-11 13:45:04 -05:00
Alisdair McDiarmid
6db174e210 core: Fix crash for sensitive values in conditions
Precondition and postcondition blocks which evaluated expressions
resulting in sensitive values would previously crash. This commit fixes
the crashes, and adds an additional diagnostic if the error message
expression produces a sensitive value (which we also elide).
2022-03-11 13:45:04 -05:00
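To illustrate the crash class fixed by these two commits, a hedged sketch with a hypothetical variable name: interpolating a sensitive value into the error message makes the whole message sensitive, which previously crashed and is now elided with a separate explanatory diagnostic.

	variable "api_token" {
	  type      = string
	  sensitive = true

	  validation {
	    condition = length(var.api_token) >= 20
	    # This message derives from a sensitive value, so it is elided
	    # in diagnostics rather than crashing Terraform.
	    error_message = "Invalid token: ${var.api_token}"
	  }
	}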
Alisdair McDiarmid
6cd0876596
Merge pull request #30658 from hashicorp/alisdair/preconditions-postconditions-refresh-only
core: Eval pre/postconditions in refresh-only mode
2022-03-11 13:44:51 -05:00
Alisdair McDiarmid
a103c65140 core: Eval pre/postconditions in refresh-only mode
Evaluate precondition and postcondition blocks in refresh-only mode, but
report any failures as warnings instead of errors. This ensures that any
deviation from the contract defined by condition blocks is reported as
early as possible, without preventing the completion of a state refresh
operation.

Prior to this commit, Terraform evaluated output preconditions and data
source pre/postconditions as normal in refresh-only mode, while managed
resource pre/postconditions were not evaluated at all. This omission
could lead to confusing partial condition errors, or failure to detect
undesired changes which would otherwise cause resources to become
invalid.

Reporting the failures as errors also meant that changes retrieved
during refresh could cause the refresh operation to fail. This is also
undesirable, as the primary purpose of the operation is to update local
state. Precondition/postcondition checks are still valuable here, but
should be informative rather than blocking.
2022-03-11 13:32:40 -05:00
James Bardin
45e2a410f7
Merge pull request #30656 from hashicorp/jbardin/always-validate
Always validate the graph
2022-03-11 10:37:30 -05:00
James Bardin
b1de94a176 make sure CBD test graphs are valid
The graphs used for the CBD tests wouldn't validate because they skipped
adding the root module node. Re-add the root module transformer and
transitive reduction transformer to the build steps, and match the new
reduced output in the test fixtures.
2022-03-11 10:20:50 -05:00
James Bardin
0bc69d64ec always validate all graphs
Complete the removal of the Validate option for graph building. There is
no case where we want to allow an invalid graph, as the primary reason
for validation is to ensure we have no cycles, and we can't walk a graph
with cycles. The only code which specifically relied on there being no
validation was a test to ensure the Validate flag prevented it.
2022-03-11 10:20:50 -05:00
Alisdair McDiarmid
ef0d859af7 core: Refactor stub repetition data generation 2022-03-10 13:52:48 -05:00
Alisdair McDiarmid
ad995322e1 core: Fix expanded condition block validation
The previous precondition/postcondition block validation implementation
failed if the enclosing resource was expanded. This commit fixes this by
generating appropriate placeholder instance data for the resource,
depending on whether `count` or `for_each` is used.
2022-03-10 13:47:17 -05:00
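A hedged sketch of the case this fixes, with a hypothetical max_nodes variable: validating a condition block on an expanded resource now uses placeholder instance data for count.index (or each.key/each.value with for_each).

	variable "max_nodes" {
	  type    = number
	  default = 3
	}

	resource "null_resource" "node" {
	  count = 3

	  lifecycle {
	    precondition {
	      # count.index is supplied as placeholder data during the
	      # validation walk, so this block no longer fails to validate.
	      condition     = count.index < var.max_nodes
	      error_message = "Node index exceeds max_nodes."
	    }
	  }
	}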
James Bardin
05a10f06d1 remove PreDiff and PostDiff hook calls
PreDiff and PostDiff hooks were designed to be called immediately before
and after the PlanResourceChange calls to the provider. Probably due to
the confusing legacy naming of the hooks, these were scattered about the
nodes involved with planning, causing the hooks to be called in a number
of places where they were never intended, including data sources and
destroy plans. Since these hooks are no longer used at all anyway, we can
remove the extra calls with no effect.

If we choose in the future to call PlanResourceChange for resource
destroy plans, the hooks can be re-inserted (even though they currently
are unused) into the new code path which must diverge from the current
combined path of managed and data sources.
2022-03-08 13:48:41 -05:00
James Bardin
dc668dff38 ensure UI hooks are called for data sources
The UI hooks for data source reads were missed during planning. Move the
hook calls to immediately before and after the ReadDataSource calls to
ensure they are called during both plan and apply.
2022-03-08 13:06:30 -05:00
James Bardin
a02d7cc96a account for diagnostics when fetching schemas
Maybe we can ensure schemas are all loaded at this point, but we can
tackle that later.
2022-03-04 15:51:36 -05:00
James Bardin
6d33de8a9d fixup analysis calls from rebase 2022-03-04 15:51:36 -05:00
Martin Atkins
1425374371 providers: A type for all schemas for a particular provider
Previously the "providers" package contained only a type for representing
the schema of a particular object within a provider, and the terraform
package had the responsibility of aggregating many of those together to
describe the entire surface area of a provider.

Here we move what was previously terraform.ProviderSchema to instead be
providers.Schemas, retaining its existing API otherwise, and leave behind
a type alias to allow us to gradually update other references over time.

We've gradually been shrinking down the responsibilities of the
"terraform" package to just representing the graph components and
behaviors anyway, but the specific motivation for doing this _now_ is to
allow for other packages to both be called by the terraform package _and_
work with provider schemas at the same time, without creating a package
dependency cycle: instead, these other packages can just import the
"providers" package and not need to import the "terraform" package at all.

For now this does still leave the responsibility for _building_ a
providers.Schemas object over in the "terraform" package, because it's
currently doing that as part of some larger work that isn't easily
separable, and so reorganizing that would be a more involved and riskier
change than just moving the existing type elsewhere.
2022-03-04 15:51:36 -05:00
Alisdair McDiarmid
45d0c04707 core: Add fallback for JSON syntax error messages
Custom variable validations specified using JSON syntax would always
parse error messages as string literals, even if they included template
expressions. We need to be as backwards compatible with this behaviour
as possible, which results in this complex fallback logic. More detail
about this in the extensive code comments.
2022-03-04 15:39:31 -05:00
Alisdair McDiarmid
b59bffada6 core: Evaluate pre/postconditions during validate
During the validation walk, we attempt to proactively evaluate check
rule condition and error message expressions. This will help catch some
errors as early as possible.

At present, resource values in the validation walk are of dynamic type.
This means that any references to resources will cause validation to be
delayed, rather than presenting useful errors. Validation may still
catch other errors, and any future changes which cause better type
propagation will result in better validation too.
2022-03-04 15:39:31 -05:00
Alisdair McDiarmid
b06fe04621 core: Check rule error message expressions
Error messages for preconditions, postconditions, and custom variable
validations have until now been string literals. This commit changes
this to treat the field as an HCL expression, which must evaluate to a
string. Most commonly this will either be a string literal or a template
expression.

When the check rule condition is evaluated, we also evaluate the error
message. This means that the error message should always evaluate to a
string value, even if the condition passes. If it does not, this will
result in an error diagnostic.

If the condition fails, and the error message also fails to evaluate, we
fall back to a default error message. This means that the check rule
failure will still be reported, alongside diagnostics explaining why the
custom error message failed to render.

As part of this change, we also necessarily remove the heuristic about
the error message format. This guidance can be re-added in future as part
of a configuration hint system.
2022-03-04 15:35:39 -05:00
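For example (variable name hypothetical), a template expression is now a valid error message, evaluated whenever the condition is checked; if the message itself fails to evaluate, the default fallback message described above is used instead.

	variable "region" {
	  type = string

	  validation {
	    condition = contains(["us-east-1", "us-west-2"], var.region)
	    # Evaluated as an HCL expression; must produce a string.
	    error_message = "Region ${var.region} is not supported."
	  }
	}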
kmoe
161374725c
Warn when ignore_changes includes a Computed attribute (#30517)
* ignore_changes attributes must exist in schema

Add a test verifying that attempting to add a nonexistent attribute to
ignore_changes throws an error.

* ignore_changes cannot be used with Computed attrs

Return a warning if a Computed attribute is present in ignore_changes,
unless the attribute is also Optional.

ignore_changes on a non-Optional Computed attribute is a no-op, so the user
likely did not want to set this in config.
An Optional Computed attribute, however, is still subject to ignore_changes
behaviour, since it is possible to make changes in the configuration that
Terraform must ignore.
2022-02-18 10:38:29 +00:00
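A hedged sketch of configuration that now draws the warning, assuming "arn" is a Computed-only (not Optional) attribute in the provider's schema:

	resource "aws_instance" "example" {
	  ami           = "ami-0abcdef1234567890"
	  instance_type = "t3.micro"

	  lifecycle {
	    # Warning: "arn" is read-only, so ignoring changes to it is a
	    # no-op; an Optional+Computed attribute would not warn.
	    ignore_changes = [arn]
	  }
	}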
Alisdair McDiarmid
639eb5212f core: Remove unused PlanOpts.Validate
This vestigial field was written to but never read.
2022-02-03 14:16:25 -05:00
Alisdair McDiarmid
cdae6d4396 core: Add context tests for pre/post conditions 2022-01-31 15:38:26 -05:00
Alisdair McDiarmid
a95ad997e1 core: Document postconditions as valid use of self
This is not currently gated by the experiment only because it is awkward
to do so in the context of evaluationStateData, which doesn't have any
concept of experiments at the moment.
2022-01-31 14:34:35 -05:00
Martin Atkins
5573868cd0 core: Check pre- and postconditions for resources and output values
If the configuration contains preconditions and/or postconditions for any
objects, we'll check them during evaluation of those objects and generate
errors if any do not pass.

The handling of post-conditions is particularly interesting here because
we intentionally evaluate them _after_ we've committed our record of the
resulting side-effects to the state/plan, with the intent that future
plans against the same object will keep failing until the problem is
addressed either by changing the object so it would pass the postcondition
or changing the postcondition to accept the current object. That then
avoids the need for us to proactively taint managed resources whose
postconditions fail, as we would for provisioner failures: instead, we can
leave the resolution approach up to the user to decide.

Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
2022-01-31 14:02:53 -05:00
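A minimal sketch of both block types under stated assumptions (hypothetical resource and attribute names); per the commit message, the postcondition is evaluated after the result is committed to the plan/state, so a failure keeps recurring on future plans until addressed.

	variable "environment" {
	  type    = string
	  default = "production"
	}

	resource "aws_instance" "example" {
	  ami           = "ami-0abcdef1234567890"
	  instance_type = "t3.micro"

	  lifecycle {
	    precondition {
	      condition     = var.environment != ""
	      error_message = "environment must be set."
	    }

	    postcondition {
	      # "self" refers to this resource's resulting value.
	      condition     = self.public_dns != ""
	      error_message = "Instance must have a public DNS name."
	    }
	  }
	}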
Martin Atkins
c827c049fe terraform: Precondition and postcondition blocks generate dependencies
If a resource or output value has a precondition or postcondition rule
then anything the condition depends on is a dependency of the object,
because the condition rules will be evaluated as part of visiting the
relevant graph node.
2022-01-28 11:00:29 -05:00
Martin Atkins
4f41a0a1fe configs: Generalize "VariableValidation" as "CheckRule"
This construct of a block containing a condition and an error message will
be useful for other sorts of blocks defining expectations or contracts, so
we'll give it a more generic name in anticipation of it being used in
other situations.
2022-01-28 11:00:29 -05:00
Alisdair McDiarmid
e21d5d36f4 core: Fix plan write/state write ordering bug 2022-01-26 15:33:18 -05:00
James Bardin
684ed7505d remove synthetic default expression for variables
Now that variable evaluation checks for a nil expression the graph
transformer does not need to generate a synthetic expression for
variable defaults. This means that all default handling is now located
in one place, and we are not surprised by a configuration expression
showing up which doesn't actually exist in the configuration.

Rename nodeModuleVariable.evalModuleCallArgument to evalModuleVariable.
This method no longer handles only the module call argument; it is
also dealing with the variable declaration defaults and validation
statements.

Add additional tests for validation with a non-nullable variable.
2022-01-10 16:22:33 -05:00
Martin Atkins
9ebc3e1cd2 core: More accurate error message for invalid variable values
In earlier Terraform versions we had an extra validation step prior to
the graph walk which tried to partially validate root module input
variable values (just checking their type constraints) and then return
error messages which specified as accurately as possible where the value
had originally come from.

We're now handling that sort of validation exclusively during the graph
walk so that we can share the main logic between both root module and
child module variable values, but previously that shared code wasn't
able to generate such specific information about where the values had
originated, because it was adapted from code originally written to only
deal with child module variables.

Here then we restore a similar level of detail as before, when we're
processing root module variables. For child module variables, we use
synthetic InputValue objects which state that the value was declared
in the configuration, thus causing us to produce a similar sort of error
message as we would've before which includes a source range covering
the argument expression in the calling module block.
2022-01-10 12:26:54 -08:00
Martin Atkins
36c4d4c241 core and backend: remove redundant handling of default variable values
Previously we had three different layers all thinking they were
responsible for substituting a default value for an unset root module
variable:
 - the local backend, via logic in backend.ParseVariableValues
 - the context.Plan function (and other similar functions) trying to
   preprocess the input variables using
   terraform.mergeDefaultInputVariableValues .
 - the newer prepareFinalInputVariableValue, which aims to centralize all
   of the variable preparation logic so it can be common to both root and
   child module variables.

The second of these was also trying to handle type constraint checking,
which is also the responsibility of the central function and not something
we need to handle so early.

Only the last of these consistently handles both root and child module
variables, and so is the one we ought to keep. The others are now
redundant and are causing prepareFinalInputVariableValue to get a slightly
corrupted view of the caller's chosen variable values.

To rectify that, here we remove the two redundant layers altogether and
have unset root variables pass through as cty.NilVal all the way to the
central prepareFinalInputVariableValue function, which will then handle
them in a suitable way which properly respects the "nullable" setting.

This commit includes some test changes in the terraform package to make
those tests no longer rely on the mergeDefaultInputVariableValues logic
we've removed, and to instead explicitly set cty.NilVal for all unset
variables to comply with our intended contract for PlanOpts.SetVariables,
and similar. (This is so that we can more easily catch bugs in callers
where they _don't_ correctly handle input variables; it allows us to
distinguish between the caller explicitly marking a variable as unset vs.
not describing it at all, where the latter is a bug in the caller.)
2022-01-10 12:26:54 -08:00
Martin Atkins
37b1413ab3 core: Handle root and child module input variables consistently
Previously we had a significant discrepancy between these two situations:
we wrote the raw root module variables directly into the EvalContext and
then applied type conversions only at expression evaluation time, while
for child modules we converted and validated the values while visiting
the variable graph node and wrote only the _final_ value into the
EvalContext.

This confusion seems to have been the root cause for #29899, where
validation rules for root module variables were being applied at the wrong
point in the process, prior to type conversion.

To fix that bug and also make similar mistakes less likely in the future,
I've made the root module variable handling more like the child module
variable handling in the following ways:
 - The "raw value" (exactly as given by the user) lives only in the graph
   node representing the variable, which mirrors how the _expression_
   for a child module variable lives in its graph node. This means that
   the flow for the two is the same except that there's no expression
   evaluation step for root module variables, because they arrive as
   constant values from the caller.
 - The set of variable values in the EvalContext is always only "final"
   values, after type conversion is complete. That in turn means we no
   longer need to do "just in time" conversion in
   evaluationStateData.GetInputVariable, and can just return the value
   exactly as stored, which is consistent with how we handle all other
   references between objects.

This diff is noisier than I'd like because of how much it takes to wire
a new argument (the raw variable values) through to the plan graph builder,
but those changes are pretty mechanical and the interesting logic lives
inside the plan graph builder itself, in NodeRootVariable, and
the shared helper functions in eval_variable.go.

While here I also took the opportunity to fix a historical API wart in
EvalContext, where SetModuleCallArguments was built to take a set of
variable values all at once but our current caller always calls with only
one at a time. That is now just SetModuleCallArgument singular, to match
with the new SetRootModuleArgument to deal with root module variables.
2022-01-10 12:26:54 -08:00
Martin Atkins
483c38aca1 core: Remove TestContext2Validate_PlanGraphBuilder
This test seems to be a holdover from the many-moons-ago switch from one
graph for all operations to separate graphs for plan and apply. It is
effectively just a copy of a subset of the content of the Context.Validate
function and is a maintainability hazard because it tends to lag behind
updates to that function unless changes there happen to make it fail.

This test doesn't cover anything that the other validate context tests
don't exercise as an implementation detail of calling Context.Validate,
so I've just removed it with no replacement.
2022-01-10 12:26:54 -08:00
Martin Atkins
171e7ef6d9 core: Invalid for_each argument messaging improvements
Our original messaging here was largely just lifted from the equivalent
message for unknown values in "count", and it didn't really include any
specific advice on how to update a configuration to make for_each valid,
instead focusing only on the workaround of using the -target planning
option.

It's tough to pack in a fully-actionable suggestion here since unknown
values in for_each keys tend to be a gnarly architectural problem rather
than a local quirk -- when data flows between modules it can sometimes be
unclear whether it'll end up being used in a context which allows unknown
values.

I did my best to summarize the advice we've been giving in the community forum
though, in the hope that more people will be able to address this for
themselves without asking for help, until we're one day able to smooth
this out better with a mechanism such as "partial apply".
2022-01-10 12:23:13 -08:00
James Bardin
58e001c22a return graph errors from Context.Apply
errors from building during apply were lost
2021-12-17 14:00:57 -05:00
James Bardin
b213386a73 InstancesForModule should not panic
instances.Set is only used after all instances have been processed, so
it should therefore only handle known instances and not panic when given
an address that traverses an unexpanded module.
2021-12-17 13:31:41 -05:00
James Bardin
a5017bff2f failing test moved with target 2021-12-16 18:20:49 -05:00
James Bardin
e22ab70e03
Merge pull request #30171 from hashicorp/jbardin/revert-validate-for-each
use `cty.DynamicVal` for expanded resources during validation
2021-12-15 08:57:43 -05:00
James Bardin
645fcc5f12 test for lookup regression during validation 2021-12-15 08:43:37 -05:00
James Bardin
71b9682e8c ensure there is always a valid return value 2021-12-14 18:02:57 -05:00
James Bardin
d469e86331 revert 6b8b0617
Revert the evaluation change from #29862.
While returning a dynamic value for all expanded resources during
validation is not optimal, trying to work around this using unknown maps
and lists is causing other undesirable behaviors during evaluation.
2021-12-14 17:58:10 -05:00
Martin Atkins
ec6fe93fa8 instances: Non-existing module instance has no resource instances
Previously we were treating it as a programming error to ask for the
instances of a resource inside an instance of a module that is declared
but whose declaration doesn't include the given instance key.

However, that's actually a valid situation which can arise if, for
example, the user has changed the repetition/expansion mode for an
existing module call and so now all of the resource instance addresses it
previously contained are "orphaned".

To represent that, we'll instead say that an invalid instance key of a
declared module behaves as if it contains no resource instances at all,
regardless of the configurations of any resources nested inside. This
then gives the result needed to successfully detect all of the former
resource instances as "orphaned" and plan to destroy them.

However, this then introduces a new case for
NodePlannableResourceInstanceOrphan.deleteActionReason to deal with: the
resource configuration still exists (because configuration isn't aware of
individual module/resource instances) but the module instance does not.
This actually allows us to resolve, at least partially, a previously
missing piece: explaining to the user why the resource instances are planned
for deletion in that case, finally allowing us to be explicit to the user
that it's because of the module instance being removed, which
internally we call plans.ResourceInstanceDeleteBecauseNoModule.

Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
2021-12-13 10:03:50 -05:00
Alisdair McDiarmid
5ddf5e1f7d core: Fix schema loading for deleted resources
Resource instances removed from the configuration would previously use
the implied provider address. This is correct for default providers, but
incorrect for those from other namespaces or hosts. The fix here is to
use the stored provider config if it is present.
2021-11-24 15:23:20 -05:00
James Bardin
9d19086c8e test for null map and fix lost map marks
Add a test to ensure ignore_changes does not change null maps to empty
maps.

Fix marks being lost when a marked map reverts the changes ignored within
that map.
2021-11-11 10:44:39 -05:00
James Bardin
40174b634a ignore_changes changing a null map to empty
Fix an error where using `ignore_changes` would cause some null maps to
be saved as an empty map.
2021-11-11 10:44:05 -05:00
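A hedged sketch of the regression, using null_resource's map-typed "triggers" argument: with the fix, a null map in the prior state stays null in the plan rather than becoming an empty map.

	resource "null_resource" "example" {
	  # "triggers" is explicitly null; ignore_changes must not rewrite
	  # the planned value from null to {}.
	  triggers = null

	  lifecycle {
	    ignore_changes = [triggers]
	  }
	}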
kmoe
64635edfb9
Merge pull request #29885 from hashicorp/kmoe/more-init-interrupts
command: make module installation interruptible
2021-11-11 12:37:49 +00:00
kmoe
40ec62c139
command: make module installation interruptible
Earlier work to make "terraform init" interruptible made the getproviders
package context-aware in order to allow provider installation to be cancelled.

Here we make a similar change for module installation, which is now also
cancellable with SIGINT. This involves plumbing context through initwd and
getmodules. Functions which can make network requests now include a context
parameter whose cancellation cancels those requests.

Since the module installation code is shared, "terraform get" is now
also interruptible during module installation.
2021-11-11 12:28:10 +00:00
James Bardin
647d36062c we don't need an abs provider address
We only look up providers by provider type to get the schema, so there's
no reason to generate anything more specific.
2021-11-02 15:38:53 -04:00
James Bardin
6b8b0617bb generate precise resource types during validate
Allow `GetResource` to return correctly typed values during validation,
rather than relying on `cty.DynamicVal` as a placeholder. This allows
other dependent expressions to be more correctly evaluated.
2021-11-02 15:38:36 -04:00
James Bardin
b91d9435ea
Merge pull request #29832 from hashicorp/jbardin/nullable-variable
configs: explicitly nullable variable values
2021-11-01 12:46:31 -04:00
James Bardin
6d050c166e update diagnostic message 2021-11-01 12:28:55 -04:00
Martin Atkins
94cbc8fb5d experiments: config_driven_move has concluded
Based on feedback during earlier alpha releases, we've decided to move
forward with the current design for the first phase of config-driven
refactoring.

Therefore here we've marked the experiment as concluded with no changes
to the most recent incarnation of the functionality. The other changes
here are all just updating test fixtures to no longer declare that they
are using experimental features.
2021-11-01 08:46:15 -07:00
James Bardin
1f873776e8 update null variable error text 2021-11-01 09:12:28 -04:00
James Bardin
b71b393cf6 move nullable check to variable input evaluation 2021-11-01 09:02:32 -04:00
James Bardin
f0a64eb456 configs: explicitly nullable variable values
The current behavior of module input variables is to allow users to
override a default by assigning `null`, which works contrary to the
behavior of resource attributes, and prevents explicitly accepting a
default when the input must be defined in the configuration.

Adding a new variable attribute called `nullable` allows explicitly
defining whether a variable can be set to null. The current default
behavior is that of `nullable=true`.

Setting `nullable=false` in a variable block indicates that the variable
value can never be null. This either requires a non-null input value, or
a non-null default value. In the case of the latter, we also opt-in to
the new behavior of a `null` input value taking the default rather than
overriding it.

In a future language edition where we make `nullable=false` the default,
setting `nullable=true` will allow the legacy behavior of `null`
overriding a default value. The only future configuration in which this
would be required even if the legacy behavior were not desired is when
setting an optional+nullable value. In that case `default=null` would
also be needed and we could therefore imply `nullable=true` without
requiring it in the configuration.
2021-10-29 13:59:46 -04:00
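A short sketch of the new attribute: with nullable = false and a default, a caller assigning null now receives the default rather than overriding it (module name and source below are hypothetical).

	variable "retention_days" {
	  type     = number
	  default  = 30
	  nullable = false
	}

	# In the calling module, var.retention_days now evaluates to 30:
	#
	#   module "logs" {
	#     source         = "./modules/logs"
	#     retention_days = null
	#   }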
James Bardin
d03a037567 insert panic handlers 2021-10-28 11:51:39 -04:00
Martin Atkins
bee7403f3e command/workspace_delete: Allow deleting a workspace with empty husks
Previously we would reject attempts to delete a workspace if its state
contained any resources at all, even if none of the resources had any
resource instance objects associated with it.

Nowadays there isn't any situation where the normal Terraform workflow
will leave behind resource husks, and so this isn't as problematic as it
might've been in the v0.12 era, but nonetheless what we actually care
about for this check is whether there might be any remote objects that
this state is tracking, and for that it's more precise to look for
non-nil resource instance objects, rather than whole resources.

This also includes some adjustments to our error messaging to give more
information about the problem and to use terminology more consistent with
how we currently talk about this situation in our documentation and
elsewhere in the UI.

We were also using the old State.HasResources method as part of some of
our tests. I considered preserving it to avoid changing the behavior of
those tests, but the new check seemed close enough to the intent of those
tests that it wasn't worth maintaining this method that wouldn't be used
in any main code anymore. I've therefore updated those tests to use
the new HasResourceInstanceObjects method instead.
2021-10-13 13:54:11 -07:00
James Bardin
87e852c832
Merge pull request #29741 from hashicorp/jbardin/test-fix
fix test fixture with multiple provider instances
2021-10-13 09:53:02 -04:00
James Bardin
73b1263a86 wrap multiple provider creations into a factory fn
When a test uses multiple instances of the same provider, we may need to
have separate objects to prevent overwriting of the MockProvider state.
Create a completely new MockProvider in each factory function call
rather than re-using the original provider value.
2021-10-12 17:47:50 -04:00
James Bardin
903ae5edfd fix test fixtures with multiple providers
Allow these to share the same backing MockProvider.
2021-10-12 14:34:34 -04:00
James Bardin
656f03b250 fix test fixture that had the instance in the wrong module
Make the state match the fixture config. The old test was not
technically invalid, but because it caused multiple instances of the
provider to be created, they were backed by the same MockProvider value
resulting in the `*Called` fields interfering.
2021-10-12 13:53:02 -04:00
James Bardin
3f4b680f1c Check for nil change during apply
Because NodeAbstractResourceInstance.readDiff can return a nil change,
we must check for that in all callers.
2021-10-08 16:46:29 -04:00
James Bardin
a036109bc1 add comment about when we call ConfigureProvider 2021-10-08 15:23:36 -04:00
James Bardin
03f71c2f06 fixup tests for MockProvider changes
Resetting the *Called fields and enforcing configuration broke a few
tests.
2021-10-08 08:42:06 -04:00
James Bardin
22b400b8de skip refreshing deposed during destroy plan
The destroy plan should not require a configured provider (the complete
configuration is not evaluated, so providers cannot be configured).

Deposed instances were being refreshed during the destroy plan, because
this instance type is only ever destroyed and shares the same
implementation between plan and walkPlanDestroy. Skip refreshing during
walkPlanDestroy.
2021-10-07 16:51:48 -04:00
James Bardin
ad9944e523 test that providers are configured for calls
Have the MockProvider ensure that Configure is always called before any
methods that may require a configured provider.

Ensure the MockProvider *Called fields are zeroed out when re-using the
provider instance.
2021-10-07 16:48:56 -04:00
James Bardin
c68422d125
Merge pull request #29682 from hashicorp/jbardin/less-data-depends_on
Refine data depends_on dependency checks
2021-10-04 11:39:21 -04:00
Martin Atkins
6a98e4720c plans/planfile: Create takes most arguments via a struct type
Previously the planfile.Create function had accumulated probably already
too many positional arguments, and I'm intending to add another one in
a subsequent commit and so this is preparation to make the callsites more
readable (subjectively) and make it clearer how we can extend this
function's arguments to include further components in a plan file.

There's no difference in observable functionality here. This is just
passing the same set of arguments in a slightly different way.
2021-10-01 14:43:58 -07:00
Martin Atkins
8d193ad268 core: Simplify and centralize plugin availability checks
Historically the responsibility for making sure that all of the available
providers are of suitable versions and match the appropriate checksums has
been split rather inexplicably over multiple different layers, with some
of the checks happening as late as creating a terraform.Context.

We're gradually iterating towards making that all be handled in one place,
but in this step we're just cleaning up some old remnants from the
main "terraform" package, which is now no longer responsible for any
version or checksum verification and instead just assumes it's been
provided with suitable factory functions by its caller.

We do still have a pre-check here to make sure that we at least have a
factory function for each plugin the configuration seems to depend on,
because if we don't do that up front then it ends up getting caught
instead deep inside the Terraform runtime, often inside a concurrent
graph walk and thus it's not deterministic which codepath will happen to
catch it on a particular run.

As of this commit, this actually does leave some holes in our checks: the
command package is using the dependency lock file to make sure we have
exactly the provider packages we expect (exact versions and checksums),
which is the most crucial part, but we don't yet have any spot where
we make sure that the lock file is consistent with the current
configuration, and we are no longer preserving the provider checksums as
part of a saved plan.

Both of those will come in subsequent commits. While it's unusual to have
a series of commits that briefly subtracts functionality and then adds
back in equivalent functionality later, the lock file checking is the only
part that's crucial for security reasons, with everything else mainly just
being to give better feedback when folks seem to be using Terraform
incorrectly. The other bits are therefore mostly cosmetic and okay to be
absent briefly as we work towards a better design that is clearer about
where that responsibility belongs.
2021-10-01 14:43:58 -07:00
James Bardin
618e9cf8ec test for unexpected data reads 2021-09-30 17:13:33 -04:00
James Bardin
016463ea9c don't check all ancestors for data depends_on
Only check depends_on ancestors for transitive dependencies when we're not
pointed directly at a resource. We can't be much more precise here,
since in order to maintain our guarantee that data sources will wait for
explicit dependencies, if those dependencies happen to be a module,
output, or variable, we have to find some upstream managed resource in
order to check for a planned change.
2021-09-30 16:43:09 -04:00
Martin Atkins
f60d55d6ad core: Emit only one warning for move collisions in destroy-plan mode
Our current implementation of destroy planning includes secretly running a
normal plan first, in order to get its effect of refreshing the state.

Previously our warning about colliding moves would betray that
implementation detail because we'd return it from both of our planning
operations here and thus show the message twice. That would also have
happened in theory for any other warnings emitted by both plan operations,
but it's the move collision warning that made it immediately visible.

We'll now only return warnings from the initial plan if we're also
returning errors from that plan, and thus the warnings of both plans can
never mix together into the same diags and thus we'll avoid duplicating
any warnings.

This does mean that we'd lose any warnings which might hypothetically
emerge only from the hidden normal plan and not from the subsequent
destroy plan, but we'll accept that as an okay tradeoff here because those
warnings are likely to not be super relevant to the destroy case anyway,
or else we'd emit them from the destroy-plan walk too.
2021-09-27 15:46:36 -07:00
Alisdair McDiarmid
f1e9d88ddc
Merge pull request #29640 from hashicorp/alisdair/fix-refresh-only-with-orphans
core: Fix refresh-only interaction with orphans
2021-09-24 09:25:46 -04:00
Martin Atkins
d97ef10bb8 core: Don't return other errors if move statements are invalid
Because our validation rules depend on some dynamic results produced by
actually running the plan, we deal with moves in a "backwards" order where
we try to apply them first -- ignoring anything strange we might find --
and then validate the original statements only after planning.

An unfortunate consequence of that approach is that when the move
statements are invalid it's likely that move execution will not fully
complete, and so the generated plan is likely to be incorrect and might
well include errors resulting from the unresolved moves.

To mitigate that, here we let any move validation errors supersede all
other diagnostics that the plan phase might've generated, in the hope that
it'll help the user focus on fixing the incorrect move statements without
creating confusion by reporting errors that only appeared as a quirk of
how Terraform worked around the invalid move statements earlier.
2021-09-23 14:37:08 -07:00
Martin Atkins
1bff623fd9 core: Report a warning if any moves get blocked
In most cases Terraform will be able to automatically fully resolve all
of the pending move statements before creating a plan, but there are some
edge cases where we can end up wanting to move one object to a location
where another object is already declared.

One relatively-obvious example is if someone uses "terraform state mv" in
order to create a set of resource instance bindings that could never have
arising in normal Terraform use.

A less obvious example arises from the interactions between moves at
different levels of granularity. If we are both moving a module to a new
address and moving a resource into an instance of the new module at the
same time, the old module might well have already had a resource of the
same name and so the resource move will be unresolvable.

In these situations Terraform will move the objects as far as possible,
but because it's never valid for a move "from" address to still be
declared in the configuration Terraform will inevitably always plan to
destroy the objects that didn't find a final home. To give some additional
explanation for that result, here we'll add a warning which describes
what happened.

This is not a particularly actionable warning because we don't really
have enough information to guess what the user intended, but we do at
least prompt that they might be able to use the "terraform state" family
of subcommands to repair the ambiguous situation before planning, if they
want a different result than what Terraform proposed.
2021-09-23 14:37:08 -07:00
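A hedged sketch of the granularity interaction described above (module and resource names hypothetical): if module.old_network already tracked its own aws_subnet.main, the second statement is blocked, the leftover object is planned for destruction, and this new warning explains why.

	moved {
	  from = module.old_network
	  to   = module.network
	}

	moved {
	  # Blocked when the module move above already placed an object at
	  # module.network.aws_subnet.main.
	  from = aws_subnet.main
	  to   = module.network.aws_subnet.main
	}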
Martin Atkins
a1a713cf28 core: Report ActionReasons when we plan to delete "orphans"
There are a few different reasons why a resource instance tracked in the
prior state might be considered an "orphan", but previously we reported
them all identically in the planned changes.

In order to help users understand the reason for a surprising planned
delete, we'll now try to specify an additional reason for the planned
deletion, covering all of the main reasons why that could happen.

This commit only introduces the new detail to the plans.Changes result,
though it also incidentally exposes it as part of the JSON plan result
in order to keep that working without returning errors in these new
cases. We'll expose this information in the human-oriented UI output in
a subsequent commit.
2021-09-23 14:37:08 -07:00
Alisdair McDiarmid
ceb580ec40 core: Fix refresh-only interaction with orphans
When planning in refresh-only mode, we must not remove orphaned
resources due to changed count or for_each values from the planned
state. This was previously happening because we failed to pass through
the plan's skip-plan-changes flag to the instance orphan node.
2021-09-23 16:38:08 -04:00
Martin Atkins
83f0376673 refactoring: ApplyMoves new return type
When we originally stubbed ApplyMoves we didn't know yet how exactly we'd
be using the result, so we made it a double-indexed map allowing looking
up moves in both directions.

However, in practice we only actually need to look up old addresses by new
addresses, and so this commit first removes the double indexing so that
each move is only represented by one element in the map.

We also need to describe situations where a move was blocked, because in
a future commit we'll generate some warnings in those cases. Therefore
ApplyMoves now returns a MoveResults object which contains both a map of
changes and a map of blocks. The map of blocks isn't used yet as of this
commit, but we'll use it in a later commit to produce warnings within
the "terraform" package.
2021-09-22 09:01:10 -07:00
Martin Atkins
f0034beb33 core: refactoring.ImpliedMoveStatements replaces NodeCountBoundary
Going back a long time we've had a special magic behavior which tries to
recognize a situation where a module author either added or removed the
"count" argument from a resource that already has instances, and to
silently rename the zeroth or no-key instance so that we don't plan to
destroy and recreate the associated object.

Now we have a more general idea of "move statements", and specifically
the idea of "implied" move statements which replicates the same heuristic
we used to use for this behavior, we can treat this magic renaming rule as
just another "move statement", special only in that Terraform generates it
automatically rather than it being written out explicitly in the
configuration.

In return for wiring that in, we can now remove altogether the
NodeCountBoundary graph node type and its associated graph transformer,
CountBoundaryTransformer. We handle moves as a preprocessing step before
building the plan graph, so we no longer need to include any special nodes
in the graph to deal with that situation.

The test updates here are mainly for the graph builders themselves, to
acknowledge that indeed we're no longer inserting the NodeCountBoundary
vertices. The vertices that NodeCountBoundary previously depended on now
become dependencies of the special "root" vertex, although in many cases
here we don't see that explicitly because of the transitive reduction
algorithm, which notices when there's already an equivalent indirect
dependency chain and removes the redundant edge.

We already have plenty of test coverage for these "count boundary" cases
in the context tests whose names start with TestContext2Plan_count and
TestContext2Apply_resourceCount, all of which continued to pass here
without any modification and so are not visible in the diff. The test
functions particularly relevant to this situation are:
 - TestContext2Plan_countIncreaseFromNotSet
 - TestContext2Plan_countDecreaseToOne
 - TestContext2Plan_countOneIndex
 - TestContext2Apply_countDecreaseToOneCorrupted

The last of those in particular deals with the situation where we have
both a no-key instance _and_ a zero-key instance in the prior state, which
is interesting here because it exercises an intentional interaction
between refactoring.ImpliedMoveStatements and refactoring.ApplyMoves,
where we intentionally generate an implied move statement that produces
a collision and then expect ApplyMoves to deal with it in the same way as
it would deal with all other collisions, and thus ensure we handle both
the explicit and implied collisions in the same way.

This does affect some UI-level tests, because a nice side-effect of this
new treatment of this old feature is that we can now report explicitly
in the UI that we're assigning new addresses to these objects, whereas
before we just said nothing and hoped the user would just guess what had
happened and why they therefore weren't seeing a diff.

The backend/local plan tests actually had a pre-existing bug where they
were using a state with a different instance key than the config called
for but getting away with it because we'd previously silently fix it up.
That's still fixed up, but now done with an explicit mention in the UI
and so I made the state consistent with the configuration here so that the
tests would be able to recognize _real_ differences where present, as
opposed to the errant difference caused by that inconsistency.
2021-09-20 09:06:22 -07:00
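As an illustration of the heuristic now expressed as an implied move statement (addresses hypothetical): adding "count" to an existing resource implies moving the no-key instance to index zero, now reported explicitly in the UI rather than silently fixed up.

	# Previously declared without count, so the tracked instance
	# address was aws_instance.web.
	resource "aws_instance" "web" {
	  count         = 2
	  ami           = "ami-0abcdef1234567890"
	  instance_type = "t3.micro"
	}

	# Terraform now behaves as if the configuration contained:
	#
	#   moved {
	#     from = aws_instance.web
	#     to   = aws_instance.web[0]
	#   }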
Alisdair McDiarmid
638784b195 cli: Omit move-only drift, except for refresh-only
The set of drifted resources now includes move-only changes, where the
object value is identical but a move has been executed. In normal
operation, we previously displayed these moves twice: once as part of
drift output, and once as part of planned changes.

As of this commit we omit move-only changes from drift display, except
for refresh-only plans. This fixes the redundant output.
2021-09-17 14:47:00 -04:00
Alisdair McDiarmid
61b2d8e3fe core: Plan drift includes move-only changes
Previously, drifted resources included only updates and deletes. To
correctly display the full changes which would result as part of a
refresh-only apply, the drifted resources must also include move-only
changes.
2021-09-17 14:47:00 -04:00
Alisdair McDiarmid
d425c26d77
Merge pull request #29589 from hashicorp/alisdair/planfile-drifted-resources
core: Compute resource drift during plan phase, store in plan file
2021-09-17 14:23:04 -04:00
James Bardin
1bd5987a8c
Merge pull request #29573 from hashicorp/renamed-typo
Correct terraform.env deprecation message typo
2021-09-16 17:02:20 -04:00
Alisdair McDiarmid
bebf1ad23a core: Compute resource drift after plan walk
Rather than delaying resource drift detection until it is ready to be
presented, here we perform that computation after the plan walk has
completed. The resulting drift is represented like planned resource
changes, using a slice of ResourceInstanceChangeSrc values.
2021-09-16 15:22:37 -04:00