Commit Graph

25 Commits

Author SHA1 Message Date
Martin Atkins
72dd14ca5c core: Do everything except the actual action for plans.NoOp
Previously we tried to early-exit before doing anything at all for any
no-op changes, but that means we also skip some ancillary steps like
evaluating any preconditions/postconditions.

Now we'll skip only the main action itself for plans.NoOp, and still run
through all of the other side-steps.

Since one of those other steps is emitting events through the hooks
interface, this means that now no-op actions are visible to hooks, whereas
before we always filtered them out before calling. I therefore added some
additional logic to the hooks to filter them out at the UI layer instead;
the decision for whether or not to report that we visited a particular
object and found no action required seems defensible as a UI-level concern
anyway.
2022-07-22 15:27:15 -07:00
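A self-contained sketch of the division of labor described above, using simplified stand-in types rather than Terraform's real hooks interface: core now emits an event even for a no-op visit, and the UI layer decides not to report it.

```go
package main

import "fmt"

// Action is a stand-in for plans.Action; NoOp marks an object that was
// visited but needed no change.
type Action int

const (
	NoOp Action = iota
	Create
	Update
	Delete
)

// reportAction plays the role of a UI-layer hook. Core no longer filters
// out no-op actions before calling hooks, so the filtering happens here.
func reportAction(addr string, action Action) {
	if action == NoOp {
		return // visited, nothing to do: not worth reporting to the user
	}
	fmt.Printf("%s: action %d\n", addr, action)
}

func main() {
	reportAction("aws_instance.a", NoOp)   // silently skipped
	reportAction("aws_instance.b", Update) // reported
}
```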
James Bardin
acba1159e4 add provider metas to destroy plan 2022-07-06 13:47:35 -04:00
James Bardin
ccd2c7820a plan resource destroy
Call PlanResourceDestroy during a destroy plan.

This allows providers two new abilities:
- They can evaluate whether the plan is valid, notifying users of any
  potential errors before starting an apply which may not be able to
  complete.
- They can inspect and modify their private data during a destroy plan
  just like they can with any other plan operation.
2022-07-06 13:47:35 -04:00
James Bardin
48309835b7 add unknown paths to diags for debugging
When a user reports a "Configuration contains unknown value" error,
there is no information on what might have been unknown during apply.
Add unknown attribute paths to the diagnostic message to provide some
more information when a reproduction may not be possible. Since this is
one of those "should never happen" types of errors which will be
reported to the developers directly, we can leave the format as the raw
internal representation for simplicity.
2022-05-23 13:28:30 -04:00
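A sketch of one way to collect unknown-value paths for such a diagnostic, using the go-cty API (illustrative, not the code from this commit); the raw %#v formatting stands in for the "raw internal representation" mentioned above.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// unknownPaths walks a value and collects the path of every unknown leaf
// so the paths can be appended to a "should never happen" diagnostic.
func unknownPaths(v cty.Value) []cty.Path {
	var paths []cty.Path
	cty.Walk(v, func(p cty.Path, v cty.Value) (bool, error) {
		if !v.IsKnown() {
			// Copy the path; Walk may reuse its backing array.
			paths = append(paths, append(cty.Path(nil), p...))
		}
		return true, nil
	})
	return paths
}

func main() {
	val := cty.ObjectVal(map[string]cty.Value{
		"id":   cty.UnknownVal(cty.String),
		"name": cty.StringVal("example"),
	})
	for _, p := range unknownPaths(val) {
		fmt.Printf("unknown value at %#v\n", p)
	}
}
```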
Martin Atkins
289bb60ce1 core: Defer on transitive dependencies for data resources with conditions
When a data resource is used for the purposes of verifying a condition
about an object managed elsewhere (e.g. if the managed resource doesn't
directly export all of the information required for the condition) it's
important that we defer the data resource read to the apply step if the
corresponding managed resource has any changes pending.

Typically we'd expect that to come "for free" but unfortunately we have
a pragmatic special case in our handling of data resources where we
normally defer to the apply step only if a _direct_ dependency of the data
resource has a change pending, and allow a plan-time read if there's
a pending change in an indirect dependency. This allowed us to preserve
some compatibility with the questionable historical behavior of always
reading data resources proactively unless the configuration contains
unknown values, since the arguably-more-correct behavior would've been a
regression for anyone who had been depending on that before.

Since preconditions and postconditions didn't exist until now, we are not
constrained in the same way by backward compatibility, and so we can adopt
the more correct behavior in the case where a data resource has conditions
specified. This does unfortunately make the handling of data resources
with conditions subtly inconsistent with those that don't, but this is
a better situation than the alternative where it would be easy to get into
a trapped situation where the remote system is invalid and it's impossible
to plan the change that would make it valid again because the conditions
evaluate too soon, prior to the fix being applied.
2022-05-11 11:01:38 -07:00
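A self-contained sketch of the deferral rule described above (not the actual planning code): with conditions present, any pending change among the data resource's dependencies, direct or transitive, defers the read to apply; without conditions, the older direct-dependency-only rule still applies.

```go
// deferDataRead reports whether a data resource read should wait until
// the apply step, per the behavior described in the commit message.
func deferDataRead(hasConditions, directDepPending, transitiveDepPending bool) bool {
	if hasConditions {
		// Conditions opt in to the stricter, more correct behavior.
		return directDepPending || transitiveDepPending
	}
	// Legacy-compatible behavior: only a pending change in a direct
	// dependency defers the read.
	return directDepPending
}
```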
Martin Atkins
4cffff24b1 core: Report reason for deferring data read until apply
We have two different reasons why a data resource might be read only
during apply, rather than during planning as usual: the configuration
contains unknown values, or the data resource as a whole depends on a
managed resource which itself has a change pending.

However, we didn't previously distinguish these two in a way that allowed
the UI to describe the difference, and so we confusingly reported both
as "config refers to values not yet known", which in turn led to a number
of reasonable questions about why Terraform was claiming that but then
immediately below showing the configuration entirely known.

Now we'll use our existing "ActionReason" mechanism to tell the UI layer
which of the two reasons applies to a particular data resource instance.
The "dependency pending" situation tends to happen in conjunction with
"config unknown", so we'll prefer to refer that the configuration is
unknown if both are true.
2022-05-09 11:12:47 -07:00
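A self-contained sketch (stand-in names, not the real plans.ActionReason values) of the preference described above: when both reasons apply, report the configuration-unknown one.

```go
type deferredReadReason string

const (
	readBecauseConfigUnknown     deferredReadReason = "config_unknown"
	readBecauseDependencyPending deferredReadReason = "dependency_pending"
)

// reasonForDeferredRead picks the single reason reported to the UI layer.
func reasonForDeferredRead(configKnown, dependencyPending bool) (deferredReadReason, bool) {
	switch {
	case !configKnown:
		// Preferred when both apply, since "dependency pending" tends
		// to come along with unknown configuration anyway.
		return readBecauseConfigUnknown, true
	case dependencyPending:
		return readBecauseDependencyPending, true
	default:
		return "", false // not deferred: read during planning as usual
	}
}
```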
James Bardin
29ecac0808 remove the use of data source prior state entirely
After data source handling was moved from a separate refresh phase into
the planning phase, reading the existing state was only used for
informational purposes. This had been reduced to reporting warnings when
the provider returned an unexpected value to try and help locate legacy
provider bugs, but any actual issues located from those warnings were
very few and far between.

Because the prior state cannot be reliably decoded when faced with
incompatible provider schema upgrades, and there is no longer any
significant reason to try and get the prior state at all, we can skip
the process entirely.
2022-04-11 10:19:45 -04:00
Alisdair McDiarmid
c5d10bdef1 core: Store condition block results in plan
In order to include condition block results in the JSON plan output, we
must store them in the plan and its serialization.

Terraform can evaluate condition blocks multiple times, so we must be
able to update the result. Accordingly, the plan.Conditions object is a
map with keys representing the condition block's address. Condition
blocks are not referenceable in any other context, so this address form
cannot be used anywhere in the configuration.

The commit includes a new test case for the JSON output of a
refresh-only plan, which is currently the only way for a failing
condition result to be rendered through this path.
2022-04-04 15:36:29 -04:00
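A self-contained sketch (simplified types, not the real plans package) of the shape described above: results keyed by the condition block's own address, overwritten on re-evaluation.

```go
// conditionResult holds the outcome of one condition block evaluation.
type conditionResult struct {
	Result       bool
	ErrorMessage string
}

// planConditions stands in for plan.Conditions: keyed by the condition
// block's address, a form not referenceable anywhere else in config.
type planConditions map[string]*conditionResult

// record stores or updates a result; Terraform may evaluate the same
// condition block more than once, so later results replace earlier ones.
func (p planConditions) record(addr string, ok bool, msg string) {
	p[addr] = &conditionResult{Result: ok, ErrorMessage: msg}
}
```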
Alisdair McDiarmid
a103c65140 core: Eval pre/postconditions in refresh-only mode
Evaluate precondition and postcondition blocks in refresh-only mode, but
report any failures as warnings instead of errors. This ensures that any
deviation from the contract defined by condition blocks is reported as
early as possible, without preventing the completion of a state refresh
operation.

Prior to this commit, Terraform evaluated output preconditions and data
source pre/postconditions as normal in refresh-only mode, while managed
resource pre/postconditions were not evaluated at all. This omission
could lead to confusing partial condition errors, or failure to detect
undesired changes which would otherwise cause resources to become
invalid.

Reporting the failures as errors also meant that changes retrieved
during refresh could cause the refresh operation to fail. This is also
undesirable, as the primary purpose of the operation is to update local
state. Precondition/postcondition checks are still valuable here, but
should be informative rather than blocking.
2022-03-11 13:32:40 -05:00
James Bardin
05a10f06d1 remove PreDiff and PostDiff hook calls
PreDiff and PostDiff hooks were designed to be called immediately before
and after the PlanResourceChange calls to the provider. Probably due to
the confusing legacy naming of the hooks, these were scattered about the
nodes involved with planning, causing the hooks to be called in a number
of places where they were never intended, including data sources and
destroy plans. Since these hooks are no longer used at all anyway, we can
remove the extra calls with no effect.

If we choose in the future to call PlanResourceChange for resource
destroy plans, the hooks can be re-inserted (even though they currently
are unused) into the new code path which must diverge from the current
combined path of managed and data sources.
2022-03-08 13:48:41 -05:00
James Bardin
dc668dff38 ensure UI hooks are called for data sources
The UI hooks for data source reads were missed during planning. Move the
hook calls to immediately before and after the ReadDataSource calls to
ensure they are called during both plan and apply.
2022-03-08 13:06:30 -05:00
kmoe
161374725c Warn when ignore_changes includes a Computed attribute (#30517)
* ignore_changes attributes must exist in schema

Add a test verifying that attempting to add a nonexistent attribute to
ignore_changes throws an error.

* ignore_changes cannot be used with Computed attrs

Return a warning if a Computed attribute is present in ignore_changes,
unless the attribute is also Optional.

ignore_changes on a non-Optional Computed attribute is a no-op, so the user
likely did not want to set this in config.
An Optional Computed attribute, however, is still subject to ignore_changes
behaviour, since it is possible to make changes in the configuration that
Terraform must ignore.
2022-02-18 10:38:29 +00:00
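A self-contained sketch of the rule this change enforces, with a simplified stand-in for the schema's attribute type: a Computed attribute that is not also Optional cannot be set in configuration, so listing it in ignore_changes is a no-op worth warning about.

```go
// attribute is a stand-in for the relevant flags on a schema attribute.
type attribute struct {
	Optional bool
	Computed bool
}

// ignoreChangesWarning returns a warning message when ignore_changes
// names a read-only (Computed, non-Optional) attribute, and "" otherwise.
func ignoreChangesWarning(name string, attr attribute) string {
	if attr.Computed && !attr.Optional {
		return "ignore_changes on " + name + " has no effect: the attribute is read-only (Computed and not Optional)"
	}
	// Optional+Computed attributes can still be set in configuration,
	// so ignore_changes remains meaningful for them.
	return ""
}
```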
Martin Atkins
5573868cd0 core: Check pre- and postconditions for resources and output values
If the configuration contains preconditions and/or postconditions for any
objects, we'll check them during evaluation of those objects and generate
errors if any do not pass.

The handling of post-conditions is particularly interesting here because
we intentionally evaluate them _after_ we've committed our record of the
resulting side-effects to the state/plan, with the intent that future
plans against the same object will keep failing until the problem is
addressed either by changing the object so it would pass the precondition
or changing the precondition to accept the current object. That then
avoids the need for us to proactively taint managed resources whose
postconditions fail, as we would for provisioner failures: instead, we can
leave the resolution approach up to the user to decide.

Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
2022-01-31 14:02:53 -05:00
Alisdair McDiarmid
5ddf5e1f7d core: Fix schema loading for deleted resources
Resource instances removed from the configuration would previously use
the implied provider address. This is correct for default providers, but
incorrect for those from other namespaces or hosts. The fix here is to
use the stored provider config if it is present.
2021-11-24 15:23:20 -05:00
James Bardin
9d19086c8e test for null map and fix lost map marks
Add a test to ensure ignore_changes does not change null maps to empty
maps.

Fix a case where a marked map reverses the changes ignored within that map.
2021-11-11 10:44:39 -05:00
James Bardin
40174b634a ignore_changes changing a null map to empty
Fix an error where using `ignore_changes` would cause some null maps to
be saved as an empty map.
2021-11-11 10:44:05 -05:00
Martin Atkins
83f0376673 refactoring: ApplyMoves new return type
When we originally stubbed ApplyMoves we didn't know yet how exactly we'd
be using the result, so we made it a double-indexed map allowing looking
up moves in both directions.

However, in practice we only actually need to look up old addresses by new
addresses, and so this commit first removes the double indexing so that
each move is only represented by one element in the map.

We also need to describe situations where a move was blocked, because in
a future commit we'll generate some warnings in those cases. Therefore
ApplyMoves now returns a MoveResults object which contains both a map of
changes and a map of blocks. The map of blocks isn't used yet as of this
commit, but we'll use it in a later commit to produce warnings within
the "terraform" package.
2021-09-22 09:01:10 -07:00
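A sketch of the single-direction result shape described above, with simplified types rather than the real refactoring package API: lookups go from new address to previous address only, and blocked moves are carried alongside for the warnings a later commit will produce.

```go
// moveResults is a stand-in for the MoveResults value returned by
// ApplyMoves after this change.
type moveResults struct {
	// Changes maps each object's new address to the address it was
	// tracked at before the move; the old reverse index is gone.
	Changes map[string]string
	// Blocked records moves that could not be applied, for later warnings.
	Blocked map[string]string
}

// prevRunAddr looks up the previous-run address for a new address,
// falling back to the same address when no move applies.
func (r moveResults) prevRunAddr(newAddr string) string {
	if old, ok := r.Changes[newAddr]; ok {
		return old
	}
	return newAddr
}
```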
Martin Atkins
a59c2fe1b9 core: EvalContextBuiltin no longer has a "Schemas"
By tolerating ProviderSchema and ProvisionerSchema potentially returning
errors, we can slightly simplify EvalContextBuiltin by having it retrieve
individual schemas when needed directly from the "Plugins" object.

EvalContextBuiltin already needs to be holding a contextPlugins instance
for other reasons anyway, so this allows us to get the same result with
fewer moving parts.
2021-09-10 14:56:49 -07:00
Martin Atkins
89b05050ec core: Functional-style API for terraform.Context
Previously terraform.Context was built in an unfortunate way where all of
the data was provided up front in terraform.NewContext and then mutated
directly by subsequent operations. That made the data flow hard to follow,
commonly leading to bugs, and also meant that we were forced to take
various actions too early in terraform.NewContext, rather than waiting
until a more appropriate time during an operation.

This (enormous) commit changes terraform.Context so that its fields are
broadly just unchanging data about the execution context (current
workspace name, available plugins, etc) whereas the main data Terraform
works with arrives via individual method arguments and is returned in
return values.

Specifically, this means that terraform.Context no longer "has-a" config,
state, and "planned changes", instead holding on to those only temporarily
during an operation. The caller is responsible for propagating the outcome
of one step into the next step so that the data flow between operations is
actually visible.

However, since that's a change to the main entry points in the "terraform"
package, this commit also touches every file in the codebase which
interacted with those APIs. Most of the noise here is in updating tests
to take the same actions using the new API style, but this also affects
the main-code callers in the backends and in the command package.

My goal here was to refactor without changing observable behavior, but in
practice there are a couple externally-visible behavior variations here
that seemed okay in service of the broader goal:
 - The "terraform graph" command is no longer hooked directly into the
   core graph builders, because that's no longer part of the public API.
   However, I did include a couple new Context functions whose contract
   is to produce a UI-oriented graph, and _for now_ those continue to
   return the physical graph we use for those operations. There's no
   exported API for generating the "validate" and "eval" graphs, because
   neither is particularly interesting in its own right, and so
   "terraform graph" no longer supports those graph types.
 - terraform.NewContext no longer has the responsibility for collecting
   all of the provider schemas up front. Instead, we wait until we need
   them. However, that means that some of our error messages now have a
   slightly different shape due to unwinding through a differently-shaped
   call stack. As of this commit we also end up reloading the schemas
   multiple times in some cases, which is functionally acceptable but
   likely represents a performance regression. I intend to rework this to
   use caching, but I'm saving that for a later commit because this one is
   big enough already.

The proximal reason for this change is to resolve the chicken/egg problem
whereby there was previously no single point where we could apply "moved"
statements to the previous run state before creating a plan. With this
change in place, we can now do that as part of Context.Plan, prior to
forking the input state into the three separate state artifacts we use
during planning.

However, this is at least the third project in a row where the previous
API design led to piling more functionality into terraform.NewContext and
then working around the incorrect order of operations that produces, so
I intend that by paying the cost/risk of this large diff now we can in
turn reduce the cost/risk of future projects that relate to our main
workflow actions.
2021-08-30 13:59:14 -07:00
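A sketch of the resulting call pattern, with approximate signatures (the exact types live in the internal terraform, configs, states, and plans packages; providerFactories, config, prevRunState, and planOpts are placeholders for values the caller already holds): data flows through arguments and return values, and the caller carries each step's result into the next.

```go
// Unchanging context: workspace name, available plugins, and so on.
tfCtx, diags := terraform.NewContext(&terraform.ContextOpts{
	Providers: providerFactories,
})
// ...handle diags...

// Planning takes the configuration and previous run state as arguments
// and returns the plan, rather than mutating state held in tfCtx.
plan, diags := tfCtx.Plan(config, prevRunState, planOpts)
// ...handle diags...

// Applying consumes the plan produced above; the new state comes back
// as a return value for the caller to persist.
newState, diags := tfCtx.Apply(plan, config)
```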
Martin Atkins
4faac6ee43 core: Record move result information in the plan
Here we wire through the "move results" into the graph walk data
structures so that all of the nodes which produce
plans.ResourceInstanceChange values can capture the "PrevRunAddr" for
each resource instance.

This doesn't actually quite work yet, because the logic in Context.Plan
isn't actually correct and so the updated state from
refactoring.ApplyMoves isn't actually visible as the "previous run state".
For that reason, the context test in this commit is currently skipped,
with the intent of re-enabling it once the updated state is properly
propagating into the plan graph walk and thus we can actually react to
the result of the move while choosing actions for those addresses.
2021-08-30 13:59:14 -07:00
Martin Atkins
22b36d1f4c Field for the previous address of each resource instance in the plan
In order to expose the effect of any relevant "moved" statements we dealt
with prior to creating the plan, we'll record with each
ResourceInstanceChange both its current address and the address it was
tracked at for the previous run.

To save consumers of these objects from having to special-case the
situation where there _was_ no previous run (e.g. because this is a Create
change), we'll just pretend the previous run address was the same as the
current address in that case, the same as for an update without any
renaming in effect.

This includes a breaking change to the plan file format, but one that
doesn't require a version number increment because there is no ambiguity
between the two formats and so mismatched parsers will already fail with
an error message.

As of this commit we've just added the new field but not yet populated it
with any useful information: it always just matches Addr. A future commit
will wire this up to the result of applying the moves so that we can
populate it correctly. We also don't yet expose this new information
anywhere in the UI layer.
2021-08-30 13:59:14 -07:00
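A self-contained sketch (stand-in type, not the real plans.ResourceInstanceChange) of the convention described above: with no previous run, PrevRunAddr simply repeats Addr so consumers need no special case.

```go
// resourceInstanceChange carries both the current address and the
// address the object was tracked at during the previous run.
type resourceInstanceChange struct {
	Addr        string
	PrevRunAddr string
}

// newChange fills PrevRunAddr, defaulting it to the current address when
// no "moved" statement applied (or when there was no previous run).
func newChange(addr, movedFromAddr string) resourceInstanceChange {
	prev := movedFromAddr
	if prev == "" {
		prev = addr
	}
	return resourceInstanceChange{Addr: addr, PrevRunAddr: prev}
}
```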
James Bardin
ea077cbd90 handle marks within ignore_changes
Up until now marks were not considered by `ignore_changes`; that, however,
means changes to sensitivity within a configuration cannot be ignored,
even though they are planned as changes.

Rather than separating the marks and tracking their paths, we can easily
update the processIgnoreChanges routine to handle the marked values
directly. Moving the `processIgnoreChanges` call also cleans up some of
the variable naming, making it more consistent through the body of the
function.
2021-07-19 16:42:26 -04:00
James Bardin
aaaeeffa81 a noop change with no state has no private data
There is no change here to use the private data, and currentState may be
nil.
2021-05-20 09:34:25 -04:00
James Bardin
bafa44b958 return diagnostics from provisioners
Do not convert provisioner diagnostics to errors so that users can get
context from provisioner failures.

Return diagnostics from the builtin provisioners that can be annotated
with configuration context and instance addresses.
2021-05-19 11:24:54 -04:00
Martin Atkins
36d0a50427 Move terraform/ to internal/terraform/
This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.

If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've made a plan to achieve the same functionality some
other way.
2021-05-17 14:09:07 -07:00