Commit Graph

36 Commits

James Bardin
95f30451d9 get rid of EvalEarlyExitError
This is mostly unused now, since we no longer need to interrupt a
series of eval node executions.

The exception was the stopHook, which is still used to halt execution
when there's an interrupt. Since interrupting execution should not
complete successfully, we use a normal opaque error to halt everything,
and return it to the UI.

We can work on coalescing or hiding these if necessary in a separate PR.
2020-10-28 14:40:30 -04:00
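
A minimal sketch (illustrative names, not the actual Terraform source) of the approach this commit describes: the stop hook returns an ordinary opaque error when the user interrupts the run, so the walk halts and the error surfaces to the UI, with no early-exit sentinel for callers to special-case.

```go
// Sketch only: a hook that halts execution with a normal error.
package main

import (
	"errors"
	"fmt"
)

// errExecutionHalted stands in for the opaque error the hook returns
// when the user interrupts the run; the name is illustrative only.
var errExecutionHalted = errors.New("execution halted")

type stopHook struct {
	stopped bool
}

// PreApply aborts the walk with a normal error when a stop was
// requested, rather than a sentinel that is silently swallowed.
func (h *stopHook) PreApply() error {
	if h.stopped {
		return errExecutionHalted
	}
	return nil
}

func main() {
	h := &stopHook{stopped: true}
	if err := h.PreApply(); err != nil {
		fmt.Println("walk halted:", err) // surfaced to the UI like any error
	}
}
```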
James Bardin
988059d533 make GraphNodeExecutable return diagnostics 2020-10-28 13:47:04 -04:00
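
A simplified sketch of the signature change, with stand-in types rather than the real tfdiags package: Execute returns a diagnostics collection, so a node can report warnings and multiple errors instead of a single error value. The adjacent "add diags to ..." commits apply the same change to the individual eval helpers.

```go
// Sketch only: an executable graph node that accumulates diagnostics.
package main

import "fmt"

type Diagnostics []string // stand-in for tfdiags.Diagnostics

func (d Diagnostics) HasErrors() bool { return len(d) > 0 }

type EvalContext interface{} // stand-in for the real eval context

type GraphNodeExecutable interface {
	Execute(ctx EvalContext) Diagnostics
}

type nodeExample struct{}

func (n *nodeExample) Execute(ctx EvalContext) Diagnostics {
	var diags Diagnostics
	// Multiple problems can be collected and returned together.
	diags = append(diags, "example diagnostic accumulated during execution")
	return diags
}

func main() {
	var n GraphNodeExecutable = &nodeExample{}
	if diags := n.Execute(nil); diags.HasErrors() {
		fmt.Println(diags)
	}
}
```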
James Bardin
c81fd833bb add diags to eval_state 2020-10-28 12:23:03 -04:00
James Bardin
64491df856 add diags to data eval 2020-10-28 11:57:45 -04:00
James Bardin
b42aad5856 add diags to eval_diff 2020-10-28 11:46:07 -04:00
James Bardin
477111e6b6 change apply Eval methods to use diags 2020-10-27 18:16:28 -04:00
James Bardin
e35524c7f0 use existing State rather than Change.Before
The change was passed into the provisioner node because the normal
NodeApplyableResourceInstance overwrites the prior state with the new
state. This, however, doesn't matter here, because the resource destroy
node does not do this. Also, even if the updated state were to be used
for some reason with a create provisioner, it would be the correct state
to use at that point.
2020-10-05 10:40:14 -04:00
James Bardin
95197f0324 use EvalSelfBlock for destroy provisioners
Evaluate destroy provisioner configurations using only the last resource
state value, and the static instance key data.
2020-10-02 12:38:51 -04:00
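
A hedged sketch of the reduced evaluation scope described here, using illustrative names rather than the real EvalSelfBlock: destroy-time provisioner configuration can refer only to the instance's last stored state value (self) and static instance key data, so it can be evaluated without the rest of the module scope.

```go
// Sketch only: a scope that resolves just self.* and each.key.
package main

import (
	"fmt"
	"strings"
)

type selfScope struct {
	Self    map[string]string // last stored state value for the instance
	EachKey string            // static instance key data
}

// evalRef resolves only self.* and each.key; anything else errors,
// which is what makes destroy provisioners safe to run from state alone.
func (s selfScope) evalRef(ref string) (string, error) {
	switch {
	case ref == "each.key":
		return s.EachKey, nil
	case strings.HasPrefix(ref, "self."):
		if v, ok := s.Self[strings.TrimPrefix(ref, "self.")]; ok {
			return v, nil
		}
	}
	return "", fmt.Errorf("reference %q is not available in destroy provisioners", ref)
}

func main() {
	scope := selfScope{Self: map[string]string{"public_ip": "203.0.113.7"}, EachKey: "a"}
	v, _ := scope.evalRef("self.public_ip")
	fmt.Println(v)
	if _, err := scope.evalRef("var.something"); err != nil {
		fmt.Println(err)
	}
}
```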
Kristin Laemmert
90588c036b
terraform: minor cleanup from EvalTree() refactor (#26429)
* Split node_resource_abstract.go into two files, putting
NodeAbstractResourceInstance methods in their own file - it was getting
large enough to be tricky for (my) human eyeballs.

* un-exported the functions that were created as part of the EvalTree()
refactor; they did not need to be public.
2020-10-01 08:12:10 -04:00
Kristin Laemmert
3bb64e80d5 apply refactor 2020-09-29 13:26:50 -04:00
James Bardin
a6dffa89a3 cleanup unused CBD code
Remove the check for CreateBeforeDestroyOverride which can't happen in a
destroy node.

Remove the unnecessary GraphNodeAttachDestroyer interface, since we
don't use it now that plans can record the create+destroy order.
2020-09-16 11:14:36 -04:00
James Bardin
ec231c7616 apply the stored plan CreateThenDelete action
When applying a plan, a forced CreateBeforeDestroy may not be set during
the apply walk when downstream resources are no longer present in the
graph. We still need to stick to that plan, and both the
NodeApplyableResourceInstance EvalTree and the individual Eval nodes
need to operate on that planned value.

Ensure that we always check for an existing plan when determining
CreateBeforeDestroy status. This must happen in 2 different code paths
due to the eval node pattern currently in use. Future refactoring may be
able to unify these code paths to make this less fragile.
2020-09-09 17:02:28 -04:00
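
A simplified sketch (stand-in types, not the plans package) of the check described above: the stored plan's action is consulted when determining CreateBeforeDestroy during apply, so a forced create-before-destroy recorded at plan time wins even when the nodes that forced it are no longer in the graph.

```go
// Sketch only: prefer the planned action over the config flag.
package main

import "fmt"

type Action int

const (
	DeleteThenCreate Action = iota
	CreateThenDelete
)

type plannedChange struct{ Action Action }

// createBeforeDestroy sticks to the plan even when config alone
// would not force create-before-destroy.
func createBeforeDestroy(configCBD bool, change *plannedChange) bool {
	if change != nil && change.Action == CreateThenDelete {
		return true
	}
	return configCBD
}

func main() {
	fmt.Println(createBeforeDestroy(false, &plannedChange{Action: CreateThenDelete})) // true
}
```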
James Bardin
e690fa1363
Merge pull request #24904 from hashicorp/jbardin/plan-data-sources
Evaluate data sources in plan when necessary
2020-05-20 10:00:32 -04:00
James Bardin
7731441beb Make sure CBD is correct during apply, and saved
The resource apply nodes need to be GraphNodeDestroyerCBD in order to
correctly inherit create_before_destroy. While the plan will have
recorded this to create the correct deposed nodes, the edges still need
to be transformed correctly.

We also need create_before_destroy to be saved to state for nodes that
inherited it, so that if they are removed from state the destroy will
happen in the correct order.
2020-05-14 15:46:08 -04:00
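
A hedged sketch of the second point, with illustrative names: the effective create_before_destroy flag, including an inherited one, is written to state so that a resource later removed from configuration is still destroyed in the right order.

```go
// Sketch only: persist the effective CBD flag to state.
package main

import "fmt"

type resourceState struct {
	Addr                string
	CreateBeforeDestroy bool // saved so orphaned resources keep their ordering
}

// recordCBD stores the effective (possibly inherited) CBD flag in state.
func recordCBD(s *resourceState, configCBD, inheritedCBD bool) {
	s.CreateBeforeDestroy = configCBD || inheritedCBD
}

func main() {
	s := &resourceState{Addr: "aws_instance.a"}
	recordCBD(s, false, true)
	fmt.Println(s.CreateBeforeDestroy) // true: inherited from a dependent resource
}
```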
James Bardin
7b8f13862c un-export new data eval nodes 2020-05-13 13:58:11 -04:00
James Bardin
6ca252faab refactor EvalReadData
The logic for refresh, plan, and apply is subtly different in each
case, so rather than trying to manage that complex flow through a giant
300-line method, break it up into three different types that can share
common types and a few helpers.
2020-05-13 13:58:11 -04:00
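
A hedged sketch of the refactor's shape, with illustrative names: a shared struct carries the common fields and helpers, and three small types embed it so each of the refresh, plan, and apply flows gets its own method.

```go
// Sketch only: three thin types embedding one shared struct.
package main

import "fmt"

type evalReadData struct {
	Addr string // shared fields and helpers live here
}

type evalReadDataRefresh struct{ evalReadData }
type evalReadDataPlan struct{ evalReadData }
type evalReadDataApply struct{ evalReadData }

// Each flow keeps its own short method instead of one giant Eval.
func (n *evalReadDataRefresh) Eval() { fmt.Println("refresh flow for", n.Addr) }
func (n *evalReadDataPlan) Eval()    { fmt.Println("plan flow for", n.Addr) }
func (n *evalReadDataApply) Eval()   { fmt.Println("apply flow for", n.Addr) }

func main() {
	(&evalReadDataPlan{evalReadData{Addr: "data.example.foo"}}).Eval()
}
```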
James Bardin
4a92b7888f start to refactor EvalReadData
Remove extra fields, remove the depends_on logic from
NodePlannableResourceInstance, and start breaking up the massive Eval
method.
2020-05-13 13:58:11 -04:00
James Bardin
b3fc0dab94 use addrs.ConfigResource for dependency tracking
We can't get module instances during transformation, so we need to
reduce the Dependencies to `addrs.ConfigResource` for now.
2020-03-25 17:03:06 -04:00
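
A minimal illustration of the address reduction (a toy string transform, not the addrs package): instance addresses are reduced to configuration addresses by dropping the instance keys, since module instances aren't known at transform time.

```go
// Sketch only: reduce an instance address to its config address.
package main

import (
	"fmt"
	"regexp"
)

var indexPart = regexp.MustCompile(`\[[^\]]*\]`)

// configResource drops every instance key from an absolute
// instance address, leaving the static configuration address.
func configResource(absInstance string) string {
	return indexPart.ReplaceAllString(absInstance, "")
}

func main() {
	fmt.Println(configResource(`module.a[0].aws_instance.b[1]`))
	// module.a.aws_instance.b
}
```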
James Bardin
d905b990a5 s/GraphNodeResource/GraphNodeConfigResource/
Make the interface name reflect the new return type of the method.
Remove the confusingly named and unused ResourceAddress method from the
resource nodes as well.
2020-03-16 11:16:23 -04:00
Paddy
e6592dc710
Add support for provider metadata to modules. (#22583)
Implement a new provider_meta block in the terraform block of modules, allowing provider-keyed metadata to be communicated from HCL to provider binaries.

Bundled in this change for minimal protocol version bumping is the addition of markdown support for attribute descriptions and the ability to indicate when an attribute is deprecated, so this information can be shown in the schema dump.

Co-authored-by: Paul Tyng <paul@paultyng.net>
2020-03-05 16:53:24 -08:00
Martin Atkins
7f8e087ce3 core: Don't panic if EvalMaybeResourceDeposedObject has no DeposedKey
This is a "should never happen" case, but we have reports of it actually
happening. In order to try to collect a bit more data about what's going
on here, we're changing what was previously a hard panic into a normal
error message that can include the address of the instance we were working
on and the action we were trying to do to it at the time.

The hope is to narrow down what situations can trigger this in order to
find a reliable reproduction case in order to debug further. This also
means that for those who _do_ encounter this problem in the meantime
Terraform will have a chance to shut down cleanly and therefore be more
likely to be able to recover on a subsequent plan/apply cycle.

Further investigation of this will follow once we see a report or two of
this updated error message.
2020-01-06 10:22:51 -08:00
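
A minimal sketch of the general technique, with hypothetical names: the hard panic becomes a normal error that carries the instance address and the action being attempted, so Terraform shuts down cleanly and reports something actionable.

```go
// Sketch only: replace a "should never happen" panic with a
// context-rich error.
package main

import "fmt"

func destroyDeposed(addr, action, deposedKey string) error {
	if deposedKey == "" {
		// previously: panic("no DeposedKey")
		return fmt.Errorf("missing deposed key while attempting %s on %s (this is a bug in Terraform)", action, addr)
	}
	return nil
}

func main() {
	if err := destroyDeposed("aws_instance.a[0]", "destroy", ""); err != nil {
		fmt.Println(err) // clean shutdown with useful context
	}
}
```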
James Bardin
84b5de9ae4 simplify EvalMaybeTainted logic
The EvalMaybeTainted logic was confusing, with deep nesting and unneeded
duplicate fields.
2019-11-08 10:29:01 -05:00
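
A hedged sketch of the simplified rule (stand-in types, and the exact conditions are an assumption here): only when a creation error and a returned object coincide is the object flagged as tainted.

```go
// Sketch only: flat taint logic instead of deep nesting.
package main

import (
	"errors"
	"fmt"
)

type Status int

const (
	ObjectReady Status = iota
	ObjectTainted
)

type instanceObject struct{ Status Status }

// maybeTainted marks the object tainted when a create error left a
// partial remote object behind; nil objects stay nil.
func maybeTainted(creating bool, obj *instanceObject, applyErr error) *instanceObject {
	if creating && obj != nil && applyErr != nil {
		obj.Status = ObjectTainted
	}
	return obj
}

func main() {
	obj := maybeTainted(true, &instanceObject{}, errors.New("create failed late"))
	fmt.Println(obj.Status == ObjectTainted) // true
}
```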
James Bardin
5e16e8eece append dependencies during refresh
Refresh should load any new dependencies found because of configuration
or state changes, but retain any dependencies already in the state.
Orphaned resources would not be in config, but we do not want to lose
the destroy ordering for the later apply.
2019-11-07 17:49:03 -05:00
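
A minimal sketch of the merge behavior described: dependencies found during refresh are appended to those already recorded in state and deduplicated, so orphaned resources (absent from config) keep the ordering needed for a later destroy.

```go
// Sketch only: append-and-dedupe dependency merge during refresh.
package main

import (
	"fmt"
	"sort"
)

// mergeDeps keeps everything already in state and adds anything
// newly discovered, without duplicates.
func mergeDeps(stored, found []string) []string {
	seen := make(map[string]bool)
	var out []string
	for _, d := range append(stored, found...) {
		if !seen[d] {
			seen[d] = true
			out = append(out, d)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	fmt.Println(mergeDeps([]string{"aws_vpc.main"}, []string{"aws_vpc.main", "aws_subnet.a"}))
}
```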
James Bardin
42bb4a644c make use of the new state Dependencies
Make use of the new Dependencies field in the instance state.

The inter-instance dependencies will be determined from the complete
reference graph, so that absolute addresses can be stored, rather than
just references within a module. The Dependencies are added to the node
in the same manner as state, i.e. via an "attacher" interface and
transformer.  This is because dependencies are calculated from the graph
itself, and not from the config.
2019-11-07 17:49:03 -05:00
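
A hedged sketch of the "attacher" pattern mentioned above, with illustrative names: a graph transformer computes dependencies from the full reference graph and hands them to each node through a small attach interface, the same way state is attached.

```go
// Sketch only: an attach interface plus a node that receives deps.
package main

import "fmt"

type GraphNodeAttachDependencies interface {
	AttachDependencies(deps []string)
}

type resourceNode struct {
	deps []string
}

func (n *resourceNode) AttachDependencies(deps []string) { n.deps = deps }

// A real transformer would walk every graph node; one node shown here.
func main() {
	n := &resourceNode{}
	var attachable GraphNodeAttachDependencies = n
	attachable.AttachDependencies([]string{"module.net.aws_vpc.main"})
	fmt.Println(n.deps)
}
```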
Pam Selle
7d905f6777 Resource for_each 2019-07-22 10:51:16 -04:00
Martin Atkins
bec4641867 core: Don't panic if NodeApplyableResourceInstance has no config
This is a "should never happen" case, because we shouldn't ever have
resources in the plan that aren't in the configuration, but since we've
got a report of a crash here (which went away before we got a chance to
debug it), here's just an extra guard to ensure that we'll still exit
gracefully in that case.

If we see this error crop up again in future, it'd be nice to gather a
full trace log so we can see what GraphNodeAttachResourceConfig did and
why it did not attach a configuration.
2019-05-14 16:54:12 -07:00
Martin Atkins
dd8b3ab722 core: Reinstate state-based tracking of data resource dependencies
This was inadvertently lost in the consolidation of EvalReadDataDiff and
EvalReadDataApply into a single EvalReadData.
2018-10-16 19:14:11 -07:00
Martin Atkins
67a8757b69 core: Properly handle deferral (or non-deferral) of data resources
(this is a WIP prototype)
2018-10-16 19:14:11 -07:00
Martin Atkins
b229264bd6 core: A "go fmt" catchup
Since we started using experimental Go Modules our editor tooling hasn't
been fully functional, apparently including format-on-save support. This
is a catchup to get everything back straight again.
2018-10-16 19:14:11 -07:00
Martin Atkins
a43b7df282 core: Handle forced-create_before_destroy during the plan walk
Previously we used a single plan action "Replace" to represent both the
destroy-before-create and the create-before-destroy variants of replacing.
However, this forces the apply graph builder to jump through a lot of
hoops to figure out which nodes need it forced on and rebuild parts of
the graph to represent that.

If we instead decide between these two cases at plan time, the actual
determination of it is more straightforward because each resource is
represented by only one node in the plan graph, and then we can ensure
we put the right nodes in the graph during DiffTransformer and thus avoid
the logic for dealing with deposed instances being spread across various
different transformers and node types.

As a nice side-effect, this also allows us to show the difference between
destroy-then-create and create-then-destroy in the rendered diff in the
CLI, although this change doesn't fully implement that yet.
2018-10-16 19:14:11 -07:00
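
A simplified sketch of the plan-time decision (stand-in values mirroring the idea of the two replace actions, not the real plans package): each resource picks its concrete replacement ordering while planning, so DiffTransformer can build the right apply nodes straight from the plan.

```go
// Sketch only: choose the replace variant at plan time.
package main

import "fmt"

type Action string

const (
	DeleteThenCreate Action = "delete-then-create"
	CreateThenDelete Action = "create-then-delete"
)

// replaceAction resolves the ordering during plan, instead of a
// single ambiguous "Replace" that apply must disambiguate later.
func replaceAction(createBeforeDestroy bool) Action {
	if createBeforeDestroy {
		return CreateThenDelete
	}
	return DeleteThenCreate
}

func main() {
	fmt.Println(replaceAction(true))
}
```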
Martin Atkins
faddb83a92 core: If create leg of create_before_destroy fails, restore deposed
I misunderstood the logic here on the first pass of porting to the new
provider and state types: EvalUndeposeState is supposed to return the
deposed object back to being current again, so we can undo the deposing
in the case where the create leg fails.

If we don't do this, we end up leaving the instance with no current object
at all and with its prior object deposed, and then the later destroy
node deletes that deposed object, leaving the user with no object at all.

For safety we skip this restoration if there _is_ a new current object,
since a failed create can still produce a partial result which we need
to keep to avoid losing track of any remote objects that were successfully
created.
2018-10-16 19:14:11 -07:00
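
A minimal sketch of the restore rule, with stand-in types: if the create leg failed and produced no new current object, the deposed object is promoted back to current, so the follow-up destroy doesn't delete the only remaining object.

```go
// Sketch only: restore a deposed object after a failed create leg.
package main

import "fmt"

type instance struct {
	Current *string
	Deposed map[string]*string
}

// maybeRestoreDeposed promotes the deposed object back to current
// only when the create produced nothing at all.
func (i *instance) maybeRestoreDeposed(key string) {
	if i.Current != nil {
		return // a partial new object exists; keep the deposed one as-is
	}
	if obj, ok := i.Deposed[key]; ok {
		i.Current = obj
		delete(i.Deposed, key)
	}
}

func main() {
	old := "prior object"
	i := &instance{Deposed: map[string]*string{"1a2b3c4d": &old}}
	i.maybeRestoreDeposed("1a2b3c4d")
	fmt.Println(*i.Current) // prior object restored to current
}
```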
Martin Atkins
334c6f1c2c core: Be more explicit in how we handle create_before_destroy
Previously our handling of create_before_destroy -- and of deposed objects
in particular -- was rather "implicit" and spread over various different
subsystems. We'd quietly just destroy every deposed object during a
destroy operation, without any user-visible plan to do so.

Here we make things more explicit by tracking each deposed object
individually by its pseudorandomly-allocated key. There are two different
mechanisms at play here, building on the same concepts:

- During a replace operation with create_before_destroy, we *pre-allocate*
  a DeposedKey to use for the prior object in the "apply" node and then
  pass that exact id to the destroy node, ensuring that we only destroy
  the single object we planned to destroy. In the happy path here the
  user never actually sees the allocated deposed key because we use it and
  then immediately destroy it within the same operation. However, that
  destroy may fail, which brings us to the second mechanism:

- If any deposed objects are already present in state during _plan_, we
  insert a destroy change for them into the plan so that it's explicit to
  the user that we are going to destroy these additional objects, and then
  create an individual graph node for each one in DiffTransformer.

The main motivation here is to be more careful in how we handle these
destroys so that from a user's standpoint we never destroy something
without the user knowing about it ahead of time.

However, this new organization also hopefully makes the code itself a
little easier to follow because the connection between the create and
destroy steps of a Replace is represented in a single place (in
DiffTransformer) and deposed instances each have their own explicit graph
node rather than being secretly handled as part of the main instance-level
graph node.
2018-10-16 19:14:11 -07:00
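
A hedged sketch of the first mechanism: the apply step pre-allocates a pseudorandom deposed key and hands that exact key to the destroy step, so only the planned object is destroyed. The key format loosely mimics the real pseudorandom keys, but this is illustrative only.

```go
// Sketch only: pre-allocate a deposed key and reuse it for destroy.
package main

import (
	"crypto/rand"
	"fmt"
)

type DeposedKey string

// newDeposedKey allocates a short pseudorandom key identifying one
// deposed object of a resource instance.
func newDeposedKey() DeposedKey {
	b := make([]byte, 4)
	if _, err := rand.Read(b); err != nil {
		panic(err) // sketch only; real code would return the error
	}
	return DeposedKey(fmt.Sprintf("%x", b))
}

func main() {
	key := newDeposedKey() // allocated before the new object is created
	fmt.Println("deposing prior object as", key)
	// ... create the replacement object ...
	fmt.Println("destroying exactly the deposed object", key)
}
```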
Martin Atkins
e9e11955a8 core: EvalDiff must handle Create/Replace as a special case
When we re-run EvalDiff during apply, we may have already completed the
destroy leg of a replace operation, leaving us in a different situation
than the one we were in when we made the original planned change.

Therefore as a special case we will allow a create to turn back into a
replace if there was an earlier diff that requested that.
2018-10-16 19:14:11 -07:00
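
A simplified sketch of the special case (stand-in action values): if re-running the diff during apply yields Create but the original planned change was a replace, the planned replace is kept, since the destroy leg may already have run.

```go
// Sketch only: let a create turn back into a replace during apply.
package main

import "fmt"

type Action string

const (
	Create           Action = "create"
	DeleteThenCreate Action = "delete-then-create"
)

// normalizeApplyAction prefers the earlier planned replace over a
// recomputed create, matching the situation the plan was made in.
func normalizeApplyAction(planned, recomputed Action) Action {
	if recomputed == Create && planned == DeleteThenCreate {
		return planned
	}
	return recomputed
}

func main() {
	fmt.Println(normalizeApplyAction(DeleteThenCreate, Create))
}
```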
Martin Atkins
9eb32c4536 core: Reinstate instance tainting, but without mutating objects
Our previous mechanism for dealing with tainting relied on directly
mutating the InstanceState object to mark it as such. In our new state
models we consider the instance objects to be immutable by convention, and
so we frequently copy them. As a result, the taint flagging was no longer
making it all the way through the apply evaluation process.

Here we now implement tainting as a separate step in the evaluation
process, creating a copy of the object with a tainted status if there were
any errors during creation.

This introduces a new behavior where any provider-level errors during
creation will also cause an instance to be marked as tainted if any object
is returned at all. Create-time errors _normally_ result in no object at
all, but the provider might return an object if the failure occurred at
a subsequent step of a multi-step creation process and so left behind a
remote object that needs to be cleaned up on a future run.
2018-10-16 19:14:11 -07:00
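
A minimal sketch of the non-mutating approach described above, with stand-in types: rather than flipping a flag on a shared object, the evaluation step produces a tainted copy when creation errored yet still returned a partial object.

```go
// Sketch only: taint by copying, never by mutating in place.
package main

import (
	"errors"
	"fmt"
)

type object struct {
	Status string
	Value  string
}

// withTaint returns a tainted copy, leaving the input untouched,
// matching the convention that instance objects are immutable.
func withTaint(obj object) object {
	copied := obj
	copied.Status = "tainted"
	return copied
}

func main() {
	created := object{Status: "ready", Value: "partial remote object"}
	applyErr := errors.New("provider failed mid-create")
	result := created
	if applyErr != nil {
		result = withTaint(created) // original value is left untouched
	}
	fmt.Println(created.Status, "->", result.Status) // ready -> tainted
}
```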
Martin Atkins
f561c9c226 core: Populate Dependencies of ResourceInstanceObject during apply
Previously we kept the dependencies one level higher on the resource
instance itself, which meant that updating it was handled in a different
EvalNode, but now we consider these to be dependencies of the object
itself (derived from the configuration that was current at the time it
was created), so we must handle this during EvalApply.

The subtle difference here is that if an object is moved to "deposed"
during a create_before_destroy replace then it will retain the
dependencies it had on its last apply, rather than them being replaced
by the dependencies of the newly-created object.
2018-10-16 19:14:11 -07:00
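
A small sketch of the data-model consequence, with stand-in types: dependencies live on each instance object, captured when that object was created, so an object moved to deposed keeps the dependencies from its own last apply.

```go
// Sketch only: per-object dependencies survive deposing.
package main

import "fmt"

type instanceObject struct {
	Dependencies []string // captured from config when this object was created
}

type resourceInstance struct {
	Current *instanceObject
	Deposed map[string]*instanceObject
}

func main() {
	ri := resourceInstance{
		Current: &instanceObject{Dependencies: []string{"aws_vpc.new"}},
		Deposed: map[string]*instanceObject{
			"1a2b3c4d": {Dependencies: []string{"aws_vpc.old"}},
		},
	}
	// The deposed object keeps the dependencies from its own apply,
	// not those of the newly created current object.
	fmt.Println(ri.Deposed["1a2b3c4d"].Dependencies)
}
```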
Martin Atkins
0a97daf3de core: Always update resource metadata in state during apply
Previously we had a bug where we would fail to populate resource-level
metadata in the state during apply when count = 0, because the apply
graph would contain only instance nodes, not whole-resource nodes.

To address this, we add to the apply graph a node for each resource in
the configuration alongside the separate resource instance nodes. This
node's job is just to populate the state metadata for the resource, which
ensures it gets updated correctly even when count = 0.

When count is not zero this ends up doing some redundant work that
would've happened as a side-effect of applying individual resource
instances anyway, but it's harmless and makes the updating of our
resource-level metadata more explicit.
2018-10-16 19:14:11 -07:00