Commit Graph

15 Commits

James Bardin
84b5de9ae4 simplify EvalMaybeTainted logic
The EvalMaybeTainted logic was confusing, with deep nesting and unneeded
duplicate fields.
2019-11-08 10:29:01 -05:00
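The resulting code isn't shown here, but a minimal Go sketch of the flattening this commit describes might look like the following; the function name, parameters, and exact conditions are illustrative assumptions, not Terraform's actual implementation:

```go
package main

import "fmt"

// Status is a stand-in for Terraform's internal object status enum;
// the real type lives in the states package.
type Status int

const (
	ObjectReady Status = iota
	ObjectTainted
)

// maybeTainted is a hypothetical, flattened version of the logic the
// commit describes: early returns instead of deep nesting, and one set
// of inputs instead of duplicated fields.
func maybeTainted(creating bool, applyErr error, haveObject bool) Status {
	if applyErr == nil {
		return ObjectReady // a clean apply never taints
	}
	if !creating || !haveObject {
		return ObjectReady // only a failed create that left an object behind taints
	}
	return ObjectTainted
}

func main() {
	fmt.Println(maybeTainted(true, fmt.Errorf("boom"), true))  // 1 (tainted)
	fmt.Println(maybeTainted(false, fmt.Errorf("boom"), true)) // 0 (ready)
}
```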
James Bardin
5e16e8eece append dependencies during refresh
Refresh should load any new dependencies found because of configuration
or state changes, but retain any dependencies already in the state.
Orphaned resources would not be in config, but we do not want to lose
the destroy ordering for the later apply.
2019-11-07 17:49:03 -05:00
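A small sketch of the "append, don't replace" behavior this describes, under the assumption that dependencies are tracked as a simple slice of addresses (`mergeDeps` and its signature are invented for the illustration):

```go
package main

import (
	"fmt"
	"sort"
)

// mergeDeps keeps every dependency already recorded in state, adds any
// newly-discovered ones, and deduplicates. This is a sketch, not
// Terraform's actual code.
func mergeDeps(existing, discovered []string) []string {
	seen := make(map[string]struct{}, len(existing)+len(discovered))
	var merged []string
	for _, d := range append(existing, discovered...) {
		if _, ok := seen[d]; ok {
			continue
		}
		seen[d] = struct{}{}
		merged = append(merged, d)
	}
	sort.Strings(merged) // stable output for easy diffing in state
	return merged
}

func main() {
	state := []string{"aws_vpc.main", "aws_subnet.a"} // includes an orphaned dep
	config := []string{"aws_vpc.main"}                // the orphan is no longer in config
	fmt.Println(mergeDeps(state, config))
	// [aws_subnet.a aws_vpc.main] — the orphan's destroy ordering survives
}
```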
James Bardin
42bb4a644c make use of the new state Dependencies
Make use of the new Dependencies field in the instance state.

The inter-instance dependencies will be determined from the complete
reference graph, so that absolute addresses can be stored, rather than
just references within a module. The Dependencies are added to the node
in the same manner as state, i.e. via an "attacher" interface and
transformer.  This is because dependencies are calculated from the graph
itself, and not from the config.
2019-11-07 17:49:03 -05:00
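A toy illustration of the attacher-plus-transformer pattern this describes, in the spirit of interfaces like GraphNodeAttachResourceConfig mentioned elsewhere in this log; every name below is invented for the sketch:

```go
package main

import "fmt"

// depsAttacher is a hypothetical "attacher" interface: a graph node
// implementing it can have inter-instance dependencies pushed into it
// by a transformer, rather than deriving them from its own config.
type depsAttacher interface {
	addr() string
	attachDependencies(deps []string)
}

type resourceNode struct {
	address string
	deps    []string
}

func (n *resourceNode) addr() string { return n.address }

func (n *resourceNode) attachDependencies(deps []string) { n.deps = deps }

// attachDepsTransformer mimics a transformer pass: dependencies are
// computed from the complete reference graph, so absolute addresses
// (including module paths) can be stored.
func attachDepsTransformer(nodes []interface{}, graphDeps map[string][]string) {
	for _, raw := range nodes {
		if n, ok := raw.(depsAttacher); ok {
			n.attachDependencies(graphDeps[n.addr()])
		}
	}
}

func main() {
	n := &resourceNode{address: "module.net.aws_subnet.a"}
	deps := map[string][]string{
		"module.net.aws_subnet.a": {"module.net.aws_vpc.main"}, // absolute address
	}
	attachDepsTransformer([]interface{}{n}, deps)
	fmt.Println(n.deps) // [module.net.aws_vpc.main]
}
```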
Pam Selle
7d905f6777 Resource for_each
2019-07-22 10:51:16 -04:00
Martin Atkins
bec4641867 core: Don't panic if NodeApplyableResourceInstance has no config
This is a "should never happen" case, because we shouldn't ever have
resources in the plan that aren't in the configuration, but since we've
got a report of a crash here (which went away before we got a chance to
debug it) here's just an extra guard to ensure that we'll still exit
gracefully in that case.

If we see this error crop up again in future, it'd be nice to gather a
full trace log so we can see what GraphNodeAttachResourceConfig did and
why it did not attach a configuration.
2019-05-14 16:54:12 -07:00
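A minimal sketch of this kind of guard, with invented names (`applyNode`, `evalApply`), showing the shape of failing gracefully rather than panicking on a nil config:

```go
package main

import "fmt"

type config struct{} // stand-in for the attached resource configuration

type applyNode struct {
	addr   string
	config *config // normally set by a config-attaching step
}

// evalApply sketches the guard described above: if the "should never
// happen" case happens anyway, return a useful error instead of letting
// a nil dereference panic deep inside the graph walk.
func (n *applyNode) evalApply() error {
	if n.config == nil {
		return fmt.Errorf(
			"%s has no configuration attached (this is a bug; please report it with a trace log)",
			n.addr,
		)
	}
	// ...normal apply evaluation would continue here...
	return nil
}

func main() {
	n := &applyNode{addr: "aws_instance.web"} // config never attached
	if err := n.evalApply(); err != nil {
		fmt.Println("graceful error:", err)
	}
}
```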
Martin Atkins
dd8b3ab722 core: Reinstate state-based tracking of data resource dependencies
This was inadvertently lost in the consolidation of EvalReadDataDiff and
EvalReadDataApply into a single EvalReadData.
2018-10-16 19:14:11 -07:00
Martin Atkins
67a8757b69 core: Properly handle deferral (or non-deferral) of data resources
(this is a WIP prototype)
2018-10-16 19:14:11 -07:00
Martin Atkins
b229264bd6 core: A "go fmt" catchup
Since we started using experimental Go Modules our editor tooling hasn't
been fully functional, apparently including format-on-save support. This
is a catchup to get everything back straight again.
2018-10-16 19:14:11 -07:00
Martin Atkins
a43b7df282 core: Handle forced-create_before_destroy during the plan walk
Previously we used a single plan action "Replace" to represent both the
destroy-before-create and the create-before-destroy variants of replacing.
However, this forces the apply graph builder to jump through a lot of
hoops to figure out which nodes need it forced on and rebuild parts of
the graph to represent that.

If we instead decide between these two cases at plan time, the actual
determination of it is more straightforward because each resource is
represented by only one node in the plan graph, and then we can ensure
we put the right nodes in the graph during DiffTransformer and thus avoid
the logic for dealing with deposed instances being spread across various
different transformers and node types.

As a nice side-effect, this also allows us to show the difference between
destroy-then-create and create-then-destroy in the rendered diff in the
CLI, although this change doesn't fully implement that yet.
2018-10-16 19:14:11 -07:00
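The two replacement orderings map naturally onto distinct plan actions. This sketch uses invented constants, though they correspond in spirit to plans.DeleteThenCreate and plans.CreateThenDelete in Terraform's plans package:

```go
package main

import "fmt"

// Action mirrors the idea of a plan action enum; the two replace
// variants below stand in for the delete-then-create and
// create-then-delete orderings the commit distinguishes.
type Action string

const (
	DeleteThenCreate Action = "delete-then-create"
	CreateThenDelete Action = "create-then-delete"
)

// replaceAction sketches the plan-time decision described above: pick
// the replacement ordering while planning, so the apply graph builder
// doesn't have to rewrite nodes later.
func replaceAction(createBeforeDestroy bool) Action {
	if createBeforeDestroy {
		return CreateThenDelete
	}
	return DeleteThenCreate
}

func main() {
	fmt.Println(replaceAction(true))  // create-then-delete
	fmt.Println(replaceAction(false)) // delete-then-create
}
```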
Martin Atkins
faddb83a92 core: If create leg of create_before_destroy fails, restore deposed
I misunderstood the logic here on the first pass of porting to the new
provider and state types: EvalUndeposeState is supposed to return the
deposed object back to being current again, so we can undo the deposing
in the case where the create leg fails.

If we don't do this, we end up leaving the instance with no current object
at all and with its prior object deposed, and then the later destroy
node deletes that deposed object, leaving the user with no object at all.

For safety we skip this restoration if there _is_ a new current object,
since a failed create can still produce a partial result which we need
to keep to avoid losing track of any remote objects that were successfully
created.
2018-10-16 19:14:11 -07:00
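A simplified model of the restoration logic described above; the types and the map-of-deposed-objects representation are assumptions made for the sketch:

```go
package main

import "fmt"

type object struct{ id string }

type instance struct {
	current *object
	deposed map[string]*object // keyed by deposed key
}

// restoreDeposedOnFailure sketches the recovery step: if the create leg
// of create_before_destroy failed and produced no new current object,
// promote the deposed object back to current so the later destroy node
// doesn't delete our only copy of the real remote object.
func (i *instance) restoreDeposedOnFailure(key string, createErr error) {
	if createErr == nil {
		return // happy path: the deposed object is destroyed normally
	}
	if i.current != nil {
		// A failed create can still leave a partial object behind; keep
		// it so we don't lose track of anything that was created.
		return
	}
	i.current = i.deposed[key]
	delete(i.deposed, key)
}

func main() {
	inst := &instance{deposed: map[string]*object{"00000001": {id: "i-abc123"}}}
	inst.restoreDeposedOnFailure("00000001", fmt.Errorf("create failed"))
	fmt.Println(inst.current.id) // i-abc123: the prior object is current again
}
```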
Martin Atkins
334c6f1c2c core: Be more explicit in how we handle create_before_destroy
Previously our handling of create_before_destroy -- and of deposed objects
in particular -- was rather "implicit" and spread over various different
subsystems. We'd quietly just destroy every deposed object during a
destroy operation, without any user-visible plan to do so.

Here we make things more explicit by tracking each deposed object
individually by its pseudorandomly-allocated key. There are two different
mechanisms at play here, building on the same concepts:

- During a replace operation with create_before_destroy, we *pre-allocate*
  a DeposedKey to use for the prior object in the "apply" node and then
  pass that exact id to the destroy node, ensuring that we only destroy
  the single object we planned to destroy. In the happy path here the
  user never actually sees the allocated deposed key because we use it and
  then immediately destroy it within the same operation. However, that
  destroy may fail, which brings us to the second mechanism:

- If any deposed objects are already present in state during _plan_, we
  insert a destroy change for them into the plan so that it's explicit to
  the user that we are going to destroy these additional objects, and then
  create an individual graph node for each one in DiffTransformer.

The main motivation here is to be more careful in how we handle these
destroys so that from a user's standpoint we never destroy something
without the user knowing about it ahead of time.

However, this new organization also hopefully makes the code itself a
little easier to follow because the connection between the create and
destroy steps of a Replace is represented in a single place (in
DiffTransformer) and deposed instances each have their own explicit graph
node rather than being secretly handled as part of the main instance-level
graph node.
2018-10-16 19:14:11 -07:00
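A sketch of the pre-allocation idea from the first mechanism above, using a pseudorandom hex key in the manner of Terraform's states.NewDeposedKey; the helper below is a stand-in, not the real one:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newDeposedKey imitates pseudorandom deposed-key allocation: an
// 8-hex-digit identifier naming one specific deposed object.
func newDeposedKey() string {
	var b [4]byte
	if _, err := rand.Read(b[:]); err != nil {
		panic(err) // crypto/rand should not fail on supported platforms
	}
	return fmt.Sprintf("%x", b[:])
}

func destroyDeposed(key string) {
	fmt.Printf("destroying deposed object %s\n", key)
}

func main() {
	// Pre-allocate the key in the "apply" node of the replace...
	key := newDeposedKey()
	fmt.Printf("deposing prior object as %s\n", key)

	// ...then hand that exact key to the destroy node, so it can only
	// ever destroy the one object we planned to destroy.
	destroyDeposed(key)
}
```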
Martin Atkins
e9e11955a8 core: EvalDiff must handle Create/Replace as a special case
When we re-run EvalDiff during apply, we may have already completed the
destroy leg of a replace operation, leaving us in a different situation
than we were when we made the original planned change.

Therefore as a special case we will allow a create to turn back into a
replace if there was an earlier diff that requested that.
2018-10-16 19:14:11 -07:00
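A small model of that special case: if the fresh diff says Create but the original plan asked for a Replace, keep the planned action. All names here are illustrative:

```go
package main

import "fmt"

type Action string

const (
	Create           Action = "create"
	DeleteThenCreate Action = "delete-then-create"
	CreateThenDelete Action = "create-then-delete"
)

func (a Action) isReplace() bool {
	return a == DeleteThenCreate || a == CreateThenDelete
}

// adjustAction sketches the special case described above: when the diff
// re-runs during apply after the destroy leg already happened, a fresh
// diff looks like a plain Create, so we restore the Replace action that
// the original plan requested.
func adjustAction(fresh, planned Action) Action {
	if fresh == Create && planned.isReplace() {
		return planned
	}
	return fresh
}

func main() {
	fmt.Println(adjustAction(Create, DeleteThenCreate)) // delete-then-create
	fmt.Println(adjustAction(Create, Create))           // create
}
```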
Martin Atkins
9eb32c4536 core: Reinstate instance tainting, but without mutating objects
Our previous mechanism for dealing with tainting relied on directly
mutating the InstanceState object to mark it as such. In our new state
models we consider the instance objects to be immutable by convention, and
so we frequently copy them. As a result, the taint flagging was no longer
making it all the way through the apply evaluation process.

Here we now implement tainting as a separate step in the evaluation
process, creating a copy of the object with a tainted status if there were
any errors during creation.

This introduces a new behavior where any provider-level errors during
creation will also cause an instance to be marked as tainted if any object
is returned at all. Create-time errors _normally_ result in no object at
all, but the provider might return an object if the failure occurred at
a subsequent step of a multi-step creation process and so left behind a
remote object that needs to be cleaned up on a future run.
2018-10-16 19:14:11 -07:00
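A sketch of tainting-by-copying as described above, with invented types; the point is that the original object is never mutated:

```go
package main

import "fmt"

type Status int

const (
	ObjectReady Status = iota
	ObjectTainted
)

type instanceObject struct {
	status Status
	attrs  map[string]string
}

// asTainted returns a tainted copy instead of mutating the receiver,
// matching the by-convention immutability of instance objects.
func (o *instanceObject) asTainted() *instanceObject {
	newAttrs := make(map[string]string, len(o.attrs))
	for k, v := range o.attrs {
		newAttrs[k] = v
	}
	return &instanceObject{status: ObjectTainted, attrs: newAttrs}
}

// maybeTainted is the separate evaluation step: a create-time error
// taints whatever object the provider returned, rather than assuming
// there is no object at all.
func maybeTainted(obj *instanceObject, createErr error) *instanceObject {
	if createErr == nil || obj == nil {
		return obj
	}
	return obj.asTainted()
}

func main() {
	partial := &instanceObject{attrs: map[string]string{"id": "i-abc123"}}
	got := maybeTainted(partial, fmt.Errorf("step 2 of create failed"))
	fmt.Println(got.status == ObjectTainted, partial.status == ObjectReady) // true true
}
```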
Martin Atkins
f561c9c226 core: Populate Dependencies of ResourceInstanceObject during apply
Previously we kept the dependencies one level higher on the resource
instance itself, which meant that updating it was handled in a different
EvalNode, but now we consider these to be dependencies of the object
itself (derived from the configuration that was current at the time it
was created), so we must handle this during EvalApply.

The subtle difference here is that if an object is moved to "deposed"
during a create_before_destroy replace then it will retain the
dependencies it had on its last apply, rather than them being replaced
by the dependencies of the newly-created object.
2018-10-16 19:14:11 -07:00
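A toy model of the subtlety in the last paragraph: a deposed object keeps the dependency snapshot from its own creation, while the new current object gets the dependencies current at this apply (all types here are assumptions):

```go
package main

import "fmt"

type instanceObject struct {
	dependencies []string
}

type instance struct {
	current *instanceObject
	deposed []*instanceObject
}

// createAndDepose sketches a create_before_destroy step: the prior
// object is moved to deposed, retaining the dependencies from its last
// apply, while the new object snapshots the current config dependencies.
func (i *instance) createAndDepose(configDeps []string) {
	if i.current != nil {
		i.deposed = append(i.deposed, i.current) // keeps its old dependencies
	}
	i.current = &instanceObject{dependencies: append([]string(nil), configDeps...)}
}

func main() {
	inst := &instance{current: &instanceObject{dependencies: []string{"aws_vpc.old"}}}
	inst.createAndDepose([]string{"aws_vpc.new"})
	fmt.Println(inst.deposed[0].dependencies) // [aws_vpc.old]
	fmt.Println(inst.current.dependencies)    // [aws_vpc.new]
}
```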
Martin Atkins
0a97daf3de core: Always update resource metadata in state during apply
Previously we had a bug where we would fail to populate resource-level
metadata in the state during apply when count = 0, because the apply
graph would contain only instance nodes, not whole-resource nodes.

To address this, we add to the apply graph a node for each resource in
the configuration alongside the separate resource instance nodes. This
node's job is just to populate the state metadata for the resource, which
ensures it gets updated correctly even when count = 0.

When count is not zero this ends up doing some redundant work that
would've happened as a side-effect of applying individual resource
instances anyway, but it's harmless and makes the updating of our
resource-level metadata more explicit.
2018-10-16 19:14:11 -07:00
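A minimal sketch of the extra whole-resource node idea, with invented types: its only job is to write resource-level metadata, so even count = 0 leaves correct state behind:

```go
package main

import "fmt"

type state struct {
	resources map[string]string // resource address -> metadata (e.g. provider)
}

// resourceMetadataNode stands in for the whole-resource node described
// above: it runs once per resource in the configuration, regardless of
// how many instance nodes exist.
type resourceMetadataNode struct {
	addr     string
	provider string
}

func (n *resourceMetadataNode) execute(s *state) {
	s.resources[n.addr] = n.provider
}

func main() {
	s := &state{resources: map[string]string{}}

	// count = 0: no instance nodes exist, but the resource node still runs,
	// so the resource-level metadata is populated anyway.
	node := &resourceMetadataNode{addr: "aws_instance.web", provider: "aws"}
	node.execute(s)

	fmt.Println(s.resources) // map[aws_instance.web:aws]
}
```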