opentofu/internal/terraform/node_resource_destroy_deposed.go

package terraform
import (
"fmt"
"log"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/dag"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/tfdiags"
)
// ConcreteResourceInstanceDeposedNodeFunc is a callback type used to convert
// an abstract resource instance to a concrete one of some type that has
// an associated deposed object key.
type ConcreteResourceInstanceDeposedNodeFunc func(*NodeAbstractResourceInstance, states.DeposedKey) dag.Vertex
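// Example (illustrative sketch, not part of the original file): a graph
// builder could satisfy ConcreteResourceInstanceDeposedNodeFunc with a
// closure like the following, producing plan-time nodes for deposed objects.
// The variable name is hypothetical.
//
//	concreteDeposed := func(a *NodeAbstractResourceInstance, key states.DeposedKey) dag.Vertex {
//		return &NodePlanDeposedResourceInstanceObject{
//			NodeAbstractResourceInstance: a,
//			DeposedKey:                   key,
//		}
//	}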
type GraphNodeDeposedResourceInstanceObject interface {
	DeposedInstanceObjectKey() states.DeposedKey
}
// NodePlanDeposedResourceInstanceObject represents deposed resource
// instance objects during plan. These are distinct from the primary object
// for each resource instance since the only valid operation to do with them
// is to destroy them.
//
// This node type is also used during the refresh walk to ensure that the
// record of a deposed object is up-to-date before we plan to destroy it.
type NodePlanDeposedResourceInstanceObject struct {
	*NodeAbstractResourceInstance
	DeposedKey states.DeposedKey
	// skipRefresh indicates that we should skip refreshing individual instances
	skipRefresh bool

	// skipPlanChanges indicates we should skip trying to plan change actions
	// for any instances.
	skipPlanChanges bool
}
var (
	_ GraphNodeDeposedResourceInstanceObject = (*NodePlanDeposedResourceInstanceObject)(nil)
	_ GraphNodeConfigResource = (*NodePlanDeposedResourceInstanceObject)(nil)
	_ GraphNodeResourceInstance = (*NodePlanDeposedResourceInstanceObject)(nil)
	_ GraphNodeReferenceable = (*NodePlanDeposedResourceInstanceObject)(nil)
	_ GraphNodeReferencer = (*NodePlanDeposedResourceInstanceObject)(nil)
	_ GraphNodeExecutable = (*NodePlanDeposedResourceInstanceObject)(nil)
	_ GraphNodeProviderConsumer = (*NodePlanDeposedResourceInstanceObject)(nil)
	_ GraphNodeProvisionerConsumer = (*NodePlanDeposedResourceInstanceObject)(nil)
)
func (n *NodePlanDeposedResourceInstanceObject) Name() string {
	return fmt.Sprintf("%s (deposed %s)", n.ResourceInstanceAddr().String(), n.DeposedKey)
}
func (n *NodePlanDeposedResourceInstanceObject) DeposedInstanceObjectKey() states.DeposedKey {
	return n.DeposedKey
}
// GraphNodeReferenceable implementation, overriding the one from NodeAbstractResourceInstance
func (n *NodePlanDeposedResourceInstanceObject) ReferenceableAddrs() []addrs.Referenceable {
	// Deposed objects don't participate in references.
	return nil
}
// GraphNodeReferencer implementation, overriding the one from NodeAbstractResourceInstance
func (n *NodePlanDeposedResourceInstanceObject) References() []*addrs.Reference {
	// We don't evaluate configuration for deposed objects, so they effectively
	// make no references.
	return nil
}
// GraphNodeExecutable impl.
func (n *NodePlanDeposedResourceInstanceObject) Execute(ctx EvalContext, op walkOperation) (diags tfdiags.Diagnostics) {
	log.Printf("[TRACE] NodePlanDeposedResourceInstanceObject: planning %s deposed object %s", n.Addr, n.DeposedKey)

	// Read the state for the deposed resource instance
	state, err := n.readResourceInstanceStateDeposed(ctx, n.Addr, n.DeposedKey)
	diags = diags.Append(err)
	if diags.HasErrors() {
		return diags
	}
	// Note any upgrades that readResourceInstanceStateDeposed might've done in
	// the prevRunState, so that it'll conform to current schema.
	diags = diags.Append(n.writeResourceInstanceStateDeposed(ctx, n.DeposedKey, state, prevRunState))
	if diags.HasErrors() {
		return diags
	}

	// Also the refreshState, because that should still reflect schema upgrades
	// even if not refreshing.
	diags = diags.Append(n.writeResourceInstanceStateDeposed(ctx, n.DeposedKey, state, refreshState))
	if diags.HasErrors() {
		return diags
	}

	// We don't refresh during the planDestroy walk, since that is only adding
	// the destroy changes to the plan and the provider will not be configured
	// at this point. The other nodes use separate types for plan and destroy,
	// while deposed instances are always a destroy operation, so the logic
	// here is a bit overloaded.
	if !n.skipRefresh && op != walkPlanDestroy {
		// Refresh this object even though it is going to be destroyed, in
		// case it's already been deleted outside of Terraform. If this is a
		// normal plan, providers expect a Read request to remove missing
		// resources from the plan before apply, and may not handle a missing
		// resource during Delete correctly. If this is a simple refresh,
		// Terraform is expected to remove the missing resource from the state
		// entirely.
		refreshedState, refreshDiags := n.refresh(ctx, n.DeposedKey, state)
		diags = diags.Append(refreshDiags)
		if diags.HasErrors() {
			return diags
		}

		diags = diags.Append(n.writeResourceInstanceStateDeposed(ctx, n.DeposedKey, refreshedState, refreshState))
		if diags.HasErrors() {
			return diags
		}

		// If we refreshed then our subsequent planning should be in terms of
		// the new object, not the original object.
		state = refreshedState
	}
	if !n.skipPlanChanges {
		var change *plans.ResourceInstanceChange
		change, destroyPlanDiags := n.planDestroy(ctx, state, n.DeposedKey)
		diags = diags.Append(destroyPlanDiags)
		if diags.HasErrors() {
			return diags
		}

		// NOTE: We don't check prevent_destroy for deposed objects, even
		// though we would do so here for a "current" object, because
		// if we've reached a point where an object is already deposed then
		// we've already planned and partially-executed a create_before_destroy
		// replace and we would've checked prevent_destroy at that point. We
		// now just need to get the deposed object destroyed, because there
		// should be a new object already serving as its replacement.
		diags = diags.Append(n.writeChange(ctx, change, n.DeposedKey))
		if diags.HasErrors() {
			return diags
		}

		diags = diags.Append(n.writeResourceInstanceStateDeposed(ctx, n.DeposedKey, nil, workingState))
	} else {
		// The working state should at least be updated with the result
		// of upgrading and refreshing from above.
		diags = diags.Append(n.writeResourceInstanceStateDeposed(ctx, n.DeposedKey, state, workingState))
	}

	return diags
}
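// Example (illustrative summary, not part of the original file): on a normal
// plan walk with skipRefresh and skipPlanChanges both false, Execute above
// leaves the three state snapshots roughly as follows.
//
//	prevRunState: the stored deposed object, schema-upgraded
//	refreshState: the deposed object, schema-upgraded and refreshed
//	workingState: the deposed object removed, since its planned action is Delete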
// NodeDestroyDeposedResourceInstanceObject represents deposed resource
// instance objects during apply. Nodes of this type are inserted by
// DiffTransformer when the planned changeset contains "delete" changes for
// deposed instance objects, and its only supported operation is to destroy
// and then forget the associated object.
type NodeDestroyDeposedResourceInstanceObject struct {
	*NodeAbstractResourceInstance
	DeposedKey states.DeposedKey
}
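// Example (illustrative sketch, not part of the original file): DiffTransformer
// is described above as inserting these nodes for planned "delete" changes on
// deposed objects; a simplified version of that wiring might look like the
// following, where change, abstract, and g are assumed to be in scope.
//
//	if change.Action == plans.Delete && change.DeposedKey != states.NotDeposed {
//		g.Add(&NodeDestroyDeposedResourceInstanceObject{
//			NodeAbstractResourceInstance: abstract,
//			DeposedKey:                   change.DeposedKey,
//		})
//	}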
var (
	_ GraphNodeDeposedResourceInstanceObject = (*NodeDestroyDeposedResourceInstanceObject)(nil)
	_ GraphNodeConfigResource = (*NodeDestroyDeposedResourceInstanceObject)(nil)
	_ GraphNodeResourceInstance = (*NodeDestroyDeposedResourceInstanceObject)(nil)
	_ GraphNodeDestroyer = (*NodeDestroyDeposedResourceInstanceObject)(nil)
	_ GraphNodeDestroyerCBD = (*NodeDestroyDeposedResourceInstanceObject)(nil)
	_ GraphNodeReferenceable = (*NodeDestroyDeposedResourceInstanceObject)(nil)
	_ GraphNodeReferencer = (*NodeDestroyDeposedResourceInstanceObject)(nil)
	_ GraphNodeExecutable = (*NodeDestroyDeposedResourceInstanceObject)(nil)
	_ GraphNodeProviderConsumer = (*NodeDestroyDeposedResourceInstanceObject)(nil)
	_ GraphNodeProvisionerConsumer = (*NodeDestroyDeposedResourceInstanceObject)(nil)
)
func (n *NodeDestroyDeposedResourceInstanceObject) Name() string {
	return fmt.Sprintf("%s (destroy deposed %s)", n.ResourceInstanceAddr(), n.DeposedKey)
}
func (n *NodeDestroyDeposedResourceInstanceObject) DeposedInstanceObjectKey() states.DeposedKey {
	return n.DeposedKey
}
// GraphNodeReferenceable implementation, overriding the one from NodeAbstractResourceInstance
func (n *NodeDestroyDeposedResourceInstanceObject) ReferenceableAddrs() []addrs.Referenceable {
	// Deposed objects don't participate in references.
	return nil
}
// GraphNodeReferencer implementation, overriding the one from NodeAbstractResourceInstance
func (n *NodeDestroyDeposedResourceInstanceObject) References() []*addrs.Reference {
	// We don't evaluate configuration for deposed objects, so they effectively
	// make no references.
	return nil
}
// GraphNodeDestroyer
func (n *NodeDestroyDeposedResourceInstanceObject) DestroyAddr() *addrs.AbsResourceInstance {
	addr := n.ResourceInstanceAddr()
	return &addr
}
// GraphNodeDestroyerCBD
func (n *NodeDestroyDeposedResourceInstanceObject) CreateBeforeDestroy() bool {
	// A deposed instance is always CreateBeforeDestroy by definition, since
	// we use deposed only to handle create-before-destroy.
	return true
}
// GraphNodeDestroyerCBD
func (n *NodeDestroyDeposedResourceInstanceObject) ModifyCreateBeforeDestroy(v bool) error {
	if !v {
		// Should never happen: deposed instances are _always_ create_before_destroy.
		return fmt.Errorf("can't deactivate create_before_destroy for a deposed instance")
	}
	return nil
}
// GraphNodeExecutable impl.
func (n *NodeDestroyDeposedResourceInstanceObject) Execute(ctx EvalContext, op walkOperation) (diags tfdiags.Diagnostics) {
	var change *plans.ResourceInstanceChange

	// Read the state for the deposed resource instance
	state, err := n.readResourceInstanceStateDeposed(ctx, n.Addr, n.DeposedKey)
	if err != nil {
		return diags.Append(err)
	}

	if state == nil {
		diags = diags.Append(fmt.Errorf("missing deposed state for %s (%s)", n.Addr, n.DeposedKey))
		return diags
	}
	change, destroyPlanDiags := n.planDestroy(ctx, state, n.DeposedKey)
	diags = diags.Append(destroyPlanDiags)
	if diags.HasErrors() {
		return diags
	}

	// Call pre-apply hook
	diags = diags.Append(n.preApplyHook(ctx, change))
	if diags.HasErrors() {
		return diags
	}
	// we pass a nil configuration to apply because we are destroying
	state, applyDiags := n.apply(ctx, state, change, nil, false)
	diags = diags.Append(applyDiags)
	// don't return immediately on errors, we need to handle the state

	// Always write the resource back to the state deposed. If it
	// was successfully destroyed it will be pruned. If it was not, it will
	// be caught on the next run.
	writeDiags := n.writeResourceInstanceState(ctx, state)
	diags = diags.Append(writeDiags)
	if diags.HasErrors() {
		return diags
	}

	diags = diags.Append(n.postApplyHook(ctx, state, diags.Err()))
	return diags.Append(updateStateHook(ctx))
}
// GraphNodeDeposer is an optional interface implemented by graph nodes that
// might create a single new deposed object for a specific associated resource
// instance, allowing a caller to optionally pre-allocate a DeposedKey for
// it.
type GraphNodeDeposer interface {
	// SetPreallocatedDeposedKey will be called during graph construction
	// if a particular node must use a pre-allocated deposed key if/when it
	// "deposes" the current object of its associated resource instance.
	SetPreallocatedDeposedKey(key states.DeposedKey)
}
// graphNodeDeposer is an embeddable implementation of GraphNodeDeposer.
// Embed it in a node type to get automatic support for it, and then access
// the field PreallocatedDeposedKey to access any pre-allocated key.
type graphNodeDeposer struct {
	PreallocatedDeposedKey states.DeposedKey
}
func (n *graphNodeDeposer) SetPreallocatedDeposedKey(key states.DeposedKey) {
	n.PreallocatedDeposedKey = key
}
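// Example (illustrative sketch, not part of the original file): a node type
// gains pre-allocation support by embedding graphNodeDeposer, and a graph
// transformer can then pin the key that both the apply and destroy sides will
// use. states.NewDeposedKey and the node/variable names here are assumptions.
//
//	type nodeExampleApplyable struct {
//		*NodeAbstractResourceInstance
//		graphNodeDeposer // provides SetPreallocatedDeposedKey
//	}
//
//	key := states.NewDeposedKey()
//	applyNode.SetPreallocatedDeposedKey(key) // apply will depose the prior object under this key
//	destroyNode := &NodeDestroyDeposedResourceInstanceObject{
//		NodeAbstractResourceInstance: abstract,
//		DeposedKey:                   key, // destroy targets exactly that object
//	}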
func (n *NodeDestroyDeposedResourceInstanceObject) writeResourceInstanceState(ctx EvalContext, obj *states.ResourceInstanceObject) error {
	absAddr := n.Addr
	key := n.DeposedKey
	state := ctx.State()

	if key == states.NotDeposed {
		// should never happen
		return fmt.Errorf("can't save deposed object for %s without a deposed key; this is a bug in Terraform that should be reported", absAddr)
	}

	if obj == nil {
		// No need to encode anything: we'll just write it directly.
		state.SetResourceInstanceDeposed(absAddr, key, nil, n.ResolvedProvider)
		log.Printf("[TRACE] writeResourceInstanceStateDeposed: removing state object for %s deposed %s", absAddr, key)
		return nil
	}

	_, providerSchema, err := getProvider(ctx, n.ResolvedProvider)
	if err != nil {
		return err
	}
	if providerSchema == nil {
		// Should never happen, unless our state object is nil
		panic("writeResourceInstanceStateDeposed used with no ProviderSchema object")
	}

	schema, currentVersion := providerSchema.SchemaForResourceAddr(absAddr.ContainingResource().Resource)
	if schema == nil {
		// It shouldn't be possible to get this far in any real scenario
		// without a schema, but we might end up here in contrived tests that
		// fail to set up their world properly.
		return fmt.Errorf("failed to encode %s in state: no resource type schema available", absAddr)
	}

	src, err := obj.Encode(schema.ImpliedType(), currentVersion)
	if err != nil {
		return fmt.Errorf("failed to encode %s in state: %s", absAddr, err)
	}

	log.Printf("[TRACE] writeResourceInstanceStateDeposed: writing state object for %s deposed %s", absAddr, key)
	state.SetResourceInstanceDeposed(absAddr, key, src, n.ResolvedProvider)

	return nil
}
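// Example (illustrative sketch, not part of the original file): the destroy
// node's Execute above always calls this helper after apply. On a successful
// destroy the post-apply object is nil, so the deposed entry is removed; on
// failure the remaining object is re-encoded and kept deposed for a later run.
//
//	if err := n.writeResourceInstanceState(ctx, state); err != nil {
//		diags = diags.Append(err)
//	}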