opentofu/internal/terraform/graph_builder_apply_test.go

package terraform

import (
"fmt"
"strings"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/zclconf/go-cty/cty"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/states"
)

func TestApplyGraphBuilder_impl(t *testing.T) {
var _ GraphBuilder = new(ApplyGraphBuilder)
}

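// TestApplyGraphBuilder exercises the basic case: a plan that creates and
// updates objects in both the root module and a child module, rendered
// against the expected graph string (testApplyGraphBuilderStr).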
func TestApplyGraphBuilder(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.create"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Create,
},
},
{
Addr: mustResourceInstanceAddr("test_object.other"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Update,
},
},
{
Addr: mustResourceInstanceAddr("module.child.test_object.create"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Create,
},
},
{
Addr: mustResourceInstanceAddr("module.child.test_object.other"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Create,
},
},
},
}
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-basic"),
Changes: changes,
Plugins: simpleMockPluginLibrary(),
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
if g.Path.String() != addrs.RootModuleInstance.String() {
t.Fatalf("wrong path %q", g.Path.String())
}
got := strings.TrimSpace(g.String())
want := strings.TrimSpace(testApplyGraphBuilderStr)
if diff := cmp.Diff(want, got); diff != "" {
t.Fatalf("wrong result\n%s", diff)
}
}

// This tests the ordering of two resources where a non-CBD depends
// on a CBD. GH-11349.
func TestApplyGraphBuilder_depCbd(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.A"),
ChangeSrc: plans.ChangeSrc{
Action: plans.CreateThenDelete,
},
},
{
Addr: mustResourceInstanceAddr("test_object.B"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Update,
},
},
},
}
state := states.NewState()
root := state.EnsureModule(addrs.RootModuleInstance)
root.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.A").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"A"}`),
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
root.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.B").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"B","test_list":["x"]}`),
Dependencies: []addrs.ConfigResource{mustConfigResourceAddr("test_object.A")},
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-dep-cbd"),
Changes: changes,
Plugins: simpleMockPluginLibrary(),
State: state,
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
if g.Path.String() != addrs.RootModuleInstance.String() {
t.Fatalf("wrong path %q", g.Path.String())
}
// We're going to go hunting for our deposed instance node here, so we
// can find out its key to use in the assertions below.
var dk states.DeposedKey
for _, v := range g.Vertices() {
tv, ok := v.(*NodeDestroyDeposedResourceInstanceObject)
if !ok {
continue
}
if dk != states.NotDeposed {
t.Fatalf("more than one deposed instance node in the graph; want only one")
}
dk = tv.DeposedKey
}
if dk == states.NotDeposed {
t.Fatalf("no deposed instance node in the graph; want one")
}
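// The deposed key is pseudorandomly allocated, so the expected destroy node
// name must be built dynamically from the key found above.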
destroyName := fmt.Sprintf("test_object.A (destroy deposed %s)", dk)
// Create A, Modify B, Destroy A
testGraphHappensBefore(
t, g,
"test_object.A",
destroyName,
)
testGraphHappensBefore(
t, g,
"test_object.A",
"test_object.B",
)
testGraphHappensBefore(
t, g,
"test_object.B",
destroyName,
)
}
// This tests the ordering of two resources that are both CBD and both
// require destroy/create replacement.
func TestApplyGraphBuilder_doubleCBD(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.A"),
ChangeSrc: plans.ChangeSrc{
Action: plans.CreateThenDelete,
},
},
{
Addr: mustResourceInstanceAddr("test_object.B"),
ChangeSrc: plans.ChangeSrc{
Action: plans.CreateThenDelete,
},
},
},
}
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-double-cbd"),
Changes: changes,
Plugins: simpleMockPluginLibrary(),
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
if g.Path.String() != addrs.RootModuleInstance.String() {
t.Fatalf("wrong path %q", g.Path.String())
}
// We're going to go hunting for our deposed instance nodes here, so we
// can find out their keys to use in the assertions below.
var destroyA, destroyB string
for _, v := range g.Vertices() {
tv, ok := v.(*NodeDestroyDeposedResourceInstanceObject)
if !ok {
continue
}
switch tv.Addr.Resource.Resource.Name {
case "A":
destroyA = fmt.Sprintf("test_object.A (destroy deposed %s)", tv.DeposedKey)
case "B":
destroyB = fmt.Sprintf("test_object.B (destroy deposed %s)", tv.DeposedKey)
default:
t.Fatalf("unknown instance: %s", tv.Addr)
}
}
// Create A, then create B, then destroy each deposed object
testGraphHappensBefore(
t, g,
"test_object.A",
destroyA,
)
testGraphHappensBefore(
t, g,
"test_object.A",
"test_object.B",
)
testGraphHappensBefore(
t, g,
"test_object.B",
destroyB,
)
}

// This tests the ordering of two resources being destroyed whose
// dependency is recorded only in state. GH-11749
func TestApplyGraphBuilder_destroyStateOnly(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("module.child.test_object.A"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
{
Addr: mustResourceInstanceAddr("module.child.test_object.B"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
},
}
state := states.NewState()
root := state.EnsureModule(addrs.RootModuleInstance)
child := state.EnsureModule(addrs.RootModuleInstance.Child("child", addrs.NoKey))
root.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.A").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"foo"}`),
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
child.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.B").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"bar"}`),
Dependencies: []addrs.ConfigResource{mustConfigResourceAddr("module.child.test_object.A")},
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
b := &ApplyGraphBuilder{
Config: testModule(t, "empty"),
Changes: changes,
State: state,
Plugins: simpleMockPluginLibrary(),
}
g, diags := b.Build(addrs.RootModuleInstance)
if diags.HasErrors() {
t.Fatalf("err: %s", diags.Err())
}
if g.Path.String() != addrs.RootModuleInstance.String() {
t.Fatalf("wrong path %q", g.Path.String())
}
testGraphHappensBefore(
t, g,
"module.child.test_object.B (destroy)",
"module.child.test_object.A (destroy)")
}
// This tests the ordering of destroying a single instance of a counted resource.
func TestApplyGraphBuilder_destroyCount(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.A[1]"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
{
Addr: mustResourceInstanceAddr("test_object.B"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Update,
},
},
},
}
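// Prior state: test_object.A[1] exists and test_object.B records a
// dependency on test_object.A, so the builder is expected to order the
// destroy of A[1] before the update of B (see GH-10440).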
state := states.NewState()
root := state.RootModule()
addrA := mustResourceInstanceAddr("test_object.A[1]")
root.SetResourceInstanceCurrent(
addrA.Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"B"}`),
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
root.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.B").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"B"}`),
Dependencies: []addrs.ConfigResource{addrA.ContainingResource().Config()},
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-count"),
Changes: changes,
Plugins: simpleMockPluginLibrary(),
State: state,
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
if g.Path.String() != addrs.RootModuleInstance.String() {
t.Fatalf("wrong module path %q", g.Path)
}
got := strings.TrimSpace(g.String())
want := strings.TrimSpace(testApplyGraphBuilderDestroyCountStr)
if diff := cmp.Diff(want, got); diff != "" {
t.Fatalf("wrong result\n%s", diff)
}
}
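// Ensure that resources destroyed along with their containing modules are
// ordered so that dependents are destroyed before their dependencies:
// module.B.test_object.foo depends on module.A.test_object.foo, so it must
// be destroyed first.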
func TestApplyGraphBuilder_moduleDestroy(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("module.A.test_object.foo"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
{
Addr: mustResourceInstanceAddr("module.B.test_object.foo"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
},
}
state := states.NewState()
modA := state.EnsureModule(addrs.RootModuleInstance.Child("A", addrs.NoKey))
modA.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.foo").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"foo"}`),
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
modB := state.EnsureModule(addrs.RootModuleInstance.Child("B", addrs.NoKey))
modB.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.foo").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"foo","value":"foo"}`),
Dependencies: []addrs.ConfigResource{mustConfigResourceAddr("module.A.test_object.foo")},
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-module-destroy"),
Changes: changes,
Plugins: simpleMockPluginLibrary(),
State: state,
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
testGraphHappensBefore(
t, g,
"module.B.test_object.foo (destroy)",
"module.A.test_object.foo (destroy)",
)
}
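// Ensure that targeting a single module excludes graph nodes that belong
// only to untargeted sibling modules.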
func TestApplyGraphBuilder_targetModule(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.foo"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Update,
},
},
{
Addr: mustResourceInstanceAddr("module.child2.test_object.foo"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Update,
},
},
},
}
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-target-module"),
Changes: changes,
Plugins: simpleMockPluginLibrary(),
Targets: []addrs.Targetable{
addrs.RootModuleInstance.Child("child2", addrs.NoKey),
},
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
testGraphNotContains(t, g, "module.child1.output.instance_id")
}
// Ensure that an update resulting from the removal of a resource happens after
// that resource is destroyed.
func TestApplyGraphBuilder_updateFromOrphan(t *testing.T) {
schemas := simpleTestSchemas()
instanceSchema := schemas.Providers[addrs.NewDefaultProvider("test")].ResourceTypes["test_object"]
bBefore, _ := plans.NewDynamicValue(
cty.ObjectVal(map[string]cty.Value{
"id": cty.StringVal("b_id"),
"test_string": cty.StringVal("a_id"),
}), instanceSchema.ImpliedType())
bAfter, _ := plans.NewDynamicValue(
cty.ObjectVal(map[string]cty.Value{
"id": cty.StringVal("b_id"),
"test_string": cty.StringVal("changed"),
}), instanceSchema.ImpliedType())
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.a"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
{
Addr: mustResourceInstanceAddr("test_object.b"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Update,
Before: bBefore,
After: bAfter,
},
},
},
}
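	// The prior state still records test_object.b's dependency on
	// test_object.a, which is being removed from the configuration and
	// destroyed as an orphan.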
state := states.NewState()
root := state.EnsureModule(addrs.RootModuleInstance)
root.SetResourceInstanceCurrent(
addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_object",
Name: "a",
}.Instance(addrs.NoKey),
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"a_id"}`),
},
addrs.AbsProviderConfig{
Provider: addrs.NewDefaultProvider("test"),
Module: addrs.RootModule,
},
)
root.SetResourceInstanceCurrent(
addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_object",
Name: "b",
}.Instance(addrs.NoKey),
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"b_id","test_string":"a_id"}`),
Dependencies: []addrs.ConfigResource{
{
Resource: addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_object",
Name: "a",
},
Module: root.Addr.Module(),
},
},
},
addrs.AbsProviderConfig{
Provider: addrs.NewDefaultProvider("test"),
Module: addrs.RootModule,
},
)
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-orphan-update"),
Changes: changes,
Plugins: simpleMockPluginLibrary(),
State: state,
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
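	// test_object.b's update must be ordered after the destruction of
	// test_object.a, which it depended on.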
expected := strings.TrimSpace(`
test_object.a (destroy)
test_object.b
test_object.a (destroy)
`)
instanceGraph := filterInstances(g)
got := strings.TrimSpace(instanceGraph.String())
if got != expected {
t.Fatalf("expected:\n%s\ngot:\n%s", expected, got)
}
}
// Ensure that an update resulting from the removal of a resource happens before
// a CBD resource is destroyed.
func TestApplyGraphBuilder_updateFromCBDOrphan(t *testing.T) {
schemas := simpleTestSchemas()
instanceSchema := schemas.Providers[addrs.NewDefaultProvider("test")].ResourceTypes["test_object"]
bBefore, _ := plans.NewDynamicValue(
cty.ObjectVal(map[string]cty.Value{
"id": cty.StringVal("b_id"),
"test_string": cty.StringVal("a_id"),
}), instanceSchema.ImpliedType())
bAfter, _ := plans.NewDynamicValue(
cty.ObjectVal(map[string]cty.Value{
"id": cty.StringVal("b_id"),
"test_string": cty.StringVal("changed"),
}), instanceSchema.ImpliedType())
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.a"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
{
Addr: mustResourceInstanceAddr("test_object.b"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Update,
Before: bBefore,
After: bAfter,
},
},
},
}
state := states.NewState()
root := state.EnsureModule(addrs.RootModuleInstance)
root.SetResourceInstanceCurrent(
addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_object",
Name: "a",
}.Instance(addrs.NoKey),
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"a_id"}`),
CreateBeforeDestroy: true,
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
root.SetResourceInstanceCurrent(
addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_object",
Name: "b",
}.Instance(addrs.NoKey),
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"b_id","test_string":"a_id"}`),
Dependencies: []addrs.ConfigResource{
{
Resource: addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_object",
Name: "a",
},
Module: root.Addr.Module(),
},
},
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-orphan-update"),
Changes: changes,
Plugins: simpleMockPluginLibrary(),
State: state,
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
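	// Because test_object.a was created with create_before_destroy, the usual
	// ordering is inverted: its destroy must wait for test_object.b's update.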
expected := strings.TrimSpace(`
test_object.a (destroy)
test_object.b
test_object.b
`)
instanceGraph := filterInstances(g)
got := strings.TrimSpace(instanceGraph.String())
if got != expected {
t.Fatalf("expected:\n%s\ngot:\n%s", expected, got)
}
}
// The orphan cleanup node should not be connected to a provider.
func TestApplyGraphBuilder_orphanedWithProvider(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.A"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
},
}
state := states.NewState()
root := state.EnsureModule(addrs.RootModuleInstance)
root.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.A").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"A"}`),
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"].foo`),
)
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-orphan-alias"),
Changes: changes,
Plugins: simpleMockPluginLibrary(),
State: state,
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatal(err)
}
	// The cleanup node has no state or config of its own, so it would
	// otherwise create a default provider node, which we don't want.
testGraphNotContains(t, g, "provider.test")
}
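// Expected graph strings, as produced by g.String(): each vertex is listed
// once, with its dependencies indented beneath it.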
const testApplyGraphBuilderStr = `
module.child (close)
module.child.test_object.other
module.child (expand)
module.child.test_object.create
module.child.test_object.create (expand)
module.child.test_object.create (expand)
module.child (expand)
provider["registry.terraform.io/hashicorp/test"]
module.child.test_object.other
module.child.test_object.create
module.child.test_object.other (expand)
module.child.test_object.other (expand)
module.child (expand)
provider["registry.terraform.io/hashicorp/test"]
provider["registry.terraform.io/hashicorp/test"]
provider["registry.terraform.io/hashicorp/test"] (close)
module.child.test_object.other
test_object.other
root
module.child (close)
provider["registry.terraform.io/hashicorp/test"] (close)
test_object.create
test_object.create (expand)
test_object.create (expand)
provider["registry.terraform.io/hashicorp/test"]
test_object.other
test_object.create
test_object.other (expand)
test_object.other (expand)
provider["registry.terraform.io/hashicorp/test"]
`
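// The expected graph for the destroy-count test above: test_object.B's
// update must be ordered after the destruction of test_object.A[1], which it
// references.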
const testApplyGraphBuilderDestroyCountStr = `
provider["registry.terraform.io/hashicorp/test"]
provider["registry.terraform.io/hashicorp/test"] (close)
test_object.B
root
provider["registry.terraform.io/hashicorp/test"] (close)
test_object.A (expand)
provider["registry.terraform.io/hashicorp/test"]
test_object.A[1] (destroy)
provider["registry.terraform.io/hashicorp/test"]
test_object.B
test_object.A (expand)
test_object.A[1] (destroy)
test_object.B (expand)
test_object.B (expand)
provider["registry.terraform.io/hashicorp/test"]
`