// opentofu/terraform/graph_builder_apply_test.go

package terraform

import (
"fmt"
"strings"
"testing"
"github.com/hashicorp/terraform/addrs"
"github.com/hashicorp/terraform/plans"
"github.com/hashicorp/terraform/states"
"github.com/zclconf/go-cty/cty"
)
func TestApplyGraphBuilder_impl(t *testing.T) {
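// Compile-time check that *ApplyGraphBuilder satisfies the GraphBuilder
// interface; assigning to the blank identifier retains nothing at runtime.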
var _ GraphBuilder = new(ApplyGraphBuilder)
}
func TestApplyGraphBuilder(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.create"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Create,
},
},
{
Addr: mustResourceInstanceAddr("test_object.other"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Update,
},
},
{
Addr: mustResourceInstanceAddr("module.child.test_object.create"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Create,
},
},
{
Addr: mustResourceInstanceAddr("module.child.test_object.other"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Create,
},
},
},
}
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-basic"),
Changes: changes,
Components: simpleMockComponentFactory(),
Schemas: simpleTestSchemas(),
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
if g.Path.String() != addrs.RootModuleInstance.String() {
t.Fatalf("wrong path %q", g.Path.String())
}
actual := strings.TrimSpace(g.String())
expected := strings.TrimSpace(testApplyGraphBuilderStr)
if actual != expected {
t.Fatalf("wrong result\n\ngot:\n%s\n\nwant:\n%s", actual, expected)
}
}
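// The mustResourceInstanceAddr and mustProviderConfig helpers used throughout
// these tests are defined elsewhere in this package. A minimal sketch of the
// idea, assuming they are thin panicking wrappers around the addrs parsers
// (the real helpers may differ):
//
//	func mustResourceInstanceAddr(s string) addrs.AbsResourceInstance {
//	    addr, diags := addrs.ParseAbsResourceInstanceStr(s)
//	    if diags.HasErrors() {
//	        panic(diags.Err())
//	    }
//	    return addr
//	}
//
//	func mustProviderConfig(s string) addrs.AbsProviderConfig {
//	    p, diags := addrs.ParseAbsProviderConfigStr(s)
//	    if diags.HasErrors() {
//	        panic(diags.Err())
//	    }
//	    return p
//	}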
// This tests the ordering of two resources where a non-CBD depends
// on a CBD. GH-11349.
func TestApplyGraphBuilder_depCbd(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.A"),
ChangeSrc: plans.ChangeSrc{
Action: plans.CreateThenDelete,
},
},
{
Addr: mustResourceInstanceAddr("test_object.B"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Update,
},
},
},
}
state := states.NewState()
root := state.EnsureModule(addrs.RootModuleInstance)
root.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.A").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"A"}`),
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
root.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.B").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"B","test_list":["x"]}`),
Dependencies: []addrs.ConfigResource{mustConfigResourceAddr("test_object.A")},
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-dep-cbd"),
Changes: changes,
Components: simpleMockComponentFactory(),
Schemas: simpleTestSchemas(),
State: state,
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
if g.Path.String() != addrs.RootModuleInstance.String() {
t.Fatalf("wrong path %q", g.Path.String())
}
// We're going to go hunting for our deposed instance node here, so we
// can find out its key to use in the assertions below.
var dk states.DeposedKey
for _, v := range g.Vertices() {
tv, ok := v.(*NodeDestroyDeposedResourceInstanceObject)
if !ok {
continue
}
if dk != states.NotDeposed {
t.Fatalf("more than one deposed instance node in the graph; want only one")
}
dk = tv.DeposedKey
}
if dk == states.NotDeposed {
t.Fatalf("no deposed instance node in the graph; want one")
}
destroyName := fmt.Sprintf("test_object.A (destroy deposed %s)", dk)
// Create A, Modify B, Destroy A
testGraphHappensBefore(
t, g,
"test_object.A",
destroyName,
)
testGraphHappensBefore(
t, g,
"test_object.A",
"test_object.B",
)
testGraphHappensBefore(
t, g,
"test_object.B",
destroyName,
)
}
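// testGraphHappensBefore is shared test infrastructure in this package: it
// fails the test unless the vertex named first is an ancestor of the vertex
// named second. A rough sketch of how such a helper can be written against
// the dag package (an assumed shape, not the verbatim implementation):
//
//	func testGraphHappensBefore(t *testing.T, g *Graph, before, after string) {
//	    t.Helper()
//	    for _, v := range g.Vertices() {
//	        if dag.VertexName(v) != after {
//	            continue
//	        }
//	        ancestors, err := g.Ancestors(v)
//	        if err != nil {
//	            t.Fatalf("error walking graph: %s", err)
//	        }
//	        for _, a := range ancestors.List() {
//	            if dag.VertexName(a) == before {
//	                return
//	            }
//	        }
//	        t.Fatalf("%q is not an ancestor of %q", before, after)
//	    }
//	    t.Fatalf("vertex %q not found in graph", after)
//	}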
// This tests the ordering of two resources that are both CBD and
// require destroy/create.
func TestApplyGraphBuilder_doubleCBD(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.A"),
ChangeSrc: plans.ChangeSrc{
Action: plans.CreateThenDelete,
},
},
{
Addr: mustResourceInstanceAddr("test_object.B"),
ChangeSrc: plans.ChangeSrc{
Action: plans.CreateThenDelete,
},
},
},
}
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-double-cbd"),
Changes: changes,
Components: simpleMockComponentFactory(),
Schemas: simpleTestSchemas(),
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
if g.Path.String() != addrs.RootModuleInstance.String() {
t.Fatalf("wrong path %q", g.Path.String())
}
// We're going to go hunting for our deposed instance nodes here, so we
// can find out their keys to use in the assertions below.
var destroyA, destroyB string
for _, v := range g.Vertices() {
tv, ok := v.(*NodeDestroyDeposedResourceInstanceObject)
if !ok {
continue
}
switch tv.Addr.Resource.Resource.Name {
case "A":
destroyA = fmt.Sprintf("test_object.A (destroy deposed %s)", tv.DeposedKey)
case "B":
destroyB = fmt.Sprintf("test_object.B (destroy deposed %s)", tv.DeposedKey)
default:
t.Fatalf("unknown instance: %s", tv.Addr)
}
}
// Create A, Create B, then destroy the deposed object of each
testGraphHappensBefore(
t, g,
"test_object.A",
destroyA,
)
testGraphHappensBefore(
t, g,
"test_object.A",
"test_object.B",
)
testGraphHappensBefore(
t, g,
"test_object.B",
destroyB,
)
}
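// Deposed keys are allocated pseudorandomly at apply time, which is why the
// tests above discover them from the graph rather than hard-coding them. The
// allocation in the states package is roughly (a sketch, not the exact code):
//
//	func NewDeposedKey() DeposedKey {
//	    return DeposedKey(fmt.Sprintf("%08x", rand.Uint32()))
//	}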
// This tests the ordering of two resources being destroyed that depend
// on each other from only state. GH-11749
func TestApplyGraphBuilder_destroyStateOnly(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("module.child.test_object.A"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
{
Addr: mustResourceInstanceAddr("module.child.test_object.B"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
},
}
state := states.NewState()
root := state.EnsureModule(addrs.RootModuleInstance)
child := state.EnsureModule(addrs.RootModuleInstance.Child("child", addrs.NoKey))
root.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.A").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"foo"}`),
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
child.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.B").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"bar"}`),
Dependencies: []addrs.ConfigResource{mustConfigResourceAddr("module.child.test_object.A")},
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
b := &ApplyGraphBuilder{
Config: testModule(t, "empty"),
Changes: changes,
State: state,
Components: simpleMockComponentFactory(),
Schemas: simpleTestSchemas(),
}
g, diags := b.Build(addrs.RootModuleInstance)
if diags.HasErrors() {
t.Fatalf("err: %s", diags.Err())
}
if g.Path.String() != addrs.RootModuleInstance.String() {
t.Fatalf("wrong path %q", g.Path.String())
}
testGraphHappensBefore(
t, g,
"module.child.test_object.B (destroy)",
"module.child.test_object.A (destroy)")
}
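// Note that TestApplyGraphBuilder_destroyStateOnly uses an "empty" config:
// the only record of the relationship between A and B is the Dependencies
// list stored in B's state entry, and that alone must be enough to order
// B's destroy before A's.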
// This tests the ordering of destroying a single count of a resource.
func TestApplyGraphBuilder_destroyCount(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.A[1]"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
{
Addr: mustResourceInstanceAddr("test_object.B"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Update,
},
},
},
}
state := states.NewState()
root := state.RootModule()
addrA := mustResourceInstanceAddr("test_object.A[1]")
root.SetResourceInstanceCurrent(
addrA.Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"B"}`),
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
root.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.B").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"B"}`),
Dependencies: []addrs.ConfigResource{addrA.ContainingResource().Config()},
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-count"),
Changes: changes,
Components: simpleMockComponentFactory(),
Schemas: simpleTestSchemas(),
State: state,
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
if g.Path.String() != addrs.RootModuleInstance.String() {
t.Fatalf("wrong module path %q", g.Path)
}
actual := strings.TrimSpace(g.String())
expected := strings.TrimSpace(testApplyGraphBuilderDestroyCountStr)
if actual != expected {
t.Fatalf("wrong result\n\ngot:\n%s\n\nwant:\n%s", actual, expected)
}
}
func TestApplyGraphBuilder_moduleDestroy(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("module.A.test_object.foo"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
{
Addr: mustResourceInstanceAddr("module.B.test_object.foo"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
},
}
state := states.NewState()
modA := state.EnsureModule(addrs.RootModuleInstance.Child("A", addrs.NoKey))
modA.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.foo").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"foo"}`),
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
modB := state.EnsureModule(addrs.RootModuleInstance.Child("B", addrs.NoKey))
modB.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.foo").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"foo","value":"foo"}`),
Dependencies: []addrs.ConfigResource{mustConfigResourceAddr("module.A.test_object.foo")},
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-module-destroy"),
Changes: changes,
Components: simpleMockComponentFactory(),
Schemas: simpleTestSchemas(),
State: state,
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
testGraphHappensBefore(
t, g,
"module.B.test_object.foo (destroy)",
"module.A.test_object.foo (destroy)",
)
}
func TestApplyGraphBuilder_provisioner(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.foo"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Create,
},
},
},
}
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-provisioner"),
Changes: changes,
Components: simpleMockComponentFactory(),
Schemas: simpleTestSchemas(),
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
testGraphContains(t, g, "provisioner.test")
testGraphHappensBefore(
t, g,
"provisioner.test",
"test_object.foo",
)
}
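// simpleMockComponentFactory is assumed to register mock "test" provider and
// "test" provisioner implementations, which is how the builder can resolve
// provisioner.test here. testGraphContains is the counterpart helper to
// testGraphHappensBefore; a sketch under the same assumptions:
//
//	func testGraphContains(t *testing.T, g *Graph, name string) {
//	    t.Helper()
//	    for _, v := range g.Vertices() {
//	        if dag.VertexName(v) == name {
//	            return
//	        }
//	    }
//	    t.Fatalf("graph does not contain %q", name)
//	}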
func TestApplyGraphBuilder_provisionerDestroy(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.foo"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
},
}
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-provisioner"),
Changes: changes,
Components: simpleMockComponentFactory(),
Schemas: simpleTestSchemas(),
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
testGraphContains(t, g, "provisioner.test")
testGraphHappensBefore(
t, g,
"provisioner.test",
"test_object.foo (destroy)",
)
}
func TestApplyGraphBuilder_targetModule(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.foo"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Update,
},
},
{
Addr: mustResourceInstanceAddr("module.child2.test_object.foo"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Update,
},
},
},
}
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-target-module"),
Changes: changes,
Components: simpleMockComponentFactory(),
Schemas: simpleTestSchemas(),
Targets: []addrs.Targetable{
addrs.RootModuleInstance.Child("child2", addrs.NoKey),
},
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
testGraphNotContains(t, g, "module.child1.output.instance_id")
}
// Ensure that an update resulting from the removal of a resource happens after
// that resource is destroyed.
func TestApplyGraphBuilder_updateFromOrphan(t *testing.T) {
schemas := simpleTestSchemas()
instanceSchema := schemas.Providers[addrs.NewDefaultProvider("test")].ResourceTypes["test_object"]
bBefore, _ := plans.NewDynamicValue(
cty.ObjectVal(map[string]cty.Value{
"id": cty.StringVal("b_id"),
"test_string": cty.StringVal("a_id"),
}), instanceSchema.ImpliedType())
bAfter, _ := plans.NewDynamicValue(
cty.ObjectVal(map[string]cty.Value{
"id": cty.StringVal("b_id"),
"test_string": cty.StringVal("changed"),
}), instanceSchema.ImpliedType())
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.a"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
{
Addr: mustResourceInstanceAddr("test_object.b"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Update,
Before: bBefore,
After: bAfter,
},
},
},
}
state := states.NewState()
root := state.EnsureModule(addrs.RootModuleInstance)
root.SetResourceInstanceCurrent(
addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_object",
Name: "a",
}.Instance(addrs.NoKey),
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"a_id"}`),
},
addrs.AbsProviderConfig{
Provider: addrs.NewDefaultProvider("test"),
Module: addrs.RootModule,
},
)
root.SetResourceInstanceCurrent(
addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_object",
Name: "b",
}.Instance(addrs.NoKey),
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"b_id","test_string":"a_id"}`),
Dependencies: []addrs.ConfigResource{
{
Resource: addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_object",
Name: "a",
},
Module: root.Addr.Module(),
},
},
},
addrs.AbsProviderConfig{
Provider: addrs.NewDefaultProvider("test"),
Module: addrs.RootModule,
},
)
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-orphan-update"),
Changes: changes,
Components: simpleMockComponentFactory(),
Schemas: schemas,
State: state,
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
expected := strings.TrimSpace(`
test_object.a (destroy)
test_object.b
  test_object.a (destroy)
`)
instanceGraph := filterInstances(g)
got := strings.TrimSpace(instanceGraph.String())
if got != expected {
t.Fatalf("expected:\n%s\ngot:\n%s", expected, got)
}
}
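// filterInstances reduces the graph to just the resource-instance vertices,
// so the expected string above only has to describe instance ordering. A
// sketch, assuming a filterGraph helper that keeps vertices matching a
// predicate:
//
//	func filterInstances(g *Graph) *Graph {
//	    return filterGraph(g, func(v dag.Vertex) bool {
//	        _, ok := v.(GraphNodeResourceInstance)
//	        return ok
//	    })
//	}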
// Ensure that an update resulting from the removal of a resource happens before
// a CBD resource is destroyed.
func TestApplyGraphBuilder_updateFromCBDOrphan(t *testing.T) {
schemas := simpleTestSchemas()
instanceSchema := schemas.Providers[addrs.NewDefaultProvider("test")].ResourceTypes["test_object"]
bBefore, _ := plans.NewDynamicValue(
cty.ObjectVal(map[string]cty.Value{
"id": cty.StringVal("b_id"),
"test_string": cty.StringVal("a_id"),
}), instanceSchema.ImpliedType())
bAfter, _ := plans.NewDynamicValue(
cty.ObjectVal(map[string]cty.Value{
"id": cty.StringVal("b_id"),
"test_string": cty.StringVal("changed"),
}), instanceSchema.ImpliedType())
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.a"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
{
Addr: mustResourceInstanceAddr("test_object.b"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Update,
Before: bBefore,
After: bAfter,
},
},
},
}
state := states.NewState()
root := state.EnsureModule(addrs.RootModuleInstance)
root.SetResourceInstanceCurrent(
addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_object",
Name: "a",
}.Instance(addrs.NoKey),
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"a_id"}`),
CreateBeforeDestroy: true,
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
root.SetResourceInstanceCurrent(
addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_object",
Name: "b",
}.Instance(addrs.NoKey),
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"b_id","test_string":"a_id"}`),
Dependencies: []addrs.ConfigResource{
				{
Resource: addrs.Resource{
Mode: addrs.ManagedResourceMode,
Type: "test_object",
Name: "a",
},
Module: root.Addr.Module(),
},
},
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
)
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-apply-orphan-update"),
Changes: changes,
Components: simpleMockComponentFactory(),
Schemas: schemas,
State: state,
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatalf("err: %s", err)
}
expected := strings.TrimSpace(`
test_object.a (destroy)
test_object.b
test_object.b
`)
instanceGraph := filterInstances(g)
got := strings.TrimSpace(instanceGraph.String())
if got != expected {
t.Fatalf("expected:\n%s\ngot:\n%s", expected, got)
}
}
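
// Note the inversion relative to the previous test: a's state records
// CreateBeforeDestroy, so the edge flips and test_object.a (destroy) now
// depends on test_object.b. The update that drops the reference to a
// completes before a is destroyed.
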
// The orphan cleanup node should not be connected to a provider.
func TestApplyGraphBuilder_orphanedWithProvider(t *testing.T) {
changes := &plans.Changes{
Resources: []*plans.ResourceInstanceChangeSrc{
{
Addr: mustResourceInstanceAddr("test_object.A"),
ChangeSrc: plans.ChangeSrc{
Action: plans.Delete,
},
},
},
}
state := states.NewState()
root := state.EnsureModule(addrs.RootModuleInstance)
root.SetResourceInstanceCurrent(
mustResourceInstanceAddr("test_object.A").Resource,
&states.ResourceInstanceObjectSrc{
Status: states.ObjectReady,
AttrsJSON: []byte(`{"id":"A"}`),
},
mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"].foo`),
)
b := &ApplyGraphBuilder{
Config: testModule(t, "graph-builder-orphan-alias"),
Changes: changes,
Components: simpleMockComponentFactory(),
Schemas: simpleTestSchemas(),
State: state,
}
g, err := b.Build(addrs.RootModuleInstance)
if err != nil {
t.Fatal(err)
}
	// The cleanup node has no state or config of its own, so it would
	// otherwise create a default provider, which we don't want.
testGraphNotContains(t, g, "provider.test")
}
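
// The expected-graph constants below use the format the tests compare
// against: a flush-left line names a vertex and each indented line beneath
// it names one of that vertex's dependencies. A minimal sketch of how such
// a string could be decoded, should an ad-hoc structural assertion ever be
// needed (illustrative helper, not used by the tests above, which compare
// whole strings):
func parseExpectedGraph(s string) map[string][]string {
	deps := make(map[string][]string)
	var current string
	for _, line := range strings.Split(strings.TrimSpace(s), "\n") {
		trimmed := strings.TrimLeft(line, " \t")
		if trimmed != line {
			// Indented: a dependency of the most recently seen vertex.
			deps[current] = append(deps[current], trimmed)
			continue
		}
		// Flush-left: a new vertex.
		current = trimmed
		if _, ok := deps[current]; !ok {
			deps[current] = nil
		}
	}
	return deps
}
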
const testApplyGraphBuilderStr = `
meta.count-boundary (EachMode fixup)
module.child (close)
test_object.other
module.child (close)
module.child.test_object.other
module.child (expand)
module.child.test_object.create
module.child.test_object.create (expand)
module.child.test_object.create (expand)
module.child (expand)
provider["registry.terraform.io/hashicorp/test"]
provisioner.test
module.child.test_object.other
module.child.test_object.create
module.child.test_object.other (expand)
module.child.test_object.other (expand)
module.child (expand)
provider["registry.terraform.io/hashicorp/test"]
provider["registry.terraform.io/hashicorp/test"]
provider["registry.terraform.io/hashicorp/test"] (close)
module.child.test_object.other
test_object.other
provisioner.test
provisioner.test (close)
module.child.test_object.create
root
meta.count-boundary (EachMode fixup)
provider["registry.terraform.io/hashicorp/test"] (close)
provisioner.test (close)
test_object.create
test_object.create (expand)
test_object.create (expand)
provider["registry.terraform.io/hashicorp/test"]
test_object.other
test_object.create
test_object.other (expand)
test_object.other (expand)
provider["registry.terraform.io/hashicorp/test"]
`
const testApplyGraphBuilderDestroyCountStr = `
meta.count-boundary (EachMode fixup)
test_object.B
provider["registry.terraform.io/hashicorp/test"]
provider["registry.terraform.io/hashicorp/test"] (close)
test_object.B
root
meta.count-boundary (EachMode fixup)
provider["registry.terraform.io/hashicorp/test"] (close)
test_object.A (expand)
provider["registry.terraform.io/hashicorp/test"]
test_object.A[1] (destroy)
provider["registry.terraform.io/hashicorp/test"]
test_object.B
test_object.A (expand)
test_object.A[1] (destroy)
test_object.B (expand)
test_object.B (expand)
provider["registry.terraform.io/hashicorp/test"]
`