package terraform
import (
	"fmt"
	"strings"
	"testing"

	"github.com/google/go-cmp/cmp"
	"github.com/zclconf/go-cty/cty"

	"github.com/hashicorp/terraform/internal/addrs"
	"github.com/hashicorp/terraform/internal/plans"
	"github.com/hashicorp/terraform/internal/states"
)
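// TestApplyGraphBuilder_impl is a compile-time check that ApplyGraphBuilder
// satisfies the GraphBuilder interface.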
func TestApplyGraphBuilder_impl(t *testing.T) {
	var _ GraphBuilder = new(ApplyGraphBuilder)
}
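// TestApplyGraphBuilder covers the basic case: a mix of create and update
// changes across the root module and a child module, with the resulting
// graph checked against the expected graph string.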
func TestApplyGraphBuilder(t *testing.T) {
	changes := &plans.Changes{
		Resources: []*plans.ResourceInstanceChangeSrc{
			{
				Addr: mustResourceInstanceAddr("test_object.create"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Create,
				},
			},
			{
				Addr: mustResourceInstanceAddr("test_object.other"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Update,
				},
			},
			{
				Addr: mustResourceInstanceAddr("module.child.test_object.create"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Create,
				},
			},
			{
				Addr: mustResourceInstanceAddr("module.child.test_object.other"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Create,
				},
			},
		},
	}

	b := &ApplyGraphBuilder{
		Config:  testModule(t, "graph-builder-apply-basic"),
		Changes: changes,
		Plugins: simpleMockPluginLibrary(),
	}

	g, err := b.Build(addrs.RootModuleInstance)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	if g.Path.String() != addrs.RootModuleInstance.String() {
		t.Fatalf("wrong path %q", g.Path.String())
	}
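
	// Comparing the rendered graph against the expected string fixture
	// catches any unexpected change to the set of vertices or edges.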
	got := strings.TrimSpace(g.String())
	want := strings.TrimSpace(testApplyGraphBuilderStr)
	if diff := cmp.Diff(want, got); diff != "" {
		t.Fatalf("wrong result\n%s", diff)
	}
}
// This tests the ordering of two resources where a resource without
// create_before_destroy (CBD) depends on a CBD resource. GH-11349.
func TestApplyGraphBuilder_depCbd(t *testing.T) {
	changes := &plans.Changes{
		Resources: []*plans.ResourceInstanceChangeSrc{
			{
				Addr: mustResourceInstanceAddr("test_object.A"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.CreateThenDelete,
				},
			},
			{
				Addr: mustResourceInstanceAddr("test_object.B"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Update,
				},
			},
		},
	}
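
	// Prior state: A and B both exist, and B records a dependency on A.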
	state := states.NewState()
	root := state.EnsureModule(addrs.RootModuleInstance)
	root.SetResourceInstanceCurrent(
		mustResourceInstanceAddr("test_object.A").Resource,
		&states.ResourceInstanceObjectSrc{
			Status:    states.ObjectReady,
			AttrsJSON: []byte(`{"id":"A"}`),
		},
		mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
	)
	root.SetResourceInstanceCurrent(
		mustResourceInstanceAddr("test_object.B").Resource,
		&states.ResourceInstanceObjectSrc{
			Status:       states.ObjectReady,
			AttrsJSON:    []byte(`{"id":"B","test_list":["x"]}`),
			Dependencies: []addrs.ConfigResource{mustConfigResourceAddr("test_object.A")},
		},
		mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
	)

	b := &ApplyGraphBuilder{
		Config:  testModule(t, "graph-builder-apply-dep-cbd"),
		Changes: changes,
		Plugins: simpleMockPluginLibrary(),
		State:   state,
	}

	g, err := b.Build(addrs.RootModuleInstance)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	if g.Path.String() != addrs.RootModuleInstance.String() {
		t.Fatalf("wrong path %q", g.Path.String())
	}

	// We're going to go hunting for our deposed instance node here, so we
	// can find out its key to use in the assertions below.
	var dk states.DeposedKey
	for _, v := range g.Vertices() {
		tv, ok := v.(*NodeDestroyDeposedResourceInstanceObject)
		if !ok {
			continue
		}
		if dk != states.NotDeposed {
			t.Fatalf("more than one deposed instance node in the graph; want only one")
		}
		dk = tv.DeposedKey
	}
	if dk == states.NotDeposed {
		t.Fatalf("no deposed instance node in the graph; want one")
	}
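
	// The destroy node's name includes the deposed key, so the expected
	// vertex name must be built dynamically.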
	destroyName := fmt.Sprintf("test_object.A (destroy deposed %s)", dk)

	// Create A, Modify B, Destroy A
	testGraphHappensBefore(
		t, g,
		"test_object.A",
		destroyName,
	)
	testGraphHappensBefore(
		t, g,
		"test_object.A",
		"test_object.B",
	)
	testGraphHappensBefore(
		t, g,
		"test_object.B",
		destroyName,
	)
}
// This tests the ordering of two resources that are both CBD and
// require destroy/create.
func TestApplyGraphBuilder_doubleCBD(t *testing.T) {
	changes := &plans.Changes{
		Resources: []*plans.ResourceInstanceChangeSrc{
			{
				Addr: mustResourceInstanceAddr("test_object.A"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.CreateThenDelete,
				},
			},
			{
				Addr: mustResourceInstanceAddr("test_object.B"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.CreateThenDelete,
				},
			},
		},
	}

	b := &ApplyGraphBuilder{
		Config:  testModule(t, "graph-builder-apply-double-cbd"),
		Changes: changes,
		Plugins: simpleMockPluginLibrary(),
	}

	g, err := b.Build(addrs.RootModuleInstance)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	if g.Path.String() != addrs.RootModuleInstance.String() {
		t.Fatalf("wrong path %q", g.Path.String())
	}

	// We're going to go hunting for our deposed instance nodes here, so we
	// can find out their keys to use in the assertions below.
	var destroyA, destroyB string
	for _, v := range g.Vertices() {
		tv, ok := v.(*NodeDestroyDeposedResourceInstanceObject)
		if !ok {
			continue
		}

		switch tv.Addr.Resource.Resource.Name {
		case "A":
			destroyA = fmt.Sprintf("test_object.A (destroy deposed %s)", tv.DeposedKey)
		case "B":
			destroyB = fmt.Sprintf("test_object.B (destroy deposed %s)", tv.DeposedKey)
		default:
			t.Fatalf("unknown instance: %s", tv.Addr)
		}
	}
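
	// Both A and B are create_before_destroy, so each replacement object
	// must be created before its own deposed object is destroyed.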
	// Create A, Modify B, Destroy A
	testGraphHappensBefore(
		t, g,
		"test_object.A",
		destroyA,
	)
	testGraphHappensBefore(
		t, g,
		"test_object.A",
		"test_object.B",
	)
	testGraphHappensBefore(
		t, g,
		"test_object.B",
		destroyB,
	)
}
// This tests the ordering of two resources being destroyed that depend
// on each other only through state. GH-11749.
func TestApplyGraphBuilder_destroyStateOnly(t *testing.T) {
	changes := &plans.Changes{
		Resources: []*plans.ResourceInstanceChangeSrc{
			{
				Addr: mustResourceInstanceAddr("module.child.test_object.A"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Delete,
				},
			},
			{
				Addr: mustResourceInstanceAddr("module.child.test_object.B"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Delete,
				},
			},
		},
	}
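
	// Prior state: both objects exist only in state, with B recording a
	// dependency on module.child.test_object.A.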
	state := states.NewState()
	root := state.EnsureModule(addrs.RootModuleInstance)
	child := state.EnsureModule(addrs.RootModuleInstance.Child("child", addrs.NoKey))
	root.SetResourceInstanceCurrent(
		mustResourceInstanceAddr("test_object.A").Resource,
		&states.ResourceInstanceObjectSrc{
			Status:    states.ObjectReady,
			AttrsJSON: []byte(`{"id":"foo"}`),
		},
		mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
	)
	child.SetResourceInstanceCurrent(
		mustResourceInstanceAddr("test_object.B").Resource,
		&states.ResourceInstanceObjectSrc{
			Status:       states.ObjectReady,
			AttrsJSON:    []byte(`{"id":"bar"}`),
			Dependencies: []addrs.ConfigResource{mustConfigResourceAddr("module.child.test_object.A")},
		},
		mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
	)
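
	// The configuration is empty; both objects exist only in prior state,
	// which is exactly the situation this test exercises.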
	b := &ApplyGraphBuilder{
		Config:  testModule(t, "empty"),
		Changes: changes,
		State:   state,
		Plugins: simpleMockPluginLibrary(),
	}

	g, diags := b.Build(addrs.RootModuleInstance)
	if diags.HasErrors() {
		t.Fatalf("err: %s", diags.Err())
	}

	if g.Path.String() != addrs.RootModuleInstance.String() {
		t.Fatalf("wrong path %q", g.Path.String())
	}
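
	// Destroy ordering is the inverse of the recorded dependency: B
	// depended on A, so B must be destroyed before A.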
	testGraphHappensBefore(
		t, g,
		"module.child.test_object.B (destroy)",
		"module.child.test_object.A (destroy)")
}
// This tests the ordering of destroying a single instance of a resource
// that uses count.
func TestApplyGraphBuilder_destroyCount(t *testing.T) {
	changes := &plans.Changes{
		Resources: []*plans.ResourceInstanceChangeSrc{
			{
				Addr: mustResourceInstanceAddr("test_object.A[1]"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Delete,
				},
			},
			{
				Addr: mustResourceInstanceAddr("test_object.B"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Update,
				},
			},
		},
	}
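
	// Prior state: only instance A[1] of test_object.A exists, alongside
	// test_object.B, which depends on A.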
	state := states.NewState()
	root := state.RootModule()
	addrA := mustResourceInstanceAddr("test_object.A[1]")
	root.SetResourceInstanceCurrent(
		addrA.Resource,
		&states.ResourceInstanceObjectSrc{
			Status:    states.ObjectReady,
			AttrsJSON: []byte(`{"id":"B"}`),
		},
		mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
	)
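
	// B's dependency is recorded against the containing resource
	// test_object.A rather than the single instance test_object.A[1],
	// since state tracks dependencies at whole-resource granularity.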
	root.SetResourceInstanceCurrent(
		mustResourceInstanceAddr("test_object.B").Resource,
		&states.ResourceInstanceObjectSrc{
			Status:       states.ObjectReady,
			AttrsJSON:    []byte(`{"id":"B"}`),
			Dependencies: []addrs.ConfigResource{addrA.ContainingResource().Config()},
		},
		mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
	)

	b := &ApplyGraphBuilder{
		Config:  testModule(t, "graph-builder-apply-count"),
		Changes: changes,
		Plugins: simpleMockPluginLibrary(),
		State:   state,
	}
|
|
|
|
|
2018-05-04 21:24:06 -05:00
|
|
|
g, err := b.Build(addrs.RootModuleInstance)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	if g.Path.String() != addrs.RootModuleInstance.String() {
		t.Fatalf("wrong module path %q", g.Path)
	}

	got := strings.TrimSpace(g.String())
	want := strings.TrimSpace(testApplyGraphBuilderDestroyCountStr)
	if diff := cmp.Diff(want, got); diff != "" {
		t.Fatalf("wrong result\n%s", diff)
	}
}
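
// Destroying resources in two modules, where module.B's instance records a
// dependency on module.A's, must order the destroys in reverse dependency
// order: module.B.test_object.foo is destroyed first.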
func TestApplyGraphBuilder_moduleDestroy(t *testing.T) {
	changes := &plans.Changes{
		Resources: []*plans.ResourceInstanceChangeSrc{
			{
				Addr: mustResourceInstanceAddr("module.A.test_object.foo"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Delete,
				},
			},
			{
				Addr: mustResourceInstanceAddr("module.B.test_object.foo"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Delete,
				},
			},
		},
	}

	state := states.NewState()
	modA := state.EnsureModule(addrs.RootModuleInstance.Child("A", addrs.NoKey))
	modA.SetResourceInstanceCurrent(
		mustResourceInstanceAddr("test_object.foo").Resource,
		&states.ResourceInstanceObjectSrc{
			Status:    states.ObjectReady,
			AttrsJSON: []byte(`{"id":"foo"}`),
		},
		mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
	)
	modB := state.EnsureModule(addrs.RootModuleInstance.Child("B", addrs.NoKey))
	modB.SetResourceInstanceCurrent(
		mustResourceInstanceAddr("test_object.foo").Resource,
		&states.ResourceInstanceObjectSrc{
			Status:       states.ObjectReady,
			AttrsJSON:    []byte(`{"id":"foo","value":"foo"}`),
			Dependencies: []addrs.ConfigResource{mustConfigResourceAddr("module.A.test_object.foo")},
		},
		mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
	)
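
	// modB's instance records module.A.test_object.foo as a dependency, which
	// is what forces the destroy ordering asserted below.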

	b := &ApplyGraphBuilder{
		Config:  testModule(t, "graph-builder-apply-module-destroy"),
		Changes: changes,
		Plugins: simpleMockPluginLibrary(),
		State:   state,
	}

	g, err := b.Build(addrs.RootModuleInstance)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	testGraphHappensBefore(
		t, g,
		"module.B.test_object.foo (destroy)",
		"module.A.test_object.foo (destroy)",
	)
}
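
// Targeting only module.child2 should exclude graph nodes that belong solely
// to other modules, such as module.child1's output.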
func TestApplyGraphBuilder_targetModule(t *testing.T) {
	changes := &plans.Changes{
		Resources: []*plans.ResourceInstanceChangeSrc{
			{
				Addr: mustResourceInstanceAddr("test_object.foo"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Update,
				},
			},
			{
				Addr: mustResourceInstanceAddr("module.child2.test_object.foo"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Update,
				},
			},
		},
	}

	b := &ApplyGraphBuilder{
		Config:  testModule(t, "graph-builder-apply-target-module"),
		Changes: changes,
		Plugins: simpleMockPluginLibrary(),
		Targets: []addrs.Targetable{
			addrs.RootModuleInstance.Child("child2", addrs.NoKey),
		},
	}

	g, err := b.Build(addrs.RootModuleInstance)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	testGraphNotContains(t, g, "module.child1.output.instance_id")
}

// Ensure that an update resulting from the removal of a resource happens after
// that resource is destroyed.
func TestApplyGraphBuilder_updateFromOrphan(t *testing.T) {
	schemas := simpleTestSchemas()
	instanceSchema := schemas.Providers[addrs.NewDefaultProvider("test")].ResourceTypes["test_object"]

	bBefore, _ := plans.NewDynamicValue(
		cty.ObjectVal(map[string]cty.Value{
			"id":          cty.StringVal("b_id"),
			"test_string": cty.StringVal("a_id"),
		}), instanceSchema.ImpliedType())
	bAfter, _ := plans.NewDynamicValue(
		cty.ObjectVal(map[string]cty.Value{
			"id":          cty.StringVal("b_id"),
			"test_string": cty.StringVal("changed"),
		}), instanceSchema.ImpliedType())

	changes := &plans.Changes{
		Resources: []*plans.ResourceInstanceChangeSrc{
			{
				Addr: mustResourceInstanceAddr("test_object.a"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Delete,
				},
			},
			{
				Addr: mustResourceInstanceAddr("test_object.b"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Update,
					Before: bBefore,
					After:  bAfter,
				},
			},
		},
	}

	state := states.NewState()
	root := state.EnsureModule(addrs.RootModuleInstance)
	root.SetResourceInstanceCurrent(
		addrs.Resource{
			Mode: addrs.ManagedResourceMode,
			Type: "test_object",
			Name: "a",
		}.Instance(addrs.NoKey),
		&states.ResourceInstanceObjectSrc{
			Status:    states.ObjectReady,
			AttrsJSON: []byte(`{"id":"a_id"}`),
		},
		addrs.AbsProviderConfig{
			Provider: addrs.NewDefaultProvider("test"),
			Module:   addrs.RootModule,
		},
	)
	root.SetResourceInstanceCurrent(
		addrs.Resource{
			Mode: addrs.ManagedResourceMode,
			Type: "test_object",
			Name: "b",
		}.Instance(addrs.NoKey),
		&states.ResourceInstanceObjectSrc{
			Status:    states.ObjectReady,
			AttrsJSON: []byte(`{"id":"b_id","test_string":"a_id"}`),
			Dependencies: []addrs.ConfigResource{
				{
					Resource: addrs.Resource{
						Mode: addrs.ManagedResourceMode,
						Type: "test_object",
						Name: "a",
					},
					Module: root.Addr.Module(),
				},
			},
		},
		addrs.AbsProviderConfig{
			Provider: addrs.NewDefaultProvider("test"),
			Module:   addrs.RootModule,
		},
	)

	b := &ApplyGraphBuilder{
		Config:  testModule(t, "graph-builder-apply-orphan-update"),
		Changes: changes,
		Plugins: simpleMockPluginLibrary(),
		State:   state,
	}

	g, err := b.Build(addrs.RootModuleInstance)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	expected := strings.TrimSpace(`
test_object.a (destroy)
test_object.b
  test_object.a (destroy)
`)
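
	// In the rendered graph string, each top-level line is a vertex and each
	// indented line is a dependency of the vertex above it, so the update of
	// test_object.b waits for the destroy of the removed test_object.a.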
	instanceGraph := filterInstances(g)
	got := strings.TrimSpace(instanceGraph.String())

	if got != expected {
		t.Fatalf("expected:\n%s\ngot:\n%s", expected, got)
	}
}

// Ensure that an update resulting from the removal of a resource happens before
// a CBD resource is destroyed.
func TestApplyGraphBuilder_updateFromCBDOrphan(t *testing.T) {
	schemas := simpleTestSchemas()
	instanceSchema := schemas.Providers[addrs.NewDefaultProvider("test")].ResourceTypes["test_object"]

	bBefore, _ := plans.NewDynamicValue(
		cty.ObjectVal(map[string]cty.Value{
			"id":          cty.StringVal("b_id"),
			"test_string": cty.StringVal("a_id"),
		}), instanceSchema.ImpliedType())
	bAfter, _ := plans.NewDynamicValue(
		cty.ObjectVal(map[string]cty.Value{
			"id":          cty.StringVal("b_id"),
			"test_string": cty.StringVal("changed"),
		}), instanceSchema.ImpliedType())

	changes := &plans.Changes{
		Resources: []*plans.ResourceInstanceChangeSrc{
			{
				Addr: mustResourceInstanceAddr("test_object.a"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Delete,
				},
			},
			{
				Addr: mustResourceInstanceAddr("test_object.b"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Update,
					Before: bBefore,
					After:  bAfter,
				},
			},
		},
	}

	state := states.NewState()
	root := state.EnsureModule(addrs.RootModuleInstance)
	root.SetResourceInstanceCurrent(
		addrs.Resource{
			Mode: addrs.ManagedResourceMode,
			Type: "test_object",
			Name: "a",
		}.Instance(addrs.NoKey),
		&states.ResourceInstanceObjectSrc{
			Status:              states.ObjectReady,
			AttrsJSON:           []byte(`{"id":"a_id"}`),
			CreateBeforeDestroy: true,
		},
		mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
	)
	root.SetResourceInstanceCurrent(
		addrs.Resource{
			Mode: addrs.ManagedResourceMode,
			Type: "test_object",
			Name: "b",
		}.Instance(addrs.NoKey),
		&states.ResourceInstanceObjectSrc{
			Status:    states.ObjectReady,
			AttrsJSON: []byte(`{"id":"b_id","test_string":"a_id"}`),
			Dependencies: []addrs.ConfigResource{
				{
					Resource: addrs.Resource{
						Mode: addrs.ManagedResourceMode,
						Type: "test_object",
						Name: "a",
					},
					Module: root.Addr.Module(),
				},
			},
		},
		mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`),
	)

	b := &ApplyGraphBuilder{
		Config:  testModule(t, "graph-builder-apply-orphan-update"),
		Changes: changes,
		Plugins: simpleMockPluginLibrary(),
		State:   state,
	}

	g, err := b.Build(addrs.RootModuleInstance)
	if err != nil {
		t.Fatalf("err: %s", err)
	}

	expected := strings.TrimSpace(`
test_object.a (destroy)
  test_object.b
test_object.b
`)
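
	// With CreateBeforeDestroy set on test_object.a, the dependency is
	// inverted relative to the previous test: the destroy of test_object.a
	// waits for the update of test_object.b.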
	instanceGraph := filterInstances(g)
	got := strings.TrimSpace(instanceGraph.String())

	if got != expected {
		t.Fatalf("expected:\n%s\ngot:\n%s", expected, got)
	}
}

// The orphan cleanup node should not be connected to a provider.
func TestApplyGraphBuilder_orphanedWithProvider(t *testing.T) {
	changes := &plans.Changes{
		Resources: []*plans.ResourceInstanceChangeSrc{
			{
				Addr: mustResourceInstanceAddr("test_object.A"),
				ChangeSrc: plans.ChangeSrc{
					Action: plans.Delete,
				},
			},
		},
	}

	state := states.NewState()
	root := state.EnsureModule(addrs.RootModuleInstance)
	root.SetResourceInstanceCurrent(
		mustResourceInstanceAddr("test_object.A").Resource,
		&states.ResourceInstanceObjectSrc{
			Status:    states.ObjectReady,
			AttrsJSON: []byte(`{"id":"A"}`),
		},
		mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"].foo`),
	)

	b := &ApplyGraphBuilder{
		Config:  testModule(t, "graph-builder-orphan-alias"),
		Changes: changes,
		Plugins: simpleMockPluginLibrary(),
		State:   state,
	}

	g, err := b.Build(addrs.RootModuleInstance)
	if err != nil {
		t.Fatal(err)
	}

	// The cleanup node has no state or config of its own, so would create a
	// default provider which we don't want.
	testGraphNotContains(t, g, "provider.test")
}

const testApplyGraphBuilderStr = `
module.child (close)
  module.child.test_object.other
module.child (expand)
module.child.test_object.create
  module.child.test_object.create (expand)
module.child.test_object.create (expand)
  module.child (expand)
  provider["registry.terraform.io/hashicorp/test"]
module.child.test_object.other
  module.child.test_object.create
  module.child.test_object.other (expand)
module.child.test_object.other (expand)
  module.child (expand)
  provider["registry.terraform.io/hashicorp/test"]
provider["registry.terraform.io/hashicorp/test"]
provider["registry.terraform.io/hashicorp/test"] (close)
  module.child.test_object.other
  test_object.other
root
  module.child (close)
  provider["registry.terraform.io/hashicorp/test"] (close)
test_object.create
  test_object.create (expand)
test_object.create (expand)
  provider["registry.terraform.io/hashicorp/test"]
test_object.other
  test_object.create
  test_object.other (expand)
test_object.other (expand)
  provider["registry.terraform.io/hashicorp/test"]
`
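
// In testApplyGraphBuilderDestroyCountStr below, test_object.B depends on
// test_object.A[1] (destroy), reflecting the rule that an updated resource
// must wait for the destroy of instances removed by a count decrease.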
const testApplyGraphBuilderDestroyCountStr = `
provider["registry.terraform.io/hashicorp/test"]
provider["registry.terraform.io/hashicorp/test"] (close)
  test_object.B
root
  provider["registry.terraform.io/hashicorp/test"] (close)
test_object.A (expand)
  provider["registry.terraform.io/hashicorp/test"]
test_object.A[1] (destroy)
  provider["registry.terraform.io/hashicorp/test"]
test_object.B
  test_object.A (expand)
  test_object.A[1] (destroy)
  test_object.B (expand)
test_object.B (expand)
  provider["registry.terraform.io/hashicorp/test"]
`