Commit Graph

12 Commits

Martin Atkins
88b5607a7a core: Fetch schemas during context construction
Previously we fetched schemas during the AttachSchemaTransformer,
potentially multiple times as that was re-run for each graph built. Now
we fetch the schemas just once during context construction, passing that
result into each of the graph builders.

This only addresses the schema accesses during graph construction. We're
still separately loading schemas during the main walk for evaluation
purposes. This will be addressed in a later commit.
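
As a rough sketch of the shape of this change (hypothetical names, not
Terraform's real types): the schemas are fetched once while the context
is being constructed, and the same result is handed to every graph
builder rather than re-fetched inside a transformer.

    package main

    import "fmt"

    // Schema stands in for a provider's resource type schema.
    type Schema struct{ Attributes []string }

    // Schemas is the one-time lookup result shared by all graph builders.
    type Schemas map[string]Schema

    // fetchSchemas simulates the single, up-front schema load done during
    // context construction.
    func fetchSchemas(providers []string) Schemas {
        s := make(Schemas, len(providers))
        for _, p := range providers {
            s[p] = Schema{Attributes: []string{"id"}}
        }
        return s
    }

    // GraphBuilder receives the pre-fetched schemas instead of loading its
    // own copy each time a graph is built.
    type GraphBuilder struct{ Schemas Schemas }

    func main() {
        schemas := fetchSchemas([]string{"null", "aws"}) // once, at construction
        plan := GraphBuilder{Schemas: schemas}
        apply := GraphBuilder{Schemas: schemas} // same result, no second fetch
        fmt.Println(len(plan.Schemas), len(apply.Schemas))
    }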
2018-10-16 18:49:20 -07:00
Martin Atkins
bec0f56808 core: Pass components through to the destroy transformers
These transformers both construct temporary graphs using many of the
same transformers used in the apply graph, and doing this properly now
requires access to the providers and provisioners in order to obtain
their schemas.
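
Schematically, the change looks something like this; the types are
illustrative only, not the actual transformer code:

    package main

    import "fmt"

    // Provider stands in for a real provider plugin.
    type Provider struct{ Schema map[string]string }

    // ComponentFactory hands out providers (and, in Terraform, also
    // provisioners).
    type ComponentFactory struct{ providers map[string]Provider }

    func (f ComponentFactory) Provider(name string) Provider {
        return f.providers[name]
    }

    // destroyTransformer builds a temporary graph and needs the factory
    // to resolve schemas for the nodes it creates.
    type destroyTransformer struct {
        Components ComponentFactory
    }

    func (t *destroyTransformer) Transform() {
        p := t.Components.Provider("test")
        fmt.Println("resolved schema with", len(p.Schema), "attribute(s)")
    }

    func main() {
        f := ComponentFactory{providers: map[string]Provider{
            "test": {Schema: map[string]string{"id": "string"}},
        }}
        (&destroyTransformer{Components: f}).Transform()
    }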

Along with this, we also update the tests here to use the
simpleMockComponentFactory helper to get a mock provider with a schema
already configured, which means we also need to update the test fixtures
and assertions to use the resource type and attributes defined in that
mock factory.
2018-10-16 18:48:28 -07:00
Martin Atkins
4a21b763aa core: Get tests compiling again
After the refactoring to integrate HCL2, many of the tests were no
longer using correct types, attribute names, etc.

This is a bulk update of all of the tests to make them compile again, with
minimal changes otherwise. Although the tests now compile, many of them
do not yet pass. The tests will be gradually repaired in subsequent
commits, as we continue to complete the refactoring and retrofit work.
2018-10-16 18:46:46 -07:00
Mitchell Hashimoto
af61d566c2 terraform: passing test for destroy edge for module only
Just adding passing tests as a sanity check for a bug.
2017-02-07 19:12:03 -08:00
Mitchell Hashimoto
0e4a6e3e89 terraform: apply resource must depend on destroy deps
Fixes #10440

This updates the behavior of "apply" resources to depend on the
destroy versions of their dependencies.

We make an exception to this behavior when the "apply" resource is CBD.
This is odd and not 100% correct, but it mimics the behavior of the
legacy graphs and avoids us having to do major core work to support the
100% correct solution.

I'll explain this in examples...

Given the following configuration:

    resource "null_resource" "a" {
       count = "${var.count}"
    }

    resource "null_resource" "b" {
      triggers { key = "${join(",", null_resource.a.*.id)}" }
    }

Assume we've successfully created this configuration with count = 2.
When going from count = 2 to count = 1, `null_resource.b` should wait
for `null_resource.a.1` to be destroyed.

If it doesn't, then it is a race: depending on when we interpolate the
`triggers.key` attribute of `null_resource.b`, we may get 1 value or 2.
If `null_resource.a.1` is destroyed, we'll get 1. Otherwise, we'll get
2. This was the root cause of #10440.

In the legacy graphs, `null_resource.b` would depend on the destruction
of any `null_resource.a` (orphans, tainted, anything!). This would
ensure proper ordering. We mimic that behavior here.

The difference is CBD. If `null_resource.b` has CBD enabled, then the
ordering **in the legacy graph** becomes:

  1. null_resource.b (create)
  2. null_resource.b (destroy)
  3. null_resource.a (destroy)

In this case, the update would always have 2 values for `triggers.key`,
even though we were destroying a resource later! This scenario required
two `terraform apply` operations.

This is what the CBD check is for in this PR. We do this to mimic the
behavior of the legacy graph.
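
A minimal sketch of that rule, with made-up types rather than the real
graph code: add the "apply waits for dependency destroy" edge unless
the apply node itself is CBD.

    package main

    import "fmt"

    type node struct {
        name string
        cbd  bool // create_before_destroy enabled?
    }

    // needsDestroyEdge reports whether apply should wait for the destroy
    // node of its dependency dep.
    func needsDestroyEdge(apply, dep node) bool {
        if apply.cbd {
            // Mimic the legacy graph: CBD resources keep the
            // create -> destroy -> dep-destroy ordering instead.
            return false
        }
        return true
    }

    func main() {
        b := node{name: "null_resource.b"}
        a1 := node{name: "null_resource.a.1"}
        fmt.Println(needsDestroyEdge(b, a1)) // true: b waits for a.1's destroy
        b.cbd = true
        fmt.Println(needsDestroyEdge(b, a1)) // false: the CBD exception
    }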

The correct solution to implement one day is to allow splat references
(`null_resource.a.*.id`) to happen in parallel and only read up to the
`count` amount in the state. That requires some fairly significant work
close to the 0.8 release date, so we defer it to later and adopt the
0.7.x behavior for now.
2016-12-03 23:54:29 -08:00
Mitchell Hashimoto
95239a7fe6 terraform: when promoting non-CBD to CBD, mark the config as such
This brings the change to the new graph. See #10455.
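
Roughly, the promotion works like this (hypothetical types, not the
actual transformer): a CBD resource forces CBD onto its dependencies,
and that decision is recorded on their configs so later transforms see
it.

    package main

    import "fmt"

    type resourceConfig struct {
        Name                string
        CreateBeforeDestroy bool
    }

    // promoteCBD marks every dependency of a CBD resource as CBD too.
    func promoteCBD(r *resourceConfig, deps []*resourceConfig) {
        if !r.CreateBeforeDestroy {
            return
        }
        for _, d := range deps {
            d.CreateBeforeDestroy = true // mark the config as such
        }
    }

    func main() {
        a := &resourceConfig{Name: "a"}
        b := &resourceConfig{Name: "b", CreateBeforeDestroy: true}
        promoteCBD(b, []*resourceConfig{a})
        fmt.Println(a.Name, a.CreateBeforeDestroy) // a true
    }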
2016-12-02 09:46:42 -05:00
Mitchell Hashimoto
f6161a7dc9 terraform: destroy edge must include resources through outputs
This fixes: `TestContext2Apply_moduleDestroyOrder`

The new destroy graph wasn't properly creating edges that happened
_through_ an output; it was only creating the edges for _direct_
dependents.

To fix this, the DestroyEdgeTransformer now creates the full transitive
list of destroy edges by walking all ancestors. This creates more edges
than are necessary, but it no longer misses resources reached through
an output.
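
In sketch form, using a toy graph rather than Terraform's dag package:
walk every transitive ancestor instead of only the direct dependents,
so a resource reached only through an output is still found. The node
names here are invented for illustration.

    package main

    import "fmt"

    // deps maps each node to the nodes it depends on.
    var deps = map[string][]string{
        "aws_instance.b":          {"module.child.output.id"},
        "module.child.output.id":  {"aws_instance.a"},
    }

    // ancestors returns everything reachable from start, transitively,
    // so the output node in between no longer hides aws_instance.a.
    func ancestors(start string) []string {
        var out []string
        seen := map[string]bool{}
        var walk func(n string)
        walk = func(n string) {
            for _, d := range deps[n] {
                if !seen[d] {
                    seen[d] = true
                    out = append(out, d)
                    walk(d)
                }
            }
        }
        walk(start)
        return out
    }

    func main() {
        // Prints [module.child.output.id aws_instance.a]: the resource
        // behind the output is included, so it gets a destroy edge.
        fmt.Println(ancestors("aws_instance.b"))
    }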
2016-11-11 14:29:19 -08:00
Mitchell Hashimoto
19350d617d terraform: references can have backups
terraform: more specific resource references

terraform: outputs need to know about the new reference format

terraform: resources w/o a config still have a referenceable name
2016-11-08 13:59:30 -08:00
Mitchell Hashimoto
fb29b6a2dc terraform: destroy edges should never point to self
Fixes #9920

This was an issue caught with the shadow graph. Self references in
provisioners were causing a self-edge on destroy apply graphs.

We need to explicitly check that we're not creating an edge to
ourselves. This is also how the reference transformer works.
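
The guard itself is tiny; a schematic version with made-up types:

    package main

    import "fmt"

    // connect adds edges from one vertex to its reference targets,
    // explicitly skipping any target that is the vertex itself.
    func connect(g map[string][]string, from string, targets []string) {
        for _, to := range targets {
            if to == from {
                continue // never create an edge to ourselves
            }
            g[from] = append(g[from], to)
        }
    }

    func main() {
        g := map[string][]string{}
        // A provisioner on "a" that references "a" must not self-loop.
        connect(g, "null_resource.a (destroy)", []string{
            "null_resource.a (destroy)", // self reference: skipped
            "null_resource.b (destroy)",
        })
        fmt.Println(g)
    }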
2016-11-08 12:27:33 -08:00
Mitchell Hashimoto
7baf64f806 terraform: starting CBD, destroy edge for the destroy relationship
2016-10-19 13:38:52 -07:00
Mitchell Hashimoto
08dade5475 terraform: more destroy edge tests
2016-10-19 13:38:52 -07:00
Mitchell Hashimoto
7b2bd93094 terraform: test the destroy edge transform
2016-10-19 13:38:52 -07:00