Commit Graph

19 Commits

Author SHA1 Message Date
James Bardin
a2718e4f79 ignore errors interpolating RawCount during apply
If a count field references another count field which is interpolated
but is attached to a resource already in the state, the result of that
first interpolation will be lost when a plan is serialized. This is
because the result of the first interpolation is stored directly in the
module config, in an unexported config field.

This is not a general fix for the above situation, which would require
refactoring how counts are handled throughout the config. Ignoring the
error works, because in most cases the count will be properly
handled during the resource's interpolation.
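
For illustration, a minimal hedged sketch of the kind of configuration involved; the resource and variable names are illustrative, not taken from the commit:

    variable "instance_count" {
      default = 2
    }

    # "a" is assumed to already exist in the state from a previous apply.
    resource "null_resource" "a" {
      count = "${var.instance_count}"
    }

    # The count of "b" is derived from the already-interpolated count of
    # "a"; that first interpolation result lives only in an unexported
    # module config field and is lost when the plan is serialized.
    resource "null_resource" "b" {
      count = "${length(null_resource.a.*.id)}"
    }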
2018-03-09 19:16:04 -05:00
Martin Atkins
b511caf049 core: interpolate the count config during the apply walk
Previously we would interpolate the count config (ResourceConfig.RawCount)
only while preparing to dynamic-expand aggregate resource nodes. This is
problematic because we do not dynamic-expand any resource nodes during the
apply walk, and so previously the count value was not available for
interpolation during apply and would result in an error.

Now we interpolate RawCount once for each resource we visit during the
apply walk -- even though that redundantly interpolates the same config
multiple times when count > 1 -- to ensure that it's available by the
time we interpolate any remaining expressions in the config and any
expressions within "connection" and "provisioner" blocks.

This error was masked by us sharing a single RawConfig instance between
the plan and apply walks when "terraform apply" is run with no explicit
plan file argument, but was exposed by the workflow where the plan is
written to disk first, since in that case the interpolation result from
the plan phase is not present in the deflated plan object. For
this reason, the new context test serializes the plan into an in-memory
buffer and reloads it in order to simulate the effect of the two-step
workflow.
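
A hedged sketch of a configuration this affects: the count expression itself has to be interpolated during the apply walk before the expression inside the provisioner block can be evaluated (names are illustrative):

    variable "instance_count" {
      default = 2
    }

    resource "null_resource" "example" {
      count = "${var.instance_count}"

      provisioner "local-exec" {
        # Evaluating this during apply requires RawCount to already have
        # been interpolated for this resource.
        command = "echo created instance ${count.index} of ${var.instance_count}"
      }
    }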
2018-01-19 13:06:00 -08:00
James Bardin
b79adeae02 save resolved providers for resources to state
Use the ResourceState.Provider field to store the full name of the
provider used during apply. This field is only used when a resource is
removed from the config, and will allow that resource to be removed by
the exact same provider with which it was created.

Modify the locations which might accept the value of the
ResourceState.Provider field to detect that the name is resolved.
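
For illustration, a hedged sketch of the situation this addresses: if the resource below is later removed from the configuration, the fully resolved provider name recorded in its state entry lets it be destroyed by the same aliased provider that created it (names and values are illustrative):

    provider "aws" {
      alias  = "west"
      region = "us-west-2"
    }

    resource "aws_instance" "example" {
      # The resolved name of this provider is what gets saved to
      # ResourceState.Provider during apply.
      provider      = "aws.west"
      ami           = "ami-123456"
      instance_type = "t2.micro"
    }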
2017-11-07 13:09:36 -05:00
James Bardin
a14fd0344c WIP reference providers by full name
This turned out to be a big messy commit, since the way providers are
referenced is tightly coupled throughout the code. This commit starts to
unify how providers are referenced, using the format output by the node
Name method.

Add a new field to the internal resource data types called
ResolvedProvider. This is set by a new setter method SetProvider when a
resource is connected to a provider during graph creation. This allows
us to later lookup the provider instance a resource is connected to,
without requiring it to have the same module path.

The InitProvider context method now takes 2 arguments: the first is the
provider type and the second is the full name of the provider. While the
provider type could still be parsed from the full name, this makes it
more explicit, and changes to the name format won't affect this code.
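
For illustration, a hedged sketch of why the module path alone is not enough to locate a resource's provider: resources inside the child module below may resolve to the provider configured in the root module, even though their module paths differ (the module name is illustrative):

    provider "aws" {
      region = "us-east-1"
    }

    # Resources declared inside ./app have a different module path than
    # the root module's aws provider, yet can still resolve to it.
    module "app" {
      source = "./app"
    }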
2017-11-02 15:00:06 -04:00
Mitchell Hashimoto
d59725e9fd
terraform: convert StateDeps to use new structs 2017-01-26 20:47:20 -08:00
Mitchell Hashimoto
d3df7874d5
terraform: introduce EvalApplyPre so that PreApply is called even for
destroy provisioners.
2017-01-20 20:36:53 -08:00
Mitchell Hashimoto
e9f6c9c429
terraform: run destroy provisioners on destroy 2017-01-20 18:07:51 -08:00
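
The two provisioner commits above concern destroy-time provisioners, i.e. configuration along these lines (a minimal, hedged sketch):

    resource "null_resource" "example" {
      provisioner "local-exec" {
        # Runs when the resource is destroyed rather than when it is created.
        when    = "destroy"
        command = "echo goodbye"
      }
    }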
Mitchell Hashimoto
0e4a6e3e89
terraform: apply resource must depend on destroy deps
Fixes #10440

This updates the behavior of "apply" resources to depend on the
destroy versions of their dependencies.

We make an exception to this behavior when the "apply" resource is CBD.
This is odd and not 100% correct, but it mimics the behavior of the
legacy graphs and avoids us having to do major core work to support the
100% correct solution.

I'll explain this in examples...

Given the following configuration:

    resource "null_resource" "a" {
      count = "${var.count}"
    }

    resource "null_resource" "b" {
      triggers { key = "${join(",", null_resource.a.*.id)}" }
    }

Assume we've successfully created this configuration with count = 2.
When going from count = 2 to count = 1, `null_resource.b` should wait
for `null_resource.a.1` to destroy.

If it doesn't, then it is a race: depending on when we interpolate the
`triggers.key` attribute of `null_resource.b`, we may get 1 value or 2.
If `null_resource.a.1` is destroyed, we'll get 1. Otherwise, we'll get
2. This was the root cause of #10440.

In the legacy graphs, `null_resource.b` would depend on the destruction
of any `null_resource.a` (orphans, tainted, anything!). This would
ensure proper ordering. We mimic that behavior here.

The difference is CBD. If `null_resource.b` has CBD enabled, then the
ordering **in the legacy graph** becomes:

  1. null_resource.b (create)
  2. null_resource.b (destroy)
  3. null_resource.a (destroy)

In this case, the update would always have 2 values for `triggers.key`,
even though we were destroying a resource later! This scenario required
two `terraform apply` operations.

This is what the CBD check is for in this PR. We do this to mimic the
behavior of the legacy graph.
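
Concretely, the CBD exception applies when `null_resource.b` above has `create_before_destroy` enabled, along these lines:

    resource "null_resource" "b" {
      triggers { key = "${join(",", null_resource.a.*.id)}" }

      lifecycle {
        create_before_destroy = true
      }
    }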

The correct solution, to be implemented one day, is to allow splat
references (`null_resource.a.*.id`) to happen in parallel and only read up to
the `count` amount in the state. This requires some fairly significant
work close to the 0.8 release date, so we can defer this to later and
adopt the 0.7.x behavior for now.
2016-12-03 23:54:29 -08:00
Mitchell Hashimoto
3b2282ca57
terraform: Destroy node should only include deposed for specific index
Fixes #10338

The destruction step for a resource was including the deposed resources
for _all_ resources with that name (ignoring the "index"). For example:
destroying `aws_instance.foo.0` was also destroying the deposed objects
for `aws_instance.foo.1`.

This changes the config given to the deposed transformer to properly
include that index.

This change includes a larger one: changing `stateId` to include the
index. That affected more parts of the code but was ultimately the issue
in question.
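
For context, a hedged sketch of how deposed instances arise: with `create_before_destroy`, a replacement creates the new instance before destroying the old one, and the old one is recorded as "deposed" for that specific index (names and values are illustrative):

    resource "aws_instance" "foo" {
      count         = 2
      ami           = "ami-123456"
      instance_type = "t2.micro"

      # During replacement the old instance at each index is kept as
      # "deposed" until the new one is created; each index should track
      # only its own deposed objects.
      lifecycle {
        create_before_destroy = true
      }
    }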
2016-11-29 09:16:18 -08:00
Mitchell Hashimoto
f50c7acf95
terraform: record dependency to self (other index)
Fixes #10313

The new graph wasn't properly recording resource dependencies to a
specific index of itself. For example: `foo.bar.2` depending on
`foo.bar.0` wasn't shown in the state when it should've been.

This adds a test to verify this and fixes it.
2016-11-23 09:25:20 -08:00
Mitchell Hashimoto
c6fd938fb8
terraform: EvalInstanceInfo on data sources in new graph
This doesn't cause any practical issues as far as I'm aware (couldn't
get any test to fail), but caused shadow errors since it wasn't matching
the prior behavior.
2016-11-15 09:02:10 -08:00
Mitchell Hashimoto
4a6cc3b100
terraform: new apply resource node supports data sources
This enables the new apply graph's resource node to apply data sources.
Data sources appear to only be tested for "refresh", which is likely
where they're set, but they've also been implemented (not my code, and
I'm not trying to change it) within the "apply" operation as well.

This adds an apply test to ensure data sources work, and then modifies
the new apply node to support data sources.
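
For illustration, a minimal hedged sketch of a data source block of the kind the new apply node must handle (the provider and values are illustrative):

    data "template_file" "example" {
      template = "$${greeting}, world"

      vars {
        greeting = "hello"
      }
    }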
2016-10-20 22:03:48 -07:00
Mitchell Hashimoto
6622ca001d
terraform: abstract resource nodes 2016-10-19 13:38:53 -07:00
Mitchell Hashimoto
7baf64f806
terraform: starting CBD, destroy edge for the destroy relationship 2016-10-19 13:38:52 -07:00
Mitchell Hashimoto
311d27108e
terraform: Enable DestroyEdgeTransformer 2016-10-19 13:38:52 -07:00
Mitchell Hashimoto
bd5d97f9f5
terraform: transform to attach resource configs 2016-10-19 13:38:52 -07:00
Mitchell Hashimoto
80ef7f1acf
terraform: properly compare bad diffs 2016-10-19 13:38:51 -07:00
Mitchell Hashimoto
9e8cd48cda
terraform: add destroy nodes, destroys kind of work 2016-10-19 13:38:51 -07:00
Mitchell Hashimoto
2e8a419fd8
terraform: starting work on destroy 2016-10-19 13:38:51 -07:00