Commit Graph

444 Commits

Author SHA1 Message Date
James Bardin
cfefeec926 walkDestroy is a form of "apply"
When computing the count value, make sure to include walkDestroy with
walkApply, as the former is only a special case of the latter.

When applying a saved plan, the computed count values are lost and we
can no longer query the state for those values. The apply walk was
already considered in the `resourceCountMax` function, but the destroy
walk was not.  This worked when destroying in a single operation
("terraform destroy"), since the state would still be updated with the
latest counts from the plan.
2018-04-10 11:46:29 -04:00
James Bardin
79b948c9cc detect scaled in resources when evaluating *s
If an existing resource is scaled back to 0, locals and outputs will
still have a multi-variable reference to evaluate, which should return
an empty list. Due to how the resource is removed, the resource will
still exist in the state but with no primary instance, which needs to be
ignored in the instance count.
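
As an illustrative sketch (resource and output names are hypothetical, not
taken from the test itself), the scenario looks roughly like this:

    resource "null_resource" "web" {
      # previously applied with a non-zero count, now scaled in to zero;
      # the resource keeps an entry in state but has no primary instance
      count = 0
    }

    # both multi-variable references below should evaluate to an empty list
    locals {
      web_ids = ["${null_resource.web.*.id}"]
    }

    output "web_ids" {
      value = ["${null_resource.web.*.id}"]
    }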
2018-04-03 10:00:45 -04:00
James Bardin
e5f8adfc1a add failing test for invalid output with targets
Outputs that are missing references aren't always removed from the
graph, due to being filtered before their dependents are removed.
2018-03-19 20:32:37 -04:00
James Bardin
f3d1fb3aff failing test for interpolated count from plan
An interpolated count value that is determined during plan is lost
during plan serialization, causing apply to fail when the interpolation
string can't be evaluated.
2018-03-09 19:04:39 -05:00
James Bardin
7fbc35a36c Make sure outputs are removed when targeting
Similar to NodeApplyableOutput, NodeDestroyableOutputs also need to stay
in the graph if any ancestor nodes are targeted.

Use the same GraphNodeTargetDownstream method to keep them from being
pruned, since they are dependent on the output node and all its
descendants.
2018-01-31 13:51:40 -05:00
James Bardin
2d138d9917 add a more complex locals test
Using destroy provisioners again for edge cases during destroy.
2018-01-30 10:47:17 -05:00
James Bardin
7ac0a46981 add destroy provisioner test with locals, outputs
Add a complex destroy provisioner testcase using locals, outputs and
variables.

Add that pesky "id" attribute to the instance states for interpolation.
2018-01-29 18:01:58 -05:00
James Bardin
7da1a39480 always evaluate locals, even during destroy
Destroy-time provisioners require us to re-evaluate during destroy.

Rather than destroying local values, which doesn't do much since they
aren't persisted to state, we always evaluate them regardless of the
type of apply. Since the destroy-time local node is no longer a
"destroy" operation, the order of evaluation need to be reversed. Take
the existing DestroyValueReferenceTransformer and change it to reverse
the outgoing edges, rather than in incoming edges. This makes it so that
any dependencies of a local or output node are destroyed after
evaluation.

Having locals evaluated during destroy failed one other test, but that
was the odd case where we need `id` to exist as an attribute as well as
a field.
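
A rough sketch of the kind of configuration this enables (names
hypothetical): a destroy-time provisioner interpolating a local value,
which therefore must still be evaluated on the destroy walk.

    locals {
      instance_name = "example"
    }

    resource "null_resource" "foo" {
      provisioner "local-exec" {
        when = "destroy"

        # the local must be evaluated during destroy for this to interpolate
        command = "echo destroying ${local.instance_name}"
      }
    }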
2018-01-29 16:16:41 -05:00
Martin Atkins
b511caf049 core: interpolate the count config during the apply walk
Previously we would interpolate the count config (ResourceConfig.RawCount)
only while preparing to dynamic-expand aggregate resource nodes. This is
problematic because we do not dynamic-expand any resource nodes during the
apply walk, and so previously the count value was not available for
interpolation during apply and would result in an error.

Now we interpolate RawCount once for each resource we visit during the
apply walk -- even though that redundantly interpolates the same config
multiple times when count > 1 -- to ensure that it's available by the
time we interpolate any remaining expressions in the config and any
expressions within "connection" and "provisioner" blocks.

This error was masked by us sharing a single RawConfig instance between
the plan and apply walks when "terraform apply" is run with no explicit
plan file argument, but was exposed by the workflow where the plan is
written first to disk, since in that case the interpolation result from
the plan phase is not present in the deflated plan object. For
this reason, the new context test serializes the plan into an in-memory
buffer and reloads it in order to simulate the effect of the two-step
workflow.
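
A minimal sketch of the affected workflow (variable and resource names are
hypothetical): an interpolated count combined with the two-step
plan-to-disk workflow.

    variable "instance_count" {
      default = 2
    }

    resource "null_resource" "web" {
      # RawCount must be re-interpolated during the apply walk when the plan is
      # loaded from disk (terraform plan -out=tfplan, then terraform apply tfplan)
      count = "${var.instance_count}"
    }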
2018-01-19 13:06:00 -08:00
James Bardin
ba749db9ed add test checking CloseProvider
There was no test checking that Close was called on the mock provider.
This fails now since the CloseProviderTransformer isn't using the fully
resolved provider name.
2018-01-04 15:00:09 -05:00
James Bardin
105b66e74d error out when a referenced provider is missing 2017-11-13 20:41:38 -05:00
James Bardin
b9b418bcb0 test passing in implicitly used provider 2017-11-13 20:41:38 -05:00
James Bardin
b8b7548614 remove module referencing existing provider 2017-11-07 22:05:52 -05:00
James Bardin
3801ca5bbf test that Refresh updates Provider fields in state 2017-11-07 21:42:30 -05:00
James Bardin
b9b4912bfb complete passing providers through modules
Here we complete the passing of providers between modules via the
module/providers configuration, add another test and update broken test
outputs.

The DisableProviderTransformer is being removed, since it was really
only for provider configuration inheritance. Since configuration is no
longer inherited, there's no need to keep around unused providers. There
actually shouldn't be any unused providers going into the graph any
longer, but we put off verifying that condition for later. Replace its
usage with the PruneProviderTransformer, and use that to also remove the
unneeded proxy provider nodes.
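
For context, the module/providers configuration referred to here is the
explicit provider map on a module block, roughly as follows (alias and
module name are illustrative):

    provider "aws" {
      alias  = "east"
      region = "us-east-1"
    }

    module "child" {
      source = "./child"

      # pass an explicit provider configuration into the module rather than
      # relying on implicit provider inheritance
      providers = {
        "aws" = "aws.east"
      }
    }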
2017-11-07 09:41:57 -05:00
James Bardin
d9d21d4200 don't add missing provider aliases to the graph
A missing provider alias should not be implicitly added to the graph.

Run the AttachProviderConfigTransformer immediately after adding the
providers, since the ProviderConfigTransformer should have just added
these nodes.
2017-11-06 14:21:28 -05:00
Martin Atkins
37e276e043 core: test correct behavior of plan+apply with unstable values
We have a few pesky functions that don't act like proper functions and
instead return different values on each call. These are tricky because
we need to make sure we don't trip over ourselves by re-generating these
between plan and apply.

Here we add a context test to verify correct behavior in the presence
of such functions.

There's actually a pre-existing bug which this test caught as originally
written: we re-evaluate the interpolation expressions during apply,
causing these unstable functions to produce new values, and so the
applied value ends up not exactly matching the plan. This is a bug that
needs fixing, but it's been around at least since v0.7.6 (random old
version I tried this with to see) so we'll put it on the list and address
it separately. For now, this part of the test is commented out with a
TODO attached.
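
For illustration (the test may well use different functions),
interpolation functions such as uuid() and timestamp() are the kind of
unstable value involved:

    resource "null_resource" "example" {
      triggers = {
        # these return a different value on every evaluation, so the value
        # computed during plan must carry through unchanged to apply
        run_id = "${uuid()}"
        time   = "${timestamp()}"
      }
    }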
2017-11-03 16:11:13 -07:00
James Bardin
94ee4d9111 Add new test and update graph outputs
The grandChild missing test has a provider declared in a child
module which is missing in a grandchild module. Verify that the
grandchild gets connected to the child provider, and they all are
connected to the root providers.

Update some test outputs to match the expected behavior of only adding
missing providers at the root level.
2017-11-02 15:00:06 -04:00
James Bardin
402f321abe change import module inheritance test
Importing into a module requires a provider config. Update the
inheritance test to reflect the new import restrictions.
2017-10-27 09:08:15 -04:00
James Bardin
db7596c045 use the inherited provider configs in the graph
Use the configured providers directly, rather than looking for inherited
provider configuration during graph evaluation.

First remove the provider config cache, and the associated
SetProviderConfig and ParentProviderConfig methods on the eval context.
Every provider must be configured, so there's no need to look for
configuration from other provider instances.

The config.ProviderConfig struct now has a Scope field which stores the
proper path for the interpolation scope. To get this metadata to the
interpolator, we add an EvalInterpolateProvider node which can carry the
ProviderConfig, and an InterpolateProvider context method to carry the
ProviderConfig.Scope into the InterpolationScope.

Some of the tests could be adjusted to account for the new inheritance
behavior, and some were simply no longer valid and will be removed.

The remaining tests raise questions about how they should work in practice.
This mostly concerns orphaned modules where there is no longer a way to
obtain a provider. In some cases we may require that a minimal provider
config be present to handle the destroy process, but we need further
testing.

All disabled code is left commented out in this commit to allow for any
additional review comments. The following commit will be a cleanup pass.
2017-10-27 09:08:15 -04:00
James Bardin
da0e74276a fix provider with local value test and docs
Make sure this fails during destroy

Add a note in the documentation that local values may fail during
destroy.
2017-09-29 17:14:07 -04:00
James Bardin
061597304c add test for destroying with locals in a provider
Verify that locals aren't removed from the state before providers are
evaluated during destroy.
2017-09-28 15:31:41 -04:00
James Bardin
9d8ab55658 Add failing test for destroy with locals 2017-09-28 11:06:37 -04:00
Martin Atkins
0a342e8dc2 config: allow local value interpolations in count
There is some additional, early validation on the "count" meta-argument
that verifies that only suitable variable types are used, and adding local
values to this whitelist was missed in the initial implementation.
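
In other words, after this change a configuration along these lines (names
hypothetical) passes the early count validation:

    locals {
      instance_count = 3
    }

    resource "null_resource" "cluster" {
      count = "${local.instance_count}"
    }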
2017-09-01 17:54:05 -07:00
Martin Atkins
8cd0ee80e5 config: merge/append for local values
It seems that this somehow got lost in the commit/rebase shuffle and
wasn't caught by the tests that _did_ make it because they were all using
just one file.

As a result of this bug, locals would fail to work correctly in any
configuration with more than one .tf file.

Along with restoring the append/merge behavior, this also reworks some of
the tests to exercise the multi-file case as better insurance against
regressions of this sort in future.

This fixes #15969.
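
A sketch of the multi-file case now exercised by the tests (file and value
names are illustrative):

    # main.tf
    locals {
      service_name = "forum"
    }

    # extra.tf -- locals blocks from separate .tf files in the same module are
    # merged, so local.service_name and local.owner are both available
    locals {
      owner = "community-team"
    }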
2017-09-01 17:51:13 -07:00
Martin Atkins
3a30bfe845 core: evaluate locals and return them for interpolation
We stash the locals in the module state in a map that is ignored for JSON
serialization. We don't include locals in the persisted state because they
can be trivially recomputed and this allows us to assume that they will
pass through verbatim, without any normalization or other transforms
caused by the JSON serialization.

From a user standpoint a local is just a named alias for an expression,
so it's desirable that the result passes through here in as raw a form
as possible, so it behaves as closely as possible to simply using the
given expression directly.
2017-08-21 15:15:25 -07:00
Martin Atkins
5b66953d1d core: graph nodes and edges for local values
A local value is similar to an output in that it exists only within state
and just always evaluates its value as best it can with the current state.
Therefore it has a single graph node type for all walks, which will
deal with that evaluation operation.
2017-08-21 15:15:25 -07:00
James Bardin
1664d4e228 test with bad interpolation during Input
The interpolation going into a module variable here will be valid after
Refresh, but Refresh doesn't happen for the Input phase.
2017-08-10 14:14:29 -04:00
Martin Atkins
a8c58b081c core: -target option to also select resources in descendant modules
Previously the behavior for -target when given a module address was to
target only resources directly within that module, ignoring any resources
defined in child modules.

This behavior turned out to be counter-intuitive, since users expected
the -target address to be interpreted hierarchically.

We'll now use the new "Contains" function for addresses, which provides
a hierarchical "containment" concept that is more consistent with user
expectations. In particular, it allows module.foo to match
module.foo.module.bar.aws_instance.baz, where before that would not have
been true.

Since Contains isn't commutative (unlike Equals) this requires some
special handling for targeting specific indices. When given an argument
like -target=aws_instance.foo[0], the initial graph construction (for
both plan and refresh) is for the resource nodes from configuration, which
have not yet been expanded to separate indexed instances. Thus we need
to do the first pass of TargetsTransformer in mode where indices are
ignored, with the work then completed by the DynamicExpand method which
re-applies the TargetsTransformer in index-sensitive mode.

This is a breaking change for anyone depending on the previous behavior
of -target, since it will now select more resources than before. There is
no way provided to obtain the previous behavior. Eventually we may support
negative targeting, which could then combine with positive targets to
regain the previous behavior as an explicit choice.
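
As a concrete illustration of the new containment behavior, using the
addresses from the example above:

    # given a child-module resource whose full address is
    #   module.foo.module.bar.aws_instance.baz
    #
    #   terraform plan -target=module.foo
    #
    # now selects that resource too, because module.foo hierarchically
    # "contains" module.foo.module.bar.aws_instance.baz
    module "foo" {
      source = "./foo"
    }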
2017-06-16 16:36:08 -07:00
Martin Atkins
b82ef2e30e core: ResourceAddress.MatchesConfig method
This is a useful building block for filtering configuration based on a
resource address. It is similar in principle to state filtering, but for
specific resource configuration blocks.
2017-06-09 14:03:59 -07:00
Martin Atkins
25a6d8f471 core: build a module dependency tree from config+state
This new private function takes a configuration tree and a state structure
and finds all of the explicit and implied provider dependencies
represented, returning them as a moduledeps.Module tree structure.

It annotates each dependency with a "reason", which is intended to be
useful to a user trying to figure out where a particular dependency is
coming from, though we don't yet have any UI to view this.

Nothing calls this yet, but a subsequent commit will use the result of
this to produce a constraint-conforming map of provider factories during
context initialization.
2017-06-09 14:03:59 -07:00
Martin Atkins
410b60cb7f Stop requiring multi-vars (splats) to be in array brackets
Prior to Terraform 0.7, lists in Terraform were just a shallow abstraction
on top of strings with a magic delimiter between items. Wrapping a single
string in brackets in the configuration was Terraform's prompt that it
needed to split the string on that delimiter during interpolation.

In 0.7, when first-class lists were added, this convention was preserved
by flattening lists-of-lists by one level when they were encountered in
configuration. However, there was an oversight in that change where it
did not correctly handle the case where the inner list was unknown.

In #14135 we removed some code that was flattening partially-unknown lists
into fully-unknown (untyped) values. This inadvertently exposed the missed
case from the previous paragraph, causing issues for list-wrapped splat
expressions with unknown members. While this worked fine for resources,
due to some fixup done inside helper/schema, this did not work for other
interpolation contexts such as module blocks.

Various attempts to fix this up and restore the flattening behavior
selectively were unsuccessful, due to a proliferation of assumptions all
over the core code that would be too risky to change just to fix this bug.

This change, then, takes the different approach of removing the
requirement that splats be presented inside list brackets. This
requirement didn't make much sense anymore anyway, since no other
list-returning expression had this constraint and so the rest of Terraform
was already successfully dealing with both cases.

This leaves us with two different scenarios:

- For resource arguments, existing normalization code in helper/schema
  does its own flattening that preserves compatibility with the common
  practice of using bracketed splats. This change proves this with a test
  within the "test" provider that exercises the whole Terraform core and
  helper/schema stack that assigns bracketed splats to list and set
  attributes.

- For arguments in other blocks, such as in module callsites, the
  interpolator's own flattening behavior applies to known lists,
  preserving compatibility with configurations from before
  partially-computed splats were possible, but those wishing to use
  partially-computed splats are required to drop the surrounding brackets.
  This is less concerning because this scenario was introduced only in
  0.9.5, so the scope for breakage is limited to those who adopted this
  new feature quickly after upgrading.

As of this commit, the recommendation is to stop using brackets around
splats but the old form continues to be supported for backward
compatibility. In a future _major_ version of Terraform we will probably
phase out this legacy form to improve consistency, but for now both
forms are acceptable at the expense of some (pre-existing) weird behavior
when _actual_ lists-of-lists are used.

This addresses #14521 by officially adopting the suggested workaround of
dropping the brackets around the splat. However, it doesn't yet allow
passing of a partially-unknown list between modules: that still violates
assumptions in Terraform's core, so for the moment partially-unknown lists
work only within a _single_ interpolation expression, and cannot be
passed around between expressions. Until more holistic work is done to
improve Terraform's type handling, passing a partially-unknown splat
through to a module will result in a fully-unknown list emerging on
the other side, just as was the case before #14135; this change just
addresses the fact that this was failing with an error in 0.9.5.
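
To make the two forms concrete, a sketch of a module callsite (names
hypothetical), shown as two alternatives rather than a single
configuration:

    # legacy form: splat wrapped in list brackets, still accepted for known lists
    module "consumer" {
      source       = "./consumer"
      instance_ids = ["${aws_instance.web.*.id}"]
    }

    # recommended form, and required when the splat is partially unknown
    module "consumer" {
      source       = "./consumer"
      instance_ids = "${aws_instance.web.*.id}"
    }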
2017-05-23 11:22:37 -07:00
Martin Atkins
45b04c826a core: don't crash if no module state exists for multi var
For child modules, a ModuleState isn't allocated until the first time a
module instance is inserted into the state under the module's path.
Normally interpolations of resource attributes are delayed until at least
one resource has been created due to the nature of the dependency graph,
but if the interpolation value is a multi-var (splat) then it is possible
that the referenced resource has count=0 and thus created _no_ resource
states when it was visited.

Previously we would crash when trying to access the resource map for the
nil module in order to count how many instances are present. Since we know
there can't be any instances present in a nil module, we now preempt
this crash by returning zero early.

This edge-case does not apply to the root module because its ModuleState
is allocated as part of initializing the main State instance.

This fixes #14438.
2017-05-16 09:54:33 -07:00
Chris Marchesi
11b4794612 core: Test for new refresh graph behaviour
Tests on DynamicExpand for both resources and data sources cover scale
in/out scenarios, and also verify the behaviour of config orphans.
2017-05-12 15:45:06 -07:00
Martin Atkins
7bdf4a925d core: Allow downstream targeting of certain node types
The previous behavior of targets was that targeting a particular node
would implicitly target everything it depends on. This makes sense when
the dependencies in question are between resources, since we need to
make sure all of a resource's dependencies are in place before we can
create or update it.

However, it had the undesirable side-effect that targeting a resource
would _exclude_ any outputs referring to it, since the dependency edge
goes from output to resource. This then causes the output to be "stale",
which is problematic when outputs are being consumed by downstream
configs using terraform_remote_state.

GraphNodeTargetDownstream allows nodes to opt-in to a new behavior where
they can be targeted by _inverted_ dependency edges. That is, it allows
outputs to be considered targeted if anything they directly depend on
is targeted.

This is different than the implied targeting behavior in the other
direction because transitive dependencies are not considered unless the
intermediate nodes themselves have TargetDownstream. This means that
an output1→output2→resource chain can implicitly target both outputs, but
an output→resource1→resource2 chain _won't_ target the output if only
resource2 is targeted.

This behavior creates a scenario where an output can be visited before
all of its dependencies are ready, since it may have a mixture of both
targeted and untargeted dependencies. This is fine for outputs because
they silently ignore any errors encountered during interpolation anyway,
but other hypothetical future implementers of this interface may need to
be more careful.

This fixes #14186.
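
A small sketch of the effect (resource and output names are hypothetical):

    resource "aws_instance" "web" {
      ami           = "ami-0123456"
      instance_type = "t2.micro"
    }

    output "web_ip" {
      # with GraphNodeTargetDownstream, running
      #   terraform apply -target=aws_instance.web
      # also updates this output, since it directly depends on a targeted node
      value = "${aws_instance.web.private_ip}"
    }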
2017-05-11 11:57:46 -07:00
Martin Atkins
58f5257678 core: context test for partially-unknown splat lists
This is a context test for the behavior enabled by #14135, as some
insurance to decrease the chance that we break it again.
2017-05-04 16:55:32 -07:00
Martin Atkins
b4df03bca4 core: allow partially-unknown lists from splat syntax
This was actually redundant anyway since HIL itself applied a similar
rule where any partially-unknown list would be automatically flattened
to a single unknown value.

However, now we're changing HIL to explicitly permit partially-unknown
lists so that we can allow the index operator [...] to succeed when
applied to one of the elements that _is_ known.

This, in conjunction with hashicorp/hil#51 and hashicorp/hil#52,
fixes #3449.
2017-05-04 15:56:35 -07:00
Chris Marchesi
d41b806789 core: Restore CountBoundaryTransformer to apply, add/adjust tests
Moving the transformer wholesale looks like it broke some tests, with
some actually doing legit work in normalizing singular resources from a
foo.0 notation to just foo.

Adjusted the TestPlanGraphBuilder to account for the extra
meta.count-boundary nodes in the graph output now, as well as added
another context test that tests this case. It appears the issue happens
during validate, as this is where the state can be altered to a broken
state if things are not properly transformed in the plan graph.
2017-04-19 22:23:52 -07:00
James Bardin
3f49227b72 add state and context tests
Make sure duplicate depends_on entries are pruned from existing states
on read.

Make sure new state built from configs with multiple references to the
same resource only adds it once to the Dependencies.
2017-04-08 15:37:15 -04:00
Mitchell Hashimoto
69759e04ca
terraform: convert empty path to root path in V1 state 2017-03-21 11:37:12 -07:00
James Bardin
b7152c4405 Merge pull request #12897 from hashicorp/jbardin/ignore-changes
ignore_changes causes keys in other flatmapped objects to be lost from diff
2017-03-21 09:25:47 -04:00
James Bardin
970e7c1923 Add a failing test for missing keys in diff
ignore_changes is causing changes in other flatmapped sets to be
filtered out incorrectly.

This required fixing the testDiffFn to create diffs which include the
old value, breaking one other test.
2017-03-20 17:44:37 -04:00
Mitchell Hashimoto
e8b3062629
terraform: unknown value for variables not set
Fixes #12836

Realistically, these should be caught during validation anyways. In this
case, this was causing #12836 because refresh with a data source will
attempt to use module variables. I don't see any clear logic to prune
those module variables or not add them, so it's easier to return unknown
to cause the data to be computed and not run.
2017-03-17 15:33:33 -07:00
James Bardin
fe5f519817 Add failing test for invalid interpolation
Adding the submodule causes the count variable interpolation to fail.
2017-03-16 10:35:18 -04:00
Mitchell Hashimoto
9900bd752a
terraform: string through the context meta 2017-03-13 16:21:09 -07:00
Mitchell Hashimoto
a1ec81964b
terraform: destroy ordering needs to handle destroy provisioner edges
This ensures that things aren't destroyed before their values are used.
2017-02-17 14:29:22 -08:00
Mitchell Hashimoto
757217b91f
terraform: destroy resource should depend on destroy-time prov deps 2017-02-17 13:13:44 -08:00
Mitchell Hashimoto
9062a1893e
terraform: don't include providers if not targeted
Fixes #12009

This is a simple change similar to #10911 where we need to exclude
providers that aren't targeted.
2017-02-17 09:21:29 -08:00
Mitchell Hashimoto
4d6085b46a
terraform: outputs should not be included if not targeted
Fixes #10911

Outputs that aren't targeted shouldn't be included in the graph.

This requires passing targets to the apply graph. This is unfortunate
but long term should be removable since I'd like to move output changes
to the diff as well.
2017-02-13 12:52:45 -08:00
Mitchell Hashimoto
af61d566c2
terraform: passing test for destroy edge for module only
Just adding passing tests as a sanity check for a bug.
2017-02-07 19:12:03 -08:00