Commit Graph

3254 Commits

Author SHA1 Message Date
James Bardin
35714e61e6 audit graph builders to make them more similar
Audit the graph builders to remove unused transformers (planning does
not need to close provisioners, for example) and re-order them. While
many of the transformations are commutative, using the same order
ensures the same behavior across operations when the commutative
property is lost or changed.
2020-10-06 17:39:53 -04:00
James Bardin
a32028aeed evaluate vars and outputs during import
Outputs were not being evaluated during import because they were not
added to the walk filter.

Remove any unnecessary walk filters from all the Execute nodes.
2020-10-06 17:22:50 -04:00
Pam Selle
b9eeba0da2 Consider sensitivity when evaluating module outputs
This change "marks" values related to outputs that
have Sensitive set to true.
2020-10-06 13:09:18 -04:00
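A minimal sketch of how cty value marks can represent sensitivity, assuming a plain string mark purely for illustration (Terraform core uses its own mark value internally):

    package main

    import (
    	"fmt"

    	"github.com/zclconf/go-cty/cty"
    )

    func main() {
    	// Mark an output value as sensitive; marks travel with the value
    	// through expression evaluation, so derived values stay marked.
    	v := cty.StringVal("db-password").Mark("sensitive")

    	fmt.Println(v.IsMarked())           // true
    	fmt.Println(v.HasMark("sensitive")) // true

    	// Unmark before handing the value to code that cannot handle marks.
    	raw, marks := v.Unmark()
    	fmt.Println(raw.AsString(), len(marks)) // db-password 1
    }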
James Bardin
8049d2e028 do not evaluate module variables during import
We do not currently need to evaluate module variables in order to
import a resource.

This will likely change once we can select the import provider
automatically, and have a more dynamic method for dispatching providers
to module instances. In the meantime we can avoid the evaluation
and prevent a certain class of import errors.
2020-10-06 12:51:28 -04:00
James Bardin
c48af3f18b
Merge pull request #26470 from hashicorp/jbardin/inverse-destroy-references
Allow special-case evaluation of instances pending deletion.
2020-10-05 16:20:22 -04:00
James Bardin
ee564a5ceb
Merge pull request #26421 from hashicorp/jbardin/ignore-changes-map
allow ignore_changes to reference any map key
2020-10-05 12:06:05 -04:00
James Bardin
0c72c6f144 s/FullDestroy/IsFullDdestroy/ 2020-10-05 10:50:25 -04:00
James Bardin
e35524c7f0 use existing State rather than Change.Before
The change was passed into the provisioner node because the normal
NodeApplyableResourceInstance overwrites the prior state with the new
state. This however doesn't matter here, because the resource destroy
node does not do this. Also, even if the updated state were to be used
for some reason with a create provisioner, it would be the correct state
to use at that point.
2020-10-05 10:40:14 -04:00
James Bardin
a1181adca4 remove unused method stub 2020-10-05 10:35:31 -04:00
Pam Selle
69b03ebf42
Merge pull request #26379 from hashicorp/pselle/sensitive-nested-block-support
Support sensitivity within nested blocks
2020-10-02 17:28:15 -04:00
Martin Atkins
593cf7b4d5 didyoumean: move from "helper" to "internal"
This new-ish package ended up under "helper" during the 0.12 cycle for
want of some other place to put it, but in retrospect that was an odd
choice because the "helper/" tree is otherwise a bunch of legacy code from
when the SDK lived in this repository.

Here we move it over into the "internal" directory just to distance it
from the guidance of not using "helper/" packages in new projects;
didyoumean is a package we actively use as part of error message hints.
2020-10-02 13:35:07 -07:00
James Bardin
32681190ca
Merge pull request #26458 from hashicorp/jbardin/data-ref-index
data sources with indexed references to managed resources
2020-10-02 13:27:28 -04:00
Pam Selle
f35b530837 Update compatibility checks for blocks to not use marks
Remove marks for object compatibility tests to allow apply
to continue. Add a block to the test provider for use
in testing, and extend the sensitivity apply test to include a block.
2020-10-02 13:11:55 -04:00
Pam Selle
3e7be13dff Update ordering for marking/unmarking and asserting plan valid
Update when we unmark objects so we can assert the plan is valid,
and process UnknownAsNull on the unmarked value
2020-10-02 13:03:11 -04:00
James Bardin
95197f0324 use EvalSelfBlock for destroy provisioners
Evaluate destroy provisioner configurations using only the last resource
state value, and the static instance key data.
2020-10-02 12:38:51 -04:00
James Bardin
43c0525277 fix the test that was supposed to break
The test for this behavior did not work, because the old mock diff
function does not work correctly. Write a PlanResourceChange function to
return a correct plan.
2020-10-02 08:50:24 -04:00
James Bardin
07af1c6225 destroy time eval of resources pending deletion
Allow the evaluation of resources pending deletion only during a full
destroy. With this change we can ensure deposed instances are not
evaluated under normal circumstances, but can be referenced when needed.
This also allows us to remove the fixup transformer that added
connections so temporary values would evaluate in the correct order when
referencing destroy nodes.

In the majority of cases, we do not want to evaluate resources that are
pending deletion, since configuration references can only refer to
resources that are intended to be managed by the configuration. An
exception to that rule is when Terraform is performing a full `destroy`
operation, and providers need to evaluate existing resources for their
configuration.
2020-10-01 17:11:46 -04:00
James Bardin
ac526d8d5d always load instance state when -refresh=false
The loading of the initial instance state was inadvertently skipped when
-refresh=false, causing all resources to appear to be missing from the
state during plan.
2020-10-01 16:04:35 -04:00
James Bardin
78322d5843 data depends_on with indexed references
If a data source refers to an indexed managed resource, we need to
re-target that reference to the containing resource for planning.  Since
data sources use the same mechanism as depends_on for managed resource
references, they can only refer to resources as a whole.
2020-10-01 14:10:09 -04:00
James Bardin
2376b3c5ee
Merge pull request #26444 from hashicorp/jbardin/value-checks
Add some log warnings and remove dead code
2020-10-01 09:37:56 -04:00
James Bardin
ad0b81de81 allow ignore_changes to reference any map key
There are situations where a user may want to keep or exclude a map key
using `ignore_changes` even though that key is not listed directly in the
configuration. This didn't work previously because the transformation
always started from the configuration, and would never encounter a
key that was only present in the prior value.
2020-10-01 09:36:36 -04:00
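A rough illustration of the idea, not Terraform's actual ignore_changes transformation: copy an ignored map key from the prior value into the proposed value even when that key is absent from the configuration. The function and variable names here are invented for this sketch.

    package main

    import (
    	"fmt"

    	"github.com/zclconf/go-cty/cty"
    )

    // keepIgnoredKey copies the value of an ignored key from the prior map into
    // the proposed map, even when the key is absent from the configuration.
    func keepIgnoredKey(prior, config cty.Value, key string) cty.Value {
    	if prior.IsNull() || !prior.Type().IsMapType() {
    		return config
    	}
    	merged := map[string]cty.Value{}
    	if !config.IsNull() {
    		for k, v := range config.AsValueMap() {
    			merged[k] = v
    		}
    	}
    	if prior.HasIndex(cty.StringVal(key)).True() {
    		merged[key] = prior.Index(cty.StringVal(key))
    	}
    	if len(merged) == 0 {
    		return cty.MapValEmpty(cty.String)
    	}
    	return cty.MapVal(merged)
    }

    func main() {
    	prior := cty.MapVal(map[string]cty.Value{"created_by": cty.StringVal("ops")})
    	config := cty.MapVal(map[string]cty.Value{"env": cty.StringVal("prod")})
    	fmt.Println(keepIgnoredKey(prior, config, "created_by")) // keeps created_by from prior
    }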
Kristin Laemmert
90588c036b
terraform: minor cleanup from EvalTree() refactor (#26429)
* Split node_resource_abstract.go into two files, putting
NodeAbstractResourceInstance methods in their own file - it was getting
large enough to be tricky for (my) human eyeballs.

* un-exported the functions that were created as part of the EvalTree()
refactor; they did not need to be public.
2020-10-01 08:12:10 -04:00
James Bardin
98b568ad01 remove unused refresh node
There is no longer a refresh walk, so we no longer use this type.
2020-09-30 18:04:40 -04:00
James Bardin
c52672cbbe log inconsistencies during refresh
We can't make these errors because there's no way to exempt legacy
providers from the check, but we can at least log them for
troubleshooting.
2020-09-30 17:40:39 -04:00
James Bardin
b464bcbc5d copy the refreshed state at the end of plan
Otherwise we may end up with the same state value for the state and
refreshState when re-using the context.
2020-09-30 12:14:37 -04:00
James Bardin
8dc6f518c2 don't use same state in Validate for refreshState 2020-09-30 12:14:37 -04:00
James Bardin
8c699fbe32 Unsynchronized maps in test 2020-09-30 12:14:31 -04:00
James Bardin
bdef106a8b WithPath should only modify the copy 2020-09-30 11:50:50 -04:00
Kristin Laemmert
c258e8efbb
Merge pull request #26413 from hashicorp/mildwonkey/the-rest-of-the-owl-tree
The Last Chapter in our Epic Saga: `EvalTree()` Refactor
2020-09-30 10:48:03 -04:00
James Bardin
a78d75ccfb
Merge pull request #26387 from hashicorp/jbardin/ignore-changes-2
Fix handling of ignore_changes
2020-09-29 15:35:29 -04:00
Kristin Laemmert
0ac53ae3ed terraform: remove deprecated or unused Eval() bits 2020-09-29 15:01:24 -04:00
Kristin Laemmert
184893d1e4 evaltree refactor 2020-09-29 14:31:20 -04:00
Kristin Laemmert
c66494b874 terraform: refactor NodeDestroyDeposedResourceInstanceObject and NodePlanDeposedResourceInstanceObject
The various Eval()s will be refactored in a later PR.
2020-09-29 13:26:50 -04:00
Kristin Laemmert
3bb64e80d5 apply refactor 2020-09-29 13:26:50 -04:00
Kristin Laemmert
26260c47f0 terraform: add ReadDiff method on NodeAbstractResourceInstance to replace EvalReadDiff 2020-09-29 13:26:50 -04:00
Kristin Laemmert
ff5d78ff5a EvalReduceDiff: removing unused struct field 2020-09-29 13:25:17 -04:00
Kristin Laemmert
f835841c28 terraform: create CheckPreventDestroy method on NodeAbstractResource
This will eventually replace all uses of EvalCheckPreventDestroy.
2020-09-29 13:24:21 -04:00
Kristin Laemmert
ab5bf50fc3 terraform: create NodeAbstractResource ReadResourceInstanceState
function

The original EvalReadState node is used only by `NodeAbstractResource`s,
so I've created a new method on NodeAbstractResource which does the same
thing as EvalReadState. When the EvalNode refactor project is complete,
EvalReadState will be removed entirely.
2020-09-29 13:24:21 -04:00
Kristin Laemmert
d1cf0279fa terraform: NodePlannableResourceInstanceOrphan refactor 2020-09-29 13:24:21 -04:00
Kristin Laemmert
9c721a4131 terraform: improved refactor of EvalWriteResourceState
EvalWriteResourceState finds new life as a method called WriteResourceState on NodeAbstractResource.
2020-09-29 13:24:21 -04:00
James Bardin
a8981a954f don't use ignore_changes during replacement
When replacing an instance, we have to be sure to use the original
configuration which hasn't been processed with ignore_changes.
2020-09-29 13:15:33 -04:00
James Bardin
7fa4c00d1a add validation for ignore_changes references
Ensure that ignore_changes only refers to arguments set in the
configuration.
2020-09-29 13:15:33 -04:00
James Bardin
98124637d8 apply ignore_changes directly to config
In order to ensure all the starting values agree, and since
ignore_changes is only meant to apply to the configuration, we need to
process the ignore_changes values on the config itself rather than the
proposed value.

This ensures the proposed new value and the config value seen by
providers are coordinated, and still allows us to use the rules laid out
by objchange.AssertPlanValid to compare the result to the configuration.
2020-09-29 13:15:30 -04:00
James Bardin
bc3fe5ddae diff should be from proposed, not config
This ensures the mock provider behaves like the shimmed legacy SDK
providers.
2020-09-29 13:15:03 -04:00
James Bardin
801f60fda8 only ignore changes from the configuration
ignore_changes should only exclude changes to the resource arguments,
and not alter the returned value from PlanResourceChange. This would
affect very few providers, since most current providers don't actively
create their plan, and those that do should be generating computed
values here rather than modifying existing ones.
2020-09-29 13:10:52 -04:00
Alisdair McDiarmid
39bc5e825b terraform: Check for sensitive values in outputs
Sensitive values may not be used in outputs which are not also marked
as sensitive. This includes values nested within complex structures.

Note that sensitive values are unmarked before writing to state. This
means that sensitive values used in module outputs will have the
sensitive mark removed. At the moment, we have not implemented
sensitivity propagation from module outputs back to value marks.

This commit also reworks the tests for NodeApplyableOutput to cover
more existing behaviour, as well as this change.
2020-09-25 16:04:06 -04:00
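An illustrative sketch of the kind of check described above, using cty's ContainsMarked to detect sensitive values nested anywhere in an output value; the function name and signature are invented for this example, not the real node code.

    package main

    import (
    	"fmt"

    	"github.com/zclconf/go-cty/cty"
    )

    // checkOutputSensitivity is an invented stand-in for the real check: a value
    // containing sensitive marks anywhere in its structure may only be exposed
    // by an output that is itself declared sensitive.
    func checkOutputSensitivity(name string, val cty.Value, declaredSensitive bool) error {
    	if val.ContainsMarked() && !declaredSensitive {
    		return fmt.Errorf("output %q includes sensitive values; mark the output as sensitive", name)
    	}
    	return nil
    }

    func main() {
    	nested := cty.ObjectVal(map[string]cty.Value{
    		"user": cty.StringVal("app"),
    		"pass": cty.StringVal("hunter2").Mark("sensitive"),
    	})
    	fmt.Println(checkOutputSensitivity("db", nested, false)) // reports an error
    }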
James Bardin
ba7a57d3c5
Merge pull request #26375 from hashicorp/jbardin/data-force-plan-read
data source depends_on
2020-09-25 13:57:06 -04:00
James Bardin
ea9096fb21 data source depends_on
A data source referencing another data source through depends_on should
not be forced to defer until apply. Data sources have no side effects,
so nothing should need to be applied. If the dependency has a
planned change due to a managed resource, the original data source will
also encounter that further down the list of dependencies.

This prevents a data source being read during plan for any reason from
causing other data sources to be deferred until apply. It does not
change the behavior noticeably in 0.14, but because 0.13 still had
separate refresh and plan phases which could read the data source, the
deferral could cause many things downstream to become unexpectedly
unknown until apply.
2020-09-25 13:46:47 -04:00
Alisdair McDiarmid
e41b2ef2d4 terraform: Add test for complex sensitive values
When a sensitive variable has a complex type, any traversal of the
variable should still result in a sensitive value. This test uses a
sensitive `map(string)` and verifies that both plan and state output
include the appropriate sensitive marks for the resource attribute.
2020-09-25 11:37:45 -04:00
Kristin Laemmert
4bba9a70b3 terraform: refactor NodePlannableResource and NodeApplyableResource
NodePlannableResource and NodeApplyableResource EvalTree()s have been
replaced with Execute() nodes and straight-through code. Both called
EvalWriteResourceState and were the only functions to use it, so I chose
to replace EvalWriteResourceState entirely with straight-through code
(by copying the contents into the two locations).
2020-09-25 09:29:18 -04:00
Kristin Laemmert
90655b98b0 terraform: rename mustReourceAddr to mustConfigResourceAddr and add mustAbsResourceAddr
there are too many things that can be called resource addrs and it can
be hard to find the must* I'm looking for, so I renamed one and added
another.
2020-09-25 09:29:18 -04:00
Pam Selle
0a02e7040f
Store sensitive attribute paths in state (#26338)
* Add creation test and simplify in-place test

* Add deletion test

* Start adding marking from state

Start storing paths that should be marked
when pulled out of state. Implements deep
copy for attr paths. This commit also includes some
comment noise from investigations, and fixes the diff test

* Fix apply stripping marks

* Expand diff tests

* Basic apply test

* Update comments on equality checks to clarify current understanding

* Add JSON serialization for sensitive paths

We need to serialize a slice of cty.Path values to be used to re-mark
the sensitive values of a resource instance when loading the state file.
Paths consist of a list of steps, each of which may be either getting an
attribute value by name, or indexing into a collection by string or
number.

To serialize these without building a complex parser for a compact
string form, we render a nested array of small objects, like so:

[
  [
    { type: "get_attr", value: "foo" },
    { type: "index", value: { "type": "number", "value": 2 } }
  ]
]

The above example is equivalent to the path `foo[2]` (a Go sketch of this encoding follows this entry).

* Format diffs with map types

Comparisons need unmarked values to operate on,
so create unmarked values for those operations. Additionally,
change diff to cover map types

* Remove debugging printing

* Fix bug with marking non-sensitive values

When pulling a sensitive value from state,
we were previously using those marks to remark
the planned new value, but that new value
might *not* be sensitive, so let's not do that

* Fix apply test

Apply was not passing the second state
through to the third pass at apply

* Consistency in checking for length of paths vs inspecting into value

* In apply, don't mark with before paths

* AttrPaths test coverage for DeepCopy

* Revert format changes

Reverts format changes in format/diff for this
branch so those changes can be discussed on a separate PR

* Refactor name of AttrPaths to AttrSensitivePaths

* Rename AttributePaths/attributePaths for naming consistency

Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
2020-09-24 12:40:17 -04:00
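A hedged sketch of encoding a cty.Path into the nested-array JSON form described in the commit above; the step struct and helper below are illustrative, not the actual statefile encoder.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	"github.com/zclconf/go-cty/cty"
    )

    // pathStep is an illustrative shape for one serialized step.
    type pathStep struct {
    	Type  string      `json:"type"`
    	Value interface{} `json:"value"`
    }

    // encodePath renders a cty.Path as a list of small step objects, so that
    // foo[2] becomes [{"type":"get_attr",...},{"type":"index",...}].
    func encodePath(p cty.Path) ([]pathStep, error) {
    	steps := make([]pathStep, 0, len(p))
    	for _, s := range p {
    		switch ts := s.(type) {
    		case cty.GetAttrStep:
    			steps = append(steps, pathStep{Type: "get_attr", Value: ts.Name})
    		case cty.IndexStep:
    			keyTy := ts.Key.Type()
    			switch {
    			case keyTy.Equals(cty.String):
    				steps = append(steps, pathStep{Type: "index", Value: map[string]interface{}{"type": "string", "value": ts.Key.AsString()}})
    			case keyTy.Equals(cty.Number):
    				f, _ := ts.Key.AsBigFloat().Float64()
    				steps = append(steps, pathStep{Type: "index", Value: map[string]interface{}{"type": "number", "value": f}})
    			default:
    				return nil, fmt.Errorf("unsupported index key type %s", keyTy.FriendlyName())
    			}
    		}
    	}
    	return steps, nil
    }

    func main() {
    	p := cty.GetAttrPath("foo").Index(cty.NumberIntVal(2))
    	steps, _ := encodePath(p)
    	out, _ := json.Marshal(steps)
    	fmt.Println(string(out))
    }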
James Bardin
a0cee10720 add Addr field for logging 2020-09-24 09:49:22 -04:00
James Bardin
eb17d9799b refresh cbd test 2020-09-24 09:43:48 -04:00
James Bardin
27809871ca update create_before_destroy when refreshing
In order to save any changes to lifecycle options, we need to record
those changes during refresh, otherwise they would only be updated when
there is a change in the resource to be applied.
2020-09-24 09:43:45 -04:00
James Bardin
b16c600edc verify skipRefresh during plan 2020-09-24 09:34:49 -04:00
James Bardin
84f7116ac8 thread skipContext through to the instance node 2020-09-24 09:34:49 -04:00
James Bardin
eebb4dfcb2 add SkipRefresh to the terraform context 2020-09-24 09:34:49 -04:00
James Bardin
def1f9b084 we no longer need EvalRefreshDependencies
This evaluation was required when refresh ran in a separate walk and
managed resources were only partly handled by configuration. Now that we
have the correct dependency information available when refreshing
configured resources, we can update their state accordingly. Since
orphaned resources are not refreshed, they can retain their stored
dependencies for correct ordering.

This also prevents users from introducing cycles with nodes they can't
"see", since only orphaned nodes will retain their stored dependencies,
and the remaining nodes will be updated according to the configuration.
2020-09-23 14:08:52 -04:00
Pam Selle
a720409ded Add test for GetInputVariable, with sensitive config
This adds a test for GetInputVariable, and includes
a variable with a "sensitive" attribute in configuration,
to test that that value is marked as sensitive
2020-09-22 16:35:40 -04:00
James Bardin
906d399189 remove refresh!
Delete all the code associated with the Refresh walk
2020-09-22 10:27:45 -04:00
James Bardin
522df46d91 test output was incorrectly changed
Roll back this part of the change. The incorrect output never passed the
test.
2020-09-22 10:05:52 -04:00
James Bardin
6039622111 Simplify data lifecycle for the no-refresh world
Now that we don't have to handle data sources that may or may not have
been updated during a refresh phase, and the plan phase can save the
data source to the refreshed state, we can remove a lot of the logic
involved in detecting whether the data source needs to be planned or
not.

When there is no separate refresh phase, we always must attempt to read
the data source during planning, and the only conditions are based on
having a known configuration, and not having any dependencies on which
we're waiting. If the data source is read during plan, we can now save
that directly to the refreshed state, and don't need to smuggle the
value as a change to be saved during apply.
2020-09-22 09:55:19 -04:00
James Bardin
921f36a361
Merge pull request #26317 from hashicorp/jbardin/remove-refresh-walk
Replace internal Refresh command with Plan
2020-09-22 09:54:24 -04:00
James Bardin
bc82347a04 fix tests
Update tests to match the new behavior. Some were incorrect, some no
longer make sense, and some just weren't set up to handle the plan API
calls.
2020-09-21 16:17:46 -04:00
James Bardin
4a6dac39a5 Use Plan instead of Refresh
Now that the planning process generates a refreshed state, and can
handle changes between the configuration and the state which the refresh
process cannot, we can use the plan for the refresh command.
2020-09-21 16:17:45 -04:00
James Bardin
8105096ec8 attach dependencies during plan
This was previously done during refresh alone; now we need to insert
these during the refresh portion of plan.
2020-09-21 16:17:45 -04:00
James Bardin
f222ed7479 write updated outputs to the refresh state
If we can evaluate a new output value during plan, write it to the
refreshed state as well.
2020-09-21 16:17:45 -04:00
James Bardin
7951b55f88 remove planned resources from refreshed state 2020-09-21 16:17:45 -04:00
James Bardin
8b31808843 delay data source reads with pending resource ref
Treat any reference from a data source to a managed resource as a
dependency on the entire resource. While a resource's
attribute may be statically resolvable from the configuration, if the
user added a reference to that resource, it stands to reason that the
user intended there to be a dependency which we need to wait on.

This is an extension of an implicit behavior that existed previously in
Terraform, but was lost in the 0.13 release. That behavior was emergent
from the fact that the Refresh walk did not process the configuration
for managed resources, so any new resources in the config would be
evaluated as entirely unknown during Refresh, even if some attributes
were statically resolvable at that point.

This new implementation restores the old behavior, and extends it to
updates and replacements of the referenced resource.
2020-09-18 09:10:45 -04:00
James Bardin
669da06515 saved read data in the refresh state during plan
This only changes the refreshed state stored in the plan file. Since the
change is stored in the plan, the applied result would be the same, but
we should still store the refreshed data in the plan file for tools that
consume the plan file.

This will also be needed in order to implement a new refresh command
based on the plan itself.
2020-09-17 17:12:10 -04:00
Alisdair McDiarmid
e77c367345
Merge pull request #26273 from hashicorp/alisdair/sensitive-variable-plan-tests
Extend sensitive variable plan tests
2020-09-17 12:07:17 -04:00
James Bardin
19d67b7606 don't leave old refreshed state in the context
After apply, any refreshed state from a plan would be invalid. Normal
usage doesn't ever see this, but internal tests may re-use the context.
2020-09-17 09:55:00 -04:00
James Bardin
1fa3503acd fixup last tests that need correct state 2020-09-17 09:54:59 -04:00
James Bardin
d19f440d81 contexts have a copy of the state
We need to build a new context to get at the modified state
2020-09-17 09:54:59 -04:00
James Bardin
a3c9d7abc1 update comments around evaluating 0 instances 2020-09-17 09:54:59 -04:00
James Bardin
ced7aedeca fixup count transition for refresh state
We need to do this for both states during plan
2020-09-17 09:54:59 -04:00
James Bardin
7d6472dad0 use plan state in contextOptsForPlanViaFile 2020-09-17 09:54:59 -04:00
James Bardin
908217acc4 try refreshing during plan as the default
This breaks a bunch of tests, and we need to figure out why before
moving on.
2020-09-17 09:54:59 -04:00
James Bardin
5cf7e237d5 return the refreshed state in the Plan result 2020-09-17 09:54:59 -04:00
James Bardin
7b178b1788 add a way to selectively write to RefreshState
All resources use EvalWriteState to store their state, so we need a way
to switch the states when the resource is refreshing vs when it is
planning. (this will likely change once we factor out the EvalNode pattern)
2020-09-17 09:54:59 -04:00
James Bardin
ad22e137e6 Get the new RefreshState into the right contexts 2020-09-17 09:54:59 -04:00
James Bardin
d6a586709c Add RefreshState to the eval context
Since plan uses the state as a scratch space for evaluation, we need an
entirely separate state to store the refreshed resources values during
planning. Add a RefreshState method to the EvalContext to retrieve a
state used only for refreshing resources.
2020-09-17 09:54:59 -04:00
James Bardin
be757bd416 Refresh instances during plan
This change refreshes the instance state during plan, so a complete
Refresh no longer needs to happen before Plan.
2020-09-17 09:54:59 -04:00
Alisdair McDiarmid
e1a41daf9b terraform: Test sensitive values in module inputs
Passing a sensitive value as a module input variable should preserve its
sensitivity for the plan.
2020-09-16 16:54:04 -04:00
Alisdair McDiarmid
9e340ab85b terraform: Expand sensitive variable plan test
Test that the changes which use the sensitive variable have
corresponding path value marks. Also remove the unrelated validate
function from this test.
2020-09-16 16:48:28 -04:00
Kristin Laemmert
67d441b131
terraform: refactor ProviderEvalTree (#26236)
* remove leftover debug line

* terraform: refactor ProviderEvalTree

This PR refactors the ProviderEvalTree by folding the entire tree into
straight-through code in NodeApplyableProvider Execute (formerly
EvalTree). The EvalConfigProvider functions were refactored into
NodeApplyableProvider functions (since that was the only place they were
used).

I also removed the unused node_provider_disabled code.

* revert removal of graphNodeCloseProvider EvalTree, replace with Execute
2020-09-16 12:17:17 -04:00
James Bardin
696290e481
Merge pull request #26264 from hashicorp/jbardin/evaluate-destroy
allow evaluation of 0 instances during apply
2020-09-16 11:35:33 -04:00
Kristin Laemmert
55d58cb8be
terraform: NodeDestroyResourceInstance refactor (#26246)
* terraform: remove unused eval node

* add UpdateStateHook function to replace EvalUpdateStateHook

* early exit error isn't

* terraform: NodeDestroyResourceInstance refactor

This PR refactors NodeDestroyResourceInstance EvalTree() into an
Execute() node. EvalRequireState and evalWriteEmptyState were used only
by this node, and they have been removed in favor of straight code.

There are still many calls to someEvalNode.Eval() in NodeDestroyResourceInstance: I plan on refactoring the remaining EvalTree()s before tackling those Eval()s (all of which are used by many graph nodes)

I've added a new function, UpdateStateHook, that is effectively the same
as EvalUpdateStateHook. The latter will be removed when the larger
EvalNode refactor project is complete.
2020-09-16 11:33:55 -04:00
Kristin Laemmert
f64d5b237c
terraform: refactor nodeModuleVariable and NodeRootVariable EvalTree()s (#26245)
EvalModuleCallArguments is now a method on nodeModuleVariable, its only
caller, and the other functions have been replaced with straight-through
code (or in the case of evalVariableValidations, a standalone function).

I was unable to add tests for nodeModuleVariable.Execute, which requires
fixtures that aren't part of the MockEvalContext (a scope.evalContext is
one); it's not ideal but that function should be well covered by the
context tests so I chose to leave it as-is.

Finally, I removed the unused function hclTypeName. Deleting code is
fun!
2020-09-16 11:32:48 -04:00
James Bardin
7156649336 allow evaluation of 0 instances during apply
While this was easier to spot during plan, it is possible to evaluate
resources with 0 instances during apply as well.

This doesn't affect the failure when scaling CBD instances, it only
changes the fact that the inconsistent value is no longer unknown.
2020-09-16 11:18:23 -04:00
James Bardin
a6dffa89a3 cleanup unused CBD code
Remove the check for CreateBeforeDestroyOverride which can't happen in a
destroy node.

Remove the unnecessary GraphNodeAttachDestroyer interface, since we
don't use it now that plans can record the create+destroy order.
2020-09-16 11:14:36 -04:00
James Bardin
67fd32db7e improve failing test
Correct the initial test state, and expand the test to cause a cycle
without the previous fix.
2020-09-16 10:38:41 -04:00
James Bardin
2ea921f915 destroy nodes must rely on the state cbd status
When there is only a destroy node, there is no descendant to check for
"forced CBD", so we can only rely on the state to verify.
2020-09-16 09:52:48 -04:00
Kristin Laemmert
f5bce1bd39
terraform: refactor graphNodeImportState and graphNodeImportState (#26243)
EvalTrees

The import EvalTree()s have been replaced with Execute +
straight-through code, which let me remove eval_import_state.go.

graphNodeImportStateSub's Execute() function is still using the
old-style (not-refactored) calls to EvalRefresh and EvalWriteState;
those are being saved for a later refactor once all (or at least most)
of the EvalTree()s are gone.

I've added incredibly basic tests for both Execute() functions; they are
only enough to verify basics - the real testing happens in
context_import_test.go.
2020-09-14 16:53:37 -04:00
James Bardin
84438d377f
Merge pull request #26192 from hashicorp/jbardin/lost-cbd
Use saved plan to determine CreateBeforeDestroy status
2020-09-14 08:46:55 -04:00
Kristin Laemmert
7b510c4add
Mildwonkey/node resource validate (#26206)
* terraform: refactor signature of EvalContext.InitProvisioner

Nothing was using the returned provisioners.Interface, so I simplified
the signature.

* terraform: remove provisioner-related EvalTree()s

The various Evals in eval_provisioner were removed from all callers and
replaced with straight-through code.

Converting `NodeValidatableResource.EvalTree()` to `Execute()` was more involved. I
chose to leave the `EvalValidateResource` and `EvalValidateProvisioner`
Eval functions mostly as-is, changing the main function to `.Validate` to
make it clear that these are no longer eval nodes. Eventually I expect
to rename the file (perhaps to just Validate).
2020-09-14 08:43:14 -04:00
Pam Selle
20ee878d0e Updates and improvements to comments 2020-09-11 11:15:44 -04:00
Pam Selle
4034cf9f75 Add basic plan test coverage
This also unearthed that the marking must happen
earlier in the eval_diff in order to produce a valid plan
(so that the planned marked value matches the marked config
value)
2020-09-10 16:06:37 -04:00
Pam Selle
712f5a5cc3 Update plannedNewVal itself
Using markedPlannedNewVal caused many test
failures with ignoreChanges, and I noted plannedNewVal
itself is modified in the eval_diff. plannedNewVal
is now marked closer to the change where it needs it.
There is also a test fixture update to remove interpolation warnings.
2020-09-10 11:04:17 -04:00
Pam Selle
b03d5df9dc Disallow sensitive values as for_each arguments 2020-09-10 11:04:17 -04:00
Pam Selle
3e8b125e53 Apply does not need remarking
At this moment, apply does not appear
to require the remarking strategy,
as the plan has already been printed
2020-09-10 11:04:17 -04:00
Pam Selle
e9d9205ce8 Modifications to eval_diff 2020-09-10 11:04:17 -04:00
Pam Selle
bc55b6a28b Use UnmarkDeepWithPaths and MarkWithPaths
Updates existing code to use the new Value
methods for unmarking/marking and removes
panics/workarounds in cty marshal methods
2020-09-10 11:04:17 -04:00
Pam Selle
6c129a921b Unmark/remark in apply process to allow apply 2020-09-10 11:04:17 -04:00
Pam Selle
896d277a69 If the path is empty, we should not be marking the path 2020-09-10 11:04:17 -04:00
Pam Selle
84d118e18f Track sensitivity through evaluation
Mark sensitivity on a value. However, when the value is encoded to send to the
provider to produce a changeset we must remove the marks, so we unmark the value
and remark it with the saved paths afterwards
2020-09-10 11:04:17 -04:00
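A minimal sketch of the unmark-then-remark pattern described above, using cty's UnmarkDeepWithPaths and MarkWithPaths; the provider call here is just a placeholder, not a real provider interface.

    package main

    import (
    	"fmt"

    	"github.com/zclconf/go-cty/cty"
    )

    // planWithProvider stands in for a provider call that must never see marks.
    func planWithProvider(config cty.Value) cty.Value {
    	return config // placeholder: a real provider would return a planned value
    }

    func main() {
    	config := cty.ObjectVal(map[string]cty.Value{
    		"name":     cty.StringVal("app"),
    		"password": cty.StringVal("hunter2").Mark("sensitive"),
    	})

    	// Strip marks, remembering where they were, before encoding for the provider.
    	unmarked, pvm := config.UnmarkDeepWithPaths()
    	planned := planWithProvider(unmarked)

    	// Re-apply the saved marks by path to the provider's result.
    	remarked := planned.MarkWithPaths(pvm)
    	fmt.Println(remarked.GetAttr("password").IsMarked()) // true
    }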
Paul Tyng
f3ff843ffd Remove unused env var TF_SKIP_PROVIDER_VERIFY 2020-09-10 09:03:56 -04:00
James Bardin
ec231c7616 apply the stored plan CreateThenDelete action
When applying a plan, a forced CreateBeforeDestroy may not be set during
the apply walk when downstream resources are no longer present in the
graph. We still need to stick to that plan, and both the
NodeApplyableResourceInstance EvalTree and the individual Eval nodes
need to operate on that planned value.

Ensure that we always check for an existing plan when determining
CreateBeforeDestroy status. This must happen in 2 different code paths
due to the eval node pattern currently in-use. Future refactoring may be
able to unify these code-paths to make this less fragile.
2020-09-09 17:02:28 -04:00
James Bardin
7695d1cefe add test for forced cbd with no other changes
If a resource is forced CreateBeforeDestroy from a dependent resource,
and that dependent has no changes, the plan is changed from
CreateThenDelete to DeleteThenCreate causing an apply error.
2020-09-09 16:41:01 -04:00
Kristin Laemmert
1a1225ae29
Mildwonkey/eval local (#26182)
* terraform: refactor EvalLocal, remove unused EvalDeleteLocal
* terraform: refactor NodeCountBoundary
* terraform: node_module_expand refactor
2020-09-09 15:59:29 -04:00
James Bardin
cf6bc7163a not all plan action changes are provider bugs
A provider cannot influence CreateThenDelete vs DeleteThenCreate, so we
shouldn't attribute this to the provider in the error.
2020-09-09 15:45:06 -04:00
James Bardin
b8b6cae8ef
Merge pull request #26186 from hashicorp/jbardin/cbd-module-dep
don't connect module closers to destroy nodes
2020-09-09 14:15:57 -04:00
James Bardin
c9e581e58a don't connect module closers to destroy nodes
One of the tenets of the graph transformations is that resource destroy
nodes can only be ordered relative to other resources, and can't be
referenced directly. This was broken by the module close node which
naively connected to all module nodes, creating cycles in some cases
when edges are reversed from CreateBeforeDestroy.
2020-09-09 12:23:23 -04:00
Kristin Laemmert
069f379e75 terraform: refactor Node*Output
This commit refactors NodeApplyableOutput and NodeDestroyableOutput into
the new Execute() pattern, collapsing the functions in eval_output.go
into one place.

I also reverted a recent decision to have Execute take a _pointer_ to a
walkOperation: I was thinking of interfaces, not constant bytes, so all
it did was cause problems.

And finally I removed eval_lang.go, which was unused.
2020-09-09 08:45:54 -04:00
Kristin Laemmert
8a4b2ab817 terraform: EvalNode removal, continued
This commit continues the overall EvalNode removal project.

Something to note: the NodeRefreshableDataResourceInstance's Execute()
function is intentionally refactored in the bare minimum,
hardly-a-refactor style, because we have another ongoing project which
aims to remove NodeRefreshable*s. It is not worth the effort at this
time. We may revisit this decision in the future.
2020-09-08 13:05:43 -04:00
Kristin Laemmert
df6f3fa6de terraform: modify Execute() signature
walkOperation should be a pointer to simplify test writers' lives - it
is not needed in most cases.
2020-09-08 13:05:43 -04:00
Martin Atkins
efe78b2910 main: new global option -chdir
This new option is intended to address the previous inconsistencies where
some older subcommands supported partially changing the target directory
(where Terraform would use the new directory inconsistently) where newer
commands did not support that override at all.

Instead, Terraform will now accept a -chdir option at the start of the
command line (before the subcommand) and will interpret it as a request
to direct all actions that would normally be taken in the current working
directory into the target directory instead. This is similar to options
offered by some other similar tools, such as the -C option in "make".

The new option is only accepted at the start of the command line (before
the subcommand) as a way to reflect that it is a global option (not
specific to a particular subcommand) and that it takes effect _before_
executing the subcommand. This also means it'll be forced to appear before
any other command-specific arguments that take file paths, which hopefully
communicates that those other arguments are interpreted relative to the
overridden path.

As a measure of pragmatism for existing uses, the path.cwd object in
the Terraform language will continue to return the _original_ working
directory (ignoring -chdir), in case that is important in some exceptional
workflows. The path.root object gives the root module directory, which
will always match the overridden working directory unless the user
simultaneously uses one of the legacy directory override arguments, which
is not a pattern we intend to support in the long run.

As a first step down the deprecation path, this commit adjusts the
documentation to de-emphasize the inconsistent old command line arguments,
including specific guidance on what to use instead for the main three
workflow commands, but all of those options remain supported in the same
way as they were before. In a later commit we'll make those arguments
produce a visible deprecation warning in Terraform's output, and then
in an even later commit we'll remove them entirely so that -chdir is the
single supported way to run Terraform from a directory other than the
one containing the root module configuration.
2020-09-04 15:31:08 -07:00
Kristin Laemmert
883e4487a2
terraform: add GraphNodeExecutable interface (#26132)
This introduces a new GraphNode, GraphNodeExecutable, which will
gradually replace GraphNodeEvalable as part of the overall removal of
EvalTree()s. Terraform's Graph.walk function will now check if a node is
GraphNodeExecutable and run walker.Execute instead of running through
the EvalTree() and Eval().

For the time being, terraform will panic if a node implements both
GraphNodeExecutable and GraphNodeEvalable. This will be removed when
we've finished removing all GraphNodeEvalable implementations.

The new GraphWalker function, Execute(), is meant to replace both
EnterEvalTree and ExitEvalTree, and wraps the call to the
GraphNodeExecutable's Execute function.
2020-09-04 14:03:45 -04:00
Martin Atkins
b0da5b1ce5 core: Remove the last few HIL remnants
We've not been using HIL in the main codepaths since Terraform 0.12, but
some references to it (and some supporting functionality in Terraform)
stuck around due to interactions with types we'd kept around to support
legacy shims.

However, removing the configs.RawConfig field from
terraform.ResourceConfig disconnects that subtree of dependencies from
everything else, allowing us to remove it. This is safe because the only
remaining uses of terraform.ResourceConfig are shims from values that
were already evaluated using the HCL 2 API, and thus they never need
the "just in time" HIL evaluation that ResourceConfig.interpolateForce
used to do.

We also had some HIL references in configs/hcl2shim that were previously
in support of the "terraform 0.12upgrade" command, but the implementation
of that command is now removed.

There was one remaining reference to HIL in a now-unused function in the
helper/schema package, which I removed entirely here.

This then allows us to remove the HIL dependency entirely, and also to
clean up some remaining old remnants of the legacy "config" package that
we'd recently moved into the "configs" package pending further pruning.
2020-09-02 15:53:33 -07:00
Kristin Laemmert
196c183dda
terraform: remove state from validate graph walk (#26063)
This pull reverts a recent change to backend/local which created two contexts, one with and one without state. Instead I have removed the state entirely from the validate graph (by explicitly passing a states.NewState() to the validate graph builder).

This change caused a test failure, which (ty so much for the help) @jbardin discovered was inaccurate all along: the test's call to `Validate()` was actually what was removing the output from state. The new expected test output matches terraform's actual behavior on the command line: if you use -target to destroy a resource, an output that references only that resource is *not* removed from state even though that test would lead you to believe it did.

This includes two tests to cover the expected behavior:

TestPlan_varsUnset has been updated so it will panic if it gets more than one request to input a variable
TestPlan_providerArgumentUnset covers #26035

Fixes #26035, #26027
2020-08-31 15:45:39 -04:00
Pam Selle
f2d213c461
Merge pull request #25657 from pdecat/typo_hierarchical
Typo: heirarchical => hierarchical
2020-08-28 12:37:49 -04:00
Alisdair McDiarmid
8198e9758c
Merge pull request #26028 from hashicorp/alisdair/fix-eval-read-data-panic
terraform: Fix createEmptyBlocks NestingSingle bug
2020-08-28 10:29:28 -04:00
Pam Selle
29af42df81
Merge pull request #25973 from Fabian-Schmidt/provisioner_comment_fix
provisioner comment fix.
2020-08-28 09:46:13 -04:00
Pam Selle
233c7c6793 Run gofmt 2020-08-28 09:41:00 -04:00
Alisdair McDiarmid
5be772d85a terraform: Fix createEmptyBlocks NestingSingle bug
When calling createEmptyBlocks, we only intend to process legacy SDK
blocks (NestingList or NestingSet), but the function also needs to pass
through NestingSingle block types untouched. To do so we must only look
up the container's element type after we've established that the block
is backed by a container.
2020-08-27 20:29:36 -04:00
Kristin Laemmert
23a8bdd522
configs: finish deprecation of the config package by removing the remaining used functions into configs (#25996) 2020-08-26 14:39:18 -04:00
Martin Atkins
b1eec0fbcd core: NodeAbstractResourceInstance.Provider correct implied provider
When we need to select a qualified provider address based on an implied
provider name, we have a special case that the name "terraform" maps to
terraform.io/builtin/terraform instead of
registry.terraform.io/hashicorp/terraform as would be the case for other
prefixes.

However, in order for that to work properly we need to use
addrs.ImpliedProviderForUnqualifiedType instead of
addrs.NewDefaultProvider, because the latter just unconditionally always
produces a "default" provider configuration (belonging to the "hashicorp"
namespace on the public registry).
2020-08-24 11:41:28 -07:00
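An illustrative stand-in (not the real addrs package) for the distinction described above: an implied-provider lookup special-cases the builtin "terraform" provider, while a default lookup always lands in the hashicorp namespace on the public registry.

    package main

    import "fmt"

    // impliedProvider mimics the special case described above: the bare name
    // "terraform" maps to the builtin provider, while anything else defaults to
    // the hashicorp namespace on the public registry. This is a simplified
    // stand-in for addrs.ImpliedProviderForUnqualifiedType, not the real code.
    func impliedProvider(typeName string) string {
    	if typeName == "terraform" {
    		return "terraform.io/builtin/terraform"
    	}
    	return "registry.terraform.io/hashicorp/" + typeName
    }

    func main() {
    	fmt.Println(impliedProvider("terraform")) // terraform.io/builtin/terraform
    	fmt.Println(impliedProvider("aws"))       // registry.terraform.io/hashicorp/aws
    }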
Fabian-Schmidt
0c6790e142 Fix provisioner comment. 2020-08-24 14:05:29 +10:00
James Bardin
058dff44f9
Merge pull request #25922 from hashicorp/jbardin/destroy-pruning
Fix node-pruning during destroy
2020-08-21 16:49:14 -04:00
James Bardin
59b7ae8eb4
Merge pull request #25694 from alrs/alrs/terraform-test-err
terraform: fix dropped test error
2020-08-19 11:37:41 -04:00
Alisdair McDiarmid
30c7dfca62
Merge pull request #25898 from hashicorp/alisdair/fix-required-version-diags
terraform: Fix required version constraint diags
2020-08-19 11:26:03 -04:00
Alisdair McDiarmid
3114e2ad7c
Merge pull request #25890 from hashicorp/import-our-nemesis
terraform: Eval module call arguments for import
2020-08-19 11:25:38 -04:00
James Bardin
7ef4e7f6ad
Merge pull request #25857 from hashicorp/jbardin/data-diffs
allow plan data state comparison with legacy SDK
2020-08-19 11:11:40 -04:00
James Bardin
b68ab92392 more complicated for_each destroy 2020-08-19 11:10:12 -04:00
James Bardin
a6776eaa94 completely prune inter-module dependencies
There was a missing outer loop for catching inverse module dependencies
when pruning nodes for destroy. Since the need to "register" the fully
destroyed modules no longer exists, the extra complication of pruning
the modules as a whole from the leaves inward is no longer required.
While it is technically still a valid optimization to reduce iterations,
the extra comparisons required to backtrack for transitive dependencies
don't amount to much, and having a single nested loop is much easier to
maintain.
2020-08-19 11:10:12 -04:00
Alisdair McDiarmid
c98f352dc8 terraform: Fix required version constraint diags
If a module has multiple terraform.required_version constraints, any
failures would point at the last constraint in the error diagnostics. If
an earlier constraint was the actual problem, this leads to confusing
errors like this:

    Error: Unsupported Terraform Core version

      on main.tf line 6, in terraform:
       6:   required_version = ">= 0.13.0"

    This configuration does not support Terraform version 0.13.0.

The error was due to storing the declaration range of the constraint as
a pointer to the contents of a loop variable, which was
overwritten in later iterations of the loop. Instead we now use HCL's
handy Ptr() method to create a direct pointer to the range struct.
2020-08-18 09:35:32 -04:00
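The underlying Go pitfall in miniature: taking the address of a range variable yields a pointer that later iterations overwrite (under pre-Go 1.22 loop scoping), while a Ptr()-style helper returns a pointer to a fresh copy. This is a simplified sketch, not the actual constraint-checking code.

    package main

    import "fmt"

    // Range is a stand-in for hcl.Range; Ptr mirrors hcl.Range's helper that
    // returns a pointer to a copy of the receiver.
    type Range struct{ Line int }

    func (r Range) Ptr() *Range { return &r }

    func main() {
    	ranges := []Range{{Line: 3}, {Line: 6}, {Line: 9}}

    	var buggy, fixed []*Range
    	for _, r := range ranges {
    		// Before Go 1.22, &r aliases a single loop variable that is
    		// overwritten on every iteration.
    		buggy = append(buggy, &r)
    		// Ptr() returns a pointer to a fresh copy, so each entry is stable.
    		fixed = append(fixed, r.Ptr())
    	}

    	fmt.Println(buggy[0].Line, buggy[1].Line, buggy[2].Line) // 9 9 9 (pre-Go 1.22 scoping)
    	fmt.Println(fixed[0].Line, fixed[1].Line, fixed[2].Line) // 3 6 9
    }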
Alisdair McDiarmid
d8e9964363 terraform: Eval module call arguments for import
Include the import walk in the list of operations for which we create an
EvalModuleCallArgument node. This causes module call arguments to be
evaluated even if the module variables have defaults, ensuring that
invalid default values (such as the common "{}" for variables thought of
as maps) do not cause failures specific to import.

This fixes a bug where a child module evaluates an input variable in its
locals block, assuming that it is a nested object structure. The bug
report includes a default value of "{}", which is overridden by a root
variable value. Without the eval node added in this commit, the default
value is used and the local evaluation errors.
2020-08-17 17:14:12 -04:00
Kristin Laemmert
c9f710ac29
terraform: remove DisableReduce from refresh, plan and apply graphs (#25824) 2020-08-14 14:13:33 -04:00
James Bardin
93246bd978 allow plan data state comparison with legacy SDK
In order to determine if we need to re-read a data source during plan,
we need to compare the newly evaluated configuration with the stored
state. To do that we create a ProposedNewVal, which if there are no
changes, should match the existing state exactly.

A problem arises if the remote data source contains any blocks, and they
are not set in the configuration. Terraform always decodes configuration
blocks as empty containers; however, the legacy SDK cannot correctly
handle empty blocks and may return a null block which is saved to the
state. In order to correctly make the comparison for planning, we need
to reify those null blocks as empty containers in the cty value.

The createEmptyBlocks helper converts any null NestingList or NestingSet
blocks to empty list or set cty values. We only need to be concerned
with List and Set, because those are the only types that can be defined
with the legacy SDK. In hindsight these could have been normalized in
the legacy SDK shims had this problem been uncovered earlier, but for the
sake of compatibility we will now normalize these in core.
2020-08-14 13:36:52 -04:00
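A hedged sketch of the normalization described above: replace null list- or set-typed block values with known empty containers so they compare cleanly against the decoded configuration. This is an illustration, not the actual createEmptyBlocks helper.

    package main

    import (
    	"fmt"

    	"github.com/zclconf/go-cty/cty"
    )

    // reifyNullBlocks returns a known empty container in place of a null list or
    // set value, leaving every other value untouched.
    func reifyNullBlocks(v cty.Value) cty.Value {
    	if !v.IsNull() {
    		return v
    	}
    	ty := v.Type()
    	switch {
    	case ty.IsListType():
    		return cty.ListValEmpty(ty.ElementType())
    	case ty.IsSetType():
    		return cty.SetValEmpty(ty.ElementType())
    	default:
    		return v
    	}
    }

    func main() {
    	blockTy := cty.Object(map[string]cty.Type{"name": cty.String})
    	nullBlocks := cty.NullVal(cty.List(blockTy)) // what a legacy SDK provider may store

    	// The result compares cleanly against the empty container Terraform
    	// decodes for an unset block in configuration.
    	fmt.Println(reifyNullBlocks(nullBlocks).LengthInt()) // 0
    }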
James Bardin
1c09df1a66
Merge pull request #25779 from hashicorp/jbardin/remove-state-attrs
Remove resource state attributes that are no longer in the schema
2020-08-12 10:49:44 -04:00
James Bardin
b9e076ec66 re-add ModuleInstance -> Module conversion
When working with a ConfigResource, the generalization of a
ModuleInstance to a Module was inadvertently dropped, and there was no
test coverage for that type of target.

Ensure we can target a specific module instance alone.
2020-08-12 10:22:13 -04:00
James Bardin
0df5a7e6cf Generalize target addresses before expansion
Before expansion happens, we only have expansion resource nodes that
know their ConfigResource address. In order to properly compare these to
targets within a module instance, we need to generalize the target to
also be a ConfigResource.

We can also remove the IgnoreIndices field from the transformer, since
we have addresses that are properly scoped and can compare them in the
correct context.
2020-08-12 10:12:43 -04:00
James Bardin
998ba6e6e1 remove extra attrs found in state json
While removal of attributes can be handled by providers through the
UpgradeResourceState call, data sources may need to be evaluated before
reading, and they have no upgrade path in the provider protocol.

Strip out extra attributes during state decoding when they are no longer
present in the schema, and there is no schema upgrade pending.
2020-08-06 22:55:36 -04:00
Lars Lehtonen
9499ec4422
terraform: fix dropped test error 2020-07-28 20:11:54 -07:00
James Bardin
da644568a5 return known empty containers during plan
When looking up a resource during plan, we need to return an empty
container type when we're certain there are going to be no instances.
It's now more common to reference resources in a context that needs to
be known during plan (e.g. for_each), and always returning a DynamicVal
here would block plan from succeeding.
2020-07-23 17:37:07 -04:00
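A tiny illustration of why this matters for plan-time features such as for_each: an empty tuple is a known value with length zero, while cty.DynamicVal is wholly unknown.

    package main

    import (
    	"fmt"

    	"github.com/zclconf/go-cty/cty"
    )

    func main() {
    	// A resource known to have zero instances can be represented as an
    	// empty but known container...
    	empty := cty.EmptyTupleVal
    	fmt.Println(empty.IsKnown(), empty.LengthInt()) // true 0

    	// ...whereas a fully unknown placeholder blocks anything that must be
    	// decided at plan time, such as for_each.
    	unknown := cty.DynamicVal
    	fmt.Println(unknown.IsKnown()) // false
    }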
James Bardin
5c31add2fc test data source index reference too 2020-07-23 17:16:32 -04:00
James Bardin
7d3cd5bc43 store planned data source state when deferring
This copies the behavior of resources, so that there is a placeholder
state available for planning.
2020-07-23 17:15:13 -04:00
Patrick Decat
062865735f Typo: heirarchical => hierarchical 2020-07-23 15:09:22 +02:00
James Bardin
5b8e5ec276 destroy provisioner test
Ensure that we have a destroy provisioner test that references self
2020-07-20 15:49:51 -04:00
James Bardin
3223e352ea skip broken test
This is the known case broken by the changes to allow resources pending
destruction to be evaluated from state. When a resource references
another that is create_before_destroy, and that resource is being scaled
in, the first resource will not be updated correctly.
2020-07-20 09:49:47 -04:00
James Bardin
5b8010b5b9 add a fixup transformer to connect destroy refs
Since we have to allow destroy nodes to be evaluated for providers
during a full destroy, this is adding a transformer to connect temporary
values to any destroy versions of their references when possible. This
ensures that the destroy happens before evaluation, even when there
isn't a full create-then-destroy set of instances.

The cases where the connection can't be made are when the temporary
value has a provider descendant, which means it must evaluate early in
the case of a full destroy. This means the value may contain incorrect
data when referencing resources that are create_before_destroy, or being
scaled-in via count or for_each. That will need to be addressed later by
reevaluating how we handle the full destroy case in terraform.
2020-07-20 09:49:47 -04:00
James Bardin
d1dba76132 allow the evaluation of resource being destroyed
During a full destroy, providers may reference resources that are going
to be destroyed as well. We currently cannot change this behavior, so we
need to allow the evaluation and try to prevent it from leaking into as
many other places as possible. Another transformer to try and protect
the values in locals, variables and outputs will be added to enforce
destroy ordering when possible.
2020-07-20 09:49:47 -04:00
James Bardin
6f9d2c51e2 you cannot refer to destroy nodes
Outputs and locals cannot refer to destroy nodes. Since those node
types do not have different ordering for create and destroy operations,
connecting them directly to destroy nodes can cause cycles.
2020-07-20 09:49:47 -04:00
James Bardin
ca8338e343 fix tests after moving incorrect references
The destroy graph builder test requires state in order to be correct,
which it didn't have. The other test hits the edge case where a planned
destroy cannot remove outputs, because the apply phase does not know it
was created from a destroy.
2020-07-20 09:49:47 -04:00
James Bardin
ebe31acc48 track destroy references for data sources too
Since data source destruction is only state removal, and other resources
cannot depend on them creating any physical resources, the destroy
dependencies were not tracked in the state. It turns out that there is a
special case which requires this; running terraform destroy where the
provider depends on a data source. In that case the resources using that
provider need to record their indirect dependence on the data source, so
that they can be deleted before the data source is removed from the
state.
2020-07-20 09:49:47 -04:00
James Bardin
c0dbc95236 test destroy with provider depending on a resource 2020-07-20 09:49:47 -04:00
Martin Atkins
61baceb308 core: Skip edges between resource instances in different module instances
Our reference transformer analyses and our destroy transformer analyses
are built around static (not-yet-expanded) addresses so that they can
correctly handle mixtures of expanded and not-yet-expanded objects in the
same graph.

However, this characteristic also makes them unnecessarily conservative
in their handling of references between resources within different
instances of the same module: we know they can never interact with each
other in practice because the dependencies for all instances of a module
are the same and so one instance cannot possibly depend on another.

As a compromise then, here we introduce a new helper function that can
recognize when a proposed edge is between two resource instances that
belong to different instances of the same module, and thus allow us to
skip actually creating those edges even though our imprecise analyses
believe them to be needed.

As well as significantly reducing the number of edges in situations where
multi-instance resources appear inside multi-instance modules, this also
fixes some potential cycles in situations where a single plan includes
both destroying an instance of a module and creating a new instance of the
same module: the dependencies between the objects in the instance being
destroyed and the objects in the instance being created can, if allowed
to connect, cause Terraform to believe that the create and the destroy
both depend on one another even though there is no need for that to be
true in practice.

This involves a very specialized helper function to encode the situation
where this exception applies. This function has an ugly name to reflect
how specialized it is; it's not intended to be of any use outside of these
three situations in particular.
2020-07-17 08:40:13 -07:00
James Bardin
83632e078f
Merge pull request #25544 from hashicorp/jbardin/resource-state
don't store an entire Resource's state in each ResourceInstance
2020-07-13 13:23:40 -04:00
James Bardin
ee8cc627a0 don't store an entire Resource in each Instance
The AbstractResourceInstance type was storing the entire Resource from
the state, when it only needs the actual instance state. This would
cause resources to consume memory on the order of n^2, where n in the
number of instances of the resource.

Rather than attaching the entire resource state, which includes copying
each individual instance, only attach the ResourceInstance state, and
extract out the provider address from the Resource.
2020-07-10 13:35:13 -04:00
James Bardin
a0567458e2 ensure root module locals and vars are pruned
The pruneUnusedNodes transformer was skipping root level locals and
variables, causing them to be left in the graph during a full destroy.
Use the return value from temporaryValue to indicate if the node is
truly temporary or not, rather than keeping the entire root module.
2020-07-10 09:30:03 -04:00
James Bardin
2555f6f988 remove root output eval nodes from destroy
If we're adding a node to remove a root output from the state, the
output itself does not need to be re-evaluated. The exception for root
outputs caused them to be missed when we refactored resource destruction
to only use the existing state.
2020-07-07 11:10:15 -04:00
James Bardin
b62640d2d5 update output destroy test to reference expander
Have the output reference the expansion of a resource (via the whole
resource object), so that we can be sure we don't attempt to evaluate
that expansion during destroy.
2020-07-07 11:08:14 -04:00
Kristin Laemmert
f3a1f1a263
terraform console: enable use of impure functions (#25442)
* command/console: allow use of impure functions in terraform console
* add tests for Context Eval
2020-07-01 09:43:07 -04:00
Alisdair McDiarmid
df82796550
Merge pull request #25420 from hashicorp/alisdair/fix-import-provider-config-references
terraform: Relax provider config ref constraints
2020-06-29 15:28:10 -04:00
James Bardin
8a152f5649
Merge pull request #25419 from hashicorp/jbardin/cbd-scale-in
don't evaluate destroy instances
2020-06-29 12:58:11 -04:00
Alisdair McDiarmid
ac99a3b916 terraform: Relax provider config ref constraints
When configuring providers, it is normally valid to refer to any value
which is known at apply time. This can include resource instance
attributes, variables, locals, and so on.

The import command has a simpler graph evaluation, which means that
many of these values are unknown. We previously prevented this from
happening by restricting provider configuration references to input
variables (#22862), but this was more restrictive than is necessary.

This commit changes how we verify provider configuration for import.
We no longer inspect the configuration references during graph building,
because this is too early to determine if these values will become known
or not.

Instead, when the provider is configured during evaluation, we
check if the configuration value is wholly known. If not, we fail with a
diagnostic error.

Includes a test case which verifies that providers can now be configured
using locals as well as vars, and an updated test case which verifies
that providers cannot be configured with references to resources.
2020-06-29 10:58:20 -04:00
Kristin Laemmert
45d72b3018
terraform: check for unknowns in for_each type before validating set (#25426)
element types

The error message when evaluateForEachExpression encountered an unknown
value of cty.DynamicPseudoType was not clear:

The given "for_each" argument value is unsuitable: "for_each" supports maps
and sets of strings, but you have provided a set containing type dynamic.

By moving the check for unknowns before the check for set element types,
the following error is returned instead:

"The "for_each" value depends on resource attributes that cannot be
determined until apply (...)"
2020-06-29 09:12:36 -04:00
James Bardin
6243a6307a don't evaluate destroy instances
Orphaned instances that are create_before_destroy will still be in the
state when their references are evaluated. We need to skip instances
that are planned to be destroyed altogether, as they can't be part of an
evaluation.
2020-06-26 18:05:53 -04:00
James Bardin
32d12d9719
Merge pull request #25373 from hashicorp/jbardin/targeting
New target transformer
2020-06-25 20:58:34 -04:00
James Bardin
c96914d624
Merge pull request #25399 from hashicorp/jbardin/destroy-deps
index destroy dependencies by addrs.ConfigResource
2020-06-25 16:04:03 -04:00
James Bardin
9f7b3cc1dc index destroy dependencies by addrs.ConfigResource
When the DestroyEdgeTransformer was updated to handle stored
dependencies the addrs.ConfigResource type did not yet exist. The lookup
map keys in the transformer needed to be updated to remove module
indexes.
2020-06-25 15:28:39 -04:00
Alisdair McDiarmid
779fe37a1c command/login: Require "yes" to confirm
This is for consistency with other commands which use prompts, all of
which require "yes" rather than "y" to confirm.

We also migrate the login command to use UIInput, which now supports
securely asking for passwords or secrets via the speakeasy library.
2020-06-25 11:46:51 -04:00
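A minimal sketch of a prompt that accepts only the full word "yes"; the prompt text is illustrative, and Terraform's own command goes through UIInput rather than reading stdin directly.

    package main

    import (
        "bufio"
        "fmt"
        "io"
        "os"
        "strings"
    )

    // confirm returns true only when the user types exactly "yes";
    // "y" is not accepted.
    func confirm(in io.Reader, out io.Writer) (bool, error) {
        fmt.Fprint(out, "Only 'yes' will be accepted to confirm.\n\nEnter a value: ")
        line, err := bufio.NewReader(in).ReadString('\n')
        if err != nil && err != io.EOF {
            return false, err
        }
        return strings.TrimSpace(line) == "yes", nil
    }

    func main() {
        ok, err := confirm(os.Stdin, os.Stdout)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("confirmed:", ok)
    }
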
James Bardin
f9ff7d1ee8 test for targeting with modules and output 2020-06-24 12:52:29 -04:00
James Bardin
2fa16c24f7 remove unused interfaces
RemovableIfNotTargeted and GraphNodeTargetDownstream are no longer used
by the target transformer.
2020-06-24 10:45:58 -04:00
James Bardin
c99157c35b new targets transformer
This simplifies the initial targeting logic, and removes the complex
algorithm for finding descendants that result in output changes, which
hid bugs that caused failures with modules.

The targeting is handled in 2 phases. First we find all individual
resource nodes that are targeted, then add all their dependencies to the
set of targets. This in essence is all we need for targeting, and is
straightforward to understand.

The next phase is to add any root module outputs that can be solely
derived from the set of targeted resources. There is currently no way to
target outputs themselves, so this is how we can allow these to be
updated as part of a target.

Rather than attempting to backtrack through the graph to find candidate
outputs, requiring each node on the chain to properly advertise if it
could be traversed, then backtracking again to determine if the
candidate is valid (which often got "off course"), we can start directly
from the outputs themselves. The algorithm here is simpler: if all the
root output's resource dependencies are targeted, add that output and
its dependencies to the targeted set.
2020-06-24 10:27:52 -04:00
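A hedged sketch of the two-phase algorithm using plain maps in place of Terraform's graph and address types; all names below are illustrative.

    package main

    import "fmt"

    // expandTargets is phase one: collect the requested resources plus all of
    // their transitive dependencies.
    func expandTargets(requested []string, deps map[string][]string) map[string]bool {
        targeted := map[string]bool{}
        var visit func(n string)
        visit = func(n string) {
            if targeted[n] {
                return
            }
            targeted[n] = true
            for _, d := range deps[n] {
                visit(d)
            }
        }
        for _, r := range requested {
            visit(r)
        }
        return targeted
    }

    // addDerivableOutputs is phase two: include a root output only when every
    // resource it depends on is already in the targeted set.
    func addDerivableOutputs(targeted map[string]bool, outputDeps map[string][]string) {
        for out, deps := range outputDeps {
            include := true
            for _, d := range deps {
                if !targeted[d] {
                    include = false
                    break
                }
            }
            if include {
                targeted[out] = true
            }
        }
    }

    func main() {
        deps := map[string][]string{"aws_instance.web": {"aws_vpc.main"}}
        outputs := map[string][]string{
            "output.web_ip":  {"aws_instance.web"},
            "output.db_addr": {"aws_db_instance.db"},
        }
        targeted := expandTargets([]string{"aws_instance.web"}, deps)
        addDerivableOutputs(targeted, outputs)
        fmt.Println(targeted) // includes output.web_ip but not output.db_addr
    }
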
James Bardin
504b49b1d3 make output destroy nodes a temporaryValue
These never need to be pruned, except in the case of adding output
changes to a targeted graph.
2020-06-24 10:22:10 -04:00
James Bardin
308eb5f47f add CountBoundaryTransformer after targeting
No need to have the extra nodes and edges in the graph when we're
traversing everything for targeting.
2020-06-23 17:22:44 -04:00
Alisdair McDiarmid
9ab9ef6291 command/import: Fix allow-missing-config option
We previously intentionally removed support for the allow-missing-config
option to terraform import, requiring that all imported resources have
matching config. See #24412.

However, the option was not removed from the import command, and it is
widely used. This commit reintroduces support for importing with a
missing configuration by falling back to implying the provider FQN based
on the resource type.
2020-06-23 14:20:50 -04:00
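A rough sketch of that fallback: derive a provider name from the resource type's prefix when no matching configuration exists. Terraform's real rules have additional special cases, so treat this as illustrative only.

    package main

    import (
        "fmt"
        "strings"
    )

    // impliedProviderAddr guesses a default-registry provider source from a
    // resource type by taking the prefix before the first underscore,
    // e.g. "aws_instance" -> "registry.terraform.io/hashicorp/aws".
    func impliedProviderAddr(resourceType string) string {
        name := resourceType
        if i := strings.Index(resourceType, "_"); i > 0 {
            name = resourceType[:i]
        }
        return "registry.terraform.io/hashicorp/" + name
    }

    func main() {
        fmt.Println(impliedProviderAddr("aws_instance"))
        fmt.Println(impliedProviderAddr("google_compute_instance"))
    }
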
James Bardin
f433228906 hide empty plans for misbehaving data resource
If a data source is storing a value that doesn't comply precisely with
the schema, it will now show up as a perpetual diff during plan.

Since we can easily detect if there is no resulting change from the
stored value, rather than presenting a planned read each time, we can
change the plan to a NoOp and log the incongruity as a warning.
2020-06-18 19:21:19 -04:00
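A sketch of downgrading the planned action when the stored value already matches what the read would produce; the action strings and warning text are placeholders.

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    // planDataRead turns a would-be read into a no-op when nothing would
    // change, logging the schema incongruity as a warning instead of
    // presenting a perpetual diff.
    func planDataRead(prior, proposed cty.Value) (action, warning string) {
        if prior.RawEquals(proposed) {
            return "NoOp", "data source value does not conform precisely to its schema, but no change was detected"
        }
        return "Read", ""
    }

    func main() {
        prior := cty.ObjectVal(map[string]cty.Value{"id": cty.StringVal("abc")})
        action, warn := planDataRead(prior, prior)
        fmt.Println(action, "-", warn)
    }
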
James Bardin
27012f7ee1
Merge pull request #25258 from hashicorp/jbardin/module-refs
Whole module references
2020-06-17 10:39:18 -04:00
James Bardin
534c82f36a module and output depends_on validation tests 2020-06-16 13:17:21 -04:00
James Bardin
a26446931b validate depends_on for outputs
If depends_on is allowed for outputs, we should validate that the
expressions are valid. Since outputs are always evaluated, and
validation happens as part of that evaluation, we can check depends_on
during evaluation as well.
2020-06-16 12:40:48 -04:00
James Bardin
bdf5acd627 validate depends_on in module calls
Add depends_on validation to module calls, and accumulate diagnostics
for all calls rather than returning early.
2020-06-16 12:39:50 -04:00
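A small sketch of accumulating validation errors across every module call instead of returning on the first failure; moduleCall and the validate callback are placeholders, not Terraform's configs types.

    package main

    import (
        "fmt"
        "strings"
    )

    type moduleCall struct {
        Name      string
        DependsOn []string
    }

    // validateDependsOnAll collects a diagnostic for every invalid depends_on
    // reference in every call rather than stopping at the first bad one.
    func validateDependsOnAll(calls []moduleCall, validate func(ref string) error) []error {
        var diags []error
        for _, c := range calls {
            for _, ref := range c.DependsOn {
                if err := validate(ref); err != nil {
                    diags = append(diags, fmt.Errorf("module %q: %v", c.Name, err))
                }
            }
        }
        return diags
    }

    func main() {
        calls := []moduleCall{
            {Name: "a", DependsOn: []string{"aws_instance.ok", "not a reference"}},
            {Name: "b", DependsOn: []string{"also not a reference"}},
        }
        validate := func(ref string) error {
            if strings.Contains(ref, " ") {
                return fmt.Errorf("invalid reference %q", ref)
            }
            return nil
        }
        for _, d := range validateDependsOnAll(calls, validate) {
            fmt.Println(d)
        }
    }
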
James Bardin
a8884b18e3 split depends_on validation into its own function
Only resources were validating depends_on. We can use this same block to
ensure all depends_on validation has the same output.
2020-06-16 12:38:05 -04:00
James Bardin
7154c61f0b reduce module instance refs to the module call
There aren't going to be any nodes specifically for module call
instances during plan, so we have to switch the reference subject to the
general module call.
2020-06-15 20:46:53 -04:00
James Bardin
d6ca469124 module variables can't be referenced as a module 2020-06-15 20:46:03 -04:00
James Bardin
02167dcfe4 test whole module reference from module var
This reference isn't being connected properly.
2020-06-15 20:45:23 -04:00
James Bardin
39cf911d38
Merge pull request #25208 from hashicorp/jbardin/expand-import
ensure modules are expanded during import
2020-06-12 12:45:05 -04:00
James Bardin
22680d7409
Merge pull request #25206 from hashicorp/jbardin/target-with-expansion
Targeting with module expansion
2020-06-12 12:44:49 -04:00
James Bardin
c0a5214aec do not look for all descendants from root outputs
The output destroy node only needs to connect to each of the output's
up-edges in order to be connected transitively to all of the output's
dependencies. In large, highly-connected graphs, this may save
considerable time for each output.
2020-06-11 09:53:09 -04:00
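A loose sketch of the shortcut: connect the destroy node only to the output's immediate edges and rely on the graph's transitivity for everything further away, instead of computing the full descendant set. The tiny graph type and edge direction here are simplified stand-ins for Terraform's dag package.

    package main

    import "fmt"

    // graph is a minimal directed graph; edges[from] lists the nodes that
    // "from" has edges to.
    type graph struct {
        edges map[string][]string
    }

    func (g *graph) connect(from, to string) {
        g.edges[from] = append(g.edges[from], to)
    }

    // connectOutputDestroy attaches the destroy node to each node directly
    // adjacent to the output; anything beyond that is already ordered
    // transitively, so no descendant walk is needed.
    func connectOutputDestroy(g *graph, destroyNode, output string) {
        for _, adjacent := range g.edges[output] {
            g.connect(destroyNode, adjacent)
        }
    }

    func main() {
        g := &graph{edges: map[string][]string{
            "output.addr":      {"aws_instance.web"},
            "aws_instance.web": {"aws_vpc.main"},
        }}
        connectOutputDestroy(g, "destroy output.addr", "output.addr")
        fmt.Println(g.edges["destroy output.addr"]) // [aws_instance.web]
    }
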
James Bardin
8f4395a1e9 ensure modules are expanded during import
In order to import into a module, we have to make sure that module has
registered the expansion data.
2020-06-10 17:02:41 -04:00
James Bardin
13c6b83e29 expanded module targeting test 2020-06-10 16:11:05 -04:00
James Bardin
98b323d815 ignore module indices in pre-expansion targeting
The TargetsTransformer ignored resource indices before expansion could
happen, but was not handling module indices. Ensure that we collapse all
pre-expansion addresses to "configuration" addresses, with no module or
resource keys.
2020-06-10 15:39:29 -04:00
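A simplified sketch of collapsing instance addresses to configuration addresses by stripping both module and resource index keys; Terraform does this with its addrs types, and the regexp below is only illustrative.

    package main

    import (
        "fmt"
        "regexp"
    )

    // indexKey matches instance keys such as [0] or ["east"] in an address.
    var indexKey = regexp.MustCompile(`\[[^\]]*\]`)

    // configAddr drops module and resource index keys so that pre-expansion
    // targeting compares configuration addresses only,
    // e.g. module.a[0].aws_instance.b[1] -> module.a.aws_instance.b.
    func configAddr(instanceAddr string) string {
        return indexKey.ReplaceAllString(instanceAddr, "")
    }

    func main() {
        fmt.Println(configAddr(`module.a[0].aws_instance.b[1]`))
        fmt.Println(configAddr(`module.a["east"].aws_instance.b`))
    }
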
James Bardin
a2d8376eeb TransformTargets cannot depend on knowing Destroy
There is no reliable way to know if `destroy` was called from the CLI.
2020-06-10 15:38:35 -04:00
James Bardin
aa7e6f8d86 nodeCloseModule needs to be kept for downstream 2020-06-10 15:37:55 -04:00
James Bardin
7022345b8f Targets was being dropped in data source nodes 2020-06-10 15:36:44 -04:00
James Bardin
198c632e04 incorrect early return during module transformer
The recursive call should only return immediately on error.

The switch statement to find the current path should not use
ReferenceOutside, as we are getting the path for configuration, not for
references. This case would not have been taken currently, since all
GraphNodeReferenceOutside implementations are also GraphNodeModulePath.
2020-06-06 21:45:05 -04:00
James Bardin
242a916a17 variable ModulePath must return configured path
The parent path case is handled by ReferenceOutside.
2020-06-06 21:45:05 -04:00
James Bardin
9722686b62 validation test with multiple nested modules 2020-06-06 21:44:41 -04:00