Commit Graph

2958 Commits

James Bardin
1c09df1a66
Merge pull request #25779 from hashicorp/jbardin/remove-state-attrs
Remove resource state attributes that are no longer in the schema
2020-08-12 10:49:44 -04:00
James Bardin
b9e076ec66 re-add ModuleInstance -> Module conversion
When working with a ConfigResource, the generalization of a
ModuleInstance to a Module was inadvertently dropped, and there was no
test coverage for that type of target.

Ensure we can target a specific module instance alone.
2020-08-12 10:22:13 -04:00
James Bardin
0df5a7e6cf Generalize target addresses before expansion
Before expansion happens, we only have expansion resource nodes that
know their ConfigResource address. In order to properly compare these to
targets within a module instance, we need to generalize the target to
also be a ConfigResource.

We can also remove the IgnoreIndices field from the transformer, since
we have addresses that are properly scoped and can compare them in the
correct context.
2020-08-12 10:12:43 -04:00
James Bardin
998ba6e6e1 remove extra attrs found in state json
While removal of attributes can be handled by providers through the
UpgradeResourceState call, data sources may need to be evaluated before
reading, and they have no upgrade path in the provider protocol.

Strip out extra attributes during state decoding when they are no longer
present in the schema, and there is no schema upgrade pending.
2020-08-06 22:55:36 -04:00
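As an illustration of that decoding step, here is a minimal sketch using generic Go types (not Terraform's actual state structures): drop any top-level attribute that the current schema no longer declares, assuming no schema upgrade is pending.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stripUnknownAttributes is an illustrative helper, not Terraform's code:
// it removes top-level attributes from a decoded state object that no
// longer appear in the schema, assuming no schema upgrade is pending.
func stripUnknownAttributes(state map[string]json.RawMessage, schemaAttrs map[string]struct{}) {
	for name := range state {
		if _, ok := schemaAttrs[name]; !ok {
			delete(state, name)
		}
	}
}

func main() {
	state := map[string]json.RawMessage{
		"id":         json.RawMessage(`"abc123"`),
		"deprecated": json.RawMessage(`"stale"`), // attribute removed from the schema
	}
	schema := map[string]struct{}{"id": {}}
	stripUnknownAttributes(state, schema)
	fmt.Println(len(state)) // 1: only "id" survives decoding
}
```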
James Bardin
da644568a5 return known empty containers during plan
When looking up a resource during plan, we need to return an empty
container type when we're certain there are going to be no instances.
It's now more common to reference resources in a context that needs to
be known during plan (e.g. for_each), and always returning a DynamicVal
here would block the plan from succeeding.
2020-07-23 17:37:07 -04:00
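For context, the difference matters in go-cty terms: an unknown DynamicVal can never satisfy a for_each during plan, while a known empty collection can. A minimal sketch using the real go-cty API (the element type is illustrative):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// An unknown value of dynamic type: a downstream for_each cannot be
	// evaluated during plan.
	unknown := cty.DynamicVal
	fmt.Println(unknown.IsKnown()) // false

	// A known, empty collection with a concrete element type: a downstream
	// for_each over this value can be expanded during plan.
	empty := cty.ListValEmpty(cty.Object(map[string]cty.Type{"id": cty.String}))
	fmt.Println(empty.IsKnown(), empty.LengthInt()) // true 0
}
```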
James Bardin
5c31add2fc test data source index reference too 2020-07-23 17:16:32 -04:00
James Bardin
7d3cd5bc43 store planned data source state when deferring
This copies the behavior of resources, so that there is a placeholder
state available for planning.
2020-07-23 17:15:13 -04:00
James Bardin
5b8e5ec276 destroy provisioner test
Ensure that we have a destroy provisioner test that references self.
2020-07-20 15:49:51 -04:00
James Bardin
3223e352ea skip broken test
This is the known case broken by the changes to allow resources pending
destruction to be evaluated from state. When a resource references
another that is create_before_destroy, and that resource is being scaled
in, the first resource will not be updated correctly.
2020-07-20 09:49:47 -04:00
James Bardin
5b8010b5b9 add a fixup transformer to connect destroy refs
Since we have to allow destroy nodes to be evaluated for providers
during a full destroy, this adds a transformer to connect temporary
values to any destroy versions of their references when possible. This
ensures that the destroy happens before evaluation, even when there
isn't a full create-then-destroy set of instances.

The cases where the connection can't be made are when the temporary
value has a provider descendant, which means it must be evaluated early
in the case of a full destroy. This means the value may contain incorrect
data when referencing resources that are create_before_destroy, or being
scaled in via count or for_each. That will need to be addressed later by
reevaluating how we handle the full destroy case in Terraform.
2020-07-20 09:49:47 -04:00
James Bardin
d1dba76132 allow the evaluation of resources being destroyed
During a full destroy, providers may reference resources that are going
to be destroyed as well. We currently cannot change this behavior, so we
need to allow the evaluation and try to prevent it from leaking into as
many other places as possible. Another transformer to try and protect
the values in locals, variables and outputs will be added to enforce
destroy ordering when possible.
2020-07-20 09:49:47 -04:00
James Bardin
6f9d2c51e2 you cannot refer to destroy nodes
Outputs and locals cannot refer to destroy nodes. Since those node
types do not have different ordering for create and destroy operations,
connecting them directly to destroy nodes can cause cycles.
2020-07-20 09:49:47 -04:00
James Bardin
ca8338e343 fix tests after moving incorrect references
The destroy graph builder test requires state in order to be correct,
which it didn't have. The other test hits the edge case where a planned
destroy cannot remove outputs, because the apply phase does not know the
plan was created from a destroy.
2020-07-20 09:49:47 -04:00
James Bardin
ebe31acc48 track destroy references for data sources too
Since data source destruction is only state removal, and data sources
create no physical resources that other resources could depend on, the
destroy dependencies were not tracked in the state. It turns out that
there is a special case which requires this: running terraform destroy
where the provider depends on a data source. In that case the resources
using that provider need to record their indirect dependence on the data
source, so that they can be deleted before the data source is removed
from the state.
2020-07-20 09:49:47 -04:00
James Bardin
c0dbc95236 test destroy with provider depending on a resource 2020-07-20 09:49:47 -04:00
Martin Atkins
61baceb308 core: Skip edges between resource instances in different module instances
Our reference transformer analyses and our destroy transformer analyses
are built around static (not-yet-expanded) addresses so that they can
correctly handle mixtures of expanded and not-yet-expanded objects in the
same graph.

However, this characteristic also makes them unnecessarily conservative
in their handling of references between resources within different
instances of the same module: we know they can never interact with each
other in practice because the dependencies for all instances of a module
are the same and so one instance cannot possibly depend on another.

As a compromise then, here we introduce a new helper function that can
recognize when a proposed edge is between two resource instances that
belong to different instances of the same module, and thus allow us to
skip actually creating those edges even though our imprecise analyses
believe them to be needed.

As well as significantly reducing the number of edges in situations where
multi-instance resources appear inside multi-instance modules, this also
fixes some potential cycles in situations where a single plan includes
both destroying an instance of a module and creating a new instance of the
same module: the dependencies between the objects in the instance being
destroyed and the objects in the instance being created can, if allowed
to connect, cause Terraform to believe that the create and the destroy
both depend on one another even though there is no need for that to be
true in practice.

This involves a very specialized helper function to encode the situation
where this exception applies. This function has an ugly name to reflect
how specialized it is; it's not intended to be of any use outside of these
three situations in particular.
2020-07-17 08:40:13 -07:00
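A rough sketch of the kind of check described here, using hypothetical address types rather than Terraform's addrs package: an edge can be skipped when both endpoints share the same static module path but different module instance keys.

```go
package main

import (
	"fmt"
	"reflect"
)

// instAddr is an illustrative stand-in for a fully-expanded resource
// instance address: module path names, module instance keys, and the resource.
type instAddr struct {
	ModuleNames []string
	ModuleKeys  []string
	Resource    string
}

// crossInstanceEdge reports whether an edge would connect resources living in
// *different* instances of the *same* module, and can therefore be skipped:
// all instances of a module share the same dependencies, so one instance can
// never depend on another. (Hypothetical helper, not the one in Terraform.)
func crossInstanceEdge(a, b instAddr) bool {
	sameModule := reflect.DeepEqual(a.ModuleNames, b.ModuleNames)
	sameInstance := reflect.DeepEqual(a.ModuleKeys, b.ModuleKeys)
	return sameModule && !sameInstance
}

func main() {
	a := instAddr{ModuleNames: []string{"app"}, ModuleKeys: []string{"eu"}, Resource: "aws_instance.web"}
	b := instAddr{ModuleNames: []string{"app"}, ModuleKeys: []string{"us"}, Resource: "aws_instance.db"}
	fmt.Println(crossInstanceEdge(a, b)) // true: this edge can be skipped
}
```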
James Bardin
83632e078f
Merge pull request #25544 from hashicorp/jbardin/resource-state
don't store an entire Resource's state in each ResourceInstance
2020-07-13 13:23:40 -04:00
James Bardin
ee8cc627a0 don't store an entire Resource in each Instance
The AbstractResourceInstance type was storing the entire Resource from
the state, when it only needs the actual instance state. This would
cause resources to consume memory on the order of n^2, where n is the
number of instances of the resource.

Rather than attaching the entire resource state, which includes copying
each individual instance, only attach the ResourceInstance state, and
extract out the provider address from the Resource.
2020-07-10 13:35:13 -04:00
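To illustrate the memory difference with hypothetical types (not the real AbstractResourceInstance): attaching the whole resource state to every instance node duplicates all n instances n times, while attaching only the single instance plus the provider address keeps the per-node cost constant.

```go
package main

import "fmt"

// Illustrative types only: a resource's state carries every instance, so
// copying it into each of n instance nodes costs O(n^2) memory overall.
type resourceState struct {
	ProviderAddr string
	Instances    map[string]string // instance key -> serialized instance state
}

// Attaching just one instance's state plus the provider address keeps the
// per-node cost constant.
type instanceNode struct {
	ProviderAddr string
	Instance     string
}

func main() {
	rs := resourceState{
		ProviderAddr: `provider["registry.terraform.io/hashicorp/aws"]`,
		Instances:    map[string]string{"0": "...", "1": "...", "2": "..."},
	}
	nodes := make([]instanceNode, 0, len(rs.Instances))
	for _, inst := range rs.Instances {
		nodes = append(nodes, instanceNode{ProviderAddr: rs.ProviderAddr, Instance: inst})
	}
	fmt.Println(len(nodes)) // 3 nodes, each holding one instance, not all three
}
```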
James Bardin
a0567458e2 ensure root module locals and vars are pruned
The pruneUnusedNodes transformer was skipping root level locals and
variables, causing them to be left in the graph during a full destroy.
Use the return value from temporaryValue to indicate if the node is
truly temporary or not, rather than keeping the entire root module.
2020-07-10 09:30:03 -04:00
James Bardin
2555f6f988 remove root output eval nodes from destroy
If we're adding a node to remove a root output from the state, the
output itself does not need to be re-evaluated. The exception for root
outputs caused them to be missed when we refactored resource destruction
to only use the existing state.
2020-07-07 11:10:15 -04:00
James Bardin
b62640d2d5 update output destroy test to reference expander
Have the output reference the expansion of a resource (via the whole
resource object), so that we can be sure we don't attempt to evaluate
that expansion during destroy.
2020-07-07 11:08:14 -04:00
Kristin Laemmert
f3a1f1a263
terraform console: enable use of impure functions (#25442)
* command/console: allow use of impure functions in terraform console
* add tests for Context Eval
2020-07-01 09:43:07 -04:00
Alisdair McDiarmid
df82796550
Merge pull request #25420 from hashicorp/alisdair/fix-import-provider-config-references
terraform: Relax provider config ref constraints
2020-06-29 15:28:10 -04:00
James Bardin
8a152f5649
Merge pull request #25419 from hashicorp/jbardin/cbd-scale-in
don't evaluate destroy instances
2020-06-29 12:58:11 -04:00
Alisdair McDiarmid
ac99a3b916 terraform: Relax provider config ref constraints
When configuring providers, it is normally valid to refer to any value
which is known at apply time. This can include resource instance
attributes, variables, locals, and so on.

The import command has a simpler graph evaluation, which means that
many of these values are unknown. We previously prevented this from
happening by restricting provider configuration references to input
variables (#22862), but this was more restrictive than is necessary.

This commit changes how we verify provider configuration for import.
We no longer inspect the configuration references during graph building,
because this is too early to determine if these values will become known
or not.

Instead, when the provider is configured during evaluation, we
check if the configuration value is wholly known. If not, we fail with a
diagnostic error.

Includes a test case which verifies that providers can now be configured
using locals as well as vars, and an updated test case which verifies
that providers cannot be configured with references to resources.
2020-06-29 10:58:20 -04:00
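A minimal sketch of the kind of wholeness check described here, using go-cty's real IsWhollyKnown method on an assembled provider configuration value (the object shape is illustrative):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// During import, a provider configuration may reference values that the
	// simplified graph never resolves. IsWhollyKnown catches any nested
	// unknowns before the provider is configured.
	config := cty.ObjectVal(map[string]cty.Value{
		"region": cty.StringVal("us-east-1"),
		"token":  cty.UnknownVal(cty.String), // e.g. a resource attribute
	})
	if !config.IsWhollyKnown() {
		fmt.Println("provider configuration depends on values that are not yet known")
	}
}
```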
Kristin Laemmert
45d72b3018
terraform: check for unknowns in for_each type before validating set element types (#25426)

The error message when evaluateForEachExpression encountered an unknown
value of cty.DynamicPseudoType was not clear:

The given "for_each" argument value is unsuitable: "for_each" supports maps
and sets of strings, but you have provided a set containing type dynamic.

By moving the check for unknowns before the check for set element types,
the following error is returned instead:

"The "for_each" value depends on resource attributes that cannot be
determined until apply (...)"
2020-06-29 09:12:36 -04:00
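A sketch of that ordering with go-cty values (the helper and messages are illustrative, not the actual Terraform implementation): test for unknowns first, then inspect set element types.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// checkForEach illustrates the ordering described above: an unknown value is
// reported as "cannot be determined until apply" before we ever look at set
// element types, avoiding the confusing "set containing type dynamic" message.
func checkForEach(v cty.Value) error {
	if !v.IsWhollyKnown() {
		return fmt.Errorf(`the "for_each" value depends on resource attributes that cannot be determined until apply`)
	}
	ty := v.Type()
	if ty.IsSetType() && !ty.ElementType().Equals(cty.String) {
		return fmt.Errorf(`"for_each" supports maps and sets of strings, but you have provided a set containing type %s`, ty.ElementType().FriendlyName())
	}
	return nil
}

func main() {
	unknown := cty.UnknownVal(cty.Set(cty.String))
	fmt.Println(checkForEach(unknown)) // the "unknown until apply" message
}
```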
James Bardin
6243a6307a don't evaluate destroy instances
Orphaned instances that are create_before_destroy will still be in the
state when their references are evaluated. We need to skip instances
that are planned to be destroyed altogether, as they can't be part of an
evaluation.
2020-06-26 18:05:53 -04:00
James Bardin
32d12d9719
Merge pull request #25373 from hashicorp/jbardin/targeting
New target transformer
2020-06-25 20:58:34 -04:00
James Bardin
c96914d624
Merge pull request #25399 from hashicorp/jbardin/destroy-deps
index destroy dependencies by addrs.ConfigResource
2020-06-25 16:04:03 -04:00
James Bardin
9f7b3cc1dc index destroy dependencies by addrs.ConfigResource
When the DestroyEdgeTransformer was updated to handle stored
dependencies, the addrs.ConfigResource type did not yet exist. The lookup
map keys in the transformer needed to be updated to remove module
indexes.
2020-06-25 15:28:39 -04:00
Alisdair McDiarmid
779fe37a1c command/login: Require "yes" to confirm
This is for consistency with other commands which use prompts, all of
which require "yes" rather than "y" to confirm.

We also migrate the login command to use UIInput, which now supports
securely asking for passwords or secrets via the speakeasy library.
2020-06-25 11:46:51 -04:00
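As a rough illustration of that prompt behavior (not the actual command code): require the literal string "yes" to confirm, and use the speakeasy library to read secrets without echoing them to the terminal.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"

	"github.com/bgentry/speakeasy"
)

func main() {
	// Require a full "yes", matching other confirmation prompts, rather
	// than accepting "y".
	fmt.Print("Do you want to proceed? Only 'yes' will be accepted: ")
	answer, _ := bufio.NewReader(os.Stdin).ReadString('\n')
	if strings.TrimSpace(answer) != "yes" {
		fmt.Println("aborted")
		return
	}

	// speakeasy reads a secret without echoing it back.
	token, err := speakeasy.Ask("Token: ")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("received", len(token), "characters")
}
```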
James Bardin
f9ff7d1ee8 test for targeting with modules and output 2020-06-24 12:52:29 -04:00
James Bardin
2fa16c24f7 remove unused interfaces
RemovableIfNotTargeted and GraphNodeTargetDownstream are no longer used
by the target transformer.
2020-06-24 10:45:58 -04:00
James Bardin
c99157c35b new targets transformer
This simplifies the initial targeting logic, and removes the complex
algorithm for finding descendants that result in output changes, which
hid bugs that failed with modules.

The targeting is handled in 2 phases. First we find all individual
resource nodes that are targeted, then add all their dependencies to the
set of targets. This in essence is all we need for targeting, and is
straightforward to understand.

The next phase is to add any root module outputs that can be solely
derived from the set of targeted resources. There is currently no way to
target outputs themselves, so this is how we can allow these to be
updated as part of a target.

Rather than attempting to backtrack through the graph to find candidate
outputs, requiring each node on the chain to properly advertise if it
could be traversed, then backtracking again to determine if the
candidate is valid (which often got "off course"), we can start directly
from the outputs themselves. The algorithm here is simpler: if all the
root output's resource dependencies are targeted, add that output and
its dependencies to the targeted set.
2020-06-24 10:27:52 -04:00
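A compact sketch of the two phases over a toy dependency map (illustrative types, not Terraform's graph): phase one keeps targeted resources and their transitive dependencies; phase two keeps any root output whose resource dependencies are all already in the targeted set.

```go
package main

import "fmt"

// targeted sketches the two-phase targeting described above over a simple
// dependency map (node -> nodes it depends on).
func targeted(deps map[string][]string, targets []string, outputs map[string][]string) map[string]bool {
	keep := map[string]bool{}

	// Phase 1: keep every targeted node plus everything it depends on,
	// transitively.
	var walk func(n string)
	walk = func(n string) {
		if keep[n] {
			return
		}
		keep[n] = true
		for _, d := range deps[n] {
			walk(d)
		}
	}
	for _, t := range targets {
		walk(t)
	}

	// Phase 2: starting from the outputs themselves rather than backtracking
	// through the graph, keep any root output whose resource dependencies are
	// all already targeted.
	for out, resDeps := range outputs {
		all := true
		for _, d := range resDeps {
			if !keep[d] {
				all = false
				break
			}
		}
		if all {
			keep[out] = true
		}
	}
	return keep
}

func main() {
	deps := map[string][]string{"aws_instance.web": {"aws_vpc.main"}}
	outputs := map[string][]string{"output.ip": {"aws_instance.web"}}
	fmt.Println(targeted(deps, []string{"aws_instance.web"}, outputs))
}
```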
James Bardin
504b49b1d3 make output destroy nodes a temporaryValue
These never need to be pruned, except in the case of adding output
changes to a targeted graph.
2020-06-24 10:22:10 -04:00
James Bardin
308eb5f47f add CountBoundaryTransformer after targeting
no need to have the extra nodes and edges in the graph when we're
traversing everything for targeting
2020-06-23 17:22:44 -04:00
Alisdair McDiarmid
9ab9ef6291 command/import: Fix allow-missing-config option
We previously intentionally removed support for the allow-missing-config
option to terraform import, requiring that all imported resources have
matching config. See #24412.

However, the option was not removed from the import command, and it is
widely used. This commit reintroduces support for importing with a
missing configuration by falling back to implying the provider FQN based
on the resource type.
2020-06-23 14:20:50 -04:00
James Bardin
f433228906 hide empty plans for misbehaving data resource
If a data source is storing a value that doesn't comply precisely with
the schema, it will now show up as a perpetual diff during plan.

Since we can easily detect if there is no resulting change from the
stored value, rather than presenting a planned read each time, we can
change the plan to a NoOp and log the incongruity as a warning.
2020-06-18 19:21:19 -04:00
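An illustrative check along those lines with go-cty (not the actual plan logic): if the stored value and the freshly planned value compare equal, the planned read can be downgraded to a no-op and the incongruity logged as a warning.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// If re-reading the data source would produce the same value that is
	// already stored, there is no real change to show; present a no-op and
	// warn about the schema mismatch instead of a perpetual diff.
	prior := cty.ObjectVal(map[string]cty.Value{"id": cty.StringVal("x")})
	planned := cty.ObjectVal(map[string]cty.Value{"id": cty.StringVal("x")})

	if planned.RawEquals(prior) {
		fmt.Println("[WARN] stored value does not match schema exactly, but produces no change; planning NoOp")
	}
}
```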
James Bardin
27012f7ee1
Merge pull request #25258 from hashicorp/jbardin/module-refs
Whole module references
2020-06-17 10:39:18 -04:00
James Bardin
534c82f36a module and output depends_on validation tests 2020-06-16 13:17:21 -04:00
James Bardin
a26446931b validate depends_on for outputs
If depends_on is allowed for outputs, we should validate that the
expressions are valid. Since outputs are always evaluated, and
validation is simply done through that evaluation, we can check the
depends_on expressions during evaluation too.
2020-06-16 12:40:48 -04:00
James Bardin
bdf5acd627 validate depends_on in module calls
Add depends_on validation to module calls, and accumulate diagnostics
for all calls rather than returning early.
2020-06-16 12:39:50 -04:00
James Bardin
a8884b18e3 split depends_on validation into its own function
Only resources were validating depends_on. We can use this same block to
ensure all depends_on validation has the same output.
2020-06-16 12:38:05 -04:00
James Bardin
7154c61f0b reduce module instances refs to the module call
There aren't going to be any nodes specifically for module call
instances during plan, so we have to switch the reference subject to the
general module call.
2020-06-15 20:46:53 -04:00
James Bardin
d6ca469124 module variables can't be referenced as a module 2020-06-15 20:46:03 -04:00
James Bardin
02167dcfe4 test whole module reference from module var
this reference isn't being connected properly
2020-06-15 20:45:23 -04:00
James Bardin
39cf911d38
Merge pull request #25208 from hashicorp/jbardin/expand-import
ensure modules are expanded during import
2020-06-12 12:45:05 -04:00
James Bardin
22680d7409
Merge pull request #25206 from hashicorp/jbardin/target-with-expansion
Targeting with module expansion
2020-06-12 12:44:49 -04:00
James Bardin
c0a5214aec do not look for all descendants from root outputs
The output destroy node only needs to connect to each of the output's
up-edges in order to be connected transitively to all of the output's
dependencies. In large, highly-connected graphs, this may save
considerable time for each output.
2020-06-11 09:53:09 -04:00
James Bardin
8f4395a1e9 ensure modules are expanded during import
In order to import into a module, we have to make sure that module has
registered the expansion data.
2020-06-10 17:02:41 -04:00