Commit Graph

2958 Commits

Author SHA1 Message Date
James Bardin
d65bd64955 incorporate addrs.ConfigResource 2020-03-12 15:58:25 -04:00
James Bardin
482ae66e18 minor cleanup
Remove unused variables, sync.Once, and init in BuiltinEvalContext.
Replace some shim calls.
GraphNodeAttachProvider doesn't need to be a NodeModuleInstance.
2020-03-12 14:54:47 -04:00
Kristin Laemmert
1c78b26012
terraform: provider source test (#24342)
* configs: parse provider source string during module merge

This was the smallest unit of work needed to start writing provider
source tests!

* Update configs/parser_test.go

Co-Authored-By: Alisdair McDiarmid <alisdair@users.noreply.github.com>
2020-03-12 12:00:00 -04:00
James Bardin
4a1ec05092 comment fixes 2020-03-11 14:52:15 -04:00
James Bardin
98cfb51f27 convert /terraform to use new provider config
Change all ModuleInstances in provider types to plain Modules
2020-03-11 11:21:45 -04:00
James Bardin
fae5f9958d remove GraphNodeModuleInstance from Resource types
Remove the requirement for most *Resource types to be a
GraphNodeModuleInstance, ensuring that they never call ctx.Path while
being evaluated. This gets rid of most of the direct need for the Path
method currently implemented by NodeResourceAbstract, leaving the
provider and schema related calls for a subsequent PR.
2020-03-10 20:22:22 -04:00
James Bardin
68b500c5c7 remove abs addrs from NodeAbstractResource
This adds more shimming into that node itself, but allows us to pull it
out of the config transformer, and ensure we can create the resources
correctly from the config. The shimmed address usage can then be raised
out of the abstract resource, into the expanded node types.
2020-03-10 17:25:11 -04:00
James Bardin
245296850b fix reference transformer comments
GraphNodeSubPath/GraphNodeModuleInstance is not required for references
2020-03-10 17:25:11 -04:00
James Bardin
ab9a2935ce implement NodePlannableLocal
Using this in the same manner as NodePlannableOutput, which expands the
local values within modules. All the output and local types are used in
both plan and apply; we may rename these to better reflect their usage
in expansion. That can wait until we are certain that apply won't need
any extra machinery for handling values that aren't stored in the plan.
2020-03-10 17:25:11 -04:00
James Bardin
67e06f4fbe remove more UnkeyedInstanceShim
planning variables and outputs no longer needs module instances
2020-03-10 17:25:11 -04:00
James Bardin
87776913c6 nodeExpandModule doesn't need a Path() method
Unexpanded nodes can't implement GraphNodeModuleInstance (nee
GraphNodeSubPath), because they aren't aware of how they have been
expanded, and may be in multiple distinct paths.

Since that means the EvalContext won't be in the correct path during the
walk, we just have to ensure that we don't use `ctx.Path()` inside Eval.
2020-03-10 17:25:11 -04:00
James Bardin
a104ecb69d GraphNodeExpand is not used 2020-03-10 17:25:11 -04:00
James Bardin
be2629d2f9 GraphNodeSubPath -> GraphNodeModuleInstance 2020-03-10 17:25:11 -04:00
James Bardin
215f60d5cf remove module shims from module expansion nodes 2020-03-10 17:25:11 -04:00
James Bardin
bd9cfca794 rename GraphNodeSubPath -> GraphNodeModuleInstance 2020-03-10 17:25:11 -04:00
James Bardin
b1df763541 remove UnkeyedInstanceShim from ref transformer
Since references are always within the scope of a single module, and we
can connect all module instance outputs for proper ordering, the
existing transformer works directly with only module paths as opposed to
module instances.

TODO: TransformApplyReferences for more precise module instance
targeting?
2020-03-10 17:25:11 -04:00
James Bardin
521bdcc241 implement GraphNodeModulePath
GraphNodeModulePath is similar to GraphNodeSubPath, except that it
returns an addrs.Module rather than an addrs.ModuleInstance. This is
used by the ReferenceTransformer to connect references, when modules may
not yet be expanded.

Because references only exist within the scope of a module, we can
connect everything knowing only the module path. If the reference is to
an expanded module instance output, we can still properly order the
reference because we'll wait for the entire module to complete
evaluation.
2020-03-10 17:25:11 -04:00
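
A sketch of the interface pair this distinction implies (definitions assumed, not copied from the codebase): GraphNodeModulePath identifies a node's static module, while GraphNodeModuleInstance identifies one fully expanded instance path.

    package terraform // sketch only; real definitions may differ

    import "github.com/hashicorp/terraform/addrs"

    // GraphNodeModulePath reports the static module a node belongs to,
    // usable before expansion has decided any instance keys.
    type GraphNodeModulePath interface {
        ModulePath() addrs.Module
    }

    // GraphNodeModuleInstance reports one fully expanded module
    // instance path; only expanded nodes can implement it.
    type GraphNodeModuleInstance interface {
        Path() addrs.ModuleInstance
    }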
James Bardin
cb99dddb4d fix a flapping test involving CreateBeforeDestroy
A typo in the config caused it to disagree with the plan on whether a
resource should be CreateBeforeDestroy, preventing it from being ordered
properly. Add the new CreateBeforeDestroy field to the test fixture
state as well for completeness.
2020-03-10 16:16:50 -04:00
Kristin Laemmert
c7cc0afb80
Mildwonkey/ps schema (#24312)
* add Config to AttachSchemaTransformer for providerFqn lookup
* terraform: refactor ProvidedBy() to return nil when provider is not set
in config or state
2020-03-10 14:43:57 -04:00
James Bardin
654e880bb8
Merge pull request #24084 from hashicorp/jbardin/cbd-instance-state
Add CreateBeforeDestroy to instance state
2020-03-09 13:16:29 -04:00
Kristin Laemmert
6118d22c1f
terraform: refactor ProvidedBy() to return an addrs.ProviderConfig interface (#24295)
* terraform: refactor ProvidedBy() to return an addrs.ProviderConfig
interface

This refactor allows terraform to indicate whether a specific provider
configuration was found for the resource or if it is instead returning
the assumed default.

With that additional information the provider transformer can check if
there is a specific (non-default) provider FQN.
2020-03-06 08:33:44 -05:00
Paddy
e6592dc710
Add support for provider metadata to modules. (#22583)
Implement a new provider_meta block in the terraform block of modules, allowing provider-keyed metadata to be communicated from HCL to provider binaries.

Bundled in this change for minimal protocol version bumping is the addition of markdown support for attribute descriptions and the ability to indicate when an attribute is deprecated, so this information can be shown in the schema dump.

Co-authored-by: Paul Tyng <paul@paultyng.net>
2020-03-05 16:53:24 -08:00
Pam Selle
c249943360
Module Expansion: Part 2 (#24154)
* WIP: dynamic expand

* WIP: add variable and local support

* WIP: outputs

* WIP: Add referencer

* String representation, fixing tests it impacts

* Fixes TestContext2Apply_outputOrphanModule

* Fix TestContext2Apply_plannedDestroyInterpolatedCount

* Update DestroyOutputTransformer and associated types to reflect PlannableOutputs

* Remove comment about locals

* Remove module count enablement

* Removes allowing count for modules, and reverts the test,
while adding a Skip()'d test that works when you re-enable
the config

* update TargetDownstream signature to match master

* remove unnecessary method

Co-authored-by: James Bardin <j.bardin@gmail.com>
2020-02-24 17:42:32 -05:00
James Bardin
bf65b516c0
Merge pull request #24163 from hashicorp/jbardin/destroy-provisioner-keys
Destroy provisioner each.key
2020-02-20 08:41:55 -05:00
James Bardin
745d4e76ec better comments 2020-02-19 16:54:41 -05:00
James Bardin
0d6b5f1559 update some destroy provisioner tests to use for_each 2020-02-19 16:02:40 -05:00
James Bardin
953ada1cf8 destroy provisioner cannot re-evaluate for_each
During destroy, the for expression may be unknown and evaluation will
fail. Destroy provisioners however can only reference the key value,
which is known in the address.
2020-02-19 16:02:40 -05:00
James Bardin
2e489d88f3 update terraform to work with new dag changes
Also removing unnecessary uses of the Set.List
2020-02-19 14:53:19 -05:00
Martin Atkins
86f0b5191c addrs: Stronger validation and normalization of provider namespace/type
The provider FQN is becoming our primary identifier for a provider, so
it's important that we are clear about the equality rules for these
addresses and what characters are valid within them.

We previously had a basic regex permitting ASCII letters and digits for
validation and no normalization at all. We need to do at least case
folding and UTF-8 normalization because these names will appear in file
and directory names in case-insensitive filesystems and in repository
names such as on GitHub.

Since we're already using DNS-style normalization and validation rules
for the hostname part, rather than defining an entirely new set of rules
here we'll just treat the provider namespace and type as if they were
single labels in a DNS name. Aside from some internal consistency, that
also works out nicely because systems like GitHub use organization and
repository names as part of hostnames (e.g. with GitHub Pages) and so
tend to apply comparable constraints themselves.

This introduces the possibility of names containing letters from alphabets
other than the latin alphabet, and for latin letters with diacritics.
That's consistent with our introduction of similar support for identifiers
in the language in Terraform 0.12, and is intended to be more friendly to
Terraform users throughout the world that might prefer to name their
products using a different alphabet. This is also a further justification
for using the DNS normalization rules: modern companies tend to choose
product names that make good domain names, and now such names will be
usable as Terraform provider names too.
2020-02-18 15:42:09 -08:00
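
As a sketch of the DNS-label treatment described above (helper name hypothetical; the real validation lives in the addrs package), case folding and Unicode normalization can be delegated to the same IDNA rules already used for hostnames:

    package addrs // sketch

    import (
        "fmt"

        "golang.org/x/net/idna"
    )

    // normalizeProviderPart treats a provider namespace or type as if
    // it were a single label in a DNS name: case-folded and Unicode-
    // normalized so that visually equivalent names compare equal.
    func normalizeProviderPart(given string) (string, error) {
        normalized, err := idna.Lookup.ToUnicode(given)
        if err != nil {
            return "", fmt.Errorf("invalid provider name %q: %s", given, err)
        }
        return normalized, nil
    }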
Alisdair McDiarmid
205408f6a5
Merge pull request #24124 from hashicorp/alisdair/fix-for-each-on-set-containing-null
terraform: detect null values in for_each sets
2020-02-18 12:57:50 -05:00
Alisdair McDiarmid
1b1a62026c terraform: Add test coverage for eval_for_each 2020-02-18 07:07:24 -05:00
Kristin Laemmert
2a646aba46 comment cleanup: those FIXME comments are load-bearing and must be up to date 2020-02-14 15:41:31 -08:00
Kristin Laemmert
ac56d12c5c terraform: replace addrs.NewLegacyProvider with lookups when the
configs.Module is accessible.

Continuing the work of removing all calls to addrs.NewLegacyProvider,
this commit uses configs.Module.ProviderForLocalConfig wherever the
caller has access to that Module.
2020-02-14 15:41:31 -08:00
Kristin Laemmert
228d881722 terraform: remove no-longer-necessary type strings
EvalContext.InitProvider no longer needs the redundant `typ` string.
terraform.contextComponentFactory was refactored to take an addrs.Provider
instead of a string.
2020-02-14 15:41:31 -08:00
Kristin Laemmert
6e2618d9be terraform: ProviderTransform gets provider fqn from module
Added configs.Module.ProviderForLocalProviderConfig which allows
terraform.ProviderTransformer to get the provider FQN from the module,
instead of assuming NewLegacyProvider.
2020-02-14 15:41:31 -08:00
Martin Atkins
c02a898994 core: InstanceKeyEvalData now aliases instances.RepetitionData
We're not far enough along yet to be able to actually use the
RepetitionData instances provided by the instances package, but having
these types be considered identical will help us to gradually migrate over
as we prepare the rest of Terraform to properly populate the Expander.
2020-02-14 15:20:07 -08:00
Martin Atkins
68b900928d core: Use instances.Expander to handle resource count and for_each
This is a minimal integration of instances.Expander used just for resource
count and for_each, for now just forcing modules to always be singletons
because the rest of Terraform Core isn't ready to deal with expanding
module calls yet.

This doesn't integrate super cleanly yet because we still have some
cleanup work to do in the design of the plan walk, to make it explicit
that the nodes in the plan graph represent static configuration objects
rather than expanded instances, including for modules. To make this work
in the meantime, there is some shimming between addrs.Module and
addrs.ModuleInstance to correct for the discontinuities that result from
the fact that Terraform currently assumes that modules are always
singletons.
2020-02-14 15:20:07 -08:00
Martin Atkins
8ea78dfe7d core: Make an instances.Expander available to every graph walk
This is not used yet, but in future commits will be used as a
"blackboard" to centrally aggregate the information pertaining to
expansion of resources and modules (using "count" or "for_each") to help
ensure consistent treatment of the expansion process during a graph walk.

In practice this only really makes sense for the plan walk, because the
apply walk doesn't do any dynamic expansion.
2020-02-14 15:20:07 -08:00
Martin Atkins
1dece66b10 instances: A package for module/resource repetition
This package aims to encapsulate the module/resource repetition problem
so that Terraform Core's graph node DynamicExpand implementations can be
simpler.

This is also a building block on the path towards module repetition, by
modelling the recursive expansion of modules and their contents. This will
allow the Terraform Core plan graph to have one node per configuration
construct, each of which will DynamicExpand into as many sub-nodes as
necessary to cover all of the recursive module instantiations.

For the moment this is just dead code, because Terraform Core isn't yet
updated to use it.
2020-02-14 15:20:07 -08:00
Alisdair McDiarmid
0ef7d6dea7 terraform: detect null values in for_each sets
Previously, passing `[null, null]` to `for_each` caused a panic. This
commit detects this invalid usage and returns an error instead.

Fixes #24047
2020-02-14 17:20:08 -05:00
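
A sketch of the kind of guard this implies (helper name hypothetical), using cty's element iterator to reject null members before expansion:

    package terraform // sketch

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    // checkForEachSet rejects for_each sets containing null values,
    // which previously caused a panic deeper in expansion.
    func checkForEachSet(v cty.Value) error {
        if v.IsNull() || !v.IsKnown() || !v.Type().IsSetType() {
            return nil // nothing to iterate yet
        }
        for it := v.ElementIterator(); it.Next(); {
            _, ev := it.Element()
            if ev.IsNull() {
                return fmt.Errorf(`the "for_each" set includes a null value`)
            }
        }
        return nil
    }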
Kristin Laemmert
add134298a
addrs: ProviderConfig fixups (#24115)
* fix outdated syntax in comments
* test for non-strings in ParseAbsProviderConfig
* ProviderConfigDefault and ProviderConfigAliased now take Providers
instead of strings
2020-02-14 09:06:50 -05:00
James Bardin
85ebaac8ce update provider types in tests 2020-02-13 21:15:11 -05:00
James Bardin
691bb6b907 use CreateBeforeDestroy from state
If the resource was stored as CreateBeforeDestroy, use the same ordering
regardless.

This reversal will be taken care of more cleanly in state-only destroys,
and with less risk. Don't use this commit as-is.
2020-02-13 21:09:59 -05:00
James Bardin
a44cf03eaa test for CBD instance being removed entirely
Even though this is only the destroy half of CreateBeforeDestroy, the
resource may still require the same destroy order.
2020-02-13 21:04:56 -05:00
James Bardin
d2a9dd3cef don't override CreateBeforeDestroy from diff
If the Diff is only a delete action, we can't override
CreateBeforeDestroy, because it will always be false and prevent the
stored state value from being used.
2020-02-13 21:04:56 -05:00
James Bardin
b4f06c22fe fixup provider types in new tests 2020-02-13 16:05:28 -05:00
James Bardin
099806c128 fixup LocalProviderConfig literal 2020-02-13 15:43:52 -05:00
James Bardin
d4d99be2db remove some destroy special cases
We no longer need special cases for most things during a full destroy,
so remove those from the graph transformations.

The only remaining cases are:
 - remove the root outputs, so destroy ends up with a clean state
 - reverse the target deps when targeting a destroy.
2020-02-13 15:43:52 -05:00
James Bardin
8c5853ee4e remove old references code from abstract resource 2020-02-13 15:43:52 -05:00
James Bardin
ca5b0e6894 no longer need DestroyValueReferenceTransformer
since destroy nodes are no longer connected to values, there's no need
to try and wrangle their edges to prevent cycles during destroy.
2020-02-13 15:43:52 -05:00
James Bardin
a4bc91abeb remove invalid destroy provisioner tests
Remove all the destroy provisioner tests that are testing what is no
longer allowed.

Add missing state dependencies to remaining tests that require it.
2020-02-13 15:43:52 -05:00
James Bardin
9edb719aaa run AttachStateTransformer in destroy plan
The AttachStateTransformer was never run in the destroy plan. This means
that resources without configuration that used a non-default provider
would not be connected to the correct provider for the plan.

The test that was attempting to catch this only worked because the
temporary graph used in the DestroyEdgeTransformer would add the state
and detect some issues.
2020-02-13 15:43:19 -05:00
James Bardin
a0ba481cad add state where it's now needed for tests 2020-02-13 15:42:10 -05:00
James Bardin
4a41af08b8 new deps are more precise 2020-02-13 15:42:10 -05:00
James Bardin
681d197628 fix DestroyEdgeTransformer tests
The tests require working node implementations with real state.
2020-02-13 15:42:10 -05:00
James Bardin
b5517b53ec simplify CBD transformation
Start by removing the DestroyEdge type altogether. This is only used to
detect the natural edge between a resource's create and destroy nodes,
but that's not necessary for any transformations. The custom edge type
also interferes with normal graph manipulations, because you can't
delete an arbitrary edge without knowing the type, so deletion of the
edge based only on the endpoints is often done incorrectly. The dag
package itself does this incorrectly in TransitiveReduction, which
always assumes the BasicEdge type.

Now that inter-resource destroy dependencies are already connected in the
DestroyEdgeTransformer (from the stored deps in state), there's no need
to search out all dependent resources in the CBD transformation, as they
should all be connected. This makes the CBD transformation rule quite
simple: reverse any edges from create nodes.
2020-02-13 15:42:10 -05:00
James Bardin
451190a5e6 remove DestroyEdge
This special edge type is no longer used. While we still have the option
of encoding more meaning into the edges themselves, having one special
edge type used only in one specific case was easily overlooked, as
dag.BasicEdge is assumed in all other cases.
2020-02-13 15:42:10 -05:00
James Bardin
8b5522a090 do not attempt to find more destroy dependencies
The required destroy dependencies are what is stored in the state, and
have already been connected. There is no need to search further, since
new nodes from the config may interfere with the original destroy
ordering.
2020-02-13 15:42:10 -05:00
James Bardin
45f2a61bdb do not connect references from destroy nodes
Destroy nodes only require their own state to evaluate. Do not connect
any of their references in the graph.
2020-02-13 15:42:10 -05:00
Kristin Laemmert
47a16b0937
addrs: embed Provider in AbsProviderConfig instead of Type
a large refactor to addrs.AbsProviderConfig, embedding the addrs.Provider instead of a Type string. I've added and updated tests, added some Legacy functions to support older state formats and shims, and added a normalization step when reading v4 (current) state files (note the added tests under states/statefile/roundtrip, which work with both current and legacy-style AbsProviderConfig strings).

The remaining 'fixme' and 'todo' comments are mostly going to be addressed in a subsequent PR and involve looking up a given local provider config's FQN. This is fine for now, as we are only working with the default assumption.
2020-02-13 15:32:58 -05:00
Kristin Laemmert
80862f3436
command/import: attach references before validating provider (#22862)
There was an order-of-operations bug where the import graph builder was
validating that the provider did not have any resource references before
references were actually being attached. This PR fixes the order of
operations and adds a test (in the command package).

Fixes #22804
2020-02-12 14:00:08 -05:00
Pierre Carles
b956e8ef35
Fix negative parallelism and negative semaphore (#23902)
* Throw an error when parallelism <=0

* Panic in case of negative semaphore
2020-02-12 10:10:52 -05:00
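
Together the two guards might look roughly like this (a sketch; the real Semaphore lives in the terraform package):

    package terraform // sketch

    // Semaphore is a channel-based counting semaphore used to bound
    // parallel graph walks.
    type Semaphore chan struct{}

    // NewSemaphore panics on a non-positive size, so a miscomputed
    // parallelism value fails loudly instead of deadlocking later.
    func NewSemaphore(n int) Semaphore {
        if n <= 0 {
            panic("semaphore with limit <= 0")
        }
        return make(Semaphore, n)
    }

On the CLI side, parallelism <= 0 is reported as a normal error before the walk ever starts.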
Kristin Laemmert
b4f21b6044 terraform: fix issue merging provider version constraints
A bug in ConfigTreeDependencies, where a pointer was being updated
instead of the map value, meant that only the first provider config
version constraint to be processed was being stored. This fixes that
bug, so now the returned moduledeps.Providers can have multiple
version constraints.

The responsibility for resolving provider version selection continues to
lie in the command package's ProviderResolver (under plugins.go).
2020-02-06 11:28:48 -05:00
Kristin Laemmert
c242f9389d terraform: look up provider fqns from
configs.Module.ProviderRequirements

This is close to a no-op - we aren't accepting the provider source
attribute yet, so the only entries in the ProviderRequirements will
already be legacy provider addrs.

This PR also removes the unused `uid` field from ResourceProvider and
ResourceProvisioner. It's unused now and even less likely to be useful
now that we have a specific addrs.Provider type.
2020-02-06 11:28:48 -05:00
Kristin Laemmert
7eed30595a
moduledeps: replace ProviderInstance with addrs.Provider (#24017)
* addrs: add ParseProviderSourceString function to parse fqns from
tfconfig-inspect
* moduledeps: use addrs.Provider instead of ProviderInstance
2020-02-05 09:27:32 -05:00
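
Usage of the new parser might look like this (a sketch; the exact signature and hostname defaulting are assumed):

    package main // sketch

    import (
        "fmt"

        "github.com/hashicorp/terraform/addrs"
    )

    func main() {
        // Parse "hashicorp/random" into a fully-qualified provider
        // address; the public registry hostname is assumed to be the
        // default when omitted.
        addr, diags := addrs.ParseProviderSourceString("hashicorp/random")
        if diags.HasErrors() {
            panic(diags.Err())
        }
        fmt.Println(addr.Namespace, addr.Type) // hashicorp random
    }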
Kristin Laemmert
80ab551867
terraform: use addrs.Provider as map keys for provider schemas (#24002)
This is a stepping-stone PR for the provider source project. In this PR
"legcay-stype" FQNs are created from the provider name string. Future
work involves encoding the FQN directly in the AbsProviderConfig and
removing the calls to addrs.NewLegacyProvider().
2020-02-03 08:18:04 -05:00
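
The shape of that change, sketched with a stand-in schema type: schema lookups keyed by the addrs.Provider value type rather than a bare name string, with legacy names wrapped via addrs.NewLegacyProvider for now.

    package terraform // sketch

    import "github.com/hashicorp/terraform/addrs"

    // ProviderSchema stands in for the real schema struct here.
    type ProviderSchema struct{}

    // schemasByFQN shows the map-key change: value-typed provider FQNs
    // instead of name strings; legacy names are wrapped until real
    // source addresses are decoded from configuration.
    func schemasByFQN(awsSchema *ProviderSchema) map[addrs.Provider]*ProviderSchema {
        schemas := map[addrs.Provider]*ProviderSchema{}
        schemas[addrs.NewLegacyProvider("aws")] = awsSchema
        return schemas
    }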
Martin Atkins
8b511524d6
Initial steps towards AbsProviderConfig/LocalProviderConfig separation (#23978)
* Introduce "Local" terminology for non-absolute provider config addresses

In a future change AbsProviderConfig and LocalProviderConfig are going to
become two entirely distinct types, rather than Abs embedding Local as
written here. This naming change is in preparation for that subsequent
work, which will also include introducing a new "ProviderConfig" type
that is an interface that AbsProviderConfig and LocalProviderConfig both
implement.

This is intended to be largely just a naming change to get started, so
we can deal with all of the messy renaming. However, this did also require
a slight change in modeling where the Resource.DefaultProviderConfig
method has become Resource.DefaultProvider returning a Provider address
directly, because this method doesn't have enough information to construct
a true and accurate LocalProviderConfig -- it would need to refer to the
configuration to know what this module is calling the provider it has
selected.

In order to leave a trail to follow for subsequent work, all of the
changes here are intended to ensure that remaining work will become
obvious via compile-time errors when all of the following changes happen:
- The concept of "legacy" provider addresses is removed from the addrs
  package, including removing addrs.NewLegacyProvider and
  addrs.Provider.LegacyString.
- addrs.AbsProviderConfig stops having addrs.LocalProviderConfig embedded
  in it and has an addrs.Provider and a string alias directly instead.
- The provider-schema-handling parts of Terraform core are updated to
  work with addrs.Provider to identify providers, rather than legacy
  strings.

In particular, there are still several codepaths here making legacy
provider address assumptions (in order to limit the scope of this change)
but I've made sure each one is doing something that relies on at least
one of the above changes not having been made yet.

* addrs: ProviderConfig interface

In a (very) few special situations in the main "terraform" package we need
to make runtime decisions about whether a provider config is absolute
or local.

We currently do that by exploiting the fact that AbsProviderConfig has
LocalProviderConfig nested inside of it and so in the local case we can
just ignore the wrapping AbsProviderConfig and use the embedded value.

In a future change we'll be moving away from that embedding and making
these two types distinct in order to represent that mapping between them
requires consulting a lookup table in the configuration, and so here we
introduce a new interface type ProviderConfig that can represent either
AbsProviderConfig or LocalProviderConfig decided dynamically at runtime.

This also includes the Config.ResolveAbsProviderAddr method that will
eventually be responsible for that local-to-absolute translation, so
that callers with access to the configuration can normalize to an
addrs.AbsProviderConfig given a non-nil addrs.ProviderConfig. That's
currently unused because existing callers are still relying on the
simplistic structural transform, but we'll switch them over in a later
commit.

* rename LocalType to LocalName

Co-authored-by: Kristin Laemmert <mildwonkey@users.noreply.github.com>
2020-01-31 08:23:07 -05:00
Kristin Laemmert
6541775ce4
addrs: roll back change to Type field in ProviderConfig (#23937) 2020-01-28 08:13:30 -05:00
Kristin Laemmert
272cb44d3d
configs: extend module.ProviderRequirements to include the addrs.Provider instead of just version constraints. (#23843)
Renamed file.ProviderRequirements to file.RequiredProviders to match the
name of the block in the configuration. file.RequiredProviders contains
the contents of the file(s); module.ProviderRequirements contains the
parsed and merged provider requirements.

Extended decodeRequiredProvidersBlock to parse the new provider source
syntax (version only, it will ignore any other attributes).

Added some tests; swapped deep.Equal with cmp.Equal in the
terraform/module_dependencies_test.go because deep was not catching
incorrect constraints.
2020-01-13 11:31:47 -05:00
Martin Atkins
ff4ea042c2 config: Allow module authors to specify validation rules for variables
The existing "type" argument allows specifying a type constraint that
allows for some basic validation, but often there are more constraints on
a variable value than just its type.

This new feature (requiring an experiment opt-in for now, while we refine
it) allows specifying arbitrary validation rules for any variable which
can then cause custom error messages to be returned when a caller provides
an inappropriate value.

    variable "example" {
      validation {
        condition = var.example != "nope"
        error_message = "Example value must not be \"nope\"."
      }
    }

The core parts of this are designed to do as little new work as possible
when no validations are specified, and thus the main new checking codepath
here can therefore only run when the experiment is enabled in order to
permit having validations.
2020-01-10 15:23:25 -08:00
James Bardin
4aa8a1cece Add GraphNodeNoProvider to skip adding providers
While the NodeDestroyResource type should not be a
GraphNodeProviderConsumer, we're going to avoid uncovering more hidden
behavior by explicitly skipping provider creation and connections in the
provider transformers.

This should be removed when more in-depth testing can be done during a
major release cycle.
2020-01-10 16:28:44 -05:00
James Bardin
a6cdfad590 NodeDestroyResource needs to be referencable
The change in #23696 removed the NodeAbstractResource methods from the
NodeDestroyResource type, in order to prevent other resource behaviors,
like requesting a provider.

While this node type is not directly referenced, it was implicitly
ordered against the module cleanup by virtue of being a resource node.

Since there's no good entry point to test this ordering at the moment,
2020-01-10 12:52:01 -05:00
James Bardin
8b0888798f
Merge pull request #23717 from hashicorp/jbardin/destroy-plan-values
Always prune unused values
2020-01-07 17:06:20 -05:00
Martin Atkins
7f8e087ce3 core: Don't panic if EvalMaybeResourceDeposedObject has no DeposedKey
This is a "should never happen" case, but we have reports of it actually
happening. In order to try to collect a bit more data about what's going
on here, we're changing what was previously a hard panic into a normal
error message that can include the address of the instance we were working
on and the action we were trying to do to it at the time.

The hope is to narrow down what situations can trigger this in order to
find a reliable reproduction case in order to debug further. This also
means that for those who _do_ encounter this problem in the meantime
Terraform will have a chance to shut down cleanly and therefore be more
likely to be able to recover on a subsequent plan/apply cycle.

Further investigation of this will follow once we see a report or two of
this updated error message.
2020-01-06 10:22:51 -08:00
James Bardin
6bad4e3dc0 add an interp that fails during planned destroy
Chain some values so that we can get an interpolation that fails if a
module input is evaluated during destroy.
2019-12-19 09:09:38 -05:00
James Bardin
aa8a0f063c update 2 tests to match the new output
These 2 tests were matching the old-style state strings, and the only
change is to module outputs which are no longer marshaled.
2019-12-19 09:09:38 -05:00
James Bardin
fe3edb8e46 more aggressively prune unused values
Since a planned destroy can no longer indicate it is a full destroy,
unused values were being left in the apply graph for evaluation. If
these values contain interpolations that can fail (for example, a
zipmap with mismatched list sizes), they will cause the apply to abort.

The PruneUnusedValuesTransformer was previously only run during destroy,
more out of conservatism than for any other particular reason. Adapt it
to always remove unused values from the graph, with the exception being
the root module outputs, which must be retained when we don't have a
clear indication that a full destroy is being executed.
2019-12-19 09:09:38 -05:00
James Bardin
37d2202afe
Merge pull request #23696 from hashicorp/jbardin/orphan-resource-provider
NodeDestroyResource does not need a provider
2019-12-17 09:09:44 -05:00
James Bardin
414cbbe808 NodeDestroyResource does not need a provider
The resource cleanup node does not need a provider. We can't directly
remove the ProvidedBy method, but this node only needs to be eval-able
so we can remove all the NodeAbstractResource methods at once.
2019-12-16 17:55:49 -05:00
James Bardin
a57337327d check resource-level connections block for refs
References from resource-level connection blocks were not returned
from NodeAbstractResource.References, causing the provisioner connection
attributes to sometimes be evaluated too early.
2019-12-12 12:57:23 -05:00
Kristin Laemmert
e3416124cc
addrs: replace "Type string" with "Type Provider" in ProviderConfig
* huge change to weave new addrs.Provider into addrs.ProviderConfig
* terraform: do not include an empty string in the returned Providers /
Provisioners
- Fixed a minor bug where results included an extra empty string
2019-12-06 08:00:18 -05:00
Martin Atkins
30bf83cdeb helper/logging: Bring the LevelFilter into our own codebase
In order to make this work reasonably we can't avoid using some funny
heuristics, which are somewhat reasonable to apply within the context of
Terraform itself but would not be good to add to the general "logutils".

Specifically, this is adding the additional heuristic that lines starting
with spaces are continuation lines and so should inherit the log level
of the most recent non-continuation line.
2019-12-05 15:22:03 -08:00
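
The continuation-line heuristic reduces to something like this (a sketch, not the actual filter code):

    package logging // sketch

    import "strings"

    // levelOf extracts a "[LEVEL]" tag from a line; hypothetical helper.
    func levelOf(line string) string {
        start := strings.IndexByte(line, '[')
        end := strings.IndexByte(line, ']')
        if start >= 0 && end > start {
            return line[start+1 : end]
        }
        return ""
    }

    // effectiveLevel applies the heuristic: lines beginning with
    // whitespace are continuations and inherit the previous level.
    func effectiveLevel(line, lastLevel string) string {
        if len(line) > 0 && (line[0] == ' ' || line[0] == '\t') {
            return lastLevel
        }
        return levelOf(line)
    }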
Pam Selle
2a2201cc74
Merge pull request #23475 from hashicorp/pselle/setMetaCleanup
Cleanup SetResourceInstanceCurrent to be clearer and more reliable
2019-12-04 12:04:12 -05:00
Kristin Laemmert
9891d0354a
providers: use addrs.Provider as map keys for provider.Factory (#23548)
* terraform/context: use new addrs.Provider as map key in provider factories
* added NewLegacyProviderType and LegacyString funcs to make it explicit that these are temporary placeholders

This PR introduces a new concept, provider fully-qualified name (FQN), encapsulated by the `addrs.Provider` struct.
2019-12-04 11:30:20 -05:00
Pam Selle
c5e6d36ace Revert "Confirmed these two tests are fine behavior manually running test case"
This reverts commit e8aa5bfdc9.
2019-12-03 16:52:29 -05:00
Pam Selle
e8aa5bfdc9 Confirmed these two tests are fine behavior manually running test case 2019-12-03 14:27:18 -05:00
Pam Selle
ca9da51516 Cleanup 2019-12-03 14:27:18 -05:00
Pam Selle
d144b83d50 This works, with lots of noise 2019-12-03 14:27:18 -05:00
Pam Selle
8cd4bd545c Delete dead code 2019-12-03 14:27:18 -05:00
James Bardin
bff675e0c3 check for missing Current instance in EachMap
The NoEach and EachList cases both have this check, but it was missing
in EachMap. Refactor the EachList check to remove a level of indentation,
and place the check consistently near the start of the block.
2019-12-03 11:16:42 -05:00
James Bardin
88a806cc32
Merge pull request #23450 from hashicorp/jbarin/err-diags
fix diagnostics handling
2019-11-22 15:02:50 -05:00
James Bardin
5ed7d17265 remove incorrect comment
The CreateBeforeDestroy transformer correctly handles the edge referred
to in the comment, and going forward it will probably be easier to use
the knowledge of this edge for CBD anyway.
2019-11-21 11:35:54 -05:00
James Bardin
8510aa81ca make sure to get a ResourceAddr for destroy refs
addr.Resource is sometimes a resource, except when it's an instance.
Make sure to always get the underlying resource.
2019-11-21 11:35:54 -05:00
James Bardin
c47f100e56 update test states that need dependency info
A number of tests had no state, or incomplete state, for the
transformations they wanted to test. Add states with the correct dependencies for
these tests.
2019-11-21 11:35:54 -05:00
James Bardin
23112e198a fix missing deposed key 2019-11-21 10:31:41 -05:00
James Bardin
6caa5d23e2 fix diagnostics handling
Located all non-test paths where a Diagnostic type was assigned to an
error variable.
2019-11-21 09:14:50 -05:00
James Bardin
1e6f04547f collect all dependencies for create_before_destroy
An earlier change to eliminate the large amount of duplicate edges being
added by the original CreateBeforeDestroy dependency mapper mistakenly
prevented adding edges when there are multiple CBD dependencies.

This updates the algorithm to use a map to collect all possible edges
and de-duplicating them before processing.
2019-11-20 11:44:43 -05:00
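
A sketch of the de-duplication approach (function and input names hypothetical): collect candidate edges in a map keyed by both endpoints, then connect each distinct edge once.

    package terraform // sketch

    import "github.com/hashicorp/terraform/dag"

    // edgeKey identifies a candidate edge by both endpoints; vertices
    // are node pointers, so the struct is a valid map key.
    type edgeKey struct {
        from, to dag.Vertex
    }

    // connectOnce gathers all candidate CBD edges into a map first,
    // so each distinct edge is added to the graph exactly once.
    func connectOnce(g *dag.AcyclicGraph, pairs [][2]dag.Vertex) {
        candidates := map[edgeKey]struct{}{}
        for _, p := range pairs {
            candidates[edgeKey{from: p[0], to: p[1]}] = struct{}{}
        }
        for k := range candidates {
            g.Connect(dag.BasicEdge(k.from, k.to))
        }
    }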
James Bardin
3ad9ae5001 failing test for missing cbd edge 2019-11-20 10:56:28 -05:00
James Bardin
33bef7c42e don't append refreshed state dependencies
EvalRefreshDependencies is used to update resource dependencies when
they don't exist, allowing broken or old states to be updated. While
appending any newly found dependencies is tempting, to keep the largest
set available, changes to the config could conflict with the prior
dependencies, causing cycles.
2019-11-18 09:32:57 -05:00
James Bardin
682083cdd1 Creators cannot depend directly on CBD dest nodes
Since a create node cannot both depend on its destroy node AND be
CreateBeforeDestroy, the same goes for its dependencies. While we do
connect resources with dependency destroy nodes so that updates are
ordered correctly, this ordering does not make sense in the
CreateBeforeDestroy case.

If a resource node is CreateBeforeDestroy, we need to remove any direct
dependencies from it to destroy nodes to prevent cycles. Since we don't
know for certain if a create node is going to be CreateBeforeDestroy at
the time the edge is added to the graph, we add it unconditionally and
prune it out later on. The pruning happens during the CBD transformer,
when the CBD destroy node reverses its own destroy edge. The reason
this works for detecting the original edge is that dependencies of CBD
resources are forced to be CBD themselves. This does have a false
positive in the case where the original node is NOT CBD, but that can be
taken care of later, when we gather enough information in the graph to
prevent the connection in the first place.
2019-11-17 09:56:44 -05:00
James Bardin
71f4526ae5 failing test for cbd cycle 2019-11-17 09:55:32 -05:00
James Bardin
cbd64c0d3c restore the prior tainted status on failed apply 2019-11-08 10:33:27 -05:00
James Bardin
84b5de9ae4 simplify EvalMaybeTainted logic
The EvalMaybeTainted logic was confusing, with deep nesting and unneeded
duplicate fields.
2019-11-08 10:29:01 -05:00
James Bardin
4b04e3fa59 failing tests for destroying tainted resources
If a tainted resource fails to destroy, it loses the tainted status
2019-11-08 10:29:01 -05:00
James Bardin
bee703360c
Merge pull request #23252 from hashicorp/jbardin/abs-state-dependencies
store absolute addresses for resource dependencies in the state
2019-11-08 10:25:32 -05:00
James Bardin
46dbb3dde5 use Dependencies to connect creator and destroyer
The DestroyEdgeTransformer cannot determine ordering from the graph when
the destroyers are from orphaned resources, because there are no
references to resolve. The new stored Dependencies provides what we need
to connect the instances in this case.

We also add the StateDependencies method directly in the
GraphNodeResourceInstance interface, since all instances already
implement this, and we don't need another optional interface to check.

The old code in DestroyEdgeTransformer may no longer be needed in the
long run, but that can be determined separately, since too many of the
tests start with an incomplete state and rely on the Dependencies being
determined from the configuration alone.
2019-11-07 17:49:03 -05:00
James Bardin
10152da478 failing test for update from orphaned instance
Updates resulting from orphaned instances should happen after the
deletion of the instances.
2019-11-07 17:49:03 -05:00
James Bardin
5e16e8eece append dependencies during refresh
Refresh should load any new dependencies found because of configuration
or state changes, but retain any dependencies already in the state.
Orphaned resources would not be in config, but we do not want to lose
the destroy ordering for the later apply.
2019-11-07 17:49:03 -05:00
James Bardin
886af20f07 fixup some test comparisons 2019-11-07 17:49:03 -05:00
James Bardin
42bb4a644c make use of the new state Dependencies
Make use of the new Dependencies field in the instance state.

The inter-instance dependencies will be determined from the complete
reference graph, so that absolute addresses can be stored, rather than
just references within a module. The Dependencies are added to the node
in the same manner as state, i.e. via an "attacher" interface and
transformer.  This is because dependencies are calculated from the graph
itself, and not from the config.
2019-11-07 17:49:03 -05:00
James Bardin
2c3c011f20 change state dependencies to AbsResource addrs
We need to be able to reference all possible dependencies for ordering
when the configuration is no longer present, which means that absolute
addresses must be used. Since this is only to recreate the proper
ordering for instance destruction, only resource addresses need to be
listed rather than individual instance addresses.
2019-10-30 17:25:53 -04:00
James Bardin
f11c836181 failing test with for_each self reference 2019-10-29 12:14:30 -04:00
James Bardin
f766bb8380 prune unused resources from apply
If a resource is only destroying instances, there is no reason to
prepare the state and we can remove the Resource (prepare state) nodes.
They normally have pose no issue, but if the instances are being
destroyed along with their dependencies, the resource node may fail to
evaluate due to the missing dependencies (since destroy happens in the
reverse order).

These failures were previously blocked by there being a cycle when the
destroy nodes were directly attached to the resource nodes.
2019-10-24 12:04:46 -04:00
James Bardin
7108560228 failing test with destroy cycle
after the previous commit the cycle is broken, but we can't evaluate
resource counts that depend on data sources being destroyed.
2019-10-24 12:04:46 -04:00
James Bardin
eb6e7bba2e do not connect destroy and resource nodes
Destroy nodes do not need to be connected to the resource (prepare
state) node when adding them to the graph. Destroy nodes already have a
complete state in the graph (which is being destroyed), any references
will be added in the ReferenceTransformer, and the proper
connection to the create node will be added in the
DestroyEdgeTransformer.

Under normal circumstances this makes no difference, as create and
destroy nodes always have a dependency, so having the prepare state
handled before both only linearizes the operation slightly in the
normal destroy-then-create scenario.

However if there is a dependency on a resource being replaced in another
module, there will be a dependency between the destroy nodes in each
module (to complete the destroy ordering), while the resource node will
depend on the variable->output->resource chain. If both the destroy and
create nodes depend on the resource node, there will be a cycle.
2019-10-24 12:04:46 -04:00
James Bardin
3eb0e74ef9 failing test for module instance replace cycle
Destroy-time references are not correctly or fully inverted when
crossing module boundaries, causing a cycle during apply.
2019-10-24 12:04:46 -04:00
James Bardin
470362b051 fix CBD tests to work on real data
The CBDEdgeTransformer tests worked on fake data structures, with a
synthetic graph, and configs that didn't match. Update them to generate
a more complete graph, with real node implementations, from real
configs.

The output graph is filtered down to instances, and the results still
functionally match the original expected test results, with some minor
additions due to using the real implementation.
2019-10-24 12:01:50 -04:00
James Bardin
8f35984791 Use Descendants rather than UpEdges in CBD
When looking for dependencies to fix when handling
create_before_destroy, we need to look past more than one edge, as
dependencies may appear transitively through outputs and variables. Use
Descendants rather than UpEdges.

We have the full graph to use for the CBD transformation, so there's no
longer any need to create a temporary graph, which may differ from the
original.
2019-10-24 12:01:50 -04:00
James Bardin
7c703b1bbf apply edge transformations after references
We can't correctly resolve the destroy ordering if all references
haven't been assigned to each node.
2019-10-24 12:01:50 -04:00
Pam Selle
90ecd17bd5 Fix err test 2019-10-23 14:04:39 -04:00
Pam Selle
86d69bbe18 Tweak the message mildly 2019-10-23 11:18:10 -04:00
Pam Selle
13ff341447 Add a special error for each.value 2019-10-22 15:44:28 -04:00
Radek Simko
7860f55e4f
Version tools per Go convention under tools.go 2019-10-17 22:23:39 +02:00
Martin Atkins
e21f0fa61e backend/local: Handle interactive prompts for variables in UI layer
During the 0.12 work we intended to move all of the variable value
collection logic into the UI layer (command package and backend packages)
and present them all together as a unified data structure to Terraform
Core. However, we didn't quite succeed because the interactive prompts
for unset required variables were still being handled _after_ calling
into Terraform Core.

Here we complete that earlier work by moving the interactive prompts for
variables out into the UI layer too, thus allowing us to handle final
validation of the variables all together in one place and do so in the UI
layer where we have the most context still available about where all of
these values are coming from.

This allows us to fix a problem where previously disabling input with
-input=false on the command line could cause Terraform Core to receive an
incomplete set of variable values, and fail with a bad error message.

As a consequence of this refactoring, the scope of terraform.Context.Input
is now reduced to only gathering provider configuration arguments. Ideally
that too would move into the UI layer somehow in a future commit, but
that's a problem for another day.
2019-10-10 10:07:01 -07:00
Martin Atkins
564b57b1f6 core: Require variables correctly set during NewContext
We previously deferred this to Validate, but all of our operations require
a valid set of variables and so checking this up front makes it more
obvious when a call into Terraform Core from the CLI layer isn't
populating variables correctly, instead of having it fail downstream
somewhere.

It's the caller's responsibility to ensure that it has collected values
for all of the variables in some way before calling NewContext, because
in the main case driven by the CLI there are many different places that
variable values can be collected from and so handling the main user-facing
validation in the CLI allows us to return better error messages that take
into account which way a variable is (or is not) being set.
2019-10-10 10:07:01 -07:00
James Bardin
1ee851f256
Merge pull request #22846 from hashicorp/jbardin/evaluate-resource
Always evaluate resources in their entirety
2019-10-08 07:57:15 -04:00
Pam Selle
43c66c2048
Merge pull request #22811 from notchairmk/b-apply-node-removed-targeted
bug: fix terraform trying to clean up orphan modules on target
2019-10-07 16:04:50 -04:00
Martin Atkins
39e609d5fd vendor: switch to HCL 2.0 in the HCL repository
Previously we were using the experimental HCL 2 repository, but now we'll
shift over to the v2 import path within the main HCL repository as part of
actually releasing HCL 2.0 as stable.

This is a mechanical search/replace to the new import paths. It also
switches to the v2.0.0 release of HCL, which includes some new code that
Terraform didn't previously have but should not change any behavior that
matters for Terraform's purposes.

For the moment the experimental HCL2 repository is still an indirect
dependency via terraform-config-inspect, so it remains in our go.sum and
vendor directories for the moment. Because terraform-config-inspect uses
a much smaller subset of the HCL2 functionality, this does still manage
to prune the vendor directory a little. A subsequent release of
terraform-config-inspect should allow us to completely remove that old
repository in a future commit.
2019-10-02 15:10:21 -07:00
James Bardin
aacfaa4fd7 ensure data sources are always written to state
The old logic for `depends_on` was to short-circuit evaluation of the
data source, but that prevented a plan and state from being recorded.
Use the (currently unused) ForcePlanRead to ensure that the plan is
recorded when the config contains `depends_on`.

This does not fix the fact that depends_on does not work with data
sources, and will still produce a perpetual diff. This is only to fix
evaluation errors when an indexed data source is evaluated during
refresh.
2019-09-24 17:47:01 -04:00
Kristin Laemmert
54661ec1df
command/import: fix error during import when implied provider was not used (#22855)
* command/import: properly use `-provider` supplied on the command line

The import command now attaches the provider configuration in the resource
instance, if set. That config is attached to the NodeAbstractResource
during the import graph building. This prevents errors when the implied
provider is not actually in the configuration at all, which may happen
when a configuration is using the `-beta` version of a provider (and
only that `-beta` version).

* command/import: fix variable reassignment and update docs

Fixes #22564
2019-09-20 10:02:42 -04:00
James Bardin
fed4e82c25 check resource config for unknowns during apply
Now that the most common cause of unknowns (invalid resource indexes) is
caught earlier, we can validate that the final apply config is wholly
known before attempting to apply it. This ensures that we're applying
the configuration we intend, and not silently dropping values.
2019-09-19 11:46:09 -04:00
James Bardin
da8ec6b483 change state eval GetResource to return all
Always return the entire resource object from
evaluationStateData.GetResource, rather than parsing the references for
individual instances. This allows for later evaluation of resource
indexes so we can return errors when they don't exist, and prevent
errors when short-circuiting around invalid indexes in conditionals.
2019-09-19 09:23:12 -04:00
James Bardin
86bf674246 change GetResourceInstance to GetResource
In order to allow lazy evaluation of resource indexes, we can't index
resources immediately via GetResourceInstance. Change the evaluation to
always return whole Resources via GetResource, and index individual
instances during expression evaluation.

This will allow us to always check for invalid index errors rather than
returning an unknown value and ignoring it during apply.
2019-09-19 09:19:14 -04:00
Martin Atkins
7e29b9b5d4 core: Warn when creating and applying with -target
The documentation for the -target option warns that it's intended for
exceptional circumstances only and not for routine use, but that's not a
very prominent location for that warning and so some users miss it.

Here we make the warning more prominent by including it directly in the
Terraform output when -target is in use. We first warn during planning
that the plan might be incomplete, and then warn again after apply
concludes and direct the user to run "terraform plan" to make sure that
there are no further changes outstanding. The latter message is intended
to reinforce that -target should only be a one-off operation and that you
should always run without it soon after to ensure that the workspace is
left in a consistent, converged state.
2019-09-17 14:36:05 -07:00
Taylor Chaparro
cfcc878c2d NodeModuleRemoved implements RemovableIfNotTargeted 2019-09-16 08:47:22 -07:00
Pam Selle
80dec63917
Merge pull request #22691 from pselle/customizediff
Evaluate ignore changes before sending plan to CustomizeDiff
2019-09-11 14:45:21 -04:00
Pam Selle
4f198fc797
Merge pull request #22760 from pselle/add-source-to-diag-for_each
Add source addressing to make error more useful
2019-09-11 09:36:24 -04:00
Pam Selle
3be3e3c089 Add source addressing to make error more useful 2019-09-10 14:56:11 -04:00
Pam Selle
a25abbe7d8 Cover empty sets 2019-09-10 10:57:35 -04:00
Pam Selle
c8dd5d3923 add a comment 2019-09-09 09:35:11 -04:00
Pam Selle
de531fd2c9 Remove dead code 2019-09-09 09:35:11 -04:00
Pam Selle
bafd8ced7c Put back in old spot 2019-09-09 09:35:11 -04:00
Pam Selle
96d84da032 Do ignore changes processing again after plan, in case strange things are done during plan that should not be 2019-09-09 09:35:10 -04:00
Pam Selle
2e5a8c0f6e Update when ignore_changes are evaluated, to impact customizediff 2019-09-09 09:35:10 -04:00
James Bardin
102c575f04 don't send an invalid config to a provisioner
The config diagnostics weren't checked before sending to the
provisioner, and may have contained invalid values.
2019-09-06 17:42:53 -04:00
James Bardin
a94f5ee132 prevent panics when encountering nil diffs
While we can't change the behavior of helper/schema at this point, we
can protect against panics in the case of unexpected nils in the
instance diff.
2019-09-04 16:51:42 -04:00
Pam Selle
37e8147c4f Spiffy comment 2019-08-28 15:13:36 -04:00
Pam Selle
35016a5ea3 Move things around, add test for resource references 2019-08-28 14:02:11 -04:00
Pam Selle
0f3d8b4884 More explicit err testing 2019-08-26 15:30:42 -04:00
Pam Selle
cce35e4a36 update name 2019-08-26 15:27:07 -04:00
Pam Selle
a4d2bf45fc Update tests, plan test is able to reproduce panic without fix 2019-08-26 15:25:03 -04:00
Pam Selle
0eb7cfd0d9 Check for wholly known for forEach evaluation, add some tests 2019-08-26 15:16:25 -04:00
James Bardin
b1025a9d29 update tests to reflect correct provisioners
We no longer create new provisioners for every module.
2019-08-21 19:41:56 -04:00
James Bardin
2b4695eecb only create one provisioner instance per type
There's no reason to start individual provisioners per module path, as
they are not configured per module (or independently at all for that
matter).
2019-08-21 19:41:56 -04:00
Pam Selle
901ec990ed
Merge pull request #22156 from binlab/feature/bastion-ca-ssh
Add SSH certificate authentication method for connection via Bastion
2019-08-15 16:01:54 -04:00
Pam Selle
e2931763bf
Merge pull request #22454 from pselle/nokeymoregraph
Add edge when calculating for_each orphans
2019-08-13 17:35:20 -04:00
Pam Selle
e6817f6319 Reordering, comment update 2019-08-13 17:22:14 -04:00
Pam Selle
06e72c693f Add test 2019-08-13 16:31:07 -04:00
Pam Selle
6fae69f07b Creating the node would be nice 2019-08-13 10:38:52 -04:00
Pam Selle
b9fa724273 Add edge 2019-08-13 10:24:26 -04:00
Pam Selle
3662dbc03d Ensure no key added to graph first 2019-08-13 10:10:36 -04:00
Radek Simko
98a796c3a3
Merge pull request #22272 from hashicorp/f-httpclient-ua
httpclient: Introduce composable UserAgent()
2019-08-12 20:20:03 +01:00
Alex Pilon
07d6289701
add comment noting deprecated type 2019-08-08 11:14:26 -04:00
Alex Pilon
77757d9f5b
prune references to config/module
delete config/module
prune references to config except in terraform/resource.go
move, cleanup, and delete inert code
2019-08-07 17:50:59 -04:00
Alex Pilon
4bf43efcfd
move hcl2shim package to configs 2019-08-06 19:58:58 -04:00
Alex Pilon
2a01704208
restore legacy functions for back compat 2019-08-06 15:22:28 -04:00
Alex Pilon
732211617e
fix typo 2019-08-05 22:08:04 -04:00
Alex Pilon
83aa07f907
prune NewResourceConfig and update tests 2019-08-05 22:08:03 -04:00
Radek Simko
34d90d4be0
httpclient: Introduce composable UserAgent()
This interface is meant to replace the following ones (in use by some providers):

 - httpclient.UserAgentString() (e.g. AzureRM, Google)
 - terraform.UserAgentString (e.g. OpenStack, ProfitBricks)
 - terraform.VersionString (e.g. AWS, AzureStack, DigitalOcean, Kubernetes)

This also proposes the initial UA string to be set to

    HashiCorp Terraform/X.Y.Z (+https://www.terraform.io)
2019-08-05 11:07:21 +01:00
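
A sketch of a composable builder producing that string (the real httpclient API may differ in name and signature):

    package httpclient // sketch

    import "fmt"

    // UserAgentString builds the proposed product token; callers can
    // append their own products after it, keeping the string composable.
    func UserAgentString(version string) string {
        return fmt.Sprintf("HashiCorp Terraform/%s (+https://www.terraform.io)", version)
    }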
Pam Selle
c6692c108b
Merge pull request #22291 from hashicorp/pselle/fix-foreach-provisioners
Ensure each references evaluated in provisioners block
2019-08-01 14:24:38 -04:00
Pam Selle
2015dd293f Fix #22289 2019-08-01 11:04:48 -04:00
tmatias
c20c40c9aa
diagnose tuple values being passed as argument to for_each 2019-08-01 11:33:46 -03:00
tmatias
e825dd0428
make validation on for_each argument more precise 2019-07-31 19:29:50 -03:00
Thayne McCombs
b9af3cd86b Support using self in the provisioner of resources that use for_each 2019-07-29 01:18:33 -06:00
Pam Selle
e7d8ac5ad7 Remove panic, update comment 2019-07-26 11:22:10 -04:00
Thayne McCombs
7c678d104f Add support for for_each for data blocks.
This also fixes a few things with resource for_each:

It makes validation more like validation for count.

It makes sure the index is stored in the state properly.
2019-07-25 16:59:06 -04:00
Pam Selle
7d905f6777 Resource for_each 2019-07-22 10:51:16 -04:00
Mark
3031aca971 Add SSH cert authentication method for connection via Bastion 2019-07-21 09:32:48 +03:00
Alex Pilon
c2bc88fc23
prune ResourceProviderFullName and its callers 2019-07-18 15:24:34 -04:00
Alex Pilon
e3bc1e7d5c
move VarEnvPrefix out of terraform pkg 2019-07-18 14:19:39 -04:00
Alex Pilon
7f8f198719
remove UnknownVariabeValue from config and update references to shim 2019-07-17 22:41:24 -04:00
James Bardin
5ac920223f
Merge pull request #22001 from hashicorp/jbardin/update-cty
Update cty and use Path.Equals in depends_on comparisons
2019-07-11 08:49:10 -04:00
James Bardin
a0338df4d4 update ignore_changes to use cty.Path.Equals
Remove reflect.DeepEqual from path comparisons to get reliable results.

The equality issues were only noticed going through the grpc interface, so add a
corresponding test to the test provider.
2019-07-10 14:49:37 -04:00
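
The substance of the change, sketched: structural path equality comes from cty itself rather than reflect.DeepEqual, which is unreliable for paths containing cty.Value index steps.

    package terraform // sketch

    import "github.com/zclconf/go-cty/cty"

    // pathsContain reports whether p matches any of the configured
    // ignore_changes paths, using cty's own equality rules.
    func pathsContain(paths []cty.Path, p cty.Path) bool {
        for _, candidate := range paths {
            if candidate.Equals(p) {
                return true
            }
        }
        return false
    }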
Bodo Petermann
a22891deba Fixes crash #21896
Fix for a crash during terraform plan: If there is a multi-instance
resource (count > 1) where one of the instances was deleted in the
deployment but was still present in the terraform state,
getResourceInstancesAll crashed.

Check not only for rs.Instances[key] to exist, but also to have a
valid Current pointer.
2019-07-04 17:11:19 +02:00
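
The described guard, sketched with stand-in types (the real ones live in the states package):

    package states // sketch

    type instance struct{ Current *struct{} }
    type resource struct{ Instances map[int]*instance }

    // hasCurrent checks both conditions the fix requires: the map
    // entry exists and its Current pointer is non-nil.
    func hasCurrent(rs *resource, key int) bool {
        is, ok := rs.Instances[key]
        return ok && is != nil && is.Current != nil
    }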
Radek Simko
5b9f2fafc8 Standardise directory name for test data 2019-06-30 10:16:15 +02:00
Martin Atkins
1bba574fe9 website: Document ignore_changes for individual map elements
This also includes a previously-missing test that verifies the behavior
described here, implemented as a planning context test for consistency
with how the other ignore_changes tests are handled.
2019-06-18 17:37:24 -07:00
James Bardin
d33c5163a7
Merge pull request #21555 from hashicorp/jbardin/re-validate
Allow providers to re-validate the final resource config
2019-06-07 16:43:38 -04:00
tinocode
51a4055198 core: Fix panic on invalid depends_on root. (#21589)
Report an invalid reference to the user instead of crashing when there is in error traversing the `depends_on` attribute.

Fixes #21590
2019-06-06 08:43:22 -04:00
James Bardin
49fee6ba78 don't lose private data during destroy
Make sure private data is maintained all the way to destroy. This
slipped through, since private data isn't used much for current
providers, except for timeouts.
2019-06-05 19:22:46 -04:00
James Bardin
dcab82e897 send and receive Private through ReadResource
Send Private data blob through ReadResource as well. This will allow for
extra flexibility for future providers that may want to pass data out of
band through to their resource Read functions.
2019-06-03 18:08:26 -04:00
James Bardin
2b200dd9bb re-validate data source config during read
Just like resources, early data source validation will contain many
unknown values. Re-validate once we have the final config.
2019-06-01 11:40:54 -04:00
James Bardin
c41eb0e6e4 re-validate config during Plan
The config is statically validated early on for structural issues, but
the provider can't validate any inputs that were unknown at the time.
Run ValidateResourceTypeConfig during Plan, so that the provider can
validate the final config values, including those interpolated from
other resources.
2019-06-01 11:09:48 -04:00
Kristin Laemmert
9869fbc5c4
core: don't panic in NodeAbstractResourceInstance References() (#21445)
* core: don't panic in NodeAbstractResourceInstance References()

It is possible for s.Current to be nil. This was hard to reproduce, so
the root cause is still unknown, but we can guard against the symptom.

* add log statement
2019-05-28 17:27:38 -04:00
Martin Atkins
bec4641867 core: Don't panic if NodeApplyableResourceInstance has no config
This is a "should never happen" case, because we shouldn't ever have
resources in the plan that aren't in the configuration, but since we've
got a report of a crash here (which went away before we got a chance to
debug it) here's just an extra guard to ensure that we'll still exit
gracefully in that case.

If we see this error crop up again in future, it'd be nice to gather a
full trace log so we can see what GraphNodeAttachResourceConfig did and
why it did not attach a configuration.
2019-05-14 16:54:12 -07:00
James Bardin
059dcf1e25 implement UpgradeResourceState in the mock provider
The default is a Noop that simply encodes the provided state
2019-05-14 18:00:01 -04:00
Martin Atkins
bcf2aa06dd core: Call providers' UpgradeResourceState every time
Previously we tried to short-circuit this if the schema version hadn't
changed and we were already using JSON serialization. However, if we
instead call into UpgradeResourceState every time we can let the provider
or the SDK do some general, systematic normalization and cleanup steps
without always requiring a schema version increase.

What exactly would be fixed up this way is for the SDK to decide, but for
example the SDK might choose to automatically delete from the state
anything that is no longer present in the schema, avoiding the need to
write explicit state migration functions for that common case where the
remedy is always the same.

The actual update logic is delegated to the provider/SDK intentionally so
that it can evolve over time and potentially differ depending on how
each SDK thinks about schema.
2019-05-14 13:30:42 -04:00
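A rough sketch of the control-flow change; the helper and interface below are simplified stand-ins for the real provider interface:

    package sketch

    // provider is a stand-in for the real provider interface.
    type provider interface {
        UpgradeResourceState(typeName string, version int, raw []byte) ([]byte, error)
    }

    // upgradeState always round-trips through the provider. The old
    // short-circuit (returning raw unchanged when the stored version
    // already matched and the state was already JSON) is gone, so the
    // provider/SDK gets a chance to normalize on every read.
    func upgradeState(p provider, typeName string, storedVersion int, raw []byte) ([]byte, error) {
        return p.UpgradeResourceState(typeName, storedVersion, raw)
    }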
Radek Simko
8a6d1d62b6
stringer: Regenerate files with latest version 2019-05-13 15:34:27 +01:00
Paul Tyng
15f7f8049f
Fix incorrect direction of TestConformance 2019-05-09 09:16:54 -04:00
James Bardin
8250b2e4de more precise handling of ComputedKeys in config
With the new ConfigModeAttr, we can now have complex structures come in
as attributes rather than blocks. Previously attributes were either
known, or unknown, and there was no reason to descend into them. We now
need to record the complete path to unknown values within complex
attributes to create a proper diff after shimming the config.
2019-05-05 09:02:33 -04:00
James Bardin
06babb6cc3 remove dead code for the next reader 2019-05-03 17:29:16 -04:00
James Bardin
35e087fe5b always re-read datasource if there are dep changes
If a datasource's dependencies have planned changes, then we need to
plan a read for the data source, because the config may change once the
dependencies have been applied.
2019-05-03 17:29:16 -04:00
James Bardin
43f0468829 fix ComputedKeys for nested blocks in shimmed cfg
The indexes were added out of order in the attribute keys
2019-05-02 14:08:40 -07:00
Martin Atkins
3309be9721 core: Allow data resource count to be unknown during refresh
The count for a data resource can potentially depend on a managed resource
that isn't recorded in the state yet, in which case references to it will
always return unknown.

Ideally we'd do the data refreshes during the plan phase as discussed in
#17034, which would avoid this problem by planning the managed resources
in the same walk, but for now we'll just skip refreshing any data
resources with an unknown count during refresh and defer that work to the
apply phase, just as we'd do if there were unknown values in the main
configuration for the data resource.
2019-04-25 14:22:57 -07:00
Martin Atkins
cbc8d1eba2 core: Input variables are always unknown during validate
Earlier on in the v0.12 development cycle we made the decision that the
validation walk should consider input values to always be unknown so that
validation is checking validity for all possible inputs rather than for
a specific set of inputs; checking for a specific set of inputs is the
responsibility of the plan walk.

However, we didn't implement that in the best way: we made the
"terraform validate" command force all of the input variables to unknown
but that was insufficient because it didn't also affect the implicit
validation walk we do as part of "terraform plan" and "terraform apply",
causing those to produce confusingly-different results.

Instead, we'll address the problem directly in the reference resolver code,
ensuring that all variable values will always be treated as an unknown
(of the declared type, so type checking is still possible) during any
validate walk, regardless of which command is running it.
2019-04-17 10:09:46 -07:00
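Since the rule now lives in the reference resolver, it applies to every walk uniformly. A minimal sketch, assuming a simple flag for "this is a validate walk" (the real code threads this through the evaluator):

    package sketch

    import "github.com/zclconf/go-cty/cty"

    // variableValue returns what a reference to an input variable
    // evaluates to. During any validate walk the value is replaced by an
    // unknown of the declared type, so type checking still works but no
    // specific input can influence the outcome.
    func variableValue(validating bool, declaredType cty.Type, val cty.Value) cty.Value {
        if validating {
            return cty.UnknownVal(declaredType)
        }
        return val
    }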
Martin Atkins
861a2ebf26 helper/schema: Use a more targeted shim for nested set diff applying
We previously attempted to make the special diff apply behavior for nested
sets of objects work with attribute mode by totally discarding attribute
mode for all shims.

In practice, that is too broad a solution: there are lots of other shimming
behaviors that we _don't_ want when attribute mode is enabled. In
particular, we need to make sure that the difference between null and
empty can be seen in configuration.

As a compromise then, we will give all of the shims access to the real
ConfigMode and then do a more specialized fixup within the diff-apply
logic: we'll construct a synthetic nested block schema and then use that
to run our existing logic to deal with nested sets of objects, while
using the previous behavior in all other cases.

In effect, this means that the special new behavior only applies when the
provider uses the opt-in ConfigMode setting on a particular attribute,
and thus this change has much less risk of causing broad, unintended
regressions elsewhere.
2019-04-17 07:47:31 -07:00
Martin Atkins
ff2de9c818 core: Keep old value on error even for delete
When an operation fails, providers may return a null new value rather than
returning a partial state. In that case, we'd prefer to keep the old value
so that we stand the best chance of being able to retry on a subsequent
run.

Previously we were making an exception for the delete action, allowing
the result of that to be null even when an error is returned. In practice
that was a bad idea because it would cause Terraform to lose track of the
object even though it might not actually have been deleted.

Now we'll retain the old object even in the delete case. Providers can
still return partial new objects if they were able to partially complete
a delete operation, in which case we'll discard what we had before, but
if the result is null with errors then we'll assume the delete failed
entirely and so just keep the old state as-is, giving us the opportunity
to refresh it on the next run to see if anything actually happened after
all.

(This also includes a new resource in the test provider which isn't used
by the patch but was useful for some manual UX testing here, so I thought
I'd include it in case it's similarly useful in future, given how simple
its implementation is.)
2019-04-17 07:40:15 -07:00
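Condensed as a rule over cty values (helper name illustrative): a null result plus errors now always means "keep what we had":

    package sketch

    import "github.com/zclconf/go-cty/cty"

    // objectAfterApply decides what to record in state after an apply
    // call. A null result accompanied by errors keeps the prior object,
    // even for delete, so the next run can refresh and retry instead of
    // forgetting the remote object.
    func objectAfterApply(prior, result cty.Value, hadErrors bool) cty.Value {
        if hadErrors && result.IsNull() {
            return prior
        }
        return result
    }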
Martin Atkins
88e76fa9ef configs/configschema: Introduce the NestingGroup mode for blocks
In studying existing providers we've found a pattern we weren't previously
accounting for: using a nested block type to represent a group of
arguments that relate to a particular feature that is always enabled but
where it improves configuration readability to group all of its settings
together in a nested block.

The existing NestingSingle was not a good fit for this because it is
designed under the assumption that the presence or absence of the block
has some significance in enabling or disabling the relevant feature, and
so for these always-active cases we'd generate a misleading plan where
the settings for the feature appear totally absent, rather than showing
the default values that will be selected.

NestingGroup is, therefore, a slight variation of NestingSingle where
presence vs. absence of the block is not distinguishable (it's never null)
and instead its contents are treated as unset when the block is absent.
This then in turn causes any default values associated with the nested
arguments to be honored and displayed in the plan whenever the block is
not explicitly configured.

The current SDK cannot activate this mode, but that's okay because its
"legacy type system" opt-out flag allows it to force a block to be
processed in this way anyway. We're adding this now so that we can
introduce the feature in a future SDK without causing a breaking change
to the protocol, since the set of possible block nesting modes is not
extensible.
2019-04-10 14:53:52 -07:00
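An illustrative schema using the new mode, with a hypothetical "settings" block (field names per configs/configschema):

    package sketch

    import (
        "github.com/hashicorp/terraform/configs/configschema"
        "github.com/zclconf/go-cty/cty"
    )

    // exampleSchema declares an always-active block group: the block
    // value is never null, and omitting it just leaves its arguments
    // unset so their defaults show up in the plan.
    var exampleSchema = &configschema.Block{
        BlockTypes: map[string]*configschema.NestedBlock{
            "settings": {
                Nesting: configschema.NestingGroup,
                Block: configschema.Block{
                    Attributes: map[string]*configschema.Attribute{
                        "enabled": {Type: cty.Bool, Optional: true},
                    },
                },
            },
        },
    }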
James Bardin
6868ddd085 don't set NewRemoved values as empty strings
NewRemoved diff values were sometimes left in maps and lists.
2019-03-29 13:56:43 -04:00
Martin Atkins
003317d7c8 lang: Detect references when a list/set attr is defined using blocks
For compatibility with documented patterns from existing providers we are
now allowing (via a pre-processing step) any attribute whose type is a
list-of-object or set-of-object type to optionally be assigned using one
or more blocks whose type is the attribute name.

The pre-processing functionality was implemented in previous commits but
we were not correctly detecting references within these blocks that are,
from the perspective of the primary schema, invalid. Now we'll use an
alternative implementation of variable detection that is able to apply the
same schema rewriting technique we used to implement the transform and
thus can find all of the references as if they were already in their
final locations.
2019-03-28 10:41:01 -07:00
James Bardin
3c8b46fffe merge connection blocks for validation
The resource connection block was not being validated. Merge the two
bodies, with the provider as the override, before validation.
2019-03-26 11:59:23 -04:00
Martin Atkins
2a64a00983 core: Upgrade flatmap to JSON when dynamic attributes are present
When a resource type schema includes dynamically-typed attributes we can't
do any automatic conversion from flatmap to JSON because we don't know
how to interpret the keys that start with the dynamically-typed
attribute's prefix.

To work around that, we'll instead just ask the SDK to do a no-op upgrade
(current and target versions are the same) which will convert from flatmap
to JSON using the SDK's own logic as a side-effect.

This situation should rarely arise in real-world use, but it ends up being
very important for the helper/resource provider test harness because it
is forced to lower the state back to flatmap repeatedly after every step
in order to run legacy checking and processing code.
2019-03-21 15:19:59 -07:00
Martin Atkins
50a101afbd lang: Consider "dynamic" blocks when resolving references
The hcldec package has no awareness of the dynamic block extension, so the
hcldec.Variables function misses any variables declared inside dynamic
blocks.

dynblock.VariablesHCLDec is a drop-in replacement for hcldec.Variables
that _is_ aware of dynamic blocks, returning all of the same variables
that hcldec would find naturally plus also any variables used inside
the dynamic block "for_each" and "labels" arguments and inside the
nested "content" block.
2019-03-19 10:04:45 -07:00
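Usage is a one-line swap at the reference-detection call site (import paths as of the hcl2 era of this commit):

    package sketch

    import (
        "github.com/hashicorp/hcl2/ext/dynblock"
        "github.com/hashicorp/hcl2/hcl"
        "github.com/hashicorp/hcl2/hcldec"
    )

    // variables finds every traversal a body refers to, including those
    // inside dynamic blocks' for_each, labels, and content.
    func variables(body hcl.Body, spec hcldec.Spec) []hcl.Traversal {
        return dynblock.VariablesHCLDec(body, spec) // previously: hcldec.Variables(body, spec)
    }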
Martin Atkins
04cbf249aa core: Don't fail on dynamic attribute values during refresh
Our post-refresh safety check had the constraint and real type inverted,
so previously any refresh of a resource type with a dynamically-typed
attribute would fail this type check.

Also includes a small tweak to the error message from this check since the
old one was a little awkward to read in practice when the error is a
cty.PathError rendered with an attribute path prefix.
2019-03-18 09:18:06 -07:00
James Bardin
892674a3f9 don't recalculate existing block counts in diff
If a block is unaffected by diffs, keep the block count value regardless
of what it is. Blocks containing zero values will often be represented
by only the count value.
2019-03-12 12:04:35 -04:00
Sander van Harmelen
973e2a7cf9 core: add a context to the UIInput interface 2019-03-08 10:24:40 +01:00
James Bardin
5c09f94695 remove eval TODO for NormalizeObjectFromLegacySDK
The normalization will take place in the provider shims, locating it
with the rest of the code that attempts to match the new and legacy
behavior.
2019-03-06 16:23:56 -05:00
Martin Atkins
f193b11073 command/format: Normalize before/after values before rendering
We are now allowing the legacy SDK to opt out of the safety checks we try
to do after plan and apply, and so in such cases the before/after values
in planned changes may be inconsistent with our usual rules.

To avoid adding lots of extra complexity to the diff renderer to deal with
these situations, instead we'll normalize the handling of nested blocks
prior to using these values.

In the long run it'd be better to do this normalization at the source,
immediately after we receive an object from a provider using the opt-out,
but we're doing this at the outermost layer for now to avoid risking
unintended impacts on other Terraform Core components when we're just
about to enter the beta phase of the v0.12.0 release cycle.
2019-02-27 16:53:29 -08:00
Martin Atkins
ac6e0e42dc configs/configupgrade: Upgrade the bodies of "connection" blocks
This uses the fixed "superset" schema from the main terraform package to
apply our standard expression mapping, with the exception of "type" where
interpolation sequences are not supported due to the type being evaluated
early to retrieve the schema for decoding the rest.
2019-02-22 12:32:56 -08:00
Sherod Taylor
c456d9608b updated ssh authentication and testing for ssh 2019-02-22 14:30:50 -05:00
James Bardin
44afe5b6ff remove unused ResourceProviderError 2019-02-20 14:23:56 -05:00
James Bardin
6cc3e1d0bd move init error to where it is generated
The init error was output deep in the backend by detecting a
special ResourceProviderError and formatted directly to the CLI.

Create some Diagnostics closer to where the problem is detected, and
pass them back through the normal diagnostic flow. While the output
isn't as nice yet, this restores the helpful error message and makes the
code easier to maintain. Better formatting can be handled later.
2019-02-20 14:18:37 -05:00
James Bardin
da389d6cd4 simple list diffs may also have missing elements
As was done for list blocks, simple lists of strings may be missing
empty string elements, and any list may be implicitly truncated.
2019-02-14 13:06:04 -05:00
James Bardin
c34c37fbd5 missed .% suffixes in diff.Apply
Diff.Apply checks for unneeded container count diffs, but was missing
the check for maps.

Add an early return for planning a destroy.
2019-02-13 19:09:46 -05:00
Martin Atkins
12a6d22589 core: Better handle providers failing updates with no new value
A provider may react to a create or update failing by returning error
diagnostics and a partially-updated or nil new value, in which case we
do not expect our AssertObjectCompatible consistency check to succeed: the
provider is just assumed to be doing the best it can to preserve whatever
partial outcome it was able to achieve.

However, if errors are accompanied with a nil new value after an update,
we'll assume that the provider is telling us it wasn't able to get far
enough to make any change at all, and so we'll retain the prior value in
state. This ensures that a provider can't cause an object to be forgotten
from the state just because an update failed.
2019-02-12 18:13:14 -08:00
James Bardin
b758628e51
Merge pull request #20308 from hashicorp/jbardin/requires-replace
Requires replace should not error on missing index steps
2019-02-12 15:08:38 -05:00
James Bardin
c6daf9fb24 don't error on all invalid RequiresReplace paths
RequiresReplace paths with IndexSteps that have been added or removed
may fail to apply against one of the two state values. Only error out if
the path cannot be applied to both values.
2019-02-12 14:43:41 -05:00
Martin Atkins
eb1346447f
Merge #20282: Enforce expected behaviors for provider PlanResourceChange
An exception remains for the legacy SDK, which does not meet all of these requirements.
2019-02-12 09:19:05 -08:00
Martin Atkins
f4e6431da2 core: Ensure context tests comply with plan/apply safety checks
Prior to Terraform 0.12 there were certain behaviors we expected from
providers that were actually just details of the SDK and not part of the
enforced contract.

For 0.12 we're now codifying some of these behaviors explicitly via safety
checks in core, thus ensuring that all future providers will behave in a
consistent way that users can rely on.

Unfortunately, due to the hand-written nature of the mock provider
implementations we use in tests, they have been getting away with some
unusual behaviors that don't match our usual expectations, and our safety
checks now detect those as incorrect behaviors.

To address this, we make the minimal changes to each test to ensure that
its mock provider behaves in a consistent way, which requires that values
set in config be represented correctly in the plan and ultimately saved
in the new state, without any changes along the way. In particular, the
common testDiffFn implementation has historically used a number of special
hidden attributes to trigger special behaviors, and our new rules require
that these special settings propagate properly through the plan and into
the state.
2019-02-11 17:26:50 -08:00
Martin Atkins
31299e688d core: Allow legacy SDK to opt out of plan-time safety checks
Due to the imprecision of our shimming from the legacy SDK type system to
the new Terraform Core type system, the legacy SDK produces a number of
inconsistencies that produce only minor quirky behavior or broken
edge-cases. To retain compatibility with those existing weird behaviors,
the legacy SDK opts out of our safety checks.

The intent here is to allow existing providers to continue to do their
previous unsafe behaviors for now, accepting that this will allow certain
quirky bugs from previous releases to persist, and then gradually migrate
away from the legacy SDK and remove this opt-out on a per-resource basis
over time.

As with the apply-time safety check opt-out, this is reserved only for
the legacy SDK and must not be used in any new SDK implementations. We
still include any inconsistencies as warnings in the logs as an aid to
anyone debugging weird behavior, so that they can see situations where
blame may be misplaced in the user-visible error messages.
2019-02-11 17:26:49 -08:00
Martin Atkins
5649ae6abf core: Improve warnings for legacy SDK apply-time inconsistencies
We've allowed the legacy SDK an opt-out from the post-apply safety checks,
but previously we produced only a generic warning message in that case.
Now instead we'll still run the safety checks, but report the results in
the logs instead of as error diagnostics.

This should allow developers who are debugging strange interactions
between buggy legacy providers to get better insight into what's going
on upstream in order to help explain what's going on when these problems
inevitably get caught by other downstream safety checks when trying to
make use of these invalid results.
2019-02-11 17:26:49 -08:00
Martin Atkins
419f5e58cd core: Enforce the validity of planned new objects
We've been gradually adding safety checks of this sort throughout the
lifecycle to help ensure that buggy providers can't introduce
hard-to-diagnose downstream failures and misbehavior. This completes the
set by verifying during plan time that the provider has produced a plan
that actually achieves the goals defined in the configuration.

In particular, this catches the situation where a provider may incorrectly
override a value explicitly set in configuration, which avoids creating
confusion by betraying the reasonable user expectation that referencing an
explicitly-defined attribute will produce exactly the value shown in
configuration.
2019-02-11 17:26:49 -08:00
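The check itself lives in plans/objchange. A sketch of the call site; the wrapper and diagnostic wording are approximate:

    package sketch

    import (
        "fmt"

        "github.com/hashicorp/terraform/configs/configschema"
        "github.com/hashicorp/terraform/plans/objchange"
        "github.com/zclconf/go-cty/cty"
    )

    // checkPlannedObject verifies that the provider's planned object
    // doesn't contradict anything set explicitly in configuration.
    func checkPlannedObject(schema *configschema.Block, prior, config, planned cty.Value) []error {
        var errs []error
        for _, err := range objchange.AssertPlanValid(schema, prior, config, planned) {
            errs = append(errs, fmt.Errorf("provider produced invalid plan: %s", err))
        }
        return errs
    }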
James Bardin
1ca7531cc7 allow implicit empty strings in lists
The helper/schema handling of lists loses empty string values, but
retains the correct count. Only re-count the values if the count is
missing entirely, and allow our shims to re-populate the zero values.
2019-02-11 19:24:14 -05:00
Martin Atkins
312d798a89 core: Restore our EvalReadData behavior
In an earlier commit we changed objchange.ProposedNewObject so that the
task of populating unknown values for attributes not known during apply
is the responsibility of the provider's PlanResourceChange method, rather
than being handled automatically.

However, we were also using objchange.ProposedNewObject to construct the
placeholder new object for a deferred data resource read, and so we
inadvertently broke that deferral behavior. Here we restore the old
behavior by introducing a new function objchange.PlannedDataResourceObject
which is a specialized version of objchange.ProposedNewObject that
includes the forced behavior of populating unknown values, because the
provider gets no opportunity to customize a deferred read.

TestContext2Plan_createBeforeDestroy_depends_datasource required some
updates here because its implementation of PlanResourceChange was not
handling the insertion of the unknown value for attribute "computed".
The other changes here are just in an attempt to make the flow of this
test more obvious, by clarifying that it is simulating a -refresh=false
run, which effectively forces a deferred read since we skip the eager
read that would normally happen in the refresh step.
2019-02-07 18:33:14 -08:00
Martin Atkins
8882dcaf86 core: Fix TestContext2Plan_dataResourceBecomesComputed
Now that ProposedNewState uses null to represent Computed attributes not
set in the configuration, the provider must fill in the unknown value for
"computed" in its plan result.

It seems that this test was incorrectly updated during our bulk-fix after
integrating the HCL2 work, but it didn't really matter because the
ReadDataSource function isn't called in the happy path anyway. But to
make the intent clearer here, we also now make ReadDataSource return an
error if it is called, making it explicit that no call is expected.
2019-02-07 18:33:14 -08:00
Martin Atkins
c3e7efec35 core: Reject unknown values after reading a data resource
Data resources do not have a plan/apply distinction, so it is never valid
for a data resource to produce unknown values in its result object.

Unknown values in the data resource _config_ cause us to postpone the read
altogether, so a data source never receives unknown values as input and
therefore may never produce unknown values as output.
2019-02-07 18:33:14 -08:00
Martin Atkins
1530fe52f7 core: Legacy SDK providers opt out of our new apply result check
The shim layer for the legacy SDK type system is not precise enough to
guarantee it will produce identical results between plan and apply. In
particular, values that are null during plan will often become zero-valued
during apply.

To avoid breaking those existing providers while still allowing us to
introduce this check in the future, we'll introduce a rather-hacky new
flag that allows the legacy SDK to signal that it is the legacy SDK and
thus disable the check.

Once we start phasing out the legacy SDK in favor of one that natively
understands our new type system, we can stop setting this flag and thus
get the additional safety of this check without breaking any
previously-released providers.

No other SDK is permitted to set this flag, and we will remove it if we
ever introduce protocol version 6 in future, assuming that any provider
supporting that protocol will always produce consistent results.
2019-02-06 11:40:30 -08:00
Martin Atkins
a81bc23611 core: Verify that objects don't change unexpectedly during apply
Previously we would allow providers to change anything about the planned
object value during apply, possibly returning an entirely-unrelated object
of the same type. In practice this led to some subtle bugs where a single
planned attribute value would change during apply and cause a downstream
failure due to a dependent resource now seeing input other than what
_it_ expected during plan.

Now we'll produce an explicit error message for this case which places the
blame with the correct party: the upstream resource that changed. Without
this, unexpected changes would often lead to the downstream resource
implementation being blamed in error message even though it was just
reacting to the change from upstream.

As with most errors during apply, we'll still save the updated value in
the state but we'll halt the walk to ensure that the unexpected value
cannot propagate further and cause the result to potentially diverge
greatly from the changeset shown in the plan.

Compared to Terraform 0.11, we expect to see this error in many of the
same cases we saw the "diffs didn't match during apply" error in earlier
versions, since it is likely that many errors of that sort were the result
of unexpected upstream changes being incorrectly blamed on the downstream
resource that then used the result.
2019-02-06 11:40:30 -08:00
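The apply-time counterpart uses objchange.AssertObjectCompatible; a sketch of how the errors surface (wrapper and wording approximate):

    package sketch

    import (
        "fmt"

        "github.com/hashicorp/terraform/configs/configschema"
        "github.com/hashicorp/terraform/plans/objchange"
        "github.com/zclconf/go-cty/cty"
    )

    // checkApplyResult compares what the provider returned from apply
    // with what it planned, so any drift is blamed on the upstream
    // resource that changed rather than on a downstream consumer.
    func checkApplyResult(schema *configschema.Block, planned, actual cty.Value) []error {
        var errs []error
        for _, err := range objchange.AssertObjectCompatible(schema, planned, actual) {
            errs = append(errs, fmt.Errorf("provider produced inconsistent result after apply: %s", err))
        }
        return errs
    }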
Martin Atkins
07930aa7fb core: Context apply tests should produce consistent apply results
Because Terraform Core has traditionally not checked that the final apply
result is consistent with what was planned, some of our apply tests were
producing inconsistent results.

Here we fix all of that so that they produce something compatible with
what they planned. This doesn't actually achieve anything in isolation,
but we're about to start enforcing this consistency in a subsequent
commit.
2019-02-06 11:40:30 -08:00
James Bardin
411df99f33 only force top-level id's back to unknown
Nested structures may have "id" fields, which should be treated
normally.
2019-02-05 16:16:08 -05:00
Martin Atkins
a8f97a0805 core: Use hcl.ApplyPath for ignore_changes and "requires replace"
We were previously using cty.Path.Apply, which serves a similar purpose
but implements the more restrictive traversal behaviors down at the cty
layer. hcl.ApplyPath uses the same rules as HCL expressions and so ensures
consistent behavior with normal user expressions.

cty.Path.Apply also previously had a crashing bug (discussed in #20084)
that was causing a panic here. That has now been fixed in cty, but since
we're no longer using it here that's a moot point. The HCL traversing
implementation has been fuzz-tested and unit tested a lot more thoroughly
so should not run into the same crashers we saw with cty before.
2019-01-31 11:58:30 -08:00
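The swap at the call sites is small; a usage sketch:

    package sketch

    import (
        "github.com/hashicorp/hcl2/hcl"
        "github.com/zclconf/go-cty/cty"
    )

    // traverse applies an ignore_changes / RequiresReplace path to a
    // value using HCL's traversal rules (hcl.ApplyPath) rather than the
    // stricter, previously crash-prone cty.Path.Apply.
    func traverse(obj cty.Value, path cty.Path) (cty.Value, hcl.Diagnostics) {
        return hcl.ApplyPath(obj, path, nil)
    }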
James Bardin
6f7e1ff8eb more precise handling of removed list elements
When elements are removed from a list, all attributes may not be present
in the diff. Once the individual attributes diffs are applied, use the
length to truncate the flatmapped list to the correct length.
2019-01-30 14:55:04 -05:00
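A simplified sketch of that truncation over a flatmap, assuming keys of the form "name.N" or "name.N.attr" plus a "name.#" count (the real shims handle more cases):

    package sketch

    import (
        "strconv"
        "strings"
    )

    // truncateFlatmapList drops elements at or beyond the recorded count
    // and rewrites the count key. Deleting during range is safe in Go.
    func truncateFlatmapList(attrs map[string]string, name string, count int) {
        prefix := name + "."
        for key := range attrs {
            rest := strings.TrimPrefix(key, prefix)
            if rest == key || rest == "#" {
                continue // not part of this list, or the count key itself
            }
            idxStr := strings.SplitN(rest, ".", 2)[0]
            if idx, err := strconv.Atoi(idxStr); err == nil && idx >= count {
                delete(attrs, key)
            }
        }
        attrs[prefix+"#"] = strconv.Itoa(count)
    }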
James Bardin
7dd0acc46b don't count empty containers in diff.Apply
If there were no matching keys, and there was no diff at all, don't set
a zero count for the container. Normally providers can't reliably detect
empty vs unset here, but there are some cases that worked.
2019-01-23 19:34:11 -05:00
James Bardin
9b30da500d missing prefix in recounted map
Missing prefix in map recount. This generally passes tests since the
actual count should already be there and be correct, and the extra key
is then ignored by the shims.
2019-01-23 14:57:04 -05:00
James Bardin
46a4628782
Merge pull request #20081 from hashicorp/jbardin/list-block
New Diff.Apply method
2019-01-22 19:20:53 -05:00
James Bardin
273f20ec8b update comment and fix core test
One terraform test was broken when the result became more correct.
2019-01-22 18:38:17 -05:00
James Bardin
7257258f18 new Diff.Apply
The previous version assumed the diff could be applied verbatim, and
only used the schema at the top level since diffs are "flat". This
turned out to not work reliably with nested blocks. The new Apply method
is driven completely by the schema, and handles nested blocks separately
from other collections.
2019-01-22 18:10:12 -05:00
James Bardin
c37147d876 fix computed set keys in shims
When generating a config, the computed set keys were missing the leading
set name.
2019-01-22 18:10:12 -05:00
Martin Atkins
15cd6d8300 core: Retain prior state if update fails with no new state
In an ideal world, providers are supposed to respond to errors during
apply by returning a partial new state alongside the error diagnostics.
In practice though, our SDK leaves the new value set to nil for certain
errors, which was causing Terraform to "forget" the object altogether by
assuming that the provider intended to say "null".

We now adjust that assumption to apply only in the delete case. In all
other cases (including updates) we retain the prior state if the new
state is given as nil. Although we could potentially fix this in the SDK
itself, I expect this is a likely bug in other future SDKs for other
languages too, so this new assumption is a safer one to make to be
resilient to data loss when providers don't behave perfectly.

Providers that return both nil new value and no errors are considered
buggy, but unfortunately that applies to the mocks in many of our tests,
so for pragmatic reasons we can't generate an error for that case as we do
for other "should never happen" situations. Instead, we'll just retain the
prior value in the state so the user can retry.
2019-01-18 16:54:52 -08:00
Martin Atkins
86c02d5c35 command: "terraform init" can partially initialize for 0.12upgrade
There are a few constructs from 0.11 and prior that cause 0.12 parsing to
fail altogether, which previously created a chicken/egg problem because
we need to install the providers in order to run "terraform 0.12upgrade"
and thus fix the problem.

This changes "terraform init" to use the new "early configuration" loader
for module and provider installation. This is built on the more permissive
parser in the terraform-config-inspect package, and so it allows us to
read out the top-level blocks from the configuration while accepting
legacy HCL syntax.

In the long run this will let us do version compatibility detection before
attempting a "real" config load, giving us better error messages for any
future syntax additions, but in the short term the key thing is that it
allows us to install the dependencies even if the configuration isn't
fully valid.

Because backend init still requires full configuration, this introduces a
new mode of terraform init where it detects heuristically if it seems like
we need to do a configuration upgrade and does a partial init if so,
before finally directing the user to run "terraform 0.12upgrade" before
running any other commands.

The heuristic here is based on two assumptions:
- If the "early" loader finds no errors but the normal loader does, the
  configuration is likely to be valid for Terraform 0.11 but not 0.12.
- If there's already a version constraint in the configuration that
  excludes Terraform versions prior to v0.12 then the configuration is
  probably _already_ upgraded and so it's just a normal syntax error,
  even if the early loader didn't detect it.

Once the upgrade process is removed in 0.13.0 (users will be required to
go stepwise 0.11 -> 0.12 -> 0.13 to upgrade after that), some of this can
be simplified to remove that special mode, but the idea of doing the
dependency version checks against the liberal parser will remain valuable
to increase our chances of reporting version-based incompatibilities
rather than syntax errors as we add new features in future.
2019-01-14 11:33:21 -08:00
Martin Atkins
0c0a437bcb Move module install functionality over to internal/initwd 2019-01-14 11:33:21 -08:00