It's become apparent that passing in a provider config for an implicitly
used provider would be very useful. While the ProviderConfigTransformer
efficiently added providers to the graph, the algorithm was reversed
from what would be needed to allow overriding implicit providers.
Change the ProviderConfigTransformer to first add all configured
providers, even if they are empty stubs. Then run through all providers
being passed in from the parent, and replace the provider nodes we
created with proxies, and add implicit proxies where none existed. The
extra nodes will then be pruned later.
There was a bug where all references would be discarded in the case when
a self-reference was encountered. Since a module references all
descendants by its own path, it returns a self-reference by definition.
Our new resource-to-provider matching is stricter about explicitly
matching aliases when config is present (configuration is no longer
automatically inherited), and about locating providers to destroy removed
resources.
With this in mind, this is an attempt to expand slightly on this error
message now that users are more likely to see it.
In future it would be nice to do some explicit validation of this a bit
closer to the UI, so we can have room for more explanatory text, but this
additional messaging is intended to help users understand why they might
be seeing this message after removing a provider configuration block from
configuration, whether directly or as a side-effect of removing a module.
Remove the module entry from the state if a module is no longer in the
configuration. Modules are not removed if there are any existing
resources with the module path as a prefix. The only time this should be
the case is if a module was removed in the config, but the apply didn't
target that module.
Create a NodeModuleRemoved and an associated EvalDeleteModule to track
the module in the graph then remove it from the state. The
NodeModuleRemoved dependencies are simply any other node which contains
the module path as a prefix in its path.
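As a rough sketch of that dependency rule (illustrative Go, not the actual
transformer code), a node becomes a dependency of NodeModuleRemoved exactly
when the removed module's path is a prefix of that node's own path:

```go
package main

import "fmt"

// hasPathPrefix reports whether candidate starts with the removed module's
// path, compared element by element.
func hasPathPrefix(candidate, modulePath []string) bool {
	if len(candidate) < len(modulePath) {
		return false
	}
	for i, part := range modulePath {
		if candidate[i] != part {
			return false
		}
	}
	return true
}

func main() {
	removed := []string{"root", "child"}
	fmt.Println(hasPathPrefix([]string{"root", "child", "grandchild"}, removed)) // true
	fmt.Println(hasPathPrefix([]string{"root", "other"}, removed))               // false
}
```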
This could have probably been done much easier as a step in pruning the
state, but modules are going to have to be promoted to full graph nodes
anyway in order to support count.
You can't find orphans by walking the config, because by definition
orphans aren't in the config.
Leaving the broken test for when empty modules are removed from the
state as well.
Now that the resolved provider is always stored in state, we need to
update all the test data to match. There will probably be some more
breakage once the provider field is properly diffed.
Use the ResourceState.Provider field to store the full name of the
provider used during apply. This field is only used when a resource is
removed from the config, and will allow that resource to be removed by
the exact same provider with which it was created.
Modify the locations which might accept the value of the
ResourceState.Provider field to detect that the name is resolved.
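As a hedged illustration only (the helper name and the exact name format
here are assumptions, not the actual implementation), the "is this name
already resolved?" check could look roughly like this, assuming resolved
names carry a "provider." prefix such as "provider.aws":

```go
package main

import (
	"fmt"
	"strings"
)

// resolvedProviderPrefix is an assumed marker for fully resolved provider
// names, e.g. "provider.aws" or "provider.aws.west".
const resolvedProviderPrefix = "provider."

// isResolvedProviderName reports whether a provider name already looks
// like a fully resolved name rather than a bare type like "aws".
func isResolvedProviderName(name string) bool {
	return strings.HasPrefix(name, resolvedProviderPrefix)
}

func main() {
	fmt.Println(isResolvedProviderName("aws"))                // false
	fmt.Println(isResolvedProviderName("provider.aws.west"))  // true
}
```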
Here we complete the passing of providers between modules via the
module/providers configuration, add another test and update broken test
outputs.
The DisableProviderTransformer is being removed, since it was really
only for provider configuration inheritance. Since configuration is no
longer inherited, there's no need to keep around unused providers. There
actually shouldn't be any unused providers going into the graph any
longer, but put off verifying that condition for later. Replace its
usage with the PruneProviderTransformer, and use that to also remove the
unneeded proxy provider nodes.
Implement the adding of providers through the module/providers map in the
configuration.
The way this works is that we start walking the module tree from the
top, and for any instance of a provider that can accept a configuration
through the parent's module/provider map, we add a proxy node that
provides the real name and a pointer to the actual parent provider node.
Multiple proxies can be chained back to the original provider. When
connecting resources to providers, if that provider is a proxy, we can
then connect the resource directly to the proxied node. The proxies are
later removed by the DisabledProviderTransformer.
This should re-instate the 0.11 beta inheritance behavior, but will
allow us to later store the actual concrete provider used by a resource,
so that it can be re-connected if it's orphaned by removing its module
configuration.
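A simplified sketch of the proxy idea described above, using illustrative
types rather than the real Terraform node types: a proxy carries the name
the child module expects plus a pointer to its target, and resolution just
follows the chain back to the concrete provider node.

```go
package main

import "fmt"

// providerNode is a stand-in for a provider vertex in the graph.
type providerNode interface {
	Name() string
}

type concreteProvider struct{ name string }

func (p *concreteProvider) Name() string { return p.name }

// proxyProvider stands in for the provider a child module expects,
// pointing at the real (possibly also proxied) provider node.
type proxyProvider struct {
	nameInModule string
	target       providerNode
}

func (p *proxyProvider) Name() string { return p.nameInModule }

// resolve follows chained proxies back to the concrete provider node,
// which is what resources get connected to before the proxies are pruned.
func resolve(n providerNode) providerNode {
	for {
		proxy, ok := n.(*proxyProvider)
		if !ok {
			return n
		}
		n = proxy.target
	}
}

func main() {
	root := &concreteProvider{name: "provider.aws"}
	child := &proxyProvider{nameInModule: "module.child.provider.aws", target: root}
	grandchild := &proxyProvider{nameInModule: "module.child.module.grandchild.provider.aws", target: child}
	fmt.Println(resolve(grandchild).Name()) // provider.aws
}
```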
A missing provider alias should not be implicitly added to the graph.
Run the AttachProviderConfigTransformer immediately after adding the
providers, since the ProviderConfigTransformer should have just added
these nodes.
We have a few pesky functions that don't act like proper functions and
instead return different values on each call. These are tricky because
we need to make sure we don't trip over ourselves by re-generating these
between plan and apply.
Here we add a context test to verify correct behavior in the presence
of such functions.
There's actually a pre-existing bug which this test caught as originally
written: we re-evaluate the interpolation expressions during apply,
causing these unstable functions to produce new values, and so the
applied value ends up not exactly matching the plan. This is a bug that
needs fixing, but it's been around at least since v0.7.6 (random old
version I tried this with to see) so we'll put it on the list and address
it separately. For now, this part of the test is commented out with a
TODO attached.
We previously didn't compare values but had a TODO to start doing so,
which we then recently did. Unfortunately it turns out that we _depend_
on not comparing values here, because when we use EvalCompareDiff (a key
user of Diff.Same) we pass in a diff made from a fresh re-interpolation
of the configuration and so any non-pure function results (timestamp,
uuid) have produced different values.
We can remove the AllowAny option which is no longer used, and providers
don't need to be connected to their resources at this stage, since that
will happen in the ProviderTransformer.
Simplify the MissingProviderTransformer so that it only adds missing
providers at the root level. There's no need for the multiple providers
added at every level of the path.
ParentProviderTransformer then only needs to connect providers with the
equivalent type at the root level.
The grandChild missing test has a provider declared in a child
module which is missing in a grandchild module. Verify that the
grandchild gets connected to the child provider, and they all are
connected to the root providers.
Update some test outputs to match the expected behavior of only adding
missing providers at the root level.
The CloserProviderTransformer requires that the resources be connected
to their provider first, so that it can get the correct dependencies,
and adding the ProviderTransformer changed the test output slightly.
This turned out to be a big messy commit, since the way providers are
referenced is tightly coupled throughout the code. This starts to unify
how providers are referenced, using the format output by the node Name method.
Add a new field to the internal resource data types called
ResolvedProvider. This is set by a new setter method SetProvider when a
resource is connected to a provider during graph creation. This allows
us to later lookup the provider instance a resource is connected to,
without requiring it to have the same module path.
The InitProvider context method now takes 2 arguments: one is the
provider type and the second is the full name of the provider. While the
provider type could still be parsed from the full name, this makes it
more explicit, and changes to the name format won't affect this
code.
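A rough sketch of the resulting interface shape (stand-in definitions, not
the exact terraform package API):

```go
package sketch

// ResourceProvider is a stand-in for the real provider interface.
type ResourceProvider interface{}

// EvalContext shows the shape of the change described above: the provider
// type ("aws") and the full resolved name (e.g. "provider.aws.west") are
// passed separately, so a name-format change doesn't force re-parsing here.
type EvalContext interface {
	InitProvider(typeName, fullName string) (ResourceProvider, error)
}
```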
The first step in only using the required provider nodes in a graph is
to be able to specifically add them from the configuration.
The MissingProviderTransformer was previously responsible for adding
all providers. Now it is really just adding any that are missing from
the config.
Needed to add more cases to support value comparison exceptions that the
rest of TF expects to work (this fixes tests in various places).
Also moved things to a switch block so that it's a little more compact.
A diff now needs to pass basic value checks to be considered the
"same". Several provisions have been added to ensure that the list, set,
and RequiresNew behaviours that have needed some exceptions in the past
are preserved in this new logic.
This ensures that we are checking for value equality as much as
possible, which will be more important when we transition to the
possibility of diffs being sourced from external data.
While merging the cached Input configs in the correct order prevents
overwriting existing config values, it doesn't prevent an earlier
provider from inserting unwanted values into later provider
configurations.
Diff the key-values returned by Input with the pre-input config, and
store only the "answers" that were added during the Input call.
Always call Input, even if we already have some values, since a
previously cached config may not be complete.
Previously when looking up cached provider input, the Input was taken in
its entirety, and only provider configuration fields that weren't in the
saved input were added. This would cause providers in modules to use the
entire configuration from parent modules, even if they themselves had
entirely different configs.
Note: this is only marginally better than the old behavior. It may be
slightly more correct, but still can't account for the user's intent, and
may be adding configured values from one provider into another.
Change the PathCacheKey to just join the path on a non-path character
(|), which makes for easier debugging.
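A hedged sketch of both ideas (helper names are illustrative; only the "|"
join reflects the description above):

```go
package main

import (
	"fmt"
	"strings"
)

// inputAnswers keeps only the keys that Input added on top of the
// pre-input config, which is what gets cached as the user's "answers".
func inputAnswers(pre, post map[string]interface{}) map[string]interface{} {
	answers := make(map[string]interface{})
	for k, v := range post {
		if _, existed := pre[k]; !existed {
			answers[k] = v
		}
	}
	return answers
}

// pathCacheKey joins the module path on "|", a character that can't appear
// in a path element, keeping the cache key readable while debugging.
func pathCacheKey(path []string) string {
	return strings.Join(path, "|")
}

func main() {
	pre := map[string]interface{}{"region": "us-west-2"}
	post := map[string]interface{}{"region": "us-west-2", "access_key": "from-input"}
	fmt.Println(inputAnswers(pre, post))                 // map[access_key:from-input]
	fmt.Println(pathCacheKey([]string{"root", "child"})) // root|child
}
```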
Use the configured providers directly, rather than looking for inherited
provider configuration during graph evaluation.
First remove the provider config cache, and the associated
SetProviderConfig and ParentProviderConfig methods on the eval context.
Every provider must be configured, so there's no need to look for
configuration from other provider instances.
The config.ProviderConfig struct now has a Scope field which stores the
proper path for the interpolation scope. To get this metadata to the
interpolator, we add an EvalInterpolateProvider node which can carry the
ProviderConfig, and an InterpolateProvider context method to carry the
ProviderConfig.Scope into the InterpolationScope.
Some of the tests could be adjusted to account for the new inheritance
behavior, and some were simply no longer valid and will be removed.
The remaining tests have questions on how they should work in practice.
This mostly concerns orphaned modules where there is no longer a way to
obtain a provider. In some cases we may require that a minimal provider
config be present to handle the destroy process, but we need further
testing.
All disabled code was commented out in this commit to record any
additional comments. The following commit will be a cleanup pass.
Update all references to the version values to use the new package.
The VersionString function was left in the terraform package
specifically for the aws provider, which is vendored. We can remove that
last call once the provider is updated.
In order to parse provider, resource and data source configuration from
HCL2 config files, we need to know the relevant configuration schema.
This new method allows Terraform Core to request these from a provider.
This is a breaking change to this interface, so all of its implementers
in this package are updated too. This includes concrete implementations
of the new method in helper/schema that use the schema conversion code
added in an earlier commit to produce a configschema.Block automatically.
Plugins compiled against prior versions of helper/schema will not have
support for this method, and so calls to them will fail. Callers of
this new method will therefore need to sniff for support using the
SchemaAvailable field added to both ResourceType and DataSource.
This careful handling will need to persist until the next time we increment
the plugin protocol version, at which point we can make the breaking
change of requiring this information to be available.
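As an illustration of the sniffing described above (simplified stand-in
types, not the real plugin interfaces):

```go
package main

import "fmt"

// ResourceType is a simplified stand-in for the plugin-reported metadata.
type ResourceType struct {
	Name            string
	SchemaAvailable bool
}

// schemaFor sniffs for support before asking the plugin for a schema,
// since plugins built against older helper/schema won't implement the new
// method. The getSchema callback is an illustrative stand-in for that call.
func schemaFor(rt ResourceType, getSchema func(name string) (interface{}, error)) (interface{}, error) {
	if !rt.SchemaAvailable {
		return nil, fmt.Errorf("plugin for %q was built before schema export was added", rt.Name)
	}
	return getSchema(rt.Name)
}

func main() {
	old := ResourceType{Name: "aws_instance", SchemaAvailable: false}
	if _, err := schemaFor(old, func(string) (interface{}, error) { return nil, nil }); err != nil {
		fmt.Println(err)
	}
}
```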
DestroyValueReferenceTransformer is used during destroy to reverse the
edges for output and local values. Because destruction is going to
remove these from the state, nodes that depend on their value need to be
visited first.
When working on an existing plan, the context always used walkApply,
even if the plan was for a full destroy. Mark in the plan if it was
created for a destroy, and transfer that to the context when reading
the plan.
A Targeted graph may include outputs that were transitively included,
but if they are missing any dependencies they will fail to interpolate
later on.
Prune any outputs in the TargetsTransformer that have missing
dependencies, and are not depended on by any resource. This will
maintain the existing behavior of outputs failing silently in most
cases, but allow errors to be surfaced where the output value is
required.
Module outputs may not have complete information during Input, because
it happens before refresh. Continue processing on output interpolation
errors during the Input walk.
Remove the Input flag threaded through the input graph creation process
to prevent interpolation failures on module variables.
Use an EvalOpFilter instead to insert the correct EvalNode during
walkInput. Remove the EvalTryInterpolate type, and use the same
ContinueOnErr flag as the output node for consistency and to try and
keep the number of possible eval node types down.
Locals don't need to be evaluated during destroy. Rather than simply
skipping them, remove them from the state as they are encountered. Even
though they are not persisted in the state, it keeps the state up to
date as the destroy happens, and we reduce the chance of other
inconsistencies later on.
The fact that we clean up data source state by applying a "destroy" action
for them is an implementation detail, and so should not be visible to
outside callers or to the user.
Signalling these as real destroys creates confusion for users because
they see Terraform say things like:
"data.template_file.foo: Refreshing state..."
...which, to an understandably-nervous sysadmin, might make them suspect
that the underlying object was deleted, rather than just Terraform's
record of it.
Previously the rendered plan output was constructed directly from the
core plan and then annotated with counts derived from the count hook.
At various places we applied little adjustments to deal with the fact that
the user-facing diff model is not identical to the internal diff model,
including the special handling of data source reads and destroys. Since
this logic was just muddled into the rendering code, it behaved
inconsistently with the tally of adds, updates and deletes.
This change reworks the plan formatter so that it happens in two stages:
- First, we produce a specialized Plan object that is tailored for use
in the UI. This applies all the relevant logic to transform the
physical model into the user model.
- Second, we do a straightforward visual rendering of the display-oriented
plan object.
For the moment this is slightly overkill since there's only one rendering
path, but it does give us the benefit of letting the counts be derived
from the same data as the full detailed diff, ensuring that they'll stay
consistent.
Later we may choose to have other UIs for plans, such as a
machine-readable output intended to drive a web UI. In that case, we'd
want the web UI to consume a serialization of the _display-oriented_ plan
so that it doesn't need to re-implement all of these UI special cases.
This introduces to core a new diff action type for "refresh". Currently
this is used _only_ in the UI layer, to represent data source reads.
Later it would be good to use this type for the core diff as well, to
improve consistency, but that is left for another day to keep this change
focused on the UI.
The implementation of ResourceAddress.Less was flawed because it was only
testing each field in the "less than" direction, and falling through in
cases where an earlier field compared greater than a later one.
Now we test for inequality first as the selector, and only fall through
if the two values for a given field are equal.
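The fixed comparison follows the usual pattern of selecting on inequality
first and falling through only on equality; a generic sketch with an
illustrative address type:

```go
package main

import "fmt"

// addr is an illustrative stand-in for ResourceAddress.
type addr struct {
	Module string
	Type   string
	Name   string
	Index  int
}

// less tests each field for inequality first and only falls through to the
// next field when the values are equal, which is the pattern the fix adopts.
func less(a, b addr) bool {
	switch {
	case a.Module != b.Module:
		return a.Module < b.Module
	case a.Type != b.Type:
		return a.Type < b.Type
	case a.Name != b.Name:
		return a.Name < b.Name
	case a.Index != b.Index:
		return a.Index < b.Index
	default:
		return false // equal addresses are not less than each other
	}
}

func main() {
	a := addr{Module: "child", Type: "aws_instance", Name: "foo", Index: 1}
	b := addr{Module: "child", Type: "aws_instance", Name: "foo", Index: 2}
	fmt.Println(less(a, b), less(b, a)) // true false
}
```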
There is some additional, early validation on the "count" meta-argument
that verifies that only suitable variable types are used, and adding local
values to this whitelist was missed in the initial implementation.
It seems that this somehow got lost in the commit/rebase shuffle and
wasn't caught by the tests that _did_ make it because they were all using
just one file.
As a result of this bug, locals would fail to work correctly in any
configuration with more than one .tf file.
Along with restoring the append/merge behavior, this also reworks some of
the tests to exercise the multi-file case as better insurance against
regressions of this sort in future.
This fixes #15969.
Previously we were checking required_version only during "real" operations, and not during initialization. Catching it during init is better because that's the first command users run on a new working directory.
Go 1.9 adds this new function which, when called, marks the caller as
being a "helper function". Helper function stack frames are then skipped
when trying to find a line of test code to blame for a test failure, so
that the code in the main test function appears in the test failure output
rather than a line within the helper function itself.
This covers many -- but probably not all -- of our test helpers across
various packages.
The shadow graph was incredibly useful during the 0.7 cycle but these days
it is idle, since we're not planning any significant graph-related changes
for the foreseeable future.
The shadow graph infrastructure is somewhat burdensome since any change
to the ResourceProvider interface must have shims written. Since we _are_
expecting changes to the ResourceProvider interface in the next few
releases, I'm calling "YAGNI" on the shadow graph support to reduce our
maintenance burden.
If we do end up wanting to use shadow graph again in future, we'll always
be able to pull it out of version control and then make whatever changes
we skipped making in the mean time, but we can avoid that cost in the
mean time while we don't have any evidence that we'll need to pay it.
We stash the locals in the module state in a map that is ignored for JSON
serialization. We don't include locals in the persisted state because they
can be trivially recomputed and this allows us to assume that they will
pass through verbatim, without any normalization or other transforms
caused by the JSON serialization.
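A minimal sketch of the "ignored for JSON serialization" part, using an
illustrative struct rather than the real ModuleState definition:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// moduleState is an illustrative stand-in, not the real ModuleState type.
type moduleState struct {
	Path    []string               `json:"path"`
	Outputs map[string]interface{} `json:"outputs"`

	// Locals are trivially recomputable, so they live only in memory and
	// are skipped entirely when the state is serialized.
	Locals map[string]interface{} `json:"-"`
}

func main() {
	m := moduleState{
		Path:    []string{"root"},
		Outputs: map[string]interface{}{"addr": "10.0.0.1"},
		Locals:  map[string]interface{}{"name_prefix": "app-"},
	}
	out, _ := json.Marshal(m)
	fmt.Println(string(out)) // no "locals" key appears in the output
}
```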
From a user standpoint a local is just a named alias for an expression,
so it's desirable that the result passes through here in as raw a form
as possible, so it behaves as closely as possible to simply using the
given expression directly.
A local value is similar to an output in that it exists only within state
and just always evaluates its value as best it can with the current state.
Therefore it has a single graph node type for all walks, which will
deal with that evaluation operation.
Allow module variables to fail interpolation during input. This is OK
since they will be verified again during Plan. Because Input happens
before Refresh, module variable interpolation can fail when referencing
values that aren't yet in the state, but are expected after Refresh.
Forward-port the plan state check from the 0.9 series.
0.10 has improved the serial handling for the state, so this adds
relevant comments and some more test coverage for the case of an
incrementing serial during apply.
The state returned from the testState helper shouldn't rely on any
mutations caused by WriteState. The Init function (which is analogous to
NewState) should set any required fields.
Previously we relied on a constellation of coincidences for everything to
work out correctly with state serials. In particular, callers needed to
be very careful about mutating states (or not) because many different bits
of code shared pointers to the same objects.
Here we move to a model where all of the state managers always use
distinct instances of state, copied when WriteState is called. This means
that they are truly a snapshot of the state as it was at that call, even
if the caller goes on mutating the state that was passed.
We also adjust the handling of serials so that the state managers ignore
any serials in incoming states and instead just treat each Persist as
the next version after what was most recently Refreshed.
(An exception exists for when nothing has been refreshed, e.g. because
we are writing a state to a location for the first time. In that case
we _do_ trust the caller, since the given state is either a new state
or it's a copy of something we're migrating from elsewhere with its
state and lineage intact.)
The intent here is to allow the rest of Terraform to not worry about
serials and state identity, and instead just treat the state as a mutable
structure. We'll just snapshot it occasionally, when WriteState is called,
and deal with serials _only_ at persist time.
This is intended as a more robust version of #15423, which was a quick
hotfix to an issue that resulted from our previous sloppy handling
of state serials but arguably makes the problem worse by depending on
an additional coincidental behavior of the local backend's apply
implementation.
Skips checksum validation if the `TF_SKIP_PROVIDER_VERIFY` environment variable is set. Undocumented variable, as the primary goal is to significantly improve the local provider development workflow.
When the InstanceState.Meta fields are marshaled, numeric values may
change types. The timeout system currently inserts integer values, which
will be unmarshaled as float64s.
To ensure that a state which has round-tripped through json is equal to
itself, compare the json representation of the Meta values.
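A small sketch of that comparison approach (illustrative helper, not the
actual test code):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// metaValueEqual compares two Meta values by their JSON encoding, so an
// int stored before a round trip compares equal to the float64 that
// encoding/json produces when the state is read back.
func metaValueEqual(a, b interface{}) bool {
	aj, errA := json.Marshal(a)
	bj, errB := json.Marshal(b)
	if errA != nil || errB != nil {
		return false
	}
	return bytes.Equal(aj, bj)
}

func main() {
	var original interface{} = 300         // what the timeout code inserted
	var decoded interface{} = float64(300) // what json.Unmarshal gives back
	fmt.Println(original == decoded)               // false: types differ
	fmt.Println(metaValueEqual(original, decoded)) // true: same JSON form
}
```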
Added a new test that ensures that pre/post-diff hooks are not called
when EvalDiff is run with Stub set, tested through a full refresh run.
This helps test the expected behaviour of EvalDiff itself, versus the
end result of the diff being counted in a plan, which is what the
TestLocal_planScaleOutNoDupeCount test in backend/local checks.
Rather than overloading InstanceDiff with a "Stub" attribute that is
going to be largely meaningless, we are just going to skip
pre/post-diff hooks altogether. This is under the notion that we will
eventually not need to "stub" a diff for scale-out, stateless nodes on
refresh at all; diff behaviour won't be necessary at that point, so we
should not assume that hooks will run at this stage anyway.
Also as part of this removed the CountHook test that is now failing
because CountHook is out of scope of the new behaviour.
This should make things a bit more clear as to what we are doing in the
EvalTree scale-out test - ensuring that we get the correct eval sequence
for a node with no state through EvalTree.
During plan and apply, because the provider constraints need to be built
from a plan, they are not checked until the terraform.Context is
created. Since the context is always requested by the backend during the
Operation, the backend needs to be responsible for generating contextual
error messages for the user.
Instead of formatting the ResolveProviders errors during NewContext,
return a special error type, ResourceProviderError to signal that
init will be required. The backend can then extract and format the
errors.
This is a specialized thin wrapper around parseResourceAddressInternal
that can be used to obtain a ResourceAddress from the keys in
ModuleDiff.Resources.
This is not something we'd ideally expose, but since the internal address
format is already exposed in the ModuleDiff object this ends up being
necessary to process the ModuleDiff from other packages, e.g. for
display in the UI.
Lexicographic sorting by the string form produces the wrong result because
[9] sorts after [10], so this custom comparison function takes that into
account and compares each portion separately to get a more intuitive
result.
This transformer is no longer needed, as we are not transforming
scale-out resource nodes into plannable nodes anymore, but rather just
taking a different eval sequence for resource refresh nodes with no
state.
This test ensures that the right EvalSequence gets set for a refresh
node with no state. This will ultimately assert that nodes on scale out
will not go down the regular refresh path, which would result in an
error due to the nil state - instead, we stub this node so that we get a
diff on it that can be used to effect computed/unknown values on
interpolations that may depend on this node.
Since the transformer that changed stateless nodes in refresh to
NodePlannableResourceInstance is not being used anymore, this test
needed to be adjusted to ensure that the right output was expected.
Changed the language of this field to indicate that this diff is not a
"real" diff, in that it should not be acted on, versus a "quiet" mode,
which would simply indicate acting silently.
This fixes a bug with the new refresh graph behaviour where a resource
was being counted twice in the UI as part of being scaled out:
* We are no longer transforming refresh nodes without state to
plannable resources (the transformer will be removed shortly)
* A Quiet flag has been added to EvalDiff and InstanceDiff - this
allows for the flagging of a diff that should not be treated as real
diff for purposes of planning
* When there is no state for a refresh node now, a new path is taken
that is similar to plan, but flags Quiet, and does nothing with the
diff afterwards.
Tests pending - light testing has confirmed this should fix the double
count issue, but we should have some tests to actually confirm the bug.
Previously the behavior for -target when given a module address was to
target only resources directly within that module, ignoring any resources
defined in child modules.
This behavior turned out to be counter-intuitive, since users expected
the -target address to be interpreted hierarchically.
We'll now use the new "Contains" function for addresses, which provides
a hierarchical "containment" concept that is more consistent with user
expectations. In particular, it allows module.foo to match
module.foo.module.bar.aws_instance.baz, where before that would not have
been true.
Since Contains isn't commutative (unlike Equals) this requires some
special handling for targeting specific indices. When given an argument
like -target=aws_instance.foo[0], the initial graph construction (for
both plan and refresh) is for the resource nodes from configuration, which
have not yet been expanded to separate indexed instances. Thus we need
to do the first pass of TargetsTransformer in mode where indices are
ignored, with the work then completed by the DynamicExpand method which
re-applies the TargetsTransformer in index-sensitive mode.
This is a breaking change for anyone depending on the previous behavior
of -target, since it will now select more resources than before. There is
no way provided to obtain the previous behavior. Eventually we may support
negative targeting, which could then combine with positive targets to
regain the previous behavior as an explicit choice.
This is similar in purpose to Equals but it takes a hierarchical approach
where modules contain their child modules, resources are contained by
their modules, and indexed resource instances are contained by their
resource names.
Unlike "Equals", Contains is intended to be transitive, so if A contains B
and B contains C, then C necessarily contains A. It is also directional:
if A contains B then B does not also contain A unless A and B are
identical. This results in more intuitive behavior for use-cases where
the goal is to select a portion of the address space for an operation.
As part of our terminology shift, the interpolation variable for the name
of the current workspace changes to terraform.workspace. The old name
continues to be supported for compatibility.
We can't generate a deprecation warning from here so for now we'll just
silently accept terraform.env as an alias, but not mention it at all in
the error message in the hope that its use phases out over time before we
actually remove it.
The information stored in a plan is tightly coupled to the Terraform core
and provider plugins that were used to create it, since we have no
mechanism to "upgrade" a plan to reflect schema changes and so mismatching
versions are likely to lead to the "diffs didn't match during apply"
error.
To allow us to catch this early and return an error message that _doesn't_
say it's a bug in Terraform, we'll remember the Terraform version and
plugin binaries that created a particular plan and then require that
those match when loading the plan in order to apply it.
The planFormatVersion is increased here so that plan files produced by
earlier Terraform versions _without_ this information won't be accepted
by this new version, and also that older versions won't try to process
plans created by newer versions.
When set, this information gets passed on to the provider resolver as
part of the requirements information, causing us to reject any plugins
that do not match during initialization.
Previously one of the errors had a built-in context message and the other
did not, making it hard for callers to present a user-friendly message
in both cases.
Now we generate an error message of the same form in both cases, with one
case providing additional information. Ideally the main case would be
able to give more specific guidance too, but that's hard to achieve with
the current regexp-based parsing implementation.
This is a useful building block for filtering configuration based on a
resource address. It is similar in principle to state filtering, but for
specific resource configuration blocks.
This allows growing the scope of a resource address to include all of the
resources in the same module as the targeted resource. This is useful to
give context in error messages.
The resource address documentation defines a resource address as being in
two parts: the module path and the resource spec. The resource spec can
be omitted, which represents addressing _all_ resources in a module.
In some cases (such as import) it doesn't make sense to address an entire
module, so this helper makes it easy for validation code to check for
this to reject insufficiently-specific resource addresses.
We're going to use config to determine provider dependencies, so we need
to always provide a config when instantiating a context or we'll end up
loading no providers at all.
We previously had a test for running "terraform import -config=''" to
disable the config entirely, but this test is now removed because it makes
no sense. The actual functionality it's testing still remains for now,
but it will be removed in a subsequent commit when we start requiring that
a resource to be imported must already exist in configuration.
Rather than providing an already-resolved map of plugins to core, we now
provide a "provider resolver" which knows how to resolve a set of provider
dependencies, to be determined later, and produce that map.
This requires the context to be instantiated in a different way, so this
very noisy diff is a mostly-mechanical update of all of the existing
places where contexts get created for testing, using some adapted versions
of the pre-existing utilities for passing in mock providers.
Previously the set of providers was fixed early on in the command package
processing. In order to be version-aware we need to defer this work until
later, so this interface exists so we can hold on to the possibly-many
versions of plugins we have available and then later, once we've finished
determining the provider dependencies, select the appropriate version of
each provider to produce the final set of providers to use.
This commit establishes the use of this new mechanism, and thus populates
the provider factory map with only the providers that result from the
dependency resolution process.
This disables support for internal provider plugins, though the
mechanisms for building and launching these are still here vestigially,
to be cleaned up in a subsequent commit.
This also adds a new awkward quirk to the "terraform import" workflow
where one can't import a resource from a provider that isn't already
mentioned (implicitly or explicitly) in config. We will do some UX work
in subsequent commits to make this behavior better.
This breaks many tests due to the change in interface, but to keep this
particular diff reasonably easy to read the test fixes are split into
a separate commit.
ResourceProviderResolver is an extra level of indirection before we
get to a map[string]ResourceProviderFactory, which accepts a map of
version constraints and uses it to choose from potentially-many available
versions of each provider to produce a single ResourceProviderFactory
for each one requested.
As of this commit the ResourceProviderResolver interface is not used. In
a future commit the ContextOpts.Providers map will be replaced with a
resolver instance, with the creation of the factory delayed until the
version constraints have been resolved.
Previously the Type of a ResourceState was generally ignored, but we're
now starting to use it to figure out which providers are needed to
support the resources in state so our tests need to set it accurately
in order to get the expected result.
This new private function takes a configuration tree and a state structure
and finds all of the explicit and implied provider dependencies
represented, returning them as a moduledeps.Module tree structure.
It annotates each dependency with a "reason", which is intended to be
useful to a user trying to figure out where a particular dependency is
coming from, though we don't yet have any UI to view this.
Nothing calls this yet, but a subsequent commit will use the result of
this to produce a constraint-conforming map of provider factories during
context initialization.
Previously the logic for inferring a provider type from a resource name
was buried in a utility function in the 'terraform' package. Instead here we
lift it up into the 'config' package where we can make broader use of it
and where it's easier to discover.
* core: Add 'UserAgentString' helper function to generate a standard UserAgent string. Example generation: 'Terraform 0.9.7-dev (go1.8.1)'
* provider/openstack: Add Terraform version to UserAgent string
Prior to Terraform 0.7, lists in Terraform were just a shallow abstraction
on top of strings with a magic delimiter between items. Wrapping a single
string in brackets in the configuration was Terraform's prompt that it
needed to split the string on that delimiter during interpolation.
In 0.7, when first-class lists were added, this convention was preserved
by flattening lists-of-lists by one level when they were encountered in
configuration. However, there was an oversight in that change where it
did not correctly handle the case where the inner list was unknown.
In #14135 we removed some code that was flattening partially-unknown lists
into fully-unknown (untyped) values. This inadvertently exposed the missed
case from the previous paragraph, causing issues for list-wrapped splat
expressions with unknown members. While this worked fine for resources,
due to some fixup done inside helper/schema, this did not work for other
interpolation contexts such as module blocks.
Various attempts to fix this up and restore the flattening behavior
selectively were unsuccessful, due to a proliferation of assumptions all
over the core code that would be too risky to change just to fix this bug.
This change, then, takes the different approach of removing the
requirement that splats be presented inside list brackets. This
requirement didn't make much sense anymore anyway, since no other
list-returning expression had this constraint and so the rest of Terraform
was already successfully dealing with both cases.
This leaves us with two different scenarios:
- For resource arguments, existing normalization code in helper/schema
does its own flattening that preserves compatibility with the common
practice of using bracketed splats. This change proves this with a test
within the "test" provider that exercises the whole Terraform core and
helper/schema stack that assigns bracketed splats to list and set
attributes.
- For arguments in other blocks, such as in module callsites, the
interpolator's own flattening behavior applies to known lists,
preserving compatibility with configurations from before
partially-computed splats were possible, but those wishing to use
partially-computed splats are required to drop the surrounding brackets.
This is less concerning because this scenario was introduced only in
0.9.5, so the scope for breakage is limited to those who adopted this
new feature quickly after upgrading.
As of this commit, the recommendation is to stop using brackets around
splats but the old form continues to be supported for backward
compatibility. In a future _major_ version of Terraform we will probably
phase out this legacy form to improve consistency, but for now both
forms are acceptable at the expense of some (pre-existing) weird behavior
when _actual_ lists-of-lists are used.
This addresses #14521 by officially adopting the suggested workaround of
dropping the brackets around the splat. However, it doesn't yet allow
passing of a partially-unknown list between modules: that still violates
assumptions in Terraform's core, so for the moment partially-unknown lists
work only within a _single_ interpolation expression, and cannot be
passed around between expressions. Until more holistic work is done to
improve Terraform's type handling, passing a partially-unknown splat
through to a module will result in a fully-unknown list emerging on
the other side, just as was the case before #14135; this change just
addresses the fact that this was failing with an error in 0.9.5.
Instead of using a hardcoded version prerelease string, which makes release automation difficult, set the version prerelease string from an environment variable via the go linker tool during compile time.
The environment variable `TF_RELEASE` should only be set via the `make bin` target, and thus leaves the version prerelease string unset. Otherwise, when running a local compile of terraform via the `make dev` makefile target, the version prerelease string is set to `"dev"`, as usual.
This also requires some changes to both the circonus and postgresql providers, as they directly used the `VersionPrerelease` constant. We now simply call the `VersionString()` function, which returns the proper interpolated version string with the prerelease string populated correctly.
`TF_RELEASE` is unset:
```sh
$ make dev
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/05/22 10:38:19 Generated command/internal_plugin_list.go
==> Removing old directory...
==> Building...
Number of parallel builds: 3
--> linux/amd64: github.com/hashicorp/terraform
==> Results:
total 209M
-rwxr-xr-x 1 jake jake 209M May 22 10:39 terraform
$ terraform version
Terraform v0.9.6-dev (fd472e4a86500606b03c314f70d11f2bc4bc84e5+CHANGES)
```
`TF_RELEASE` is set (mimicking the `make bin` target):
```sh
$ TF_RELEASE=1 make dev
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/05/22 10:40:39 Generated command/internal_plugin_list.go
==> Removing old directory...
==> Building...
Number of parallel builds: 3
--> linux/amd64: github.com/hashicorp/terraform
==> Results:
total 121M
-rwxr-xr-x 1 jake jake 121M May 22 10:42 terraform
$ terraform version
Terraform v0.9.6
```
For child modules, a ModuleState isn't allocated until the first time a
module instance is inserted into the state under the module's path.
Normally interpolations of resource attributes are delayed until at least
one resource has been created due to the nature of the dependency graph,
but if the interpolation value is a multi-var (splat) then it is possible
that the referenced resource has count=0 and thus created _no_ resource
states when it was visited.
Previously we would crash when trying to access the resource map for the
nil module in order to count how many instances are present. Since we know
there can't be any instances present in a nil module, we now preempt
this crash by returning zero early.
This edge-case does not apply to the root module because its ModuleState
is allocated as part of initializing the main State instance.
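A rough sketch of the early return described above, with a minimal
stand-in for the module state type:

```go
package main

import (
	"fmt"
	"strings"
)

// moduleState is a minimal stand-in with only what this sketch needs.
type moduleState struct {
	Resources map[string]struct{}
}

// countInstances returns how many instances of the named resource exist in
// the module. If no ModuleState has been allocated yet (mod is nil) there
// can't be any instances, so return zero instead of dereferencing nil.
func countInstances(mod *moduleState, name string) int {
	if mod == nil {
		return 0
	}
	n := 0
	for key := range mod.Resources {
		if key == name || strings.HasPrefix(key, name+".") {
			n++
		}
	}
	return n
}

func main() {
	fmt.Println(countInstances(nil, "aws_instance.foo")) // 0, no crash
}
```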
This fixes #14438.
Currently, the refresh graph uses the resources from state as a base,
with data sources then layered on. Config is not consulted for resources
and hence new resources that are added with count (or any new resource
from config, for that matter) do not get added to the graph during
refresh.
This is leading to issues with scale in and scale out when the same
value for count is used in both resources, and data sources that may
depend on that resource (and possibly vice versa). While the resources
exist in config and can be used, the fact that ConfigTransformer for
resources is missing means that they don't get added into the graph,
leading to "index out of range" errors and what not.
Further to that, if we add these new resources to the graph for scale
out, considerations need to be taken for scale in as well, which are not
being caught 100% by the current implementation of
NodeRefreshableDataResource. Scale-in resources should be treated as
orphans, which according to the instance-form NodeRefreshableResource
node, should be NodeDestroyableDataResource nodes, but this logic
is currently not rolled into NodeRefreshableDataResource. This causes
issues on scale-in in the form of race-ish "index out of range" errors
again.
This commit updates the refresh graph so that StateTransformer is no
longer used as the base of the graph. Instead, we add resources from the
state and config in a hybrid fashion:
* First off, resource nodes are added from config, but only if
resources currently exist in state. NodeRefreshableManagedResource
is a new expandable resource node that will expand count and add
orphans from state. Any count-expanded node that has config but no
state is also transformed into a plannable resource, via a new
ResourceRefreshPlannableTransformer.
* The NodeRefreshableDataResource node type will now add count orphans
as NodeDestroyableDataResource nodes. This achieves the same effect
as if the data sources were added by StateTransformer, but ensures
there are no races in the dependency chain, with the added benefit of
directing these nodes straight to the proper
NodeDestroyableDataResource node.
* Finally, config orphans (nodes that don't exist in config anymore
period) are then added, to complete the graph.
This should ensure as much as possible that there is a refresh graph
that best represents both the current state and config with updated
variables and counts.
The previous behavior of targets was that targeting a particular node
would implicitly target everything it depends on. This makes sense when
the dependencies in question are between resources, since we need to
make sure all of a resource's dependencies are in place before we can
create or update it.
However, it had the undesirable side-effect that targeting a resource
would _exclude_ any outputs referring to it, since the dependency edge
goes from output to resource. This then causes the output to be "stale",
which is problematic when outputs are being consumed by downstream
configs using terraform_remote_state.
GraphNodeTargetDownstream allows nodes to opt-in to a new behavior where
they can be targeted by _inverted_ dependency edges. That is, it allows
outputs to be considered targeted if anything they directly depend on
is targeted.
This is different than the implied targeting behavior in the other
direction because transitive dependencies are not considered unless the
intermediate nodes themselves have TargetDownstream. This means that
an output1→output2→resource chain can implicitly target both outputs, but
an output→resource1→resource2 chain _won't_ target the output if only
resource2 is targeted.
This behavior creates a scenario where an output can be visited before
all of its dependencies are ready, since it may have a mixture of both
targeted and untargeted dependencies. This is fine for outputs because
they silently ignore any errors encountered during interpolation anyway,
but other hypothetical future implementers of this interface may need to
be more careful.
This fixes #14186.
This was actually redundant anyway since HIL itself applied a similar
rule where any partially-unknown list would be automatically flattened
to a single unknown value.
However, now we're changing HIL to explicitly permit partially-unknown
lists so that we can allow the index operator [...] to succeed when
applied to one of the elements that _is_ known.
This, in conjunction with hashicorp/hil#51 and hashicorp/hil#52,
fixes #3449.
stringer has changed the boilerplate it generates in a recent version.
We'd previously updated to the new format but accidentally rolled back
to the old while merging a long-running feature branch.
This restores us back to the new format again.
Moving the transformer wholesale looks like it broke some tests, with
some actually doing legit work in normalizing singular resources from a
foo.0 notation to just foo.
Adjusted the TestPlanGraphBuilder to account for the extra
meta.count-boundary nodes in the graph output now, as well as added
another context test that tests this case. It appears the issue happens
during validate, as this is where the state can be altered to a broken
state if things are not properly transformed in the plan graph.
This fixes interpolation issues on grandchild data sources that have
multiple instances (ie: counts). For example, baz depends on bar, which
depends on foo.
In this instance, after an initial TF run is done and state is saved,
the next refresh/plan is not properly transformed, and instead of the
graph/state coming through as data.x.bar.0, it comes through as
data.x.bar. This breaks interpolations that rely on splat operators -
ie: data.x.bar.*.out.
* Revert #11245, #11321, #11498 and #11757
These PRs are all related to issue #11170, for which I would like to propose a different solution than the one currently implemented.
* A different approach to solve #11170
This approach has (IMHO) a few advantages with regards to the solution currently implemented. I will elaborate on this in the PR.
The documentation for Refresh indicates that it will always return a
valid state, but that wasn't true in the case of a graph builder error.
While this same concept wasn't documented for Apply, it was still
assumed in the terraform apply code.
Since the helper testing framework relies on the absence of a state to
determine if it can call Destroy, the Context can't start
returning a state in all cases. Document this, and use the State method
to fetch the correct state value after Apply.
Add a nil check to the WriteState function, so that writing a nil state
is a noop.
Make sure to init before sorting the state, to make sure we're not
attempting to sort nil values. This isn't technically needed with the
current code, but it's just safer in general.
Make sure duplicate depends_on entries are pruned from existing states
on read.
Make sure new state built from configs with multiple references to the
same resource only adds it once to the Dependencies.
duplicate entries could end up in "depends_on" in the state, which could
possibly lead to erroneous state comparisons. Remove them when walking
the graph, and remove existing duplicates when pruning the state.
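The de-duplication itself is straightforward; an illustrative helper (not
the actual state-pruning code) that preserves first-appearance order:

```go
package main

import "fmt"

// uniqueDeps returns deps with duplicate entries removed, preserving the
// original order of first appearance.
func uniqueDeps(deps []string) []string {
	seen := make(map[string]struct{}, len(deps))
	out := make([]string, 0, len(deps))
	for _, d := range deps {
		if _, ok := seen[d]; ok {
			continue
		}
		seen[d] = struct{}{}
		out = append(out, d)
	}
	return out
}

func main() {
	fmt.Println(uniqueDeps([]string{"aws_instance.a", "aws_instance.a", "aws_instance.b"}))
}
```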
Previously this function was depending on the mapstructure behavior of
failing with an error when trying to decode a map into a list or
vice-versa, but mapstructure's WeakDecode behavior changed so that it
will go to greater lengths to coerce the given value to fit into the
target type, causing us to mis-handle certain ambiguous cases.
Here we exert a bit more control over what's going on by using 'reflect'
to first check whether we have a slice or map value and only then try
to decode into one with mapstructure. This allows us to still rely on
mapstructure's ability to decode nested structures but ensure that lists
and maps never get implicitly converted to each other.
Since the validation of connection blocks is delegated to the communicator
selected by "type", we were not previously doing any validation of the
attribute names in these blocks until running provisioners during apply.
Proper validation here requires us to already have the instance state,
since the final connection info is a merge of values provided in config
with values assigned automatically by the resource. However, we can do
some basic name validation to catch typos during the validation pass, even
though semantic validation and checking for missing attributes will still
wait until the provisioner is instantiated.
This fixes #6582 as much as we reasonably can.
This previously lacked tests altogether. This new test verifies the
"happy path", ensuring that both literal and computed values pass through
correctly into the VariableValues map.
This crash resulted because the type switch checked for either of two
types but the type assertion within it assumed only one of them.
A straightforward (if inelegant) fix is to simply duplicate the relevant
case block and change the type assertion, thus allowing the types to match
up in all cases.
This fixes #13297.
During the input walk we stash the values resulting from user input
(if any) in the eval context for use when later walks need to resolve
the provider config.
However, this repository of input results is only able to represent
literal values, since it does not retain the record of which of the keys
have values that are "computed".
Previously we were blindly stashing all of the results, failing to
consider that some of them might be computed. That resulted in the
UnknownValue placeholder being misinterpreted as a literal value when
the data is used later, which ultimately resulted in it clobbering the
actual expression evaluation result and thus causing the provider to
fail to configure itself.
Now we are careful to only retain in this repository the keys whose values
are known statically during the input phase. This eventually gets merged
with the dynamic evaluation results on subsequent walks, with the dynamic
keys left untouched due to their absence from the stored input map.
This fixes #11264.
This method mirrors that of config.Backend, so we can compare the
configuration of a backend read from a config vs that of a backend read
from a state. This will prevent init from reinitializing when using
`-backend-config` options that match the existing state.
golang/tools commit 23ca8a263 changed the format of the leading comment
to comply with some new standards discussed here:
https://golang.org/issue/13560
This is the result of running generate with the latest version of
stringer. Everyone working on Terraform will need to update stringer
after this is merged, to avoid reverting this:
go get -u golang.org/x/tools/cmd/stringer
It appears there are no tests for this as far as I can find.
We change V1 states (very old) to assume a nil path is a root path.
State.Validate() later will catch any duplicate paths.
When transforming a diff from DestroyCreate to a simple Update,
ignore_changes can cause keys from flatmapped objects to be filtered
from the diff. We need to filter each flatmapped container as a whole to
ensure that unchanged keys aren't lost in the update.
ignore_changes is causing changes in other flatmapped sets to be
filtered out incorrectly.
This required fixing the testDiffFn to create diffs which include the
old value, breaking one other test.
Fixes #12836
Realistically, these should be caught during validation anyway. In this
case, this was causing #12836 because refresh with a data source will
attempt to use module variables. I don't see any clear logic to prune
those module variables or not add them, so it's easier to return unknown
to cause the data to be computed and not run.
the terraform package doesn't know about TestProvider, so don't put the
hooks in terraform.MockResourceProvider. Wrap the mock in the test where
we need to check the TestProvider functionality.
Always wait for watchStop to return during context.walk.
Context.walk would often complete immediately after sending the close
signal to watchStop, which would in turn call the deferred releaseRun
cancelling the runContext.
Without any synchronization points after the select statement in
watchStop, that goroutine was not guaranteed to be scheduled
immediately, and in fact it often didn't continue until after the
runContext was canceled. This in turn left the select statement with
multiple successful cases, and half the time it would choose to Stop the
providers.
Stopping the providers after the walk of course didn't cause any
immediate failures, but if there was another walk performed, the
provider StopContext would no longer be valid and could cause
cancellation errors in the provider.
Starting with Go 1.8 betas, we've periodically received SIGQUITs on our
tests in Travis. The stack trace looks like this:
https://gist.github.com/mitchellh/abf09b0980f8ea01269f8d9d6133884d
The tests are timing out! This is a test that hasn't been touched really
in a very long time and has always passed. I've **reproduced this
locally** by setting `GOMAXPROCS=1` and running the test. By yielding
the scheduler in the hot loop, it now passes almost instantly every
time.
Perhaps the test can be written in a different way, but this gets tests
passing and I think will fix our periodic errors.
A couple interpolation tests were using invalid state that didn't match
the config. These will still pass but were flushed out by an attempt to
make this an error. The repl however still required interpolation
without a config, and tests there will provide an indication if this
behavior changes.
It turns out that a few use cases depend on not finding a resource
without an error.
The other code paths had sufficient nil checks for this, but there was
one place where we called Count() that needed to be checked. If the
existence of the resource matters, it would be caught at a higher level
and still return an "unknown resource" error to the user.
Module resource were being sorted lexically by name by the state filter.
If there are 10 or more resources, the order won't match the index
order, and resources will have different indexes in their new location.
Sort the FilterResults by index numerically when the names match.
Clean up the module String output for visual inspection by sorting
Resource name parts numerically when they are an integer value.
Due to the change to `interface{}` we need to use `reflect.DeepEqual`
here. With the restriction of primitive types this should always be
safe. We'll never get functions, channels, etc.
This changes the type of values in Meta for InstanceState to
`interface{}`. They were `string` before.
This will allow richer structures to be persisted to this without
flatmapping them (down with flatmap!). The documentation clearly states
that only primitives/collections are allowed here.
The only thing using this was helper/schema for schema versioning.
Appropriate type checking was added to make this change safe.
The timeout work @catsby is doing will use this for a richer structure.
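A sketch of what the new shape allows, using a stand-in struct; with
interface{} values the maps are compared via reflect.DeepEqual as noted
above, and restricting values to primitives and collections keeps that
well-defined:

```go
package main

import (
	"fmt"
	"reflect"
)

// instanceState is a stand-in showing the shape of the change: Meta values
// are now interface{} rather than string, allowing nested collections of
// primitives without flatmapping them.
type instanceState struct {
	Meta map[string]interface{}
}

func main() {
	a := instanceState{Meta: map[string]interface{}{
		"schema_version": "1",
		"timeouts":       map[string]interface{}{"create": 600, "delete": 300},
	}}
	b := instanceState{Meta: map[string]interface{}{
		"schema_version": "1",
		"timeouts":       map[string]interface{}{"create": 600, "delete": 300},
	}}
	// With interface{} values, == is no longer usable on these maps, so
	// comparisons go through reflect.DeepEqual.
	fmt.Println(reflect.DeepEqual(a.Meta, b.Meta)) // true
}
```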
Fixes #12183
The fix is in flatmap for this but the entire issue is a bit more
complex. Given a schema with a computed set, if you reference it like
this:
lookup(attr[0], "field")
And "attr" contains a computed set within it, it would panic even though
"field" is available. There were a couple avenues I could've taken to
fix this:
1.) Any complex value containing any unknown value at any point is
entirely unknown.
2.) Only the specific part of the complex value is unknown.
I took route 2 so that the above works without any computed (since
"name" is not computed but something else is). This may actually have an
effect on other parts of Terraform configs, however those similar
configs would've simply crashed previously so it shouldn't break any
pre-existing configs.
Fixes #10911
Outputs that aren't targeted shouldn't be included in the graph.
This requires passing targets to the apply graph. This is unfortunate
but long term should be removable since I'd like to move output changes
to the diff as well.
During backend initialization, especially during a migration, there is a
chance that an existing state could be overwritten.
Attempt to get a lock when writing the new state. It would be nice to
always have a lock when reading the states, but the recursive structure
of the Meta.Backend config functions makes that quite complex.
Fixes #11749
I'm **really** surprised this didn't come up earlier.
When only the state is available for a node, the advertised
referenceable name (the name used for dependency connections) included
the module path. This module path is automatically prepended to the
name. This means that probably every non-root resource for state-only
operations (destroys) didn't order properly.
This fixes that by omitting the path properly.
Multiple tests added to verify both graph correctness as well as a
higher level context test.
Will backport to 0.8.x
To avoid chasing down issues like #11635 I'm proposing we disable the
shadow graph for end users now that we have merged in all the new
graphs. I've kept it around and default-on for tests so that we can use
it to test new features as we build them. I think it'll still have value
going forward but I don't want to hold us for making it work 100% with
all of Terraform at all times.
I propose backporting this to 0-8-stable, too.
Fixes #11349
I tracked this bug back to the early 0.7 days so this has been around a
really long time. I wanted to confirm that this wasn't introduced by any
new graph changes and it appears to predate all of that. I couldn't find
a single 0.7.x release where this worked, and I didn't want to go back
to 0.6.x since it was pre-vendoring.
The test case shows the logic the best, but the basic idea is: for
collections that go to zero elements, the "RequiresNew" sameness check
should be ignored, since the new diff can choose to not have that at all
in the diff.
This adds a Meta field (similar to InstanceState.Meta) to InstanceDiff.
This allows providers to store arbitrary k/v data as part of a diff and
have it persist through to the Apply. This will be used by helper/schema
for timeout storage being done by @catsby.
The type here is `map[string]interface{}`. A couple notes:
* **Not using `string`**: The Meta field of InstanceState is a string
value. We've learned that forcing things to strings is bad. Let's
just allow types.
* **Primitives only**: Even though it is type `interface{}`, it must
be able to cleanly pass the go-plugin RPC barrier as well as be
encoded to a file as Gob. Given these constraints, the value must
only comprise of primitive types and collections. No structs,
functions, channels, etc.
Read state would assume that having a reader meant there should be a
valid state. Check for an empty file and return ErrNoState to
differentiate a bad file from an empty one.
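A hedged sketch of the empty-file guard (the sentinel and helper names are
illustrative stand-ins):

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"io/ioutil"
)

// errNoState is an illustrative sentinel that lets callers tell "no state
// yet" apart from a corrupt or unreadable state file.
var errNoState = errors.New("no state")

// readStateBytes reads everything from r and returns errNoState when the
// source is empty, so an empty file isn't reported as a decoding failure.
func readStateBytes(r io.Reader) ([]byte, error) {
	buf, err := ioutil.ReadAll(r)
	if err != nil {
		return nil, err
	}
	if len(buf) == 0 {
		return nil, errNoState
	}
	return buf, nil
}

func main() {
	_, err := readStateBytes(bytes.NewReader(nil))
	fmt.Println(err == errNoState) // true
}
```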
This disables the computed value check for `count` during the validation
pass. This enables partial support for #3888 or #1497: as long as the
value is non-computed during the plan, complex values will work in
counts.
**Notably, this allows data source values to be present in counts!**
The "count" value can be disabled during validation safely because we
can treat it as if any field that uses `count.index` is computed for
validation. We then validate a single instance (as if `count = 1`) just
to make sure all required fields are set.
This switches to the Go "context" package for cancellation and threads
the context through all the way to evaluation to allow behavior based on
stopping deep within graph execution.
This also adds the Stop API to provisioners so they can quickly exit
when stop is called.
Fixes #11212
The import graph builder was missing the transform to setup links to
parent providers, so provider inheritance didn't work properly. This
adds that.
This also removes the `PruneProviderTransform` since that has no value
in this graph since we'll never add an unused provider.
This was possible with test fixtures, but it is also conceivably possible
with older states or corrupted states. We can also extract the type from
the key, so we do that now to make StateFilter more robust.
Removal of empty nested containers from a flatmap would sometimes fail a
sanity check when removed in the wrong order. This would only fail
sometimes due to map iteration. There was also an off-by-one error in
the prefix check which could match the incorrect keys.
When an InstanceState is merged with an InstanceDiff, any maps, arrays, or
sets that no longer exist are shown as empty with a count of 0. If these
are left in the flatmap structure, they will cause errors during
expansion because their existence in the map affects the counts for
parent structures.
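As a rough sketch of the cleanup idea (illustrative only, not the actual
flatmap code), zero-count containers and their child keys are dropped as a
unit, and the prefix keeps its trailing "." so that, for example, "tags"
can never match "tags2":

```go
package main

import (
	"fmt"
	"strings"
)

// removeEmptyContainers deletes any flatmapped container whose count is
// zero, along with all of its child keys, so leftover entries can't skew
// the counts of parent structures during expansion.
func removeEmptyContainers(m map[string]string) {
	for key, val := range m {
		if !strings.HasSuffix(key, ".#") && !strings.HasSuffix(key, ".%") {
			continue // not a list/set/map count key
		}
		if val != "0" {
			continue // container still has elements
		}
		prefix := key[:strings.LastIndex(key, ".")+1] // e.g. "tags."
		for inner := range m {
			if strings.HasPrefix(inner, prefix) {
				delete(m, inner)
			}
		}
	}
}

func main() {
	m := map[string]string{
		"tags.%":    "0",
		"tags.Name": "stale",
		"ports.#":   "2",
		"ports.0":   "80",
		"ports.1":   "443",
	}
	removeEmptyContainers(m)
	fmt.Println(m) // only the ports entries remain
}
```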