Commit Graph

1946 Commits

Author SHA1 Message Date
James Bardin
25f6e61047 release: clean up after v0.10.4 2017-09-06 20:33:48 +00:00
James Bardin
10df48ef40
v0.10.4 2017-09-06 20:22:10 +00:00
Martin Atkins
892f60efe0 core: test that we skip hooks for data source destroy
Data source destroy is an implementation detail and not something that
external callers should see or expect.
2017-09-01 17:55:05 -07:00
Martin Atkins
e7a0aa96c8 core: add testHook for testing correct interaction with hooks 2017-09-01 17:55:05 -07:00
Martin Atkins
6712192724 core: don't advertise data source destroy via hooks
The fact that we clean up data source state by applying a "destroy" action
for them is an implementation detail, and so should not be visible to
outside callers or to the user.

Signalling these as real destroys creates confusion for users because
they see Terraform say things like:

    data.template_file.foo: Refreshing state...

...which, to an understandably-nervous sysadmin, might make them suspect
that the underlying object was deleted, rather than just Terraform's
record of it.
2017-09-01 17:55:05 -07:00
Martin Atkins
d4efc95191 command: show resource actions using resource addresses
Previously we were using the internal resource id syntax in the UI. Now
we'll use the standard user-facing resource address syntax instead.
2017-09-01 17:55:05 -07:00
Martin Atkins
3ea159297c command/format: improve consistency of plan results
Previously the rendered plan output was constructed directly from the
core plan and then annotated with counts derived from the count hook.
At various places we applied little adjustments to deal with the fact that
the user-facing diff model is not identical to the internal diff model,
including the special handling of data source reads and destroys. Since
this logic was just muddled into the rendering code, it behaved
inconsistently with the tally of adds, updates and deletes.

This change reworks the plan formatter so that it happens in two stages:
- First, we produce a specialized Plan object that is tailored for use
  in the UI. This applies all the relevant logic to transform the
  physical model into the user model.
- Second, we do a straightforward visual rendering of the display-oriented
  plan object.

For the moment this is slightly overkill since there's only one rendering
path, but it does give us the benefit of letting the counts be derived
from the same data as the full detailed diff, ensuring that they'll stay
consistent.

Later we may choose to have other UIs for plans, such as a
machine-readable output intended to drive a web UI. In that case, we'd
want the web UI to consume a serialization of the _display-oriented_ plan
so that it doesn't need to re-implement all of these UI special cases.

This introduces to core a new diff action type for "refresh". Currently
this is used _only_ in the UI layer, to represent data source reads.
Later it would be good to use this type for the core diff as well, to
improve consistency, but that is left for another day to keep this change
focused on the UI.
2017-09-01 17:55:05 -07:00
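A hedged Go sketch of the two-stage shape this describes: a display-oriented plan model is built first (including a UI-only "refresh" action for data source reads), and the counts are then derived from the same change list that the detailed rendering consumes. All type and field names here are invented for illustration; this is not the actual command/format code.

```go
package sketch

// displayAction includes a UI-only "refresh" action for data source reads,
// alongside the usual create/update/delete actions.
type displayAction string

const (
	actionCreate  displayAction = "create"
	actionUpdate  displayAction = "update"
	actionDelete  displayAction = "delete"
	actionRefresh displayAction = "refresh"
)

// displayResourceChange is one entry in the UI-oriented plan.
type displayResourceChange struct {
	Addr   string
	Action displayAction
}

// displayPlan is the display-oriented model; because the counts below are
// derived from the same Changes slice the renderer walks, the tally can
// never drift from the detailed diff.
type displayPlan struct {
	Changes []displayResourceChange
}

func (p *displayPlan) counts() (add, change, destroy int) {
	for _, c := range p.Changes {
		switch c.Action {
		case actionCreate:
			add++
		case actionUpdate:
			change++
		case actionDelete:
			destroy++
		}
	}
	return add, change, destroy
}
```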
Martin Atkins
4750f0607d core: stabilize ResourceAddress.Less results
The implementation of ResourceAddress.Less was flawed because it was only
testing each field in the "less than" direction, and falling through in
cases where an earlier field compared greater than a later one.

Now we test for inequality first as the selector, and only fall through
if the two values for a given field are equal.
2017-09-01 17:55:05 -07:00
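A minimal Go sketch of the corrected comparison pattern: test each field for inequality first, and only fall through to the next field when the two values are equal. The fields here are stand-ins, not the real ResourceAddress fields.

```go
package main

import "fmt"

// addr is an illustrative stand-in for a multi-field address type.
type addr struct {
	Path  string
	Name  string
	Index int
}

// less decides at the first field where the two values differ, and falls
// through only when that field compares equal.
func less(a, b addr) bool {
	if a.Path != b.Path {
		return a.Path < b.Path
	}
	if a.Name != b.Name {
		return a.Name < b.Name
	}
	return a.Index < b.Index
}

func main() {
	fmt.Println(less(addr{"module.foo", "aws_instance.bar", 2}, addr{"module.foo", "aws_instance.bar", 10})) // true
}
```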
Martin Atkins
0a342e8dc2 config: allow local value interpolations in count
There is some additional, early validation on the "count" meta-argument
that verifies that only suitable variable types are used, and adding local
values to this whitelist was missed in the initial implementation.
2017-09-01 17:54:05 -07:00
Martin Atkins
8cd0ee80e5 config: merge/append for local values
It seems that this somehow got lost in the commit/rebase shuffle and
wasn't caught by the tests that _did_ make it because they were all using
just one file.

As a result of this bug, locals would fail to work correctly in any
configuration with more than one .tf file.

Along with restoring the append/merge behavior, this also reworks some of
the tests to exercise the multi-file case as better insurance against
regressions of this sort in future.

This fixes #15969.
2017-09-01 17:51:13 -07:00
Martin Atkins
adb6a089ff release: clean up after v0.10.3 2017-08-30 21:56:26 +00:00
Martin Atkins
1511d447e7
v0.10.3 2017-08-30 21:41:28 +00:00
James Bardin
593bf683dc Merge pull request #15448 from hashicorp/jbardin/state-meta-equal
make sure marshaled Meta fields are still equal
2017-08-30 16:00:00 -04:00
Sunny
2d849f8650 command/init: check required_version
Previously we were checking required_version only during "real" operations, and not during initialization. Catching it during init is better because that's the first command users run on a new working directory.
2017-08-28 11:25:16 -07:00
Martin Atkins
c12d64f340 Use t.Helper() in our test helpers
Go 1.9 adds this new function which, when called, marks the caller as
being a "helper function". Helper function stack frames are then skipped
when trying to find a line of test code to blame for a test failure, so
that the code in the main test function appears in the test failure output
rather than a line within the helper function itself.

This covers many -- but probably not all -- of our test helpers across
various packages.
2017-08-28 09:59:30 -07:00
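For reference, a small example of the Go 1.9 API in question; the helper itself is made up, but testing.T.Helper is the real method.

```go
package example

import "testing"

// mustEqual is a hypothetical test helper. Calling t.Helper() marks this
// function as a helper, so a failure is attributed to the caller's line
// in the test rather than to the Fatalf call below.
func mustEqual(t *testing.T, got, want string) {
	t.Helper()
	if got != want {
		t.Fatalf("got %q, want %q", got, want)
	}
}

func TestExample(t *testing.T) {
	mustEqual(t, "a", "a")
}
```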
Martin Atkins
1da54955c6 core: remove shadow graph infrastructure
The shadow graph was incredibly useful during the 0.7 cycle but these days
it is idle, since we're not planning any significant graph-related changes
for the foreseeable future.

The shadow graph infrastructure is somewhat burdensome since any change
to the ResourceProvider interface must have shims written. Since we _are_
expecting changes to the ResourceProvider interface in the next few
releases, I'm calling "YAGNI" on the shadow graph support to reduce our
maintenance burden.

If we do end up wanting to use shadow graph again in future, we'll always
be able to pull it out of version control and then make whatever changes
we skipped making in the meantime, but we can avoid that cost in the
meantime while we don't have any evidence that we'll need to pay it.
2017-08-28 08:40:22 -07:00
Martin Atkins
3a30bfe845 core: evaluate locals and return them for interpolation
We stash the locals in the module state in a map that is ignored for JSON
serialization. We don't include locals in the persisted state because they
can be trivially recomputed and this allows us to assume that they will
pass through verbatim, without any normalization or other transforms
caused by the JSON serialization.

From a user standpoint a local is just a named alias for an expression,
so it's desirable that the result passes through here in as raw a form
as possible, so it behaves as closely as possible to simply using the
given expression directly.
2017-08-21 15:15:25 -07:00
Martin Atkins
5b66953d1d core: graph nodes and edges for local values
A local value is similar to an output in that it exists only within state
and just always evaluates its value as best it can with the current state.
Therefore it has a single graph node type for all walks, which will
deal with that evaluation operation.
2017-08-21 15:15:25 -07:00
James Bardin
f1042a1338 Merge pull request #15835 from hashicorp/jbardin/mock-provider-race
fix race in MockResourceProvider
2017-08-16 16:33:58 -04:00
Martin Atkins
bf97909b8a core: document all of the fields on the Plan struct 2017-08-16 13:30:02 -07:00
James Bardin
db6ef69e5b fix race in MockResourceProvider
Input can be called concurrently from multiple nodes in the graph.
2017-08-16 15:19:17 -04:00
James Bardin
08339b004b release: clean up after v0.10.2 2017-08-16 17:38:16 +00:00
James Bardin
a1d06eb973
v0.10.2 2017-08-16 17:25:37 +00:00
James Bardin
bb00fd47c0 release: clean up after v0.10.1 2017-08-15 22:21:17 +00:00
James Bardin
f6d16263a0
v0.10.1 2017-08-15 21:50:02 +00:00
Radek Simko
93613ee526
terraform+dag: Set lower log levels 2017-08-14 11:43:45 +02:00
James Bardin
1664d4e228 test with bad interpolation during Input
The interpolation going into a module variable here will be valid after
Refresh, but Refresh doesn't happen for the Input phase.
2017-08-10 14:14:29 -04:00
James Bardin
97bb7cb65c Don't allow interpolation failure to stop Input
Allow module variables to fail interpolation during input. This is OK
since they will be verified again during Plan.  Because Input happens
before Refresh, module variable interpolation can fail when referencing
values that aren't yet in the state, but are expected after Refresh.
2017-08-10 14:14:29 -04:00
James Bardin
11668d5c8a Merge pull request #15599 from alrs/terraform-tests-swallowed-errors
Fix swallowed tests in terraform package tests
2017-08-04 12:09:00 -04:00
Jake Champlin
8e6a0845c1
Cleanup after 0.10 release 2017-08-02 14:40:37 -04:00
Lars Lehtonen
822a98a0b4
Fix swallowed tests in terraform package tests 2017-07-20 02:23:43 -07:00
Martin Atkins
1fac5de738 release: clean up after v0.10.0-rc1 2017-07-19 14:07:06 -07:00
Martin Atkins
243951c70a
v0.10.0-rc1 2017-07-19 14:05:43 -07:00
James Bardin
a1727ec4c2 Add warning to mismatched plan state
Forward-port the plan state check from the 0.9 series.
0.10 has improved the serial handling for the state, so this adds
relevant comments and some more test coverage for the case of an
incrementing serial during apply.
2017-07-17 10:41:29 -04:00
James Bardin
501cbeaffe testState shouldn't rely on mods from WriteState
The state returned from the testState helper shouldn't rely on any
mutations caused by WriteState. The Init function (which is analogous to
NewState) should set any required fields.
2017-07-05 17:47:05 -04:00
Martin Atkins
4d53eaa6df state: more robust handling of state Serial
Previously we relied on a constellation of coincidences for everything to
work out correctly with state serials. In particular, callers needed to
be very careful about mutating states (or not) because many different bits
of code shared pointers to the same objects.

Here we move to a model where all of the state managers always use
distinct instances of state, copied when WriteState is called. This means
that they are truly a snapshot of the state as it was at that call, even
if the caller goes on mutating the state that was passed.

We also adjust the handling of serials so that the state managers ignore
any serials in incoming states and instead just treat each Persist as
the next version after what was most recently Refreshed.

(An exception exists for when nothing has been refreshed, e.g. because
we are writing a state to a location for the first time. In that case
we _do_ trust the caller, since the given state is either a new state
or it's a copy of something we're migrating from elsewhere with its
state and lineage intact.)

The intent here is to allow the rest of Terraform to not worry about
serials and state identity, and instead just treat the state as a mutable
structure. We'll just snapshot it occasionally, when WriteState is called,
and deal with serials _only_ at persist time.

This is intended as a more robust version of #15423, which was a quick
hotfix to an issue that resulted from our previous sloppy handling
of state serials but arguably makes the problem worse by depending on
an additional coincidental behavior of the local backend's apply
implementation.
2017-07-05 12:34:30 -07:00
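A rough sketch of the snapshot-on-write model described above, with invented types and only the serial/snapshot behaviour shown; the real state manager interfaces are richer than this.

```go
package statesketch

// state stands in for terraform.State; only what the sketch needs is here.
type state struct {
	Serial int64
	Data   map[string]string
}

func (s *state) deepCopy() *state {
	c := &state{Serial: s.Serial, Data: map[string]string{}}
	for k, v := range s.Data {
		c.Data[k] = v
	}
	return c
}

// manager is a hypothetical state manager following the described model.
type manager struct {
	refreshed *state // what was most recently read from storage, if anything
	written   *state // snapshot taken at WriteState time
}

// WriteState snapshots the given state, so later mutations by the caller
// cannot change what will eventually be persisted.
func (m *manager) WriteState(s *state) {
	m.written = s.deepCopy()
}

// Persist ignores the serial on the incoming state and treats the snapshot
// as the next version after whatever was most recently refreshed.
func (m *manager) Persist() *state {
	out := m.written.deepCopy()
	if m.refreshed != nil {
		out.Serial = m.refreshed.Serial + 1
	}
	// ...write `out` to the backend here...
	return out
}
```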
Jake Champlin
9944ea6886
core: Skip provider checksum validation based on env var
Skips checksum validation if the `TF_SKIP_PROVIDER_VERIFY` environment variable is set. Undocumented variable, as the primary goal is to significantly improve the local provider development workflow.
2017-07-03 13:59:13 -04:00
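A minimal sketch of the guard this implies; the surrounding function is invented, but the environment variable name comes from the commit above.

```go
package main

import (
	"fmt"
	"os"
)

// verifyProviderChecksum is a hypothetical stand-in for the real
// verification step.
func verifyProviderChecksum(path string) error {
	// Skip validation entirely when the undocumented variable is set,
	// which speeds up local provider development builds.
	if os.Getenv("TF_SKIP_PROVIDER_VERIFY") != "" {
		return nil
	}
	// ...the real checksum comparison would happen here...
	return fmt.Errorf("checksum mismatch for %s (sketch)", path)
}

func main() {
	fmt.Println(verifyProviderChecksum("terraform-provider-example"))
}
```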
James Bardin
124b80398e make sure marshaled Meta fields are still equal
When the InstanceState.Meta fields are marshaled, numeric values may
change types. The timeout system currently inserts integer values, which
will be unmarshaled as float64s.

To ensure that a state which has round-tripped through json is equal to
itself, compare the json representation of the Meta values.
2017-06-30 18:29:42 -04:00
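A standalone Go illustration of why the comparison goes through JSON: an int stored in a map[string]interface{} comes back as float64 after a round trip, so direct comparison fails while comparing the marshaled bytes does not. This is an example, not the actual test helper.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

func main() {
	meta := map[string]interface{}{"create_timeout": 600} // stored as int

	// Round-trip through JSON: 600 comes back as float64(600).
	raw, _ := json.Marshal(meta)
	var roundTripped map[string]interface{}
	json.Unmarshal(raw, &roundTripped)

	// Direct comparison fails because the dynamic types differ.
	fmt.Println(meta["create_timeout"] == roundTripped["create_timeout"]) // false

	// Comparing the JSON representations treats the two as equal.
	a, _ := json.Marshal(meta)
	b, _ := json.Marshal(roundTripped)
	fmt.Println(bytes.Equal(a, b)) // true
}
```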
Martin Atkins
45a4ba1ea7 Merge #15344: Avoid double-counting resources to create 2017-06-27 10:48:45 -07:00
Chris Marchesi
0ca5eab545 core: Context-level test for stub EvalDiff
Added a new test that ensures that pre/post-diff hooks are not called
when EvalDiff is run with Stub set, tested through a full refresh run.
This helps test the expected behaviour of EvalDiff itself, versus the
end result of the diff being counted in a plan, which is what the
TestLocal_planScaleOutNoDupeCount test in backend/local checks.
2017-06-24 22:41:12 -07:00
Chris Marchesi
5654a676d9 core: Skip diff hooks for stubs on eval altogether
Rather than overloading InstanceDiff with a "Stub" attribute that is
going to be largely meaningless, we are just going to skip
pre/post-diff hooks altogether. This is under the notion that we will
eventually not need to "stub" a diff for scale-out, stateless nodes on
refresh at all; since diff behaviour won't be necessary at that point,
we should not assume that hooks will run at this stage anyway.

Also as part of this removed the CountHook test that is now failing
because CountHook is out of scope of the new behaviour.
2017-06-24 08:01:17 -07:00
Chris Marchesi
656115387b core: Simplify TestNodeRefreshableManagedResourceEvalTree_scaleOut
This should make things a bit more clear as to what we are doing in the
EvalTree scale-out test - ensuring that we get the correct eval sequence
for a node with no state through EvalTree.
2017-06-23 17:37:51 -07:00
Chris Marchesi
b486780cf0 core: evalTreeManagedScaleOutResource -> evalTreeManagedResourceNoState
We want to be a bit more explicit here as to when this eval sequence is
carried out. The why is now in the top-level comments.
2017-06-23 17:35:30 -07:00
Martin Atkins
c4857bdbaf release: clean up after v0.10.0-beta1 2017-06-22 22:20:23 +00:00
Martin Atkins
a26ff83279
v0.10.0-beta1 2017-06-22 20:55:33 +00:00
James Bardin
77cbd3bfc8 Merge pull request #15371 from hashicorp/jbardin/reinit-error
better UI output for requesting plugin related init
2017-06-22 15:53:13 -04:00
James Bardin
b14677bd9a look for new error output 2017-06-22 15:37:32 -04:00
James Bardin
5be15ed77c have the local backend provide a plugin init msg
During plan and apply, because the provider constraints need to be built
from a plan, they are not checked until the terraform.Context is
created. Since the context is always requested by the backend during the
Operation, the backend needs to be responsible for generating contextual
error messages for the user.

Instead of formatting the ResolveProviders errors during NewContext,
return a special error type, ResourceProviderError, to signal that
init will be required. The backend can then extract and format the
errors.
2017-06-22 13:15:30 -04:00
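A hedged sketch of the signalling pattern described: context creation returns a distinguishable error type carrying the individual resolution failures, and the backend checks for it to print its own contextual message. The type name, fields, and message text here are invented for illustration.

```go
package main

import (
	"errors"
	"fmt"
)

// pluginResolutionError is a hypothetical stand-in for the special error
// type mentioned above; it carries the individual provider errors.
type pluginResolutionError struct {
	Errors []error
}

func (e *pluginResolutionError) Error() string {
	return fmt.Sprintf("%d provider(s) could not be resolved", len(e.Errors))
}

// newContext pretends that provider resolution failed.
func newContext() error {
	return &pluginResolutionError{Errors: []error{
		errors.New("provider aws: no suitable version installed"),
	}}
}

func main() {
	err := newContext()
	// The backend inspects the error type and formats its own guidance.
	if resErr, ok := err.(*pluginResolutionError); ok {
		fmt.Println("Plugin initialization required. Details:")
		for _, e := range resErr.Errors {
			fmt.Println("  -", e)
		}
		return
	}
	fmt.Println(err)
}
```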
Rob Phoenix
de2927d0b4 core: fix some typos in comments 2017-06-22 07:09:07 -07:00
Martin Atkins
53c0ff4017 core: ParseResourceAddressForInstanceDiff function
This is a specialized thin wrapper around parseResourceAddressInternal
that can be used to obtain a ResourceAddress from the keys in
ModuleDiff.Resources.

This is not something we'd ideally expose, but since the internal address
format is already exposed in the ModuleDiff object this ends up being
necessary to process the ModuleDiff from other packages, e.g. for
display in the UI.
2017-06-22 07:03:23 -07:00
Martin Atkins
482c1f1ea5 core: ResourceAddress.Less for sorting resource addresses
Lexicographic sorting by the string form produces the wrong result because
[9] sorts after [10], so this custom comparison function takes that into
account and compares each portion separately to get a more intuitive
result.
2017-06-22 07:03:23 -07:00
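A short Go demonstration of the pitfall: comparing the rendered addresses as strings puts [10] before [9], while comparing the index numerically gives the intuitive order. Not the actual Terraform code.

```go
package main

import "fmt"

func main() {
	// Lexicographically, '1' < '9', so "[10]" sorts before "[9]".
	fmt.Println("aws_instance.foo[10]" < "aws_instance.foo[9]") // true, which is not what we want

	// Comparing the index as a number restores the intuitive ordering.
	fmt.Println(9 < 10) // true: [9] should come before [10]
}
```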
Chris Marchesi
0e3aedcea3 core: Remove ResourceRefreshPlannableTransformer
This transformer is no longer needed, as we are not transforming
scale-out resource nodes into plannable nodes anymore, but rather just
taking a different eval sequence for resource refresh nodes with no
state.
2017-06-22 04:14:35 -07:00
Chris Marchesi
01e3386e13 core: Add resource count scale-out EvalTree test
This test ensures that the right EvalSequence gets set for a refresh
node with no state. This will ultimately assert that nodes on scale out
will not go down the regular refresh path, which would result in an
error due to the nil state - instead, we stub this node so that we get a
diff on it that can be used to effect computed/unknown values on
interpolations that may depend on this node.
2017-06-22 03:44:16 -07:00
Chris Marchesi
42ebbc6e0e core: ScaleIn should have been ScaleOut
We are actually acting on/fixing the scale-out here (ie: new child node
from count with no state), not scale-in.
2017-06-22 03:43:05 -07:00
Chris Marchesi
565790d8da core: Fix scale-out refresh graph test
Since the transformer that changed stateless nodes in refresh to
NodePlannableResourceInstance is not being used anymore, this test
needed to be adjusted to ensure that the right output was expected.
2017-06-21 09:15:50 -07:00
Chris Marchesi
45528b2217 core: Instance/EvalDiff.Quiet -> Stub
Changed the language of this field to indicate that this diff is not a
"real" diff, in that it should not be acted on, versus a "quiet" mode,
which would indicate just simply to act silently.
2017-06-21 09:15:08 -07:00
Chris Marchesi
eef933f2a7 core: Don't count scaled-out resources twice in the UI
This fixes a bug with the new refresh graph behaviour where a resource
was being counted twice in the UI as part of being scaled out:

 * We are no longer transforming refresh nodes without state to
   plannable resources (the transformer will be removed shortly)
 * A Quiet flag has been added to EvalDiff and InstanceDiff - this
   allows for the flagging of a diff that should not be treated as a real
   diff for purposes of planning
 * When there is no state for a refresh node now, a new path is taken
   that is similar to plan, but flags Quiet, and does nothing with the
   diff afterwards.

Tests pending - light testing has confirmed this should fix the double
count issue, but we should have some tests to actually confirm the bug.
2017-06-20 07:37:32 -07:00
Martin Atkins
a8c58b081c core: -target option to also select resources in descendant modules
Previously the behavior for -target when given a module address was to
target only resources directly within that module, ignoring any resources
defined in child modules.

This behavior turned out to be counter-intuitive, since users expected
the -target address to be interpreted hierarchically.

We'll now use the new "Contains" function for addresses, which provides
a hierarchical "containment" concept that is more consistent with user
expectations. In particular, it allows module.foo to match
module.foo.module.bar.aws_instance.baz, where before that would not have
been true.

Since Contains isn't commutative (unlike Equals) this requires some
special handling for targeting specific indices. When given an argument
like -target=aws_instance.foo[0], the initial graph construction (for
both plan and refresh) is for the resource nodes from configuration, which
have not yet been expanded to separate indexed instances. Thus we need
to do the first pass of TargetsTransformer in mode where indices are
ignored, with the work then completed by the DynamicExpand method which
re-applies the TargetsTransformer in index-sensitive mode.

This is a breaking change for anyone depending on the previous behavior
of -target, since it will now select more resources than before. There is
no way provided to obtain the previous behavior. Eventually we may support
negative targeting, which could then combine with positive targets to
regain the previous behavior as an explicit choice.
2017-06-16 16:36:08 -07:00
Martin Atkins
d3eb2b2d28 core: ResourceAddress.Contains method
This is similar in purpose to Equals but it takes a hierarchical approach
where modules contain their child modules, resources are contained by
their modules, and indexed resource instances are contained by their
resource names.

Unlike "Equals", Contains is intended to be transitive, so if A contains B
and B contains C, then A necessarily contains C. It is also directional:
if A contains B then B does not also contain A unless A and B are
identical. This results in more intuitive behavior for use-cases where
the goal is to select a portion of the address space for an operation.
2017-06-16 16:36:08 -07:00
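A rough sketch of hierarchical containment, using a simplified path-of-components representation instead of the real ResourceAddress type; it reproduces the described properties (transitive, directional, and every address contains itself).

```go
package main

import "fmt"

// contains reports whether address a contains address b, treating each
// address as a path where a prefix contains everything beneath it.
func contains(a, b []string) bool {
	if len(a) > len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

func main() {
	foo := []string{"module.foo"}
	baz := []string{"module.foo", "module.bar", "aws_instance.baz"}
	fmt.Println(contains(foo, baz)) // true: module.foo contains the nested instance
	fmt.Println(contains(baz, foo)) // false: containment is directional
}
```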
Martin Atkins
d4e5abe0eb core: terraform.env variable is now terraform.workspace
As part of our terminology shift, the interpolation variable for the name
of the current workspace changes to terraform.workspace. The old name
continues to be supported for compatibility.

We can't generate a deprecation warning from here so for now we'll just
silently accept terraform.env as an alias, but not mention it at all in
the error message in the hope that its use phases out over time before we
actually remove it.
2017-06-09 15:01:39 -07:00
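A tiny sketch of silently accepting the legacy name as an alias; the function and error text are invented, but it shows the shape of the compatibility shim described.

```go
package main

import "fmt"

// resolveTerraformVar maps the "terraform.*" interpolation fields to a
// value; "env" is accepted as a quiet alias for "workspace" but is not
// mentioned in the error message, to encourage migration.
func resolveTerraformVar(field, workspace string) (string, error) {
	switch field {
	case "workspace", "env":
		return workspace, nil
	default:
		return "", fmt.Errorf(`unknown "terraform" attribute %q; only "workspace" is supported`, field)
	}
}

func main() {
	v, _ := resolveTerraformVar("env", "default")
	fmt.Println(v) // default
}
```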
Martin Atkins
1b673746fd core: don't allow core or providers to change between plan and apply
The information stored in a plan is tightly coupled to the Terraform core
and provider plugins that were used to create it, since we have no
mechanism to "upgrade" a plan to reflect schema changes and so mismatching
versions are likely to lead to the "diffs didn't match during apply"
error.

To allow us to catch this early and return an error message that _doesn't_
say it's a bug in Terraform, we'll remember the Terraform version and
plugin binaries that created a particular plan and then require that
those match when loading the plan in order to apply it.

The planFormatVersion is increased here so that plan files produced by
earlier Terraform versions _without_ this information won't be accepted
by this new version, and also so that older versions won't try to process
plans created by newer versions.
2017-06-09 14:03:59 -07:00
Martin Atkins
aa1c644499 core: allow setting required plugin hashes on Context
When set, this information gets passed on to the provider resolver as
part of the requirements information, causing us to reject any plugins
that do not match during initialization.
2017-06-09 14:03:59 -07:00
Martin Atkins
190626e2a8 core: improve consistency of ParseResourceAddress errors
Previously one of the errors had a built-in context message and the other
did not, making it hard for callers to present a user-friendly message
in both cases.

Now we generate an error message of the same form in both cases, with one
case providing additional information. Ideally the main case would be
able to give more specific guidance too, but that's hard to achieve with
the current regexp-based parsing implementation.
2017-06-09 14:03:59 -07:00
Martin Atkins
b82ef2e30e core: ResourceAddress.MatchesConfig method
This is a useful building block for filtering configuration based on a
resource address. It is similar in principle to state filtering, but for
specific resource configuration blocks.
2017-06-09 14:03:59 -07:00
Martin Atkins
edf3cd7159 core: ResourceAddress.WholeModuleAddress method
This allows growing the scope of a resource address to include all of the
resources in the same module as the targeted resource. This is useful to
give context in error messages.
2017-06-09 14:03:59 -07:00
Martin Atkins
05a5eb0047 core: ResourceAddress.HasResourceSpec method
The resource address documentation defines a resource address as being in
two parts: the module path and the resource spec. The resource spec can
be omitted, which represents addressing _all_ resources in a module.

In some cases (such as import) it doesn't make sense to address an entire
module, so this helper makes it easy for validation code to check for
this to reject insufficiently-specific resource addresses.
2017-06-09 14:03:59 -07:00
James Bardin
7d2d951f27 Rename VersionSet to Constraints
VersionSet is a wrapper around version.Constraints, so rename it as
such.
2017-06-09 14:03:59 -07:00
Martin Atkins
ccb3a7c584 core: expose terraform.ModuleTreeDependencies as a public function
This is a generally-useful utility for computing dependency trees, so no
reason to restrict it to just the terraform package.
2017-06-09 14:03:59 -07:00
Martin Atkins
4ab8973520 core: provide config to all import context tests
We're going to use config to determine provider dependencies, so we need
to always provide a config when instantiating a context or we'll end up
loading no providers at all.

We previously had a test for running "terraform import -config=''" to
disable the config entirely, but this test is now removed because it makes
no sense. The actual functionality it's testing still remains for now,
but it will be removed in a subsequent commit when we start requiring that
a resource to be imported must already exist in configuration.
2017-06-09 14:03:59 -07:00
Martin Atkins
c835ef8ff3 Update tests for the new ProviderResolver interface
Rather than providing an already-resolved map of plugins to core, we now
provide a "provider resolver" which knows how to resolve a set of provider
dependencies, to be determined later, and produce that map.

This requires the context to be instantiated in a different way, so this
very noisy diff is a mostly-mechanical update of all of the existing
places where contexts get created for testing, using some adapted versions
of the pre-existing utilities for passing in mock providers.
2017-06-09 14:03:59 -07:00
Martin Atkins
7ca592ac06 core: use ResourceProviderResolver to resolve providers
Previously the set of providers was fixed early on in the command package
processing. In order to be version-aware we need to defer this work until
later, so this interface exists so we can hold on to the possibly-many
versions of plugins we have available and then later, once we've finished
determining the provider dependencies, select the appropriate version of
each provider to produce the final set of providers to use.

This commit establishes the use of this new mechanism, and thus populates
the provider factory map with only the providers that result from the
dependency resolution process.

This disables support for internal provider plugins, though the
mechanisms for building and launching these are still here vestigially,
to be cleaned up in a subsequent commit.

This also adds a new awkward quirk to the "terraform import" workflow
where one can't import a resource from a provider that isn't already
mentioned (implicitly or explicitly) in config. We will do some UX work
in subsequent commits to make this behavior better.

This breaks many tests due to the change in interface, but to keep this
particular diff reasonably easy to read the test fixes are split into
a separate commit.
2017-06-09 14:03:59 -07:00
Martin Atkins
ba3ee00837 core: ResourceProviderResolver interface
ResourceProviderResolver is an extra level of indirection before we
get to a map[string]ResourceProviderFactory, which accepts a map of
version constraints and uses it to choose from potentially-many available
versions of each provider to produce a single ResourceProviderFactory
for each one requested.

As of this commit the ResourceProviderResolver interface is not used. In
a future commit the ContextOpts.Providers map will be replaced with a
resolver instance, with the creation of the factory delayed until the
version constraints have been resolved.
2017-06-09 14:03:59 -07:00
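Based on the description above, the interface is roughly this shape; the exact method signature and types in the codebase may differ, so treat this as a sketch of the indirection rather than the real declaration.

```go
package sketch

// ResourceProviderFactory stands in for the existing factory function type.
type ResourceProviderFactory func() (interface{}, error)

// Constraints stands in for a version-constraints type.
type Constraints struct{ Raw string }

// ResourceProviderResolver picks a concrete factory for each requested
// provider, given the version constraints gathered from config and state,
// choosing among the potentially-many installed versions of each plugin.
type ResourceProviderResolver interface {
	ResolveProviders(reqs map[string]Constraints) (map[string]ResourceProviderFactory, []error)
}
```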
Martin Atkins
1c0b715999 core: return explicit caption if tests fail to construct context
The previous error was very generic, making it hard to quickly tell from
the test output that the error was during context initialization.
2017-06-09 14:03:59 -07:00
Martin Atkins
8bfc6e7b1c core: add missing ResourceState types in context tests
Previously the Type of a ResourceState was generally ignored, but we're
now starting to use it to figure out which providers are needed to
support the resources in state so our tests need to set it accurately
in order to get the expected result.
2017-06-09 14:03:59 -07:00
Martin Atkins
25a6d8f471 core: build a module dependency tree from config+state
This new private function takes a configuration tree and a state structure
and finds all of the explicit and implied provider dependencies
represented, returning them as a moduledeps.Module tree structure.

It annotates each dependency with a "reason", which is intended to be
useful to a user trying to figure out where a particular dependency is
coming from, though we don't yet have any UI to view this.

Nothing calls this yet, but a subsequent commit will use the result of
this to produce a constraint-conforming map of provider factories during
context initialization.
2017-06-09 14:03:59 -07:00
Martin Atkins
0b14c2cdb3 Resolve resource provider types in config package
Previously the logic for inferring a provider type from a resource name
was buried in a utility function in the 'terraform' package. Instead here we
lift it up into the 'config' package where we can make broader use of it
and where it's easier to discover.
2017-06-09 14:03:59 -07:00
Martin Atkins
9e0c52c6db release: clean up after v0.9.8 2017-06-08 00:26:19 +00:00
Martin Atkins
8d560482c3
v0.9.8 2017-06-08 00:14:54 +00:00
stack72
1b78f50db5 release cleanup after v0.9.7 2017-06-07 17:45:11 +03:00
stack72
20ca74d0a0
v0.9.7 2017-06-07 14:34:30 +00:00
Radek Simko
1244309579 Fix stringer comments (#15069) 2017-06-05 10:17:35 +01:00
He Guimin
87562be855 provider/alicloud: Add the function of replacing ecs instance's system disk (#15048)
* add replacing system disk function for ecs

* remove ForceNew of system_disk_size
2017-06-05 11:27:49 +03:00
Gavin Williams
401c6a95a7 provider/openstack: Add Terraform version to UserAgent string (#14955)
* core: Add 'UserAgentString' helper function to generate a standard UserAgent string. Example generation: 'Terraform 0.9.7-dev (go1.8.1)'

* provider/openstack: Add Terraform version to UserAgent string
2017-06-01 22:12:25 -06:00
Jake Champlin
ac177492fb
core: Revert stringer changes from earlier commits 2017-06-01 11:37:12 -04:00
Thomas Schaaf
79c91e11c8 provider/aws: Add aws elastic beanstalk solution stack (#14944)
* Add aws elastic beanstalk solution stack

Signed-off-by: Thomas Schaaf <thomaschaaf@Thomass-MBP.fritz.box>

* Fix incorrect naming

Signed-off-by: Thomas Schaaf <thomaschaaf@Thomass-MBP.fritz.box>

* Use unique go variable/function names

Signed-off-by: Thomas Schaaf <thomaschaaf@Thomass-MacBook-Pro.local>

* Add docs to sidebar

* Sort provider by alphabet

* Fix indent

* Add required statement

* Fix acceptance test
2017-06-01 02:23:06 +03:00
clint
d6fcc82ecc release: clean up after v0.9.6 2017-05-25 16:09:31 +00:00
clint
85e0979c6a
v0.9.6 2017-05-25 15:56:03 +00:00
Martin Atkins
410b60cb7f Stop requiring multi-vars (splats) to be in array brackets
Prior to Terraform 0.7, lists in Terraform were just a shallow abstraction
on top of strings with a magic delimiter between items. Wrapping a single
string in brackets in the configuration was Terraform's prompt that it
needed to split the string on that delimiter during interpolation.

In 0.7, when first-class lists were added, this convention was preserved
by flattening lists-of-lists by one level when they were encountered in
configuration. However, there was an oversight in that change where it
did not correctly handle the case where the inner list was unknown.

In #14135 we removed some code that was flattening partially-unknown lists
into fully-unknown (untyped) values. This inadvertently exposed the missed
case from the previous paragraph, causing issues for list-wrapped splat
expressions with unknown members. While this worked fine for resources,
due to some fixup done inside helper/schema, this did not work for other
interpolation contexts such as module blocks.

Various attempts to fix this up and restore the flattening behavior
selectively were unsuccessful, due to a proliferation of assumptions all
over the core code that would be too risky to change just to fix this bug.

This change, then, takes the different approach of removing the
requirement that splats be presented inside list brackets. This
requirement didn't make much sense anymore anyway, since no other
list-returning expression had this constraint and so the rest of Terraform
was already successfully dealing with both cases.

This leaves us with two different scenarios:

- For resource arguments, existing normalization code in helper/schema
  does its own flattening that preserves compatibility with the common
  practice of using bracketed splats. This change proves this with a test
  within the "test" provider that exercises the whole Terraform core and
  helper/schema stack that assigns bracketed splats to list and set
  attributes.

- For arguments in other blocks, such as in module callsites, the
  interpolator's own flattening behavior applies to known lists,
  preserving compatibility with configurations from before
  partially-computed splats were possible, but those wishing to use
  partially-computed splats are required to drop the surrounding brackets.
  This is less concerning because this scenario was introduced only in
  0.9.5, so the scope for breakage is limited to those who adopted this
  new feature quickly after upgrading.

As of this commit, the recommendation is to stop using brackets around
splats but the old form continues to be supported for backward
compatibility. In a future _major_ version of Terraform we will probably
phase out this legacy form to improve consistency, but for now both
forms are acceptable at the expense of some (pre-existing) weird behavior
when _actual_ lists-of-lists are used.

This addresses #14521 by officially adopting the suggested workaround of
dropping the brackets around the splat. However, it doesn't yet allow
passing of a partially-unknown list between modules: that still violates
assumptions in Terraform's core, so for the moment partially-unknown lists
work only within a _single_ interpolation expression, and cannot be
passed around between expressions. Until more holistic work is done to
improve Terraform's type handling, passing a partially-unknown splat
through to a module will result in a fully-unknown list emerging on
the other side, just as was the case before #14135; this change just
addresses the fact that this was failing with an error in 0.9.5.
2017-05-23 11:22:37 -07:00
Jake Champlin
91ab75991d
core: use codified default for prerelease string 2017-05-22 11:28:15 -04:00
Jake Champlin
bd68789006
core: Use environment variables to set VersionPrerelease at compile time
Instead of using a hardcoded version prerelease string, which makes release automation difficult, set the version prerelease string from an environment variable via the go linker tool during compile time.

The environment variable `TF_RELEASE` should only be set via the `make bin` target, and thus leaves the version prerelease string unset. Otherwise, when running a local compile of terraform via the `make dev` makefile target, the version prerelease string is set to `"dev"`, as usual.

This also requires some changes to both the circonus and postgresql providers, as they directly used the `VersionPrerelease` constant. We now simply call the `VersionString()` function, which returns the proper interpolated version string with the prerelease string populated correctly.

`TF_RELEASE` is unset:

```sh
$ make dev
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/05/22 10:38:19 Generated command/internal_plugin_list.go
==> Removing old directory...
==> Building...
Number of parallel builds: 3

-->     linux/amd64: github.com/hashicorp/terraform

==> Results:
total 209M
-rwxr-xr-x 1 jake jake 209M May 22 10:39 terraform

$ terraform version
Terraform v0.9.6-dev (fd472e4a86500606b03c314f70d11f2bc4bc84e5+CHANGES)
```

`TF_RELEASE` is set (mimicking the `make bin` target):

```sh
$ TF_RELEASE=1 make dev
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2017/05/22 10:40:39 Generated command/internal_plugin_list.go
==> Removing old directory...
==> Building...
Number of parallel builds: 3

-->     linux/amd64: github.com/hashicorp/terraform

==> Results:
total 121M
-rwxr-xr-x 1 jake jake 121M May 22 10:42 terraform

$ terraform version
Terraform v0.9.6
```
2017-05-22 10:49:15 -04:00
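For context, the standard Go mechanism this relies on is the linker's -X flag, which overrides a package-level string variable at link time. The package path and variable below are illustrative, not Terraform's actual version package.

```go
// version.go -- illustrative only
package version

// VersionPrerelease is overridden at build time with something like:
//
//	go build -ldflags "-X example.com/app/version.VersionPrerelease="
//
// A plain development build keeps the "dev" default.
var VersionPrerelease = "dev"
```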
Martin Atkins
45b04c826a core: don't crash if no module state exists for multi var
For child modules, a ModuleState isn't allocated until the first time a
module instance is inserted into the state under the module's path.
Normally interpolations of resource attributes are delayed until at least
one resource has been created due to the nature of the dependency graph,
but if the interpolation value is a multi-var (splat) then it is possible
that the referenced resource has count=0 and thus created _no_ resource
states when it was visited.

Previously we would crash when trying to access the resource map for the
nil module in order to count how many instances are present. Since we know
there can't be any instances present in a nil module, we now preempt
this crash by returning zero early.

This edge-case does not apply to the root module because its ModuleState
is allocated as part of initializing the main State instance.

This fixes #14438.
2017-05-16 09:54:33 -07:00
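A minimal sketch of the early-return guard described; the types and function are simplified stand-ins for the real state lookup.

```go
package main

import "fmt"

// moduleState stands in for the real ModuleState.
type moduleState struct {
	Resources map[string]struct{}
}

// countResourceInstances returns how many instances exist for a resource
// name, preempting the nil-pointer crash by returning zero when the
// module's state has not been allocated yet.
func countResourceInstances(mod *moduleState, name string) int {
	if mod == nil {
		return 0 // a nil module can't contain any instances
	}
	n := 0
	for key := range mod.Resources {
		if key == name {
			n++
		}
	}
	return n
}

func main() {
	fmt.Println(countResourceInstances(nil, "aws_instance.foo")) // 0, no crash
}
```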
Chris Marchesi
11b4794612 core: Test for new refresh graph behaviour
Tests on DynamicExpand for both resources and data sources cover scale
in/out scenarios, and also verify the behaviour of config orphans.
2017-05-12 15:45:06 -07:00
Chris Marchesi
7b1618efde core: Fix destroy factory in data source refresh expander 2017-05-12 15:45:06 -07:00
Chris Marchesi
b807505d55 core: New refresh graph building behaviour
Currently, the refresh graph uses the resources from state as a base,
with data sources then layered on. Config is not consulted for resources
and hence new resources that are added with count (or any new resource
from config, for that matter) do not get added to the graph during
refresh.

This is leading to issues with scale in and scale out when the same
value for count is used in both resources and in data sources that may
depend on that resource (and possibly vice versa). While the resources
exist in config and can be used, the fact that ConfigTransformer for
resources is missing means that they don't get added into the graph,
leading to "index out of range" errors and what not.

Further to that, if we add these new resources to the graph for scale
out, considerations need to be taken for scale in as well, which are not
being caught 100% by the current implementation of
NodeRefreshableDataResource. Scale-in resources should be treated as
orphans, which according to the instance-form NodeRefreshableResource
node, should be NodeDestroyableDataResource nodes, but this logic
is currently not rolled into NodeRefreshableDataResource. This causes
issues on scale-in in the form of race-ish "index out of range" errors
again.

This commit updates the refresh graph so that StateTransformer is no
longer used as the base of the graph. Instead, we add resources from the
state and config in a hybrid fashion:

 * First off, resource nodes are added from config, but only if
   resources currently exist in state.  NodeRefreshableManagedResource
   is a new expandable resource node that will expand count and add
   orphans from state. Any count-expanded node that has config but no
   state is also transformed into a plannable resource, via a new
   ResourceRefreshPlannableTransformer.
 * The NodeRefreshableDataResource node type will now add count orphans
   as NodeDestroyableDataResource nodes. This achieves the same effect
   as if the data sources were added by StateTransformer, but ensures
   there are no races in the dependency chain, with the added benefit of
   directing these nodes straight to the proper
   NodeDestroyableDataResource node.
 * Finally, config orphans (nodes that don't exist in config anymore
   period) are then added, to complete the graph.

This should ensure as much as possible that there is a refresh graph
that best represents both the current state and config with updated
variables and counts.
2017-05-12 15:45:06 -07:00
Chris Marchesi
dfb5be2413 Rename NodeRefreshableResource to NodeRefreshableResourceInstance
In prep for NodeRefreshableResource becoming a
NodeAbstractCountResource and implementing GraphNodeDynamicExpandable.
2017-05-12 15:40:13 -07:00
Martin Atkins
7bdf4a925d core: Allow downstream targeting of certain node types
The previous behavior of targets was that targeting a particular node
would implicitly target everything it depends on. This makes sense when
the dependencies in question are between resources, since we need to
make sure all of a resource's dependencies are in place before we can
create or update it.

However, it had the undesirable side-effect that targeting a resource
would _exclude_ any outputs referring to it, since the dependency edge
goes from output to resource. This then causes the output to be "stale",
which is problematic when outputs are being consumed by downstream
configs using terraform_remote_state.

GraphNodeTargetDownstream allows nodes to opt-in to a new behavior where
they can be targeted by _inverted_ dependency edges. That is, it allows
outputs to be considered targeted if anything they directly depend on
is targeted.

This is different than the implied targeting behavior in the other
direction because transitive dependencies are not considered unless the
intermediate nodes themselves have TargetDownstream. This means that
an output1→output2→resource chain can implicitly target both outputs, but
an output→resource1→resource2 chain _won't_ target the output if only
resource2 is targeted.

This behavior creates a scenario where an output can be visited before
all of its dependencies are ready, since it may have a mixture of both
targeted and untargeted dependencies. This is fine for outputs because
they silently ignore any errors encountered during interpolation anyway,
but other hypothetical future implementers of this interface may need to
be more careful.

This fixes #14186.
2017-05-11 11:57:46 -07:00
stack72
7cb334b635 release: clean up after v0.9.5 2017-05-11 09:32:32 +00:00
stack72
a59ee0b30e
v0.9.5 2017-05-11 09:22:11 +00:00
Martin Atkins
58f5257678 core: context test for partially-unknown splat lists
This is a context test for the behavior enabled by #14135, as some
insurance to decrease the chance that we break it again.
2017-05-04 16:55:32 -07:00
Martin Atkins
b4df03bca4 core: allow partially-unknown lists from splat syntax
This was actually redundant anyway since HIL itself applied a similar
rule where any partially-unknown list would be automatically flattened
to a single unknown value.

However, now we're changing HIL to explicitly permit partially-unknown
lists so that we can allow the index operator [...] to succeed when
applied to one of the elements that _is_ known.

This, in conjunction with hashicorp/hil#51 and hashicorp/hil#52,
fixes #3449.
2017-05-04 15:56:35 -07:00