Commit Graph

2572 Commits

Author SHA1 Message Date
James Bardin
d33c5163a7
Merge pull request #21555 from hashicorp/jbardin/re-validate
Allow providers to re-validate the final resource config
2019-06-07 16:43:38 -04:00
tinocode
51a4055198 core: Fix panic on invalid depends_on root. (#21589)
Report an invalid reference to the user instead of crashing when there is an error traversing the `depends_on` attribute.

Fixes #21590
2019-06-06 08:43:22 -04:00
James Bardin
49fee6ba78 don't lose private data during destroy
Make sure private data is maintained all the way to destroy. This
slipped through, since private data isn't used much by current
providers, except for timeouts.
2019-06-05 19:22:46 -04:00
James Bardin
dcab82e897 send and receive Private through ReadResource
Send Private data blob through ReadResource as well. This will allow for
extra flexibility for future providers that may want to pass data out of
band through to their resource Read functions.
2019-06-03 18:08:26 -04:00
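
To illustrate the shape of this change, here is a minimal Go sketch of an opaque Private blob round-tripping through a provider's Read call. The request/response types are simplified stand-ins for the generated protocol types, not Terraform's real ones.

    package main

    import "fmt"

    type ReadResourceRequest struct {
        TypeName string
        Private  []byte // opaque to core; produced and consumed only by the provider
    }

    type ReadResourceResponse struct {
        Private []byte // returned (possibly updated) so core can store it in state
    }

    func readResource(req ReadResourceRequest) ReadResourceResponse {
        // The provider may decode req.Private for out-of-band data (e.g.
        // timeouts) and must return it so the blob survives into the next state.
        return ReadResourceResponse{Private: req.Private}
    }

    func main() {
        resp := readResource(ReadResourceRequest{
            TypeName: "example_thing",
            Private:  []byte(`{"timeout":"5m"}`),
        })
        fmt.Printf("private preserved: %s\n", resp.Private)
    }
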
James Bardin
2b200dd9bb re-validate data source config during read
Just like resources, early data source validation will contain many
unknown values. Re-validate once we have the final config.
2019-06-01 11:40:54 -04:00
James Bardin
c41eb0e6e4 re-validate config during Plan
The config is statically validated early on for structural issues, but
the provider can't validate any inputs that were unknown at the time.
Run ValidateResourceTypeConfig during Plan, so that the provider can
validate the final config values, including those interpolated from
other resources.
2019-06-01 11:09:48 -04:00
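
A minimal sketch of why validation runs twice, using cty values directly; validatePort here is a hypothetical stand-in for a provider's ValidateResourceTypeConfig logic, not code from this commit.

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func validatePort(config cty.Value) error {
        port := config.GetAttr("port")
        if !port.IsKnown() {
            return nil // can't check yet; defer to plan-time validation
        }
        if port.LessThan(cty.NumberIntVal(1)).True() {
            return fmt.Errorf("port must be positive")
        }
        return nil
    }

    func main() {
        // Early (static) validation: the value is still unknown, so it passes.
        early := cty.ObjectVal(map[string]cty.Value{"port": cty.UnknownVal(cty.Number)})
        fmt.Println("static validate:", validatePort(early))

        // Plan-time validation: interpolation has produced a final value.
        final := cty.ObjectVal(map[string]cty.Value{"port": cty.NumberIntVal(-1)})
        fmt.Println("plan-time validate:", validatePort(final))
    }
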
Kristin Laemmert
9869fbc5c4
core: don't panic in NodeAbstractResourceInstance References() (#21445)
* core: don't panic in NodeAbstractResourceInstance References()

It is possible for s.Current to be nil. This was hard to reproduce, so
the root cause is still unknown, but we can guard against the symptom.

* add log statement
2019-05-28 17:27:38 -04:00
Martin Atkins
bec4641867 core: Don't panic if NodeApplyableResourceInstance has no config
This is a "should never happen" case, because we shouldn't ever have
resources in the plan that aren't in the configuration, but since we've
got a report of a crash here (which went away before we got a chance to
debug it) here's just an extra guard to ensure that we'll still exit
gracefully in that case.

If we see this error crop up again in future, it'd be nice to gather a
full trace log so we can see what GraphNodeAttachResourceConfig did and
why it did not attach a configuration.
2019-05-14 16:54:12 -07:00
James Bardin
059dcf1e25 implement UpgradeResourceState in the mock provider
The default is a no-op that simply encodes the provided state.
2019-05-14 18:00:01 -04:00
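
A rough sketch of such a no-op default, assuming the mock's state is held as a cty value: it performs no schema migration and just re-encodes the given state as JSON.

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
        ctyjson "github.com/zclconf/go-cty/cty/json"
    )

    func noopUpgrade(val cty.Value, ty cty.Type) ([]byte, error) {
        // No transformation; just marshal the state as-is.
        return ctyjson.Marshal(val, ty)
    }

    func main() {
        ty := cty.Object(map[string]cty.Type{"id": cty.String})
        val := cty.ObjectVal(map[string]cty.Value{"id": cty.StringVal("abc")})
        buf, err := noopUpgrade(val, ty)
        fmt.Println(string(buf), err)
    }
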
Martin Atkins
bcf2aa06dd core: Call providers' UpgradeResourceState every time
Previously we tried to short-circuit this if the schema version hadn't
changed and we were already using JSON serialization. However, if we
instead call into UpgradeResourceState every time we can let the provider
or the SDK do some general, systematic normalization and cleanup steps
without always requiring a schema version increase.

What exactly would be fixed up this way is for the SDK to decide, but for
example the SDK might choose to automatically delete from the state
anything that is no longer present in the schema, avoiding the need to
write explicit state migration functions for that common case where the
remedy is always the same.

The actual update logic is delegated to the provider/SDK intentionally so
that it can evolve over time and potentially differ depending on how
each SDK thinks about schema.
2019-05-14 13:30:42 -04:00
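
An illustrative sketch of the control-flow change, with hypothetical types standing in for Terraform's internals: the same-version short-circuit is gone, and the provider is always consulted.

    package main

    import "fmt"

    type storedState struct {
        SchemaVersion uint64
        JSON          []byte
    }

    type provider interface {
        UpgradeResourceState(s storedState) ([]byte, error)
    }

    type sdkProvider struct{}

    func (sdkProvider) UpgradeResourceState(s storedState) ([]byte, error) {
        // The SDK may normalize here (e.g. drop attributes that are no
        // longer in the schema) even when the version is unchanged.
        return s.JSON, nil
    }

    func upgrade(p provider, s storedState, currentVersion uint64) ([]byte, error) {
        // Previously: if s.SchemaVersion == currentVersion { return s.JSON, nil }
        // Now: always delegate, so systematic cleanup can happen.
        return p.UpgradeResourceState(s)
    }

    func main() {
        out, _ := upgrade(sdkProvider{}, storedState{SchemaVersion: 1, JSON: []byte(`{"id":"a"}`)}, 1)
        fmt.Println(string(out))
    }
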
Radek Simko
8a6d1d62b6
stringer: Regenerate files with latest version 2019-05-13 15:34:27 +01:00
Paul Tyng
15f7f8049f
Fix incorrect direction of TestConformance 2019-05-09 09:16:54 -04:00
James Bardin
8250b2e4de more precise handling of ComputedKeys in config
With the new ConfigModeAttr, we can now have complex structures come in
as attributes rather than blocks. Previously attributes were either
known, or unknown, and there was no reason to descend into them. We now
need to record the complete path to unknown values within complex
attributes to create a proper diff after shimming the config.
2019-05-05 09:02:33 -04:00
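
A small sketch of recording full paths to unknown values inside a complex attribute, using cty's generic Walk helper rather than Terraform's actual shimming code.

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func unknownPaths(v cty.Value) []cty.Path {
        var paths []cty.Path
        cty.Walk(v, func(p cty.Path, v cty.Value) (bool, error) {
            if !v.IsKnown() {
                // Copy the path defensively; the walker may reuse its slice.
                paths = append(paths, append(cty.Path(nil), p...))
            }
            return true, nil // keep descending into nested values
        })
        return paths
    }

    func main() {
        v := cty.ObjectVal(map[string]cty.Value{
            "tags": cty.MapVal(map[string]cty.Value{
                "env": cty.UnknownVal(cty.String),
            }),
        })
        fmt.Println(len(unknownPaths(v))) // 1: the path to tags["env"]
    }
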
James Bardin
06babb6cc3 remove dead code for the next reader 2019-05-03 17:29:16 -04:00
James Bardin
35e087fe5b always re-read datasource if there are dep changes
If a datasource's dependencies have planned changes, then we need to
plan a read for the data source, because the config may change once the
dependencies have been applied.
2019-05-03 17:29:16 -04:00
James Bardin
43f0468829 fix ComputedKeys for nested blocks in shimmed cfg
The indexes were added out of order in the attribute keys
2019-05-02 14:08:40 -07:00
Martin Atkins
3309be9721 core: Allow data resource count to be unknown during refresh
The count for a data resource can potentially depend on a managed resource
that isn't recorded in the state yet, in which case references to it will
always return unknown.

Ideally we'd do the data refreshes during the plan phase as discussed in
#17034, which would avoid this problem by planning the managed resources
in the same walk, but for now we'll just skip refreshing any data
resources with an unknown count during refresh and defer that work to the
apply phase, just as we'd do if there were unknown values in the main
configuration for the data resource.
2019-04-25 14:22:57 -07:00
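
A minimal sketch of the deferral rule, assuming the count expression has already been evaluated to a cty value: an unknown count means the refresh-time read is skipped and deferred to apply.

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func shouldReadDuringRefresh(count cty.Value) bool {
        // An unknown count means it depends on a managed resource that isn't
        // recorded in state yet, so defer the read rather than failing.
        return count.IsKnown()
    }

    func main() {
        fmt.Println(shouldReadDuringRefresh(cty.NumberIntVal(2)))        // true: read now
        fmt.Println(shouldReadDuringRefresh(cty.UnknownVal(cty.Number))) // false: defer to apply
    }
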
Martin Atkins
cbc8d1eba2 core: Input variables are always unknown during validate
Earlier on in the v0.12 development cycle we made the decision that the
validation walk should consider input values to always be unknown so that
validation is checking validity for all possible inputs rather than for
a specific set of inputs; checking for a specific set of inputs is the
responsibility of the plan walk.

However, we didn't implement that in the best way: we made the
"terraform validate" command force all of the input variables to unknown
but that was insufficient because it didn't also affect the implicit
validation walk we do as part of "terraform plan" and "terraform apply",
causing those to produce confusingly-different results.

Instead, we'll address the problem directly in the reference resolver code,
ensuring that all variable values will always be treated as an unknown
(of the declared type, so type checking is still possible) during any
validate walk, regardless of which command is running it.
2019-04-17 10:09:46 -07:00
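
A minimal sketch of the resolver rule, with hypothetical names: during a validate walk every input variable is replaced by an unknown of its declared type, so type checking still works while no specific input is assumed.

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    type walkKind int

    const (
        walkValidate walkKind = iota
        walkPlan
    )

    func variableValue(walk walkKind, declared cty.Type, given cty.Value) cty.Value {
        if walk == walkValidate {
            // Validity must hold for all possible inputs, so substitute an
            // unknown of the declared type regardless of the given value.
            return cty.UnknownVal(declared)
        }
        return given
    }

    func main() {
        v := variableValue(walkValidate, cty.String, cty.StringVal("prod"))
        fmt.Println(v.IsKnown(), v.Type().Equals(cty.String)) // false true
    }
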
Martin Atkins
861a2ebf26 helper/schema: Use a more targeted shim for nested set diff applying
We previously attempted to make the special diff apply behavior for nested
sets of objects work with attribute mode by totally discarding attribute
mode for all shims.

In practice, that is too broad a solution: there are lots of other shimming
behaviors that we _don't_ want when attribute mode is enabled. In
particular, we need to make sure that the difference between null and
empty can be seen in configuration.

As a compromise then, we will give all of the shims access to the real
ConfigMode and then do a more specialized fixup within the diff-apply
logic: we'll construct a synthetic nested block schema and then use that
to run our existing logic to deal with nested sets of objects, while
using the previous behavior in all other cases.

In effect, this means that the special new behavior only applies when the
provider uses the opt-in ConfigMode setting on a particular attribute,
and thus this change has much less risk of causing broad, unintended
regressions elsewhere.
2019-04-17 07:47:31 -07:00
Martin Atkins
ff2de9c818 core: Keep old value on error even for delete
When an operation fails, providers may return a null new value rather than
returning a partial state. In that case, we'd prefer to keep the old value
so that we stand the best chance of being able to retry on a subsequent
run.

Previously we were making an exception for the delete action, allowing
the result of that to be null even when an error is returned. In practice
that was a bad idea because it would cause Terraform to lose track of the
object even though it might not actually have been deleted.

Now we'll retain the old object even in the delete case. Providers can
still return partial new objects if they were able to partially complete
a delete operation, in which case we'll discard what we had before, but
if the result is null with errors then we'll assume the delete failed
entirely and so just keep the old state as-is, giving us the opportunity
to refresh it on the next run to see if anything actually happened after
all.

(This also includes a new resource in the test provider which isn't used
by the patch but was useful for some manual UX testing here, so I thought
I'd include it in case it's similarly useful in future, given how simple
its implementation is.)
2019-04-17 07:40:15 -07:00
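
A simplified sketch of the retention rule (not the actual apply code): a null result accompanied by errors means the old object is kept.

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func resultingState(prior, returned cty.Value, failed bool) cty.Value {
        if failed && returned.IsNull() {
            // Null plus errors means the provider got nowhere: keep the old
            // object so a later refresh can see what actually happened.
            return prior
        }
        // Otherwise trust the (possibly partial) object that was returned.
        return returned
    }

    func main() {
        ty := cty.Object(map[string]cty.Type{"id": cty.String})
        prior := cty.ObjectVal(map[string]cty.Value{"id": cty.StringVal("abc")})
        got := resultingState(prior, cty.NullVal(ty), true)
        fmt.Println(got.RawEquals(prior)) // true: old state retained
    }
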
Martin Atkins
88e76fa9ef configs/configschema: Introduce the NestingGroup mode for blocks
In studying existing providers we've found a pattern we weren't
previously accounting for: using a nested block type to represent a
group of arguments that relate to a particular feature that is always
enabled, but where it improves configuration readability to group all
of its settings together in a nested block.

The existing NestingSingle was not a good fit for this because it is
designed under the assumption that the presence or absence of the block
has some significance in enabling or disabling the relevant feature, and
so for these always-active cases we'd generate a misleading plan where
the settings for the feature appear totally absent, rather than showing
the default values that will be selected.

NestingGroup is, therefore, a slight variation of NestingSingle where
presence vs. absence of the block is not distinguishable (it's never null)
and instead its contents are treated as unset when the block is absent.
This then in turn causes any default values associated with the nested
arguments to be honored and displayed in the plan whenever the block is
not explicitly configured.

The current SDK cannot activate this mode, but that's okay because its
"legacy type system" opt-out flag allows it to force a block to be
processed in this way anyway. We're adding this now so that we can
introduce the feature in a future SDK without causing a breaking change
to the protocol, since the set of possible block nesting modes is not
extensible.
2019-04-10 14:53:52 -07:00
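
A sketch contrasting the two nesting modes for an absent block, using plain cty values to stand in for the real configschema machinery.

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        blockTy := cty.Object(map[string]cty.Type{"enabled": cty.Bool})

        // NestingSingle: an absent block decodes to a null object, which a
        // plan renders as the whole feature being absent.
        single := cty.NullVal(blockTy)

        // NestingGroup: the block value itself is never null; its contents
        // are simply unset, so per-attribute defaults can apply and show up
        // in the plan.
        group := cty.ObjectVal(map[string]cty.Value{
            "enabled": cty.NullVal(cty.Bool),
        })

        fmt.Println(single.IsNull(), group.IsNull()) // true false
    }
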
James Bardin
6868ddd085 don't set NewRemoved values as empty strings
NewRemoved diff values were sometimes left in maps and lists.
2019-03-29 13:56:43 -04:00
Martin Atkins
003317d7c8 lang: Detect references when a list/set attr is defined using blocks
For compatibility with documented patterns from existing providers we are
now allowing (via a pre-processing step) any attribute whose type is a
list-of-object or set-of-object type to optionally be assigned using one
or more blocks whose type is the attribute name.

The pre-processing functionality was implemented in previous commits but
we were not correctly detecting references within these blocks that are,
from the perspective of the primary schema, invalid. Now we'll use an
alternative implementation of variable detection that is able to apply the
same schema rewriting technique we used to implement the transform and
thus can find all of the references as if they were already in their
final locations.
2019-03-28 10:41:01 -07:00
James Bardin
3c8b46fffe merge connection blocks for validation
The resource connection block was not being validated. Merge the two
bodies, with the provider as the override, before validation.
2019-03-26 11:59:23 -04:00
Martin Atkins
2a64a00983 core: Upgrade flatmap to JSON when dynamic attributes are present
When a resource type schema includes dynamically-typed attributes we can't
do any automatic conversion from flatmap to JSON because we don't know
how to interpret the keys that start with the dynamically-typed
attribute's prefix.

To work around that, we'll instead just ask the SDK to do a no-op upgrade
(current and target versions are the same) which will convert from flatmap
to JSON using the SDK's own logic as a side-effect.

This situation should rarely arise in real-world use, but it ends up being
very important for the helper/resource provider test harness because it
is forced to lower the state back to flatmap repeatedly after every step
in order to run legacy checking and processing code.
2019-03-21 15:19:59 -07:00
Martin Atkins
50a101afbd lang: Consider "dynamic" blocks when resolving references
The hcldec package has no awareness of the dynamic block extension, so the
hcldec.Variables function misses any variables declared inside dynamic
blocks.

dynblock.VariablesHCLDec is a drop-in replacement for hcldec.Variables
that _is_ aware of dynamic blocks, returning all of the same variables
that hcldec would find naturally plus also any variables used inside
the dynamic block "for_each" and "labels" arguments and inside the
nested "content" block.
2019-03-19 10:04:45 -07:00
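
A short usage sketch of the drop-in replacement; the import paths assume the current HCL 2 layout (at the time of this commit the package lived under github.com/hashicorp/hcl2), and the spec is an arbitrary example.

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/ext/dynblock"
        "github.com/hashicorp/hcl/v2/hcldec"
        "github.com/hashicorp/hcl/v2/hclsyntax"
        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        src := `
    dynamic "setting" {
      for_each = var.settings
      content {
        name = setting.value
      }
    }
    `
        f, diags := hclsyntax.ParseConfig([]byte(src), "example.tf", hcl.InitialPos)
        if diags.HasErrors() {
            panic(diags.Error())
        }

        spec := hcldec.ObjectSpec{
            "settings": &hcldec.BlockListSpec{
                TypeName: "setting",
                Nested: hcldec.ObjectSpec{
                    "name": &hcldec.AttrSpec{Name: "name", Type: cty.String},
                },
            },
        }

        // hcldec.Variables(f.Body, spec) would miss var.settings, because
        // hcldec knows nothing about the dynamic-block extension.
        for _, traversal := range dynblock.VariablesHCLDec(f.Body, spec) {
            fmt.Println(traversal.RootName()) // "var"
        }
    }
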
Martin Atkins
04cbf249aa core: Don't fail on dynamic attribute values during refresh
Our post-refresh safety check had the constraint and real type inverted,
so previously any refresh of a resource type with a dynamically-typed
attribute would fail this type check.

Also includes a small tweak to the error message from this check since the
old one was a little awkward to read in practice when the error is a
cty.PathError rendered with an attribute path prefix.
2019-03-18 09:18:06 -07:00
James Bardin
892674a3f9 don't recalculate existing block counts in diff
If a block is unaffected by diffs, keep the block count value regardless
of what it is. Blocks containing zero values will often be represented
by only the count value.
2019-03-12 12:04:35 -04:00
Sander van Harmelen
973e2a7cf9 core: add a context to the UIInput interface 2019-03-08 10:24:40 +01:00
James Bardin
5c09f94695 remove eval TODO for NormalizeObjectFromLegacySDK
The normalization will take place in the provider shims, locating it
with the rest of the code that attempts to match the new and legacy
behavior.
2019-03-06 16:23:56 -05:00
Martin Atkins
f193b11073 command/format: Normalize before/after values before rendering
We are now allowing the legacy SDK to opt out of the safety checks we try
to do after plan and apply, and so in such cases the before/after values
in planned changes may be inconsistent with our usual rules.

To avoid adding lots of extra complexity to the diff renderer to deal with
these situations, instead we'll normalize the handling of nested blocks
prior to using these values.

In the long run it'd be better to do this normalization at the source,
immediately after we receive an object from a provider using the opt-out,
but we're doing this at the outermost layer for now to avoid risking
unintended impacts on other Terraform Core components when we're just
about to enter the beta phase of the v0.12.0 release cycle.
2019-02-27 16:53:29 -08:00
Martin Atkins
ac6e0e42dc configs/configupgrade: Upgrade the bodies of "connection" blocks
This uses the fixed "superset" schema from the main terraform package to
apply our standard expression mapping, with the exception of "type" where
interpolation sequences are not supported due to the type being evaluated
early to retrieve the schema for decoding the rest.
2019-02-22 12:32:56 -08:00
Sherod Taylor
c456d9608b updated SSH authentication and testing 2019-02-22 14:30:50 -05:00
James Bardin
44afe5b6ff remove unused ResourceProviderError 2019-02-20 14:23:56 -05:00
James Bardin
6cc3e1d0bd move init error to where it is generated
The init error was output deep in the backend by detecting a
special ResourceProviderError and formatted directly to the CLI.

Create some Diagnostics closer to where the problem is detected, and
pass them back through the normal diagnostic flow. While the output
isn't as nice yet, this restores the helpful error message and makes the
code easier to maintain. Better formatting can be handled later.
2019-02-20 14:18:37 -05:00
James Bardin
da389d6cd4 simple list diffs may also have missing elements
As was done for list blocks, simple lists of strings may be missing
empty string elements, and any list may be implicitly truncated.
2019-02-14 13:06:04 -05:00
James Bardin
c34c37fbd5 missed .% suffixes in diff.Apply
Diff.Apply checks for unneeded container count diffs, but was missing
the check for maps.

Add an early return for planning a destroy.
2019-02-13 19:09:46 -05:00
Martin Atkins
12a6d22589 core: Better handle providers failing updates with no new value
A provider may react to a create or update failing by returning error
diagnostics and a partially-updated or nil new value, in which case we
do not expect our AssertObjectCompatible consistency check to succeed: the
provider is just assumed to be doing the best it can to preserve whatever
partial outcome it was able to achieve.

However, if errors are accompanied with a nil new value after an update,
we'll assume that the provider is telling us it wasn't able to get far
enough to make any change at all, and so we'll retain the prior value in
state. This ensures that a provider can't cause an object to be forgotten
from the state just because an update failed.
2019-02-12 18:13:14 -08:00
James Bardin
b758628e51
Merge pull request #20308 from hashicorp/jbardin/requires-replace
Requires replace should not error on missing index steps
2019-02-12 15:08:38 -05:00
James Bardin
c6daf9fb24 don't error on all invalid RequiresReplace paths
RequiresReplace paths with IndexSteps that have been added or removed
may fail to apply against one of the two state values. Only error out if
the path cannot be applied to both values.
2019-02-12 14:43:41 -05:00
Martin Atkins
eb1346447f
Merge #20282: Enforce expected behaviors for provider PlanResourceChange
An exception remains for the legacy SDK, which does not meet all of these requirements.
2019-02-12 09:19:05 -08:00
Martin Atkins
f4e6431da2 core: Ensure context tests comply with plan/apply safety checks
Prior to Terraform 0.12 there were certain behaviors we expected from
providers that were actually just details of the SDK and not part of the
enforced contract.

For 0.12 we're now codifying some of these behaviors explicitly via safety
checks in core, thus ensuring that all future providers will behave in a
consistent way that users can rely on.

Unfortunately, due to the hand-written nature of the mock provider
implementations we use in tests, they have been getting away with some
unusual behaviors that don't match our usual expectations, and our safety
checks now detect those as incorrect behaviors.

To address this, we make the minimal changes to each test to ensure that
its mock provider behaves in a consistent way, which requires that values
set in config be represented correctly in the plan and ultimately saved
in the new state, without any changes along the way. In particular, the
common testDiffFn implementation has historically used a number of special
hidden attributes to trigger special behaviors, and our new rules require
that these special settings propagate properly through the plan and into
the state.
2019-02-11 17:26:50 -08:00
Martin Atkins
31299e688d core: Allow legacy SDK to opt out of plan-time safety checks
Due to the imprecision of our shimming from the legacy SDK type system to
the new Terraform Core type system, the legacy SDK produces a number of
inconsistencies that produce only minor quirky behavior or broken
edge-cases. To retain compatibility with those existing weird behaviors,
the legacy SDK opts out of our safety checks.

The intent here is to allow existing providers to continue to do their
previous unsafe behaviors for now, accepting that this will allow certain
quirky bugs from previous releases to persist, and then gradually migrate
away from the legacy SDK and remove this opt-out on a per-resource basis
over time.

As with the apply-time safety check opt-out, this is reserved only for
the legacy SDK and must not be used in any new SDK implementations. We
still include any inconsistencies as warnings in the logs as an aid to
anyone debugging weird behavior, so that they can see situations where
blame may be misplaced in the user-visible error messages.
2019-02-11 17:26:49 -08:00
Martin Atkins
5649ae6abf core: Improve warnings for legacy SDK apply-time inconsistencies
We've allowed the legacy SDK an opt-out from the post-apply safety checks,
but previously we produced only a generic warning message in that case.
Now instead we'll still run the safety checks, but report the results in
the logs instead of as error diagnostics.

This should allow developers who are debugging strange interactions
between buggy legacy providers to get better insight into what's going
on upstream in order to help explain what's going on when these problems
inevitably get caught by other downstream safety checks when trying to
make use of these invalid results.
2019-02-11 17:26:49 -08:00
Martin Atkins
419f5e58cd core: Enforce the validity of planned new objects
We've been gradually adding safety checks of this sort throughout the
lifecycle to help ensure that buggy providers can't introduce
hard-to-diagnose downstream failures and misbehavior. This completes the
set by verifying during plan time that the provider has produced a plan
that actually achieves the goals defined in the configuration.

In particular, this catches the situation where a provider may incorrectly
override a value explicitly set in configuration, which avoids creating
confusion by betraying the reasonable user expectation that referencing an
explicitly-defined attribute will produce exactly the value shown in
configuration.
2019-02-11 17:26:49 -08:00
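
A simplified sketch of the check's core idea; the real implementation in plans/objchange handles schemas, unknowns, and nested blocks, while this only shows the principle that any attribute set explicitly (and known) in configuration must appear unchanged in the planned new object.

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func checkPlanned(config, planned cty.Value) []error {
        var errs []error
        for name, cfgVal := range config.AsValueMap() {
            if cfgVal.IsNull() || !cfgVal.IsKnown() {
                continue // the provider is free to choose a value here
            }
            if !planned.GetAttr(name).RawEquals(cfgVal) {
                errs = append(errs, fmt.Errorf("provider overrode configured value for %q", name))
            }
        }
        return errs
    }

    func main() {
        config := cty.ObjectVal(map[string]cty.Value{"name": cty.StringVal("web")})
        planned := cty.ObjectVal(map[string]cty.Value{"name": cty.StringVal("web-1")})
        fmt.Println(checkPlanned(config, planned)) // one error: "name" was overridden
    }
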
James Bardin
1ca7531cc7 allow implicit empty strings in lists
The helper/schema handling of lists loses empty string values, but
retains the correct count. Only re-count the values if the count is
missing entirely, and allow our shims to re-populate the zero values.
2019-02-11 19:24:14 -05:00
Martin Atkins
312d798a89 core: Restore our EvalReadData behavior
In an earlier commit we changed objchange.ProposedNewObject so that the
task of populating unknown values for attributes not known during apply
is the responsibility of the provider's PlanResourceChange method, rather
than being handled automatically.

However, we were also using objchange.ProposedNewObject to construct the
placeholder new object for a deferred data resource read, and so we
inadvertently broke that deferral behavior. Here we restore the old
behavior by introducing a new function objchange.PlannedDataResourceObject
which is a specialized version of objchange.ProposedNewObject that
includes the forced behavior of populating unknown values, because the
provider gets no opportunity to customize a deferred read.

TestContext2Plan_createBeforeDestroy_depends_datasource required some
updates here because its implementation of PlanResourceChange was not
handling the insertion of the unknown value for attribute "computed".
The other changes here are just in an attempt to make the flow of this
test more obvious, by clarifying that it is simulating a -refresh=false
run, which effectively forces a deferred read since we skip the eager
read that would normally happen in the refresh step.
2019-02-07 18:33:14 -08:00
Martin Atkins
8882dcaf86 core: Fix TestContext2Plan_dataResourceBecomesComputed
Now that ProposedNewState uses null to represent Computed attributes not
set in the configuration, the provider must fill in the unknown value for
"computed" in its plan result.

It seems that this test was incorrectly updated during our bulk-fix after
integrating the HCL2 work, but it didn't really matter because the
ReadDataSource function isn't called in the happy path anyway. But to
make the intent clearer here, we also now make ReadDataSource return an
error if it is called, making it explicit that no call is expected.
2019-02-07 18:33:14 -08:00
Martin Atkins
c3e7efec35 core: Reject unknown values after reading a data resource
Data resources do not have a plan/apply distinction, so it is never valid
for a data resource to produce unknown values in its result object.

Unknown values in the data resource _config_ cause us to postpone the read
altogether, so a data source never receives unknown values as input and
therefore may never produce unknown values as output.
2019-02-07 18:33:14 -08:00
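
A minimal sketch of the post-read invariant, assuming the result is a plain cty object: the whole value must be known, or the read is rejected.

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func checkDataResult(v cty.Value) error {
        // Data reads happen in a single step, so no part of the result may
        // be unknown.
        if !v.IsWhollyKnown() {
            return fmt.Errorf("data source produced an unknown value in its result")
        }
        return nil
    }

    func main() {
        bad := cty.ObjectVal(map[string]cty.Value{"id": cty.UnknownVal(cty.String)})
        fmt.Println(checkDataResult(bad))
    }
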
Martin Atkins
1530fe52f7 core: Legacy SDK providers opt out of our new apply result check
The shim layer for the legacy SDK type system is not precise enough to
guarantee it will produce identical results between plan and apply. In
particular, values that are null during plan will often become zero-valued
during apply.

To avoid breaking those existing providers while still allowing us to
introduce this check in the future, we'll introduce a rather-hacky new
flag that allows the legacy SDK to signal that it is the legacy SDK and
thus disable the check.

Once we start phasing out the legacy SDK in favor of one that natively
understands our new type system, we can stop setting this flag and thus
get the additional safety of this check without breaking any
previously-released providers.

No other SDK is permitted to set this flag, and we will remove it if we
ever introduce protocol version 6 in future, assuming that any provider
supporting that protocol will always produce consistent results.
2019-02-06 11:40:30 -08:00