Commit Graph

1093 Commits

Author SHA1 Message Date
Alex Somesan
01a169c66f make fmt 2019-04-17 19:58:06 +02:00
Alex Somesan
0395c3ac58
Re-phrase docs to enumerate types which support ValidateFunc 2019-04-17 18:58:58 +02:00
Martin Atkins
861a2ebf26 helper/schema: Use a more targeted shim for nested set diff applying
We previously attempted to make the special diff apply behavior for nested
sets of objects work with attribute mode by totally discarding attribute
mode for all shims.

In practice, that is too broad a solution: there are lots of other shimming
behaviors that we _don't_ want when attribute mode is enabled. In
particular, we need to make sure that the difference between null and
empty can be seen in configuration.

As a compromise then, we will give all of the shims access to the real
ConfigMode and then do a more specialized fixup within the diff-apply
logic: we'll construct a synthetic nested block schema and then use that
to run our existing logic to deal with nested sets of objects, while
using the previous behavior in all other cases.

In effect, this means that the special new behavior only applies when the
provider uses the opt-in ConfigMode setting on a particular attribute,
and thus this change has much less risk of causing broad, unintended
regressions elsewhere.
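
A minimal sketch of the synthetic-schema trick (the real shim differs in detail; synthSetBlock is a hypothetical helper, and elemTy must be an object type):

```go
// Wrap an object-typed set attribute in a synthetic nested block schema so
// the existing nested-set diff-apply logic can run against it.
func synthSetBlock(name string, elemTy cty.Type) *configschema.Block {
	attrs := map[string]*configschema.Attribute{}
	for n, ty := range elemTy.AttributeTypes() { // elemTy must be an object type
		attrs[n] = &configschema.Attribute{Type: ty, Optional: true}
	}
	return &configschema.Block{
		BlockTypes: map[string]*configschema.NestedBlock{
			name: {
				Nesting: configschema.NestingSet,
				Block:   configschema.Block{Attributes: attrs},
			},
		},
	}
}
```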
2019-04-17 07:47:31 -07:00
Martin Atkins
bd1a215580 helper/resource: Ignore Removed attributes for ImportStateVerify
Due to the lossiness of our legacy models for diff and state, shimming a
diff and then creating a state from it produces a different result than
shimming a state directly. That means that ImportStateVerify no longer
works as expected if there are any Computed attributes in the schema where
d.Set isn't called during Read.

Fixing that for every case would require some risky changes to the shim
behavior, so we're instead going to ask provider developers to address it
by adding `d.Set` calls where needed, since that is the contract for
"Computed" anyway -- a default value should be produced during Create, and
thus by extension during Import.

However, a common situation where this occurs is attributes marked
as "Removed", where all of the code that deals with them has generally
been deleted, so we'll avoid problems in that case here by treating Removed
attributes as ignored for the purposes of ImportStateVerify.

This required exporting some functionality that was formerly unexported
in helper/schema, but it's a relatively harmless schema introspection
function so shouldn't be a big deal to export it.
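
A minimal sketch of the ignore rule over flatmap attributes (withoutRemoved and schemaForFlatmapKey are hypothetical helpers, not the exported function mentioned above):

```go
// Drop attributes whose schema carries a Removed marker before comparing
// the imported state against the original in ImportStateVerify.
func withoutRemoved(attrs map[string]string, r *schema.Resource) map[string]string {
	out := make(map[string]string, len(attrs))
	for k, v := range attrs {
		if s := schemaForFlatmapKey(r, k); s != nil && s.Removed != "" {
			continue // Removed attributes are ignored for the comparison
		}
		out[k] = v
	}
	return out
}
```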
2019-04-16 11:14:49 -07:00
Martin Atkins
88e76fa9ef configs/configschema: Introduce the NestingGroup mode for blocks
In study of existing providers we've found a pattern we weren't previously
accounting for: using a nested block type to represent a group of
arguments that relate to a particular feature that is always enabled but
where it improves configuration readability to group all of its settings
together in a nested block.

The existing NestingSingle was not a good fit for this because it is
designed under the assumption that the presence or absence of the block
has some significance in enabling or disabling the relevant feature, and
so for these always-active cases we'd generate a misleading plan where
the settings for the feature appear totally absent, rather than showing
the default values that will be selected.

NestingGroup is, therefore, a slight variation of NestingSingle where
presence vs. absence of the block is not distinguishable (it's never null)
and instead its contents are treated as unset when the block is absent.
This then in turn causes any default values associated with the nested
arguments to be honored and displayed in the plan whenever the block is
not explicitly configured.

The current SDK cannot activate this mode, but that's okay because its
"legacy type system" opt-out flag allows it to force a block to be
processed in this way anyway. We're adding this now so that we can
introduce the feature in a future SDK without causing a breaking change
to the protocol, since the set of possible block nesting modes is not
extensible.
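
A hypothetical core schema using the new mode; with NestingGroup, omitting the block yields an object whose attributes are all null (so defaults still apply), never a null block value:

```go
blk := &configschema.Block{
	BlockTypes: map[string]*configschema.NestedBlock{
		"settings": {
			// The feature is always active; absence of the block just
			// means "all arguments unset", so defaults are honored.
			Nesting: configschema.NestingGroup,
			Block: configschema.Block{
				Attributes: map[string]*configschema.Attribute{
					"enabled": {Type: cty.Bool, Optional: true},
				},
			},
		},
	},
}
```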
2019-04-10 14:53:52 -07:00
James Bardin
af8115dc9b removing the ~ set flag is no longer needed
The computed set sigil ~ should no longer appear in the diffs, because
the config will be cleaned before generating the diff.
2019-04-10 09:39:45 -04:00
James Bardin
5f52aba3ae Remove unknown value strings from apply diffs
The synthetic config value used to create the Apply diff should contain
no unknown config values. Any remaining UnknownConfigValues were due to
that being used as a placeholder for values yet to be computed, and
these should be marked NewComputed in the diff.
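
A sketch of that cleanup over the legacy diff structures (illustrative only; the placeholder constant is assumed to be hcl2shim.UnknownVariableValue in this era of the codebase):

```go
// Replace any lingering unknown-value placeholder with a NewComputed mark.
for _, attrDiff := range diff.Attributes {
	if attrDiff.New == hcl2shim.UnknownVariableValue {
		attrDiff.New = ""
		attrDiff.NewComputed = true
	}
}
```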
2019-04-10 09:34:39 -04:00
James Bardin
3ec93710fc PromoteSingle is used in 0.11 mode 2019-04-08 17:12:39 -04:00
James Bardin
a3d58665ad use LegacyResourceSchema
rather than the previous .CoreConfigSchemaForShimming
2019-04-08 16:45:35 -04:00
James Bardin
8730d99309 LegacyResourceSchema to remove 0.12 features
This allows us to call CoreConfigSchema and return something that looks
like the original schema.
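
A rough sketch of the idea, not the exact implementation (legacySchema is a hypothetical name):

```go
// Copy a schema and recursively clear v0.12-only features so the derived
// core schema looks like the original v0.11-era shape.
func legacySchema(s *schema.Schema) *schema.Schema {
	c := *s
	c.ConfigMode = schema.SchemaConfigModeAuto
	c.SkipCoreTypeCheck = false
	if r, ok := c.Elem.(*schema.Resource); ok {
		nr := *r
		nr.Schema = map[string]*schema.Schema{}
		for k, v := range r.Schema {
			nr.Schema[k] = legacySchema(v)
		}
		c.Elem = &nr
	}
	return &c
}
```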
2019-04-08 16:45:35 -04:00
James Bardin
c5023c7702 cleanup after AsSingle removal 2019-04-08 16:45:35 -04:00
James Bardin
1a9c06d0f5 Revert "helper/schema: Implementation of the AsSingle mechanism"
This reverts commit 1987a92386.
2019-04-08 16:45:35 -04:00
James Bardin
4dbe6add77 Revert "helper/schema: Schema.AsSingle flag"
This reverts commit 4c0c74571de9c96ad2902ccf4af962ec495cd5d4.
2019-04-08 16:45:35 -04:00
James Bardin
7b67105407 don't strip new-computeds from plan diffs
Stripping these was a patch for some provider behavior which was fixed
in other ways, and is no longer needed.
Removing this allows us to implement correct CustomizeDiffFuncs in
providers so that they can mark fields with empty values as computed
during a plan.
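
For example, a provider could now do something like this (field name hypothetical) to mark an empty field as computed during plan:

```go
resource := &schema.Resource{
	// ... schema omitted ...
	CustomizeDiff: func(d *schema.ResourceDiff, meta interface{}) error {
		if d.Get("version").(string) == "" {
			// Left empty in config: the provider computes it during apply.
			return d.SetNewComputed("version")
		}
		return nil
	},
}
```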
2019-04-03 17:37:58 -04:00
James Bardin
51ddc554f5
Merge pull request #20909 from hashicorp/jbardin/validate-config-nulls
Validate configuration shims for nulls in lists
2019-04-03 14:06:22 -04:00
James Bardin
e024960c74 Revert "filter nulls when shimming a config"
This reverts commit 97bde5467c.
2019-04-03 13:52:56 -04:00
James Bardin
f5395bd98a validate null values in shimmed configs
A list-like attribute containing null values will present a list to
helper/schema with nils, which can cause panics. Since null values were
not possible in configuration before HCL2 and not supported by the
legacy SDK, return an error to the user.
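
A minimal sketch of the guard, assuming the usual cty traversal (validateConfigNulls is a hypothetical name):

```go
// Reject null elements inside list-like values before they reach
// helper/schema, where they would surface as bare nils and panic.
func validateConfigNulls(v cty.Value) error {
	if v.IsNull() || !v.IsKnown() {
		return nil
	}
	if ty := v.Type(); ty.IsListType() || ty.IsSetType() || ty.IsTupleType() {
		for it := v.ElementIterator(); it.Next(); {
			_, ev := it.Element()
			if ev.IsNull() {
				return errors.New("null value is not allowed within a list or set")
			}
			if err := validateConfigNulls(ev); err != nil {
				return err
			}
		}
	}
	return nil
}
```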
2019-04-03 11:10:24 -04:00
Brian Flad
a8e3787afc
helper/schema: Prevent setSet() panic with typed nil
References:

* https://github.com/hashicorp/terraform/issues/14418
* v0.9.5 (original bug report): a59ee0b30e/helper/schema/field_writer_map.go (L311)
* v0.11.12 (Terraform AWS Provider discovery): 057286e522/helper/schema/field_writer_map.go (L343)

When creating flatten functions in Terraform Providers that return *schema.Set, it's possible to return a typed `nil`, e.g.

```go
func flattenHeaders(h *cloudfront.Headers) *schema.Set {
	if h.Items != nil {
		return schema.NewSet(schema.HashString, flattenStringList(h.Items))
	}
	// a typed nil *schema.Set: a non-nil interface value wrapping a nil pointer
	return nil
}
```

This previously could cause a panic, e.g.

```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x1881911]

goroutine 1325 [running]:
github.com/hashicorp/terraform/helper/schema.(*MapFieldWriter).setSet(0xc00054bf00, 0xc00073efa0, 0x5, 0x5, 0x5828140, 0x0, 0xc0002cea50, 0xc000e996a8, 0xc001026e40)
	/Users/bflad/go/pkg/mod/github.com/hashicorp/terraform@v0.11.12/helper/schema/field_writer_map.go:343 +0x211
```

Here we catch the typed `nil` and return an empty list flatmap result instead. Unit testing result prior to code update:

```
--- FAIL: TestMapFieldWriter (0.00s)
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
  panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x1777cdc]

goroutine 913 [running]:
testing.tRunner.func1(0xc00045b800)
  /usr/local/Cellar/go/1.12.1/libexec/src/testing/testing.go:830 +0x392
panic(0x192cf20, 0x2267ca0)
  /usr/local/Cellar/go/1.12.1/libexec/src/runtime/panic.go:522 +0x1b5
github.com/hashicorp/terraform/helper/schema.(*MapFieldWriter).setSet(0xc0004648a0, 0xc0004408d0, 0x1, 0x1, 0x19e3de0, 0x0, 0xc00045c600, 0x30, 0x19e0080)
  /Users/bflad/src/github.com/hashicorp/terraform/helper/schema/field_writer_map.go:344 +0x68c
github.com/hashicorp/terraform/helper/schema.(*MapFieldWriter).set(0xc0004648a0, 0xc0004408d0, 0x1, 0x1, 0x19e3de0, 0x0, 0x1, 0x18)
  /Users/bflad/src/github.com/hashicorp/terraform/helper/schema/field_writer_map.go:107 +0x28b
github.com/hashicorp/terraform/helper/schema.(*MapFieldWriter).WriteField(0xc0004648a0, 0xc0004408d0, 0x1, 0x1, 0x19e3de0, 0x0, 0x0, 0x0)
  /Users/bflad/src/github.com/hashicorp/terraform/helper/schema/field_writer_map.go:89 +0x504
github.com/hashicorp/terraform/helper/schema.TestMapFieldWriter(0xc00045b800)
  /Users/bflad/src/github.com/hashicorp/terraform/helper/schema/field_writer_map_test.go:337 +0x2ddd
testing.tRunner(0xc00045b800, 0x1a44f90)
  /usr/local/Cellar/go/1.12.1/libexec/src/testing/testing.go:865 +0xc0
created by testing.(*T).Run
  /usr/local/Cellar/go/1.12.1/libexec/src/testing/testing.go:916 +0x35a
```
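
Independently of the SDK-side fix, a defensive provider-side pattern (a sketch, not part of this commit) avoids returning the typed nil in the first place:

```go
func flattenHeaders(h *cloudfront.Headers) *schema.Set {
	if h == nil || h.Items == nil {
		// An empty set instead of a typed nil *schema.Set.
		return schema.NewSet(schema.HashString, nil)
	}
	return schema.NewSet(schema.HashString, flattenStringList(h.Items))
}
```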
2019-04-01 20:10:32 -04:00
James Bardin
64c76be804 fixup lost collections containing unknowns
It turns out that collections containing only unknowns could be lost,
meaning there was no direct correlation between the unknown and the null
value that would otherwise have been restored.
2019-03-29 14:54:54 -04:00
James Bardin
86e30add98 fix unknowns added to maps by schemaMap
The legacy diff process inserts unknown values into an optional+computed
map. Fix these up in the post-plan normalization process by looking for
known strings that were changed to unknown.
2019-03-29 13:56:43 -04:00
James Bardin
009df443f7 restore lost unknowns during a planned update.
Because schema.ResourceDiff can't differentiate between unknown
values and new computed values, unknowns can be lost during an update.
If a planned value converted an unknown to a null, restore the unknown
so that it can be correctly replaced in the final plan.
2019-03-29 13:56:43 -04:00
Alex Somesan
ef681e527d Rephrase for clarity 2019-03-29 16:53:13 +01:00
Alex Somesan
adebc65d95 Document that ValidateFunc works on maps. 2019-03-27 20:10:46 +01:00
James Bardin
bb62aba651 add (forces new resource) to provider test diffs
Add the (forces new resource) annotation to the diff output for provider
tests failures when we can. This helps providers narrow down what might
be triggering changes when encountering test failures with the new SDK.
2019-03-22 15:30:51 -04:00
Martin Atkins
135121562e helper/plugin: Implement Schema.SkipCoreTypeCheck
The previous commit added this flag but did not implement it. Here we
implement it by adjusting the shape of schema we return to Terraform Core
to mark the attribute as untyped and then ensure that gets handled
correctly on the SDK side.
2019-03-21 15:19:59 -07:00
Martin Atkins
7f860dc83e helper/schema: Schema.SkipCoreTypeCheck flag
When running in v0.12-and-higher mode, this will cause the SDK to report
the type of the attribute as "any", effectively skipping type checking
on the Core side altogether and checking only in the SDK and provider
code.

The practical impact of this is to restore the v0.11-style checking
behavior of allowing object values to be missing certain attributes as
long as they are marked as optional in the schema. The SDK can do this
because it uses a unified schema model for both object values and nested
blocks, while Terraform Core only supports the idea of "optional" when
talking about attributes in nested blocks.

This is a continuation of the pile of workarounds that also includes
the ConfigMode and AsSingle fields, allowing providers to selectively opt
out of new v0.12 behaviors in situations where they conflict with
decisions made in the design of the providers in our old world where
Terraform Core delegated _all_ validation to providers.

This is designed as an opt-in so that we can limit its impact only to
specific cases where it's needed and minimize the risk of regressions
elsewhere. Providers should use this sparingly only in situations where
prevailing usage disagrees with the new expectations of Terraform Core in
v0.12.

This commit only adds the flag, and does not implement any behavior for it
yet. That means this commit can exist in both the v0.11 and v0.12
codebases, allowing for API compatibility. A subsequent commit for v0.12
(not included in v0.11) will then implement this behavior.
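
A hypothetical opt-in might look like this; with the flag set, Core sees the attribute as type "any" and the SDK alone enforces the object shape:

```go
"rule": {
	Type:              schema.TypeList,
	Optional:          true,
	SkipCoreTypeCheck: true, // Core type checking skipped; SDK still validates
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"name":     {Type: schema.TypeString, Required: true},
			"priority": {Type: schema.TypeInt, Optional: true},
		},
	},
},
```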
2019-03-21 15:19:59 -07:00
Justin Downing
1e32ae243c grammatical updates to comments and docs (#20195) 2019-03-21 14:05:41 -07:00
Martin Atkins
35df450dc0 helper/resource: Preserve provider address when shimming to old state
It's important to preserve the provider address because during the destroy
phase of provider tests we'll use the references in the state to determine
which providers are required, and so without this attempts to override
the provider using the "provider" meta-argument can cause failures at
destroy time when the wrong provider gets selected.

(This is particularly acute in the google-beta provider tests because that
provider is _always_ used with provider = "google-beta" to override the
default behavior of using the normal "google" provider.)
2019-03-19 15:08:46 -07:00
Paul Tyng
ea7e922007
oxford comma 2019-03-18 14:16:20 -04:00
Paul Tyng
ec9450a262
Fix limitations on Elem for TypeMap 2019-03-18 14:15:25 -04:00
Martin Atkins
1987a92386 helper/schema: Implementation of the AsSingle mechanism
The previous commit added a new flag to schema.Schema which is documented
to make a list with MaxItems: 1 be presented to Terraform Core as a single
value instead, giving a way to switch to non-list nested resources without
it being a breaking change for Terraform v0.11 users as long as it's done
prior to a provider's first v0.12-compatible release.

This is the implementation of that mechanism. It's intentionally
implemented as a suite of extra fixups rather than direct modifications to
existing shim code because we want to ensure that this has no effect
whatsoever on the result of a resource type that _isn't_ using AsSingle.

Although there is some small unit test coverage of the fixup steps here,
the primary testing for this is in the test provider since the integration
of all of these fixup steps in the correct order is the more important
result than any of the intermediate fixup steps.
2019-03-14 15:36:15 -07:00
Martin Atkins
1c8150428f helper/schema: Schema.AsSingle flag
This setting indicates that an attribute defined as TypeList or TypeSet
should be presented to Terraform Core as a single value instead when
running in Terraform v0.12 or later. It has no effect for Terraform v0.10
or v0.11.

This commit just introduces the setting without any associated behavior,
so it can be included in both the v0.12 and v0.11 branches. A subsequent
commit only to the v0.12 branch will introduce the behavior as part of
the protocol version 5 shims.
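
A hypothetical usage (attribute name illustrative); note that the reverts earlier in this log show the AsSingle mechanism was later removed again:

```go
"network_interface": {
	Type:     schema.TypeList,
	Optional: true,
	MaxItems: 1,
	// Present this MaxItems:1 list to Terraform v0.12 as a single
	// object value rather than a list of one.
	AsSingle: true,
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"subnet_id": {Type: schema.TypeString, Optional: true},
		},
	},
},
```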
2019-03-14 15:36:15 -07:00
James Bardin
e080706e2e treat normalization during ReadResource like Plan
This will allow resources to return an unexpected change to set blocks
and attributes; otherwise we could mask these changes during
normalization.

Change the "plan" argument in normalizeNullValues to "preferDst" to more
accurately describe what the option is doing, since it no longer applies
only to PlanResourceChange.
2019-03-13 19:14:17 -04:00
James Bardin
6ecf9b143b we can normalize nulls in Read again
This should be the final change from removing the flatmap normalization.
Since we're no longer trying to produce a consistent zero or null value in
the flatmap config, but rather trying to maintain the previously applied
value, ReadResource also needs to apply the normalizeNullValues step in
order to prevent unexpected diffs.
2019-03-12 16:00:25 -04:00
James Bardin
11ec3a420e remove normalizeFlatmapContainers
This method was added early on when the diff was being applied as the
legacy code would have done, which is no longer the case. Everything
that normalizeFlatmapContainers does should be covered by the
combination of the initial diff.Apply and the normalizeNullValues on the
final cty.Value.
2019-03-12 12:04:35 -04:00
Martin Atkins
4de0b33097 helper/schema: Honor ConfigMode when building core schema
This makes some slight adjustments to the shape of the schema we
present to Terraform Core without affecting how it is consumed by the
SDK and thus the provider. This mechanism is designed specifically to
avoid changing how the schema is interpreted by the SDK itself or by the
provider, so that prior behavior can be preserved in Terraform v0.11 mode.

This also includes a new rule that Computed-only (i.e. not also Optional)
schemas _always_ map to attributes, because that is a better mapping of
the intent: they are object values to be used in expressions. Nested
blocks conceptually represent nested objects that are in some sense
independent of what they are embedded in, and so they cannot themselves be
computed.
2019-03-11 17:02:05 -07:00
Martin Atkins
a6d322edec helper/schema: ConfigMode field in *Schema
This allows a provider developer slightly more control over how an SDK
schema is mapped into the Terraform configuration language, overriding
some default assumptions.

ConfigMode overrides the default assumption that a schema with
an Elem of type *Resource is to be mapped to configuration as a nested
block, allowing mapping as an attribute containing an object type instead.

These behaviors only apply when a provider is being used with Terraform
v0.12 or later. They are ignored altogether in Terraform v0.11 mode, to
preserve compatibility. We are adding these primarily to allow the v0.12
version of a resource type schema to be specified to match the prevailing
usage of it in existing configurations, in situations where the default
mapping to v0.12 concepts is not appropriate.

This commit adds only the fields themselves and the InternalValidate rules
for them. A subsequent commit for Terraform v0.12 will add the behavior
as part of the protocol version 5 shim layer.
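
A hypothetical attribute using the override: the Elem is still a *Resource for the SDK's purposes, but the config language sees an attribute of an object type rather than a nested block:

```go
"metadata": {
	Type:       schema.TypeList,
	Optional:   true,
	MaxItems:   1,
	ConfigMode: schema.SchemaConfigModeAttr, // attribute syntax: metadata = [{ ... }]
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"name": {Type: schema.TypeString, Optional: true},
		},
	},
},
```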
2019-03-11 17:02:05 -07:00
James Bardin
9d4bb6ec14 stop removing empty flatmap containers
As we've improved the cty.Value normalization, we need to remove
normalization procedures from the flatmap handling. Keeping the empty
containers in the flatmap will prevent unexpected nils from being added
to some schema configurations.
2019-03-11 15:14:29 -04:00
James Bardin
6cdf9ff566 Revert "normalize all objects read from the provider"
This reverts commit 209a0a460a.
2019-03-08 17:32:37 -05:00
Sander van Harmelen
973e2a7cf9 core: add a context to the UIInput interface 2019-03-08 10:24:40 +01:00
James Bardin
209a0a460a normalize all objects read from the provider
Use objchange.NormalizeObjectFromLegacySDK to ensure that all objects
returned from the provider match what is expected based on the
configuration according to the schemas.
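
The call shape, roughly (variable names illustrative):

```go
// newVal is the object returned by the provider; blockSchema is the
// resource type's *configschema.Block.
newVal = objchange.NormalizeObjectFromLegacySDK(newVal, blockSchema)
```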
2019-03-06 14:09:04 -05:00
James Bardin
3600f59bb7
Merge pull request #20525 from hashicorp/jbardin/extra-set-value
remove the partially-known ~ set sigil in diffs
2019-03-05 16:50:02 -05:00
James Bardin
2b4d030a69 don't re-add removed list values even when planned
Providers were not strict (and were not forced to be) about customizing
the diff when a computed attribute needed to be updated during apply.
The fix we have in place to prevent loss of information during the
helper/schema apply process would add a single missing value back in.

The first place this was caught was when we attempted to fix up the
flatmapped attributes. The 1->0 count error is now better handled by our
cty.Value normalization step, so we can remove the special apply case
here altogether.

The next place is in normalizeNullValues, and since the intent was to
re-insert missing zero-value lists and sets, adding a check for a length
of 0 protects us from adding in extra elements.

The new test fixture emulated common provider behavior of re-computing
values without customizing the diff. Since we can work around it, and
core will provide appropriate warnings, the shims should try to
maintain the legacy behavior.
2019-03-05 15:31:08 -05:00
James Bardin
47604c36c8 remove the partially-known ~ set sigil in diffs
The NewExtra values are stored outside the diff from plan, and the
original keys may not contain the ~ prefix. Adding the NewExtra back
into the diff with the mismatched key was causing an entire new set
element to be populated. Since these symbols aren't used to apply the diff
in helper/schema, we can simply strip them out.
2019-03-04 17:36:30 -05:00
James Bardin
33d5ddf291 remove empty timeouts blocks in copyTimeoutValues
The hcl2shims will always add in the timeouts block, because there's no
way to differentiate a null single block from an empty one in the
flatmapped state. Since we are only concerned with keeping the prior
timeouts value, always set the new value to null, and then copy over the
prior value if it exists.
2019-03-02 11:30:37 -05:00
James Bardin
2adf5801d9 don't panic if the user aborts backend input
When the user aborts input, it may end up as an unknown value, which
needs to be converted to null for PrepareConfig.

Allow PrepareConfig to accept null config values in order to fill in
missing defaults.
2019-03-01 18:45:06 -05:00
James Bardin
49230f8198 existing fields cannot become computed during plan
Fields with no change can only become computed during initial creation.
2019-02-28 18:45:11 -05:00
James Bardin
9a39af5047 1->0 set changes should no longer happen in Read
The new normalization should make preventing those changes unnecessary,
and will also prevent extra empty elements from being added when
resources are refreshed.
2019-02-28 17:47:11 -05:00
James Bardin
37f391f1f7 insert defaults during Backend.PrepareConfig
Lookup any defaults and insert them into the config value before
validation.
2019-02-25 19:06:09 -05:00
James Bardin
c814f2da37 Change backend.ValidateConfig to PrepareConfig
This mirrors the change made for providers, so that default values can
be inserted into the config by the backend implementation. This is only
the interface and method name changes, it does not yet add any default
values.
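
Trimmed to the relevant methods, the revised interface looks roughly like this (a sketch; the real interface has further methods for workspaces and state):

```go
type Backend interface {
	// ConfigSchema returns the schema for the backend's configuration.
	ConfigSchema() *configschema.Block

	// PrepareConfig replaces ValidateConfig: it may return a modified
	// config value, letting the backend insert default values.
	PrepareConfig(cty.Value) (cty.Value, tfdiags.Diagnostics)

	// Configure consumes the prepared configuration.
	Configure(cty.Value) tfdiags.Diagnostics
}
```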
2019-02-25 18:37:20 -05:00
Brian Flad
3d908f56aa
helper/schema: Add deprecation to ResourceData.UnsafeSetFieldRaw
This functionality is no longer supported in Terraform 0.12 and above.
2019-02-13 22:12:10 -05:00
James Bardin
f9b62cb5fe
Merge pull request #20335 from hashicorp/jbardin/diff-apply
Diff apply needs to check for both types of containers keys
2019-02-13 19:33:34 -05:00
James Bardin
c34c37fbd5 missed .% suffixes in diff.Apply
Diff.Apply checks for unneeded container count diffs, but was missing
the check for maps.

Add an early return for planning a destroy.
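
For reference, flatmap marks list/set sizes with a ".#" key and map sizes with a ".%" key (attribute names illustrative):

```
ports.#  = "2"     # list/set element count
ports.0  = "80"
ports.1  = "443"
tags.%   = "1"     # map entry count
tags.env = "prod"
```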
2019-02-13 19:09:46 -05:00
Martin Atkins
fedbd6c3b8 helper/plugin: fix panic with empty objects in normalizeNullValues
cty.Value.AsValueMap can return nil if called on an empty map or object.
The logic above was dealing with that case for maps, but object types
were falling through into this codepath and panicking when trying to
assign a new key into the nil dstMap.

This also includes a bonus fix where we were calling ty.ElementType in
a switch case that accepts object types. Object types don't have a single
element type, so we can't call ElementType on those (that also panics)
but we _can_ use the type of the value we selected from src to construct
our placeholder null value.
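
The guard amounts to something like this (dst, key, and srcVal are illustrative locals):

```go
dstMap := dst.AsValueMap()
if dstMap == nil {
	// AsValueMap returns nil for empty maps and objects.
	dstMap = map[string]cty.Value{}
}
// Build the placeholder null from the source value's own type, since an
// object type has no single element type to ask for.
dstMap[key] = cty.NullVal(srcVal.Type())
```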
2019-02-13 15:56:12 -08:00
Martin Atkins
eb1346447f
Merge #20282: Enforce expected behaviors for provider PlanResourceChange
An exception remains for the legacy SDK, which does not meet all of these requirements.
2019-02-12 09:19:05 -08:00
Martin Atkins
31299e688d core: Allow legacy SDK to opt out of plan-time safety checks
Due to the imprecision of our shimming from the legacy SDK type system to
the new Terraform Core type system, the legacy SDK produces a number of
inconsistencies that produce only minor quirky behavior or broken
edge-cases. To retain compatibility with those existing weird behaviors,
the legacy SDK opts out of our safety checks.

The intent here is to allow existing providers to continue to do their
previous unsafe behaviors for now, accepting that this will allow certain
quirky bugs from previous releases to persist, and then gradually migrate
away from the legacy SDK and remove this opt-out on a per-resource basis
over time.

As with the apply-time safety check opt-out, this is reserved only for
the legacy SDK and must not be used in any new SDK implementations. We
still include any inconsistencies as warnings in the logs as an aid to
anyone debugging weird behavior, so that they can see situations where
blame may be misplaced in the user-visible error messages.
2019-02-11 17:26:49 -08:00
James Bardin
3cecacb660
Merge pull request #20292 from hashicorp/jbardin/sdk
allow 0 and unset to be equal in count tests
2019-02-11 17:01:57 -05:00
James Bardin
1bfc27817e process state even after provider.Apply errors
Terraform core expects a sane state even when the provider returns an
error. Make sure the prior state is always the default value to
return, and then always attempt to process any state returned by
provider.Apply.
2019-02-11 15:41:07 -05:00
James Bardin
c02f1d7256 allow 0 and unset to be equal in count tests
This was changed in the single attribute test cases, but the AttrPair
test is used a lot for data sources. As far as tests are concerned, 0 and
unset should be treated equally for flatmapped collections.
2019-02-11 11:35:19 -05:00
James Bardin
82588af892 switch blocks based on value type, and check attrs
Check attributes on null objects, and fill in unknowns. If we're
evaluating the object, it either means we are at the top level, or a
NestingSingle block was present, and in either case we need to treat the
attributes as null rather than the entire object.

Switch on the block types rather than Nesting, so we don't need to add any
logic to change between List/Tuple or Map/Object when DynamicPseudoType
is involved.
2019-02-08 14:46:29 -05:00
James Bardin
32671241e0 set unknowns during initial PlanResourceChange
If ID is not set, make sure it's unknown.

Use SetUnknowns to set the rest of the computed values to Unknown.
2019-02-07 20:29:24 -05:00
James Bardin
d17ba647a8 add SetUnknowns
SetUnknowns walks through a resource and changes any unset (null) values
that are computed in the schema to Unknown.
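
A minimal sketch of the idea (nested blocks omitted; the real helper also walks those):

```go
// Null values for computed attributes become unknown before planning.
func setUnknowns(val cty.Value, s *configschema.Block) cty.Value {
	if val.IsNull() || !val.IsKnown() {
		return val
	}
	vals := val.AsValueMap()
	if vals == nil {
		vals = map[string]cty.Value{}
	}
	for name, attr := range s.Attributes {
		if v, ok := vals[name]; attr.Computed && (!ok || v.IsNull()) {
			vals[name] = cty.UnknownVal(attr.Type)
		}
	}
	return cty.ObjectVal(vals)
}
```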
2019-02-07 20:24:36 -05:00
Martin Atkins
1530fe52f7 core: Legacy SDK providers opt out of our new apply result check
The shim layer for the legacy SDK type system is not precise enough to
guarantee it will produce identical results between plan and apply. In
particular, values that are null during plan will often become zero-valued
during apply.

To avoid breaking those existing providers while still allowing us to
introduce this check in the future, we'll introduce a rather-hacky new
flag that allows the legacy SDK to signal that it is the legacy SDK and
thus disable the check.

Once we start phasing out the legacy SDK in favor of one that natively
understands our new type system, we can stop setting this flag and thus
get the additional safety of this check without breaking any
previously-released providers.

No other SDK is permitted to set this flag, and we will remove it if we
ever introduce protocol version 6 in future, assuming that any provider
supporting that protocol will always produce consistent results.
2019-02-06 11:40:30 -08:00
James Bardin
3b18dd7c01
Merge pull request #20224 from hashicorp/jbardin/sdk
SDK set fixes
2019-02-05 14:11:51 -05:00
James Bardin
8be864c1c7 don't allow computed set elems to be equal
If set elements are computed, we can't be certain that they are actually
equal. Catch identical computed set hashes when they are added to the
set, and alter the set key slightly to keep the set counts correct.

In previous versions the interpolation string would be included in the
set, and different string values would cause the set to hash
differently, so this change is only activated for the new protocol.
2019-02-05 12:08:17 -05:00
James Bardin
58c9c2311a Turn on helper/schema proto5 flag in GetSchema
This turns it on at the last moment, and in one place for all uses of
helper/schema. There's no way to use the new protocol without calling
GetSchema, so we can be sure that any subsequent API calls have this set
when required.
2019-02-05 12:08:17 -05:00
James Bardin
55b4307767 add proto5 feature flag
Add a feature flag to allow special proto 5 behavior in helper/schema.
This is meant to be used as a last resort for shim-related bugs.
2019-02-05 12:08:16 -05:00
James Bardin
81a4e705b1 DiffSuppressFunc should noop diffs in sets
Sets rely on diffs being complete for all elements, even when they are
unchanged. When encountering a DiffSuppressFunc inside a set the diffs
were being dropped entirely, possibly causing set elements to be lost.
2019-02-05 12:08:16 -05:00
Martin Atkins
bdcac8792d plugin: Use correct schema when marshaling imported resource objects
Previously we were using the type name requested in the import to select
the schema, but a provider is free to return additional objects of other
types as part of an import result, and so it's important that we perform
schema selection separately for each returned object.

If we don't do this, we get confusing downstream errors where the
resulting object decodes to the wrong type and breaks various invariants
expected by Terraform Core.

The testResourceImportOther test in the test provider didn't catch this
previously because it happened to have an identical schema to the other
resource type being imported. Now the schema is changed and also there's
a computed attribute we can set as part of the refresh phase to make sure
we're completing the Read call properly during import. Refresh was working
correctly, but we didn't have any tests for it as part of the import flow.
2019-02-01 15:22:54 -08:00
James Bardin
4a603011c5 don't normalizeNullValues in ReadResource
The required normalization now happens in PlanResourceChange, and this
function is no longer appropriate for ReadResource.
2019-02-01 17:21:37 -05:00
Martin Atkins
4c99864dad helper/resource: TestCheckResourceAttrPair allow nonexist
This checking helper is frequently used in provider tests for data
sources, as a shorthand to verify that an attribute of the data source
matches with the corresponding attribute on a managed resource.

Since we now leave empty collections null in more cases, this function is
sometimes effectively asked to verify that a given attribute is _unset_
in both the data source and the resource, so here we slightly adjust the
definition of the check to consider two nulls to be equal to one another,
which at this layer manifests as the keys not being present in the state
attributes map at all.

This check function didn't previously have tests, so this commit also adds
a basic suite of tests, including coverage for the new behavior.
2019-02-01 08:24:43 -08:00
James Bardin
ba081f5de4 change copyMissingValues to normalizeNullValues
While copyMissingValues was meant to re-insert empty values that were
null after apply, it turns out plan is sometimes not predictable
either.

normalizeNullValues is meant to fix up any null/empty transitions between
two values, and to be useful during plan as well. For plan the function only
concerns itself with individual, known values, and skips sets entirely.
The result of running with plan == true is that only changes between
empty and null collections should be fixed.
2019-01-31 19:02:39 -05:00
James Bardin
9cf8f48239 decode legacy timeouts
The new decoder is more precise, and unpacks the timeout block into a
single map, which ResourceTimeout.ConfigDecode was updated to handle.
However, we still need to work with legacy versions of Terraform that
use the old decoder.
2019-01-30 16:10:17 -05:00
James Bardin
3b04b41250 fix RequiresNew in diff
With the new diff.Apply we can keep the diff mostly intact, but we need to
turn off all RequiresNew flags so that the prior state is not removed
from the apply.
2019-01-30 14:55:04 -05:00
Martin Atkins
477da57a92 helper/plugin: Honor resource type overrides in import
One quirky aspect of our import feature is that we allow the importer to
produce additional resources alongside the one that was imported, such as
to create separate rules for each rule of an imported security group.

Providers need to be able to set the types of these other resources since
they may not match the "main" resource type. They do this by calling
ResourceData.SetType, which in turn sets InstanceState.Ephemeral.Type.

In our shims here we therefore need to copy that out into our new TypeName
field so that the new core import code can see it and create the right
type in the state.

Testing this required a minor change to the test harness to allow the
ImportStateCheck function to see the resource type.
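
In provider code the mechanism looks roughly like this (resource and type names hypothetical):

```go
func importExampleGroup(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
	// The "main" imported resource keeps its requested type.
	results := []*schema.ResourceData{d}

	// An extra resource returned alongside it must declare its own type,
	// which the shim copies into the new TypeName field for core.
	rule := resourceExampleGroupRule().Data(nil)
	rule.SetId("rule-1")
	rule.SetType("example_group_rule")
	results = append(results, rule)

	return results, nil
}
```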
2019-01-30 09:05:08 -08:00
Paul Tyng
bb9ae50279
Copy TF version to helper/schema provider 2019-01-28 14:38:49 -05:00
Martin Atkins
ae0be75ae0 helper/schema: TypeMap of Resource is actually of TypeString
Historically helper/schema did not support non-primitive map attributes
because they cannot be represented unambiguously in flatmap. When we
initially implemented CoreConfigSchema here we mapped that situation to
a nested block of mode NestingMap, even though that'd never worked until
now, assuming that it'd be harmless because providers wouldn't be using
it.

It turns out that some providers are, in fact, incorrectly populating
a TypeMap schema with Elem: &schema.Resource, apparently under the false
assumption that it would constrain the keys allowed in the map. In
practice, helper/schema has just been ignoring this and treating such
attributes as map of string. (#20076)

In order to preserve the behavior of these existing incorrectly-specified
attribute definitions, here we mimic the helper/schema behavior by
presenting as an attribute of type map(string).

These attributes have also been shown in some documentation as nested
blocks (with no equals sign), so that'll need to be fixed in user
configurations as they upgrade to Terraform 0.12. However, the existing
upgrade tool rules will take care of that as a natural consequence of the
name being indicated as an attribute in the schema, rather than as a block
type.

This fixes #20076.
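
The two spellings side by side (attribute name hypothetical); both are now presented to core as map(string):

```go
// Incorrect but historically tolerated: the *Resource Elem is ignored and
// the attribute behaves as a map of string.
"labels": {
	Type: schema.TypeMap,
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"key": {Type: schema.TypeString, Optional: true},
		},
	},
},

// The equivalent, correctly specified form:
"labels": {
	Type: schema.TypeMap,
	Elem: &schema.Schema{Type: schema.TypeString},
},
```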
2019-01-25 14:12:58 -08:00
James Bardin
37b5e2dc87 don't remove empty diff values
Our new diff handling no longer requires stripping the empty diffs out,
and providers may be relying on some of the empty-value quirks in
helper/schema.
2019-01-23 17:33:23 -05:00
James Bardin
46a4628782
Merge pull request #20081 from hashicorp/jbardin/list-block
New Diff.Apply method
2019-01-22 19:20:53 -05:00
Martin Atkins
f65b7c5372 helper/plugin: Discard meaningless differences from provider planning
Due to various imprecisions in the old SDK implementation, applying the
generated diff can potentially make changes to the data structure that
have no real effect, such as replacing an empty list with a null list or
vice-versa.

Although we can't totally eliminate such diff noise, here we attempt to
avoid it in situations where there are _only_ meaningless changes -- where
the prior state and planned state are equivalent -- by just echoing back
the prior state verbatim to ensure that Terraform will treat it as a noop
change.

If there _are_ some legitimate changes then the result may still contain
meaningless changes alongside it, but that is just a cosmetic problem for
the diff renderer, because the meaningless changes will be ignored
altogether during a subsequent apply anyway. The primary goal here is just
to ensure we can converge on a fixpoint when there are no explicit changes
in the configuration.
2019-01-22 15:41:10 -08:00
James Bardin
8d302c5bd2 update grpc_provider for new diffs
Keep the diff as-is before applying.
2019-01-22 18:10:12 -05:00
James Bardin
286cb0a39d clean out diff a little more before checking
Check that there aren't any real diff attributes first, before returning
the original state in PlanResourceChange.
2019-01-17 19:19:13 -05:00
James Bardin
4f691c5988 don't replace null strings with empty strings
This adds unexpected values in some cases, and since the case this
handles is only within set objects, we'll deal with this when tackling
the sets themselves.
2019-01-17 19:19:13 -05:00
James Bardin
2cc651124e don't overwrite values in plan
Plan can change known values too, which we can't match in sets. We'll
find another way to normalize these without losing plan values.
2019-01-17 18:51:18 -05:00
James Bardin
7d05dee08d refactor ApplyResourceChange
Remove a bunch of indentation by returning early, and make sure we don't
fail on non-fatal error without saving the applied value.
2019-01-15 12:35:58 -05:00
James Bardin
0a731167db add a round trip through the shims during apply
Cycle through the shim operations after Apply, to ensure that we can
converge on a stable value for Plan. While the shims produce valid
values in both directions, helper/schema sometimes does not agree on
which containers should be empty or null.
2019-01-15 11:59:15 -05:00
Martin Atkins
86c02d5c35 command: "terraform init" can partially initialize for 0.12upgrade
There are a few constructs from 0.11 and prior that cause 0.12 parsing to
fail altogether, which previously created a chicken/egg problem because
we need to install the providers in order to run "terraform 0.12upgrade"
and thus fix the problem.

This changes "terraform init" to use the new "early configuration" loader
for module and provider installation. This is built on the more permissive
parser in the terraform-config-inspect package, and so it allows us to
read out the top-level blocks from the configuration while accepting
legacy HCL syntax.

In the long run this will let us do version compatibility detection before
attempting a "real" config load, giving us better error messages for any
future syntax additions, but in the short term the key thing is that it
allows us to install the dependencies even if the configuration isn't
fully valid.

Because backend init still requires full configuration, this introduces a
new mode of terraform init where it detects heuristically if it seems like
we need to do a configuration upgrade and does a partial init if so,
before finally directing the user to run "terraform 0.12upgrade" before
running any other commands.

The heuristic here is based on two assumptions:
- If the "early" loader finds no errors but the normal loader does, the
  configuration is likely to be valid for Terraform 0.11 but not 0.12.
- If there's already a version constraint in the configuration that
  excludes Terraform versions prior to v0.12 then the configuration is
  probably _already_ upgraded and so it's just a normal syntax error,
  even if the early loader didn't detect it.

Once the upgrade process is removed in 0.13.0 (users will be required to
go stepwise 0.11 -> 0.12 -> 0.13 to upgrade after that), some of this can
be simplified to remove that special mode, but the idea of doing the
dependency version checks against the liberal parser will remain valuable
to increase our chances of reporting version-based incompatibilities
rather than syntax errors as we add new features in future.
2019-01-14 11:33:21 -08:00
Martin Atkins
0c0a437bcb Move module install functionality over to internal/initwd 2019-01-14 11:33:21 -08:00
James Bardin
041ed67e46 type names don't imply the resource mode
The addr type doesn't imply the resource mode, so data sources and
managed resources with the same type name could shim incorrectly.
2019-01-12 11:43:48 -05:00
James Bardin
e8096e9c8b normalize values during ReadResource
Match the normalization behavior of Apply, so we don't end up causing
any diffs between zero values when refreshing resources.
2019-01-12 10:41:04 -05:00
James Bardin
bc5eecd7f2 make sure id really gets set in SetId
SetId needs to overwrite the newState as well, since the internal calls
to DataSource.Id() will override the set attribute.
2019-01-10 20:28:11 -05:00
James Bardin
a7b399cb4c use actual schema.Resources for state shims
Provider tests often rely on checking values contained within sets, by
directly accessing their flatmapped representation. In order to provide
the test harness with the expected set hashes, the sets must be
generated by the schema.Resource itself.

During the test we now build a fixed map of the providers, which should
only contain schema.Provider instances, and pass them into each
TestStep. The individual schema.Resource instances can then be pulled
from the providers, and used to recreate the state from the cty.Value
returned by the core operations.
2019-01-10 12:20:03 -05:00
James Bardin
7973872524 allow TestCheckNoResourceAttr for empty containers
Stricter type handling in the new shims may add empty containers into
the state where they were previously elided. Since the detection of
missing and empty containers in the legacy state was never reliable,
allow TestCheckNoResourceAttr to succeed if the key is a container count
index, and the value is "0".
2019-01-09 13:09:02 -05:00
James Bardin
c63040c737 have TestCheckResourceAttr accept missing counts
Missing containers were often erroneously kept in the state, but since
the addition of the new provider shims, they can often be correctly
eliminated. There are however many tests that check for a "0" count in
the flatmap state when there shouldn't be a key at all. This addition
looks for a container count key and "0" pair, and allows for the key to
be missing.

There may be some tests negatively affected by this which were
legitimately checking for empty containers, but those were also not
reliably detected, and there should be much fewer tests involved.
2019-01-09 13:01:17 -05:00
James Bardin
b55ec74c27 add copyMissingValues for normalizing shimmed Vals
Zero values and empty containers can be lost during the shimming
process, and during the provider's Apply step.

If we have known zero value containers and primitives in the source,
which appear as null values in the destination, we copy over the zero
value. Sets (and lists to an extent) are more difficult, since their
before and after indexes may not correlate. In that case we take the
entire container if it's wholly known, expecting the provider to have
correctly handled the value.
2019-01-08 16:26:22 -05:00
James Bardin
8300d65539 don't strip sets with count 1 when normalizing
normalizeFlatmapContainers should retain sets with a count of 1, and
convert sets with a count of 0 if they were 1 before the Apply step.
2019-01-08 16:26:21 -05:00
Martin Atkins
cdad78d69b helper/resource: Allow multiple providers in a single TestCase
Due to incorrect use of a loop iterator variable inside a closure, all of
the given providers were ending up with the same factory function.
Now we copy the factory function to a local within the loop first so that
each iteration has its own variable.

This is the second round of similar bugs in this function, so we'll also
add a test case for it to reduce the risk of future regressions given that
most real callers don't exercise this with multiple providers in practice.
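
The bug class, distilled to a standalone sketch (not the harness code itself): before Go 1.22 a range variable is reused across iterations, so every closure sees the final value unless it is copied to a loop-local:

```go
package main

import "fmt"

func main() {
	providers := []string{"google", "google-beta"}

	factories := map[string]func() string{}
	for _, p := range providers {
		p := p // copy to a loop-local; without this, every factory returns the last provider
		factories[p] = func() string { return p }
	}

	for name, f := range factories {
		fmt.Println(name, "->", f())
	}
}
```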
2019-01-07 16:58:36 -08:00
Martin Atkins
b190d3b4f2 helper/resource: Shim back to old state must preserve schema version
We use a shim to convert from the new state model back to the old because
the provider test API is still using the old API throughout. However, the
shim was not preserving the schema version recorded in the new-style state
and so a round-trip through this shim would cause the schema versions to
all revert to zero.

This can cause trouble with the destroy phase of provider tests because
(for API legacy reasons) we round-trip from old state back to new again
before the destroy phase and thus causing the providers to try to upgrade
from state version zero even though the data was already latest, which
can cause errors because state upgrades are generally not idempotent.
2019-01-05 10:00:30 -08:00
Martin Atkins
06acc3f6c8 helper/schema: Skip validation of unknown values
With the introduction of explicit "null" in 0.12 it's possible for a value
that is unknown during plan to become a known null during apply, so we
need to slightly weaken our validation rules to accommodate that, in
particular skipping the validation of conflicting attributes if the result
could potentially be valid after the unknown values become known.

This change is in the codepath that is common to both 0.12 and 0.11
callers, but that's safe because 0.11 re-runs validation during the apply
step and so will still catch problems here, albeit in the apply step
rather than in the plan step, thus matching the 0.12 behavior. This new
behavior is a superset of the old in the sense that everything that was
valid before is still valid.

The implementation here also causes us to skip all other validation for
an attribute whose value is unknown. Most of the downstream validation
functions handle this directly anyway, but again this doesn't add any new
failure cases, and should clean up some of the rough edges we've seen with
unknown values in 0.11 once people upgrade to 0.12-compatible providers.
Any issues we now short-circuit during planning will still be caught
during apply.

While working on this I found that the existing "Not a list" test was not
actually testing the correct behavior, so this also includes a tweak to
that to ensure that it really is checking the "should be a list" path
rather than the "cannot be set" codepath it was inadvertently testing
before.
2019-01-04 14:46:47 -08:00
James Bardin
8ab5698e2a
Merge pull request #19587 from hashicorp/jbardin/safe-appends
don't modify argument slices
2018-12-10 15:10:02 -05:00