The omitUnknowns and unknownAsBool functions were previously trying hard
to preserve the same collection types in the output as they had in the
input, by attempting to keep everything matched up so that the results
would be valid.
Unfortunately, this turns out to be a harder problem than we originally
thought: it was possible for a collection value going in to produce
inconsistent element types out (and thus a panic) in the following
situations:
- when a collection with mixed known and unknown values was passed in
  to omitUnknowns.
- when a collection of collections whose inner collections were a
  mixture of empty and non-empty was passed in to unknownAsBool.
The results of these functions are only used to marshal to JSON anyway,
and JSON serialization can't distinguish between the three sequence types
or the two mapping types, so in practice we can just standardize on
converting all sequences to tuple and all mappings to object here without
changing the resulting output at all. That way we don't have to worry
about preserving all of the inner types exactly.
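As a rough illustration of that standardization (a sketch, not the real
jsonplan code), any transformed sequence can be rebuilt as a cty tuple
and any transformed mapping as a cty object, since tuples and objects
allow their element and attribute types to differ freely and the JSON
output looks the same either way:

    package sketch

    import "github.com/zclconf/go-cty/cty"

    // sequenceAsTuple rebuilds transformed sequence elements as a tuple,
    // so the elements are not required to share a single element type.
    func sequenceAsTuple(elems []cty.Value) cty.Value {
        if len(elems) == 0 {
            return cty.EmptyTupleVal
        }
        return cty.TupleVal(elems)
    }

    // mappingAsObject does the same for maps and objects: every mapping
    // becomes an object whose attributes may have differing types.
    func mappingAsObject(attrs map[string]cty.Value) cty.Value {
        if len(attrs) == 0 {
            return cty.EmptyObjectVal
        }
        return cty.ObjectVal(attrs)
    }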
A nice consequence of that relaxation is that we can now do what we
originally wanted to do with unknownAsBool, and omit map keys and
object attributes altogether if their values would've been false,
producing a much more compact result. This is easiest to do now, while
there's only one known user of this JSON plan output and we know that
user will treat both false and omitted as the same here.
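For illustration, here's a minimal sketch of that compaction (the real
unknownAsBool also recurses through nested structures, which is omitted
here): attributes whose value would have been false are simply left out
of the resulting object.

    package sketch

    import "github.com/zclconf/go-cty/cty"

    // unknownsAsCompactObject records only the attributes whose values
    // are unknown, mapping them to cty.True; known attributes are omitted
    // entirely rather than recorded as cty.False, since the only consumer
    // treats false and absent the same way.
    func unknownsAsCompactObject(attrs map[string]cty.Value) cty.Value {
        ret := make(map[string]cty.Value)
        for name, v := range attrs {
            if !v.IsKnown() {
                ret[name] = cty.True
            }
        }
        if len(ret) == 0 {
            return cty.EmptyObjectVal
        }
        return cty.ObjectVal(ret)
    }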
* core: don't panic in NodeAbstractResourceInstance References()
It is possible for s.Current to be nil. This was hard to reproduce, so
the root cause is still unknown, but we can guard against the symptom.
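A minimal, hypothetical sketch of that guard (the type and field names
below are illustrative stand-ins, not the real Terraform internals):

    package sketch

    // Illustrative stand-ins for the real node and state types.
    type stateObject struct{ dependencies []string }
    type instanceState struct{ Current *stateObject }
    type nodeResourceInstance struct{ state *instanceState }

    // References bails out early when the current state object is
    // missing, guarding against the nil s.Current case described above.
    func (n *nodeResourceInstance) References() []string {
        s := n.state
        if s == nil || s.Current == nil {
            return nil
        }
        return s.Current.dependencies
    }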
* add log statement
The backend gets to "prepare" the configuration before Configure is
called, in order to validate the values and insert defaults. We don't
want to store this prepared value in the "config state", because it will
often not match the raw config, forcing unnecessary backend migrations
during init.
Since PrepareConfig is always called before Configure, we can store the
config value directly, and assume that it will be prepared in the same
manner each time.
If the backend config hashes match during init, and there are no new
backend override options, then we assume the existing config is OK.
Since init should be idempotent, we should be able to run init with no
options or config changes, and not affect the backends at all.
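The ordering and the idempotence check might be sketched roughly as
below, assuming the backend.Backend interface with PrepareConfig and
Configure; the surrounding helpers and the hash comparison are
illustrative, not the real meta-backend code.

    package sketch

    import (
        "github.com/hashicorp/terraform/backend"
        "github.com/zclconf/go-cty/cty"
    )

    // configureBackend always runs PrepareConfig before Configure. The
    // caller persists (and hashes) the raw config value rather than the
    // prepared one, since preparation is deterministic and will happen
    // again on the next init.
    func configureBackend(b backend.Backend, rawConfig cty.Value) error {
        prepared, diags := b.PrepareConfig(rawConfig)
        if diags.HasErrors() {
            return diags.Err()
        }
        if moreDiags := b.Configure(prepared); moreDiags.HasErrors() {
            return moreDiags.Err()
        }
        return nil
    }

    // backendUnchanged is the init short-circuit: if the stored config
    // hash matches the current one and no new override options were
    // given, the existing backend configuration is left untouched.
    func backendUnchanged(savedHash, currentHash uint64, hasOverrides bool) bool {
        return savedHash == currentHash && !hasOverrides
    }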
A longer-form guide will follow in the Sentinel section of the Terraform
Enterprise documentation, once it's ready. For now, this section isn't
saying anything useful since it was always just a stub for a guide we
planned to write later.
Previously the type-selection codepath for an input tuple referred
unconditionally to the start and end index values. In the Type callback,
only the types of the arguments are guaranteed to be known, so any access
to the values must be guarded with an .IsKnown check, or else the function
will not short-circuit properly when an unknown value is passed.
Now we will check that the start and end indices are in range when we have
enough information to do so, but we'll return an approximate result if
either is unknown.
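A sketch of that guarded Type callback, using cty's function package
(the function below is a simplified stand-in for the real slice
function, with its range handling abbreviated):

    package sketch

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/function"
        "github.com/zclconf/go-cty/cty/gocty"
    )

    var sliceFunc = function.New(&function.Spec{
        Params: []function.Parameter{
            {Name: "list", Type: cty.DynamicPseudoType},
            {Name: "start_index", Type: cty.Number},
            {Name: "end_index", Type: cty.Number},
        },
        Type: func(args []cty.Value) (cty.Type, error) {
            listType := args[0].Type()
            // Only the argument *types* are guaranteed known here, so any
            // use of the index *values* must be guarded with IsKnown.
            if listType.IsTupleType() && args[1].IsKnown() && args[2].IsKnown() {
                var start, end int
                if err := gocty.FromCtyValue(args[1], &start); err != nil {
                    return cty.NilType, err
                }
                if err := gocty.FromCtyValue(args[2], &end); err != nil {
                    return cty.NilType, err
                }
                etys := listType.TupleElementTypes()
                if start < 0 || end > len(etys) || start > end {
                    return cty.NilType, fmt.Errorf("index out of range")
                }
                return cty.Tuple(etys[start:end]), nil
            }
            // Approximate result when either index is unknown.
            return cty.DynamicPseudoType, nil
        },
        Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
            // Real slicing logic omitted in this sketch.
            return cty.UnknownVal(retType), nil
        },
    })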
The upgrade tool is assuming that a type of "list" means list(string) and
a type of "map" means map(string), because that was what we documented
those as meaning.
In practice, Terraform 0.11 was lacking some validation which allowed
more complex nested structures in some cases even though they were pretty
inconvenient to use due to other language limitations.
The upgrade tool doesn't have enough context to make a reliable decision
on this, so instead we'll rely on the upgrade guide for this. We don't
need a TF-UPGRADE-TODO comment in this case because we reserve those for
things where a subsequent operation might cause the configuration to be
misinterpreted, rather than just causing an error. Instead, we'll show an
example of the comment in the upgrade guide so the reader can easily
match it, and give some advice in the guide on how to address it.
This is a "should never happen" case, because we shouldn't ever have
resources in the plan that aren't in the configuration, but since we've
got a report of a crash here (which went away before we got a chance to
debug it), here's just an extra guard to ensure that we'll still exit
gracefully in that case.
If we see this error crop up again in future, it'd be nice to gather a
full trace log so we can see what GraphNodeAttachResourceConfig did and
why it did not attach a configuration.
This includes a small fix to ensure the parser doesn't produce an invalid
body when it encounters block-parsing syntax errors, and instead produces
an incomplete result that calling applications like Terraform can still
analyze.
The problem here was affecting our version-constraint-sniffing code, which
intentionally tried to find a core version constraint even if there's a
syntax error so that it can report that a new version of Terraform is a
likely cause of the syntax error. It was working in most cases, unless
it was the "terraform" block itself that contained the error, because then
we'd try to analyze a broken hcl.Block with a nil body.
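A rough sketch of the sniffing pattern this enables, assuming HCL 2
(imported via its current module path); the helper itself is
illustrative, not Terraform's actual sniffing code. Parse diagnostics
are deliberately ignored and the partially-parsed body is analyzed
anyway:

    package sketch

    import (
        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/hclsyntax"
        "github.com/zclconf/go-cty/cty"
    )

    // sniffCoreVersionConstraint looks for a required_version setting
    // even when the file has syntax errors: the diagnostics are ignored,
    // and the (possibly incomplete) body is analyzed as long as it is
    // non-nil.
    func sniffCoreVersionConstraint(src []byte, filename string) (string, bool) {
        f, _ := hclsyntax.ParseConfig(src, filename, hcl.InitialPos)
        if f == nil || f.Body == nil {
            return "", false
        }
        content, _, _ := f.Body.PartialContent(&hcl.BodySchema{
            Blocks: []hcl.BlockHeaderSchema{{Type: "terraform"}},
        })
        for _, block := range content.Blocks {
            if block.Body == nil {
                continue // the parser fix above ensures this shouldn't happen
            }
            inner, _, _ := block.Body.PartialContent(&hcl.BodySchema{
                Attributes: []hcl.AttributeSchema{{Name: "required_version"}},
            })
            if attr, ok := inner.Attributes["required_version"]; ok {
                v, _ := attr.Expr.Value(nil)
                if v.Type() == cty.String && v.IsKnown() && !v.IsNull() {
                    return v.AsString(), true
                }
            }
        }
        return "", false
    }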
This includes a new test for "terraform init" that exercises this
recovery codepath.
Having removed the methods, it is straightforward to mechanically update
this file to get rid of all references to the "legacy schema". There is
now only one config schema type to deal with in the SDK.
This experiment is no longer needed for handling computed blocks, and
since the legacy SDK can't reasonably handle Dynamic types, we need to
remove it before the final release.
Remove the LegacySchema functions as well, since handling
SkipCoreTypeCheck was the only thing left they were doing.
When handling the json state in UpgradeResourceState, the schema
must be what core uses, because that is the schema used for
encoding/decoding the json state.
When converting from flatmap to json state, the legacy schema will be
used to decode the flatmap to a cty value, but the resulting json will
be encoded using the CoreConfigSchema to match what core expects.
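Sketching that rule with a hypothetical decoder (decodeFlatmap stands
in for whatever decodes legacy flatmap attributes; the real SDK code
differs): the legacy schema's implied type drives the flatmap decode,
while the JSON output is converted to and encoded with the core
schema's implied type.

    package sketch

    import (
        "github.com/zclconf/go-cty/cty"
        "github.com/zclconf/go-cty/cty/convert"
        ctyjson "github.com/zclconf/go-cty/cty/json"
    )

    // upgradeFlatmapState decodes old flatmap state with the legacy (SDK)
    // type, then re-encodes it as JSON against the type implied by the
    // core schema, since that is the schema core will use to decode the
    // upgraded state later.
    func upgradeFlatmapState(
        attrs map[string]string,
        legacyType, coreType cty.Type,
        decodeFlatmap func(map[string]string, cty.Type) (cty.Value, error), // hypothetical decoder
    ) ([]byte, error) {
        val, err := decodeFlatmap(attrs, legacyType)
        if err != nil {
            return nil, err
        }
        // Convert to the core type before encoding, so the JSON matches
        // what core expects when it decodes the upgraded state.
        coreVal, err := convert.Convert(val, coreType)
        if err != nil {
            return nil, err
        }
        return ctyjson.Marshal(coreVal, coreType)
    }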