We previously deferred this to Validate, but all of our operations require
a valid set of variables, so checking this up front makes it more
obvious when a call into Terraform Core from the CLI layer isn't
populating variables correctly, rather than having it fail somewhere
downstream.
It's the caller's responsibility to ensure that it has collected values
for all of the variables in some way before calling NewContext, because
in the main case driven by the CLI there are many different places that
variable values can be collected from and so handling the main user-facing
validation in the CLI allows us to return better error messages that take
into account which way a variable is (or is not) being set.
Earlier on in the v0.12 development cycle we made the decision that the
validation walk should consider input values to always be unknown so that
validation is checking validity for all possible inputs rather than for
a specific set of inputs; checking for a specific set of inputs is the
responsibility of the plan walk.
However, we didn't implement that in the best way: we made the
"terraform validate" command force all of the input variables to unknown
but that was insufficient because it didn't also affect the implicit
validation walk we do as part of "terraform plan" and "terraform apply",
causing those to produce confusingly-different results.
Instead, we'll address the problem directly in the reference resolver code,
ensuring that all variable values will always be treated as an unknown
(of the declared type, so type checking is still possible) during any
validate walk, regardless of which command is running it.
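As a minimal sketch of the effect (a hypothetical configuration, not part
of this change), a declared type still drives type checking even though no
value has been set:
```
variable "subnet_ids" {
  type = list(string)
}

resource "aws_instance" "example" {
  # During any validate walk var.subnet_ids is an unknown value of type
  # list(string), so this mismatch (a list where a single string is
  # expected) is still reported even though no value was provided:
  subnet_id = var.subnet_ids
}
```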
Both depends_on and ignore_changes contain references to objects that we
can validate.
Historically Terraform has not validated these, instead just ignoring
references to non-existent objects. Since there is no reason to refer to
something that doesn't exist, we'll now verify this and return errors so
that users get explicit feedback on any typos they may have made, rather
than just wondering why what they added seems to have no effect.
This is particularly important for ignore_changes because users have
historically used strange values here to try to exploit the fact that
Terraform was resolving ignore_changes against a flatmap. This will give
them explicit feedback for any odd constructs that the configuration
upgrade tool doesn't know how to detect and fix.
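For example (resource and attribute names here are hypothetical), both
kinds of mistake now produce errors:
```
resource "aws_instance" "example" {
  ami           = "ami-abc123"
  instance_type = "t2.micro"

  # A typo here used to be silently ignored; it's now an error because
  # no aws_security_group.mian is declared anywhere in the module:
  depends_on = [aws_security_group.mian]

  lifecycle {
    # Likewise, naming an attribute that isn't in the schema is now
    # reported rather than having no effect:
    ignore_changes = [instance_typo]
  }
}
```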
Previously, when evaluating module call references (like module.foo), we
made the invalid assumption that the module must exist, which is
incorrect for that particular case because it's a reference to a child
module, not to an object within the current module.
However, now that we have the mechanism for static validation of
references, we'll deal with this one there so it can be caught sooner.
That then makes the original assumption valid, though for a different
reason.
This is verified by two new context tests for validation:
- TestContext2Validate_invalidModuleRef
- TestContext2Validate_invalidModuleOutputRef
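The second of those tests covers configurations along these lines (a
hypothetical sketch), where the child module declares no output named
"bar":
```
module "foo" {
  source = "./child"
}

output "example" {
  # Rejected during validate if ./child declares no output named "bar",
  # rather than assumed valid as before:
  value = module.foo.bar
}
```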
In the initial move to HCL2 we started relying only on full expression
evaluation to catch attribute errors, but that's not sufficient for
resource attributes in practice because during validation we can't know
yet whether a resource reference evaluates to a single object or to a
list of objects (if count is set).
To address this, here we reinstate some static validation of resource
references by analyzing the reference objects directly, disregarding any
instance index, and producing errors if the subsequent traversal steps
do not correspond to items within the resource type schema.
This also allows us to produce some more specialized error messages for
certain situations. In particular, we can recognize a reference like
aws_instance.foo.count, which in 0.11 and prior was a weird special case
for determining the count value of a resource block, and offer a helpful
error showing the new length(aws_instance.foo) usage pattern.
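For instance, given that old special case, the error now steers toward
the new pattern (resource names hypothetical):
```
# In 0.11 and earlier this special form read the count of a resource:
#   value = "${aws_instance.foo.count}"

# The error for it now suggests the 0.12 equivalent:
output "instance_count" {
  value = length(aws_instance.foo)
}
```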
This eventually delegates to the static traversal validation logic that
was added to the configschema package in a previous commit, which also
includes some specialized error messages that distinguish between
attributes and block types in the schema so that the errors relate more
directly to constructs the user can see in the configuration.
In future we could potentially move more of the checks from the dynamic
schema construction step to the static validation step, but resources
are the reference type that most needs this immediately due to the
ambiguity caused by the instance indexing syntax. We can safely refactor
other reference types to be statically validated in later releases.
This is verified by two pre-existing context validate tests which we
temporarily disabled during earlier work (now re-enabled) and also by a
new validate test aimed specifically at the special case for the "count"
attribute.
We can't catch invalid attributes in validate at the moment, because the
lack of count information causes the references to return unknown. Make
sure they fail in plan, and mark the validate tests to fix later.
Remove a test that is no longer needed, since a provider must be
explicitly defined for orphaned modules, and the case is covered in other
context tests.
Update a test fixture to better represent the original missing map
issue, since the ability to detect nil has made the old test invalid.
Significant changes to the provider interface left a lot of the
tests in a non-buildable state. This set of changes gets the
tests building again but does not attempt to make them run to
completion or pass.
After this commit, it is possible to build a test program for
the ./terraform package but it will panic during its run. That
will be addressed in subsequent commits.
Due to how often the state and plan types are referenced throughout
Terraform, there isn't a great way to switch them out gradually. As a
consequence, this huge commit gets us from the old world to a _compilable_
new world, but still has a large number of known test failures due to
key functionality being stubbed out.
The stubs here are for anything that interacts with providers, since we
now need to do the follow-up work to similarly replace the old
terraform.ResourceProvider interface with its replacement in the new
"providers" package. That work, along with work to fix the remaining
failing tests, will follow in subsequent commits.
The aim here was to replace all references to terraform.State and its
downstream types with states.State, terraform.Plan with plans.Plan,
state.State with statemgr.State, and switch to the new implementations of
the state and plan file formats. However, due to the number of times those
types are used, this also ended up affecting numerous other parts of core
such as terraform.Hook, the backend.Backend interface, and most of the CLI
commands.
Just as with 5861dbf3fc49b19587a31816eb06f511ab861bb4 before, I apologize
in advance to the person who inevitably just found this huge commit while
spelunking through the commit history.
The "config" package is no longer used and will be removed as part
of the 0.12 release cleanup. Since configschema is part of the
"new world" of configuration modelling, it makes more sense for
it to live as a subdirectory of the newer "configs" package.
Previously we would attempt to DynamicExpand during the validate walk and
then validate each expanded instance separately. However, this meant that
we would not be able to validate the contents of a block where count = 0
or where count is not yet known.
Here we instead do a more static validation pass against the resource
configuration itself, setting count.index to cty.UnknownVal(cty.Number) so
we can type-check everything inside the block as being correct regardless
of the final count.
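As a sketch of what this makes checkable (hypothetical configuration):
```
variable "instance_count" {
  type = number
}

resource "aws_instance" "example" {
  count = var.instance_count

  # count.index is cty.UnknownVal(cty.Number) during validation, so the
  # block contents type-check regardless of whether the final count is
  # 0, 5, or not yet known:
  tags = {
    Name = "example-${count.index}"
  }
}
```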
This is another step towards repairing the "validate" command for our
changed assumptions in a world where we have a more sophisticated type
checker.
This doesn't yet address the remaining problem that the expression
evaluator can't, with the current state structures, distinguish between
a completed resource with count = 0 and a resource that doesn't exist
at all (during validate), and so we'll still get errors if an expression
elsewhere in configuration refers to a dynamic index of a resource with
"count" set. That's a pre-existing condition that's no longer being masked
by _this_ problem, but can't be addressed until we've introduced the new
state types (states.State, etc) and thus we _can_ distinguish these two
situations. That will therefore be addressed in a later commit.
Previously we fetched schemas during the AttachSchemaTransformer,
potentially multiple times as that was re-run for each graph built. Now
we fetch the schemas just once during context construction, passing that
result into each of the graph builders.
This only addresses the schema accesses during graph construction. We're
still separately loading schemas during the main walk for evaluation
purposes. This will be addressed in a later commit.
This problem should now be caught at validate time rather than plan time,
because we can use the schema to detect it before the resource has been
resolved.
EvalValidateSelfRef needs schema in order to extract references. It was
previously expecting a *configschema.Block directly, but we weren't
actually passing that in from anywhere except the tests because it's not
available directly in that form during the evaltree for
node_resource_validate.
Instead, we now pass in the whole *ProviderSchema for the associated
provider and have this EvalNode find the schema itself based on the
address. This breaks some of the generality of this node (now only really
works for resource addresses) but that's okay since we have no other
use-case right now anyway.
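For reference, the kind of configuration this node rejects looks like
this (hypothetical names):
```
resource "aws_instance" "foo" {
  ami           = "ami-abc123"
  instance_type = "t2.micro"

  # A resource may not refer to its own attributes; with the whole
  # provider schema in hand, the EvalNode can find the aws_instance
  # schema from the address and flag this self-reference:
  tags = {
    Name = aws_instance.foo.id
  }
}
```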
After the refactoring to integrate HCL2 many of the tests were no longer
using correct types, attribute names, etc.
This is a bulk update of all of the tests to make them compile again, with
minimal changes otherwise. Although the tests now compile, many of them
do not yet pass. The tests will be gradually repaired in subsequent
commits, as we continue to complete the refactoring and retrofit work.
Validation is the best time to return detailed diagnostics
to the user, since we're much more likely to have source
location information, etc. than we are in later operations.
This change doesn't actually add any detail to the messages
yet, but it changes the interface so that we can gradually
introduce more detailed diagnostics over time.
While here there are some minor adjustments to some of the
messages to improve their consistency with terminology we
use elsewhere.
Use the configured providers directly, rather than looking for inherited
provider configuration during graph evaluation.
First remove the provider config cache, and the associated
SetProviderConfig and ParentProviderConfig methods on the eval context.
Every provider must be configured, so there's no need to look for
configuration from other provider instances.
The config.ProviderConfig struct now has a Scope field which stores the
proper path for the interpolation scope. To get this metadata to the
interpolator, we add an EvalInterpolateProvider node which can carry the
ProviderConfig, and an InterpolateProvider context method to carry the
ProviderConfig.Scope into the InterpolationScope.
Some of the tests could be adjusted to account for the new inheritance
behavior, and some were simply no longer valid and will be removed.
The remaining tests have questions on how they should work in practice.
This mostly concerns orphaned modules where there is no longer a way to
obtain a provider. In some cases we may require that a minimal provider
config be present to handle the destroy process, but we need further
testing.
All disabled code was left commented out in this commit so that any
additional review comments can be recorded against it. The following
commit will be a cleanup pass.
The shadow graph was incredibly useful during the 0.7 cycle but these days
it is idle, since we're not planning any significant graph-related changes
for the foreseeable future.
The shadow graph infrastructure is somewhat burdensome since any change
to the ResourceProvider interface must have shims written. Since we _are_
expecting changes to the ResourceProvider interface in the next few
releases, I'm calling "YAGNI" on the shadow graph support to reduce our
maintenance burden.
If we do end up wanting to use the shadow graph again in future, we'll
always be able to pull it out of version control and then make whatever
changes we skipped making in the meantime, but we can avoid that cost
while we don't have any evidence that we'll need to pay it.
Rather than providing an already-resolved map of plugins to core, we now
provide a "provider resolver" which knows how to resolve a set of provider
dependencies, to be determined later, and produce that map.
This requires the context to be instantiated in a different way, so this
very noisy diff is a mostly-mechanical update of all of the existing
places where contexts get created for testing, using some adapted versions
of the pre-existing utilities for passing in mock providers.
This disables the computed value check for `count` during the validation
pass. This enables partial support for #3888 or #1497: as long as the
value is non-computed during the plan, complex values will work in
counts.
**Notably, this allows data source values to be present in counts!**
The "count" value can be disabled during validation safely because we
can treat it as if any field that uses `count.index` is computed for
validation. We then validate a single instance (as if `count = 1`) just
to make sure all required fields are set.
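For example (a hypothetical configuration, using the interpolation syntax
of the time), a data source can now drive count:
```
data "aws_availability_zones" "available" {}

resource "aws_instance" "example" {
  ami           = "ami-abc123"
  instance_type = "t2.micro"

  # No longer rejected at validate time even though the value is
  # computed; validation checks a single instance as if count = 1, and
  # the real value only needs to be known by plan time:
  count = "${length(data.aws_availability_zones.available.names)}"

  availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}"
}
```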
Related to #8036
We have had this behavior for a _long_ time now (since 0.7.0) but it
seems people are still periodically getting bit by it. This adds an
explicit error message that explains that this kind of override isn't
allowed anymore.
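Presumably the configurations in question look something like this
(hypothetical; assuming the override here is a repeated resource
definition):
```
resource "aws_instance" "web" {
  ami = "ami-abc123"
}

# Before 0.7.0 a second block like this would be merged over the first
# as an override; since 0.7.0 it's invalid, and the error now says so
# explicitly instead of failing in a confusing way:
resource "aws_instance" "web" {
  instance_type = "t2.micro"
}
```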
This makes all the computed stuff "just work" since HIL uses the same
computed sentinel value (string UUID) and the type differentiates it
from a regular string.
The map output from the module "mod" loses the computed value from the
template when we validate. If the "extra" field is removed from the map,
validation fails earlier because the map "does not have any elements so
cannot determine type".
Apply will work, because the computed value will exist in the map.
This is the first step in allowing overrides of map and list variables.
We convert Context.variables to map[string]interface{} from
map[string]string and fix up all the call sites.
Adding walkValidate to the EvalTree operations and removing the
walkValidate guard from Interpolater.valueModuleVar allows the
values to be interpolated for Validate.
This commit adds support for declaring variable types in Terraform
configuration. Historically, the type has been inferred from the default
value, defaulting to string if no default was supplied. This has caused
users to devise workarounds if they wanted to declare a map but provide
values from a .tfvars file (for example).
The new syntax adds the "type" key to variable blocks:
```
variable "i_am_a_string" {
  type = "string"
}

variable "i_am_a_map" {
  type = "map"
}
```
This commit does _not_ extend the type system to include bools, integers
or floats - the only two types available are maps and strings.
Validation is performed if a default value is provided in order to
ensure that the default value type matches the declared type.
In the case that a type is not declared, the old logic is used for
determining the type. This allows backwards compatibility with previous
Terraform configuration.
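With the declared type, the .tfvars case mentioned above works without
workarounds, e.g. (hypothetical values):
```
# terraform.tfvars
i_am_a_map = {
  key         = "value"
  another_key = "another_value"
}
```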
The context_test file has gotten pretty unruly. Let's split it up into
a few files so we can be nicer to our editors and our own sanity.
Definitely lots more we can do to clean up, but with changes like this
I'd rather do small, focused, clear steps instead of one big "cleaned up
lots of stuff" PR.