Unmark values before calling the provider's validate function. This was
not previously exercised in tests, as the mock provider does not marshal
its values. Update the mock provider funcs to marshal their values and
return an error if marshaling fails.
Before configuring a provider, we need to unmark the configuration
object, in case it includes any sensitive values. This is required
because configuration occurs over gRPC, which doesn't support sensitive
marks.
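As a rough, simplified sketch of the idea (using a plain string mark here
rather than Terraform's own sensitive mark), stripping marks with cty
before the value crosses the plugin boundary looks roughly like this:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        // A provider configuration object whose "token" attribute carries a
        // sensitive mark. Marked values can't be serialized for the plugin
        // protocol, so the marks have to come off first.
        config := cty.ObjectVal(map[string]cty.Value{
            "host":  cty.StringVal("example.com"),
            "token": cty.StringVal("hunter2").Mark("sensitive"),
        })

        // UnmarkDeep strips marks from the value and everything nested
        // within it, returning the set of marks that were removed.
        unmarked, marks := config.UnmarkDeep()

        fmt.Println(unmarked.ContainsMarked()) // false: safe to send over gRPC
        fmt.Println(len(marks))                // 1: the "sensitive" mark
    }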
So far all of our language experiments have been new constructs handled
statically up in the configs package, but functions are another common
extension point where experiments could be useful for gathering feedback,
so this passes the experiment information down into the right place to
allow for that to happen, even though as of this commit there are no
experimental functions to use it.
When a resource has no `provider` argument specified, its provider is
derived from the implied provider type based on the resource type. For
example, a `boop_instance` resource has an implied provider local name
of `boop`. Correspondingly, its provider configuration is specified with
a `provider "boop"` block.
However, users can use the `required_providers` configuration to give a
provider a local name that differs from its type. For
example, a provider may be published at `foobar/beep`, but provide
resources such as `boop_instance`. The most convenient way to use this
provider is with a `required_providers` map:
    terraform {
      required_providers {
        boop = {
          source = "foobar/beep"
        }
      }
    }
Once that local name is defined, it is used for provider configuration
(a `provider "boop"` block, not `provider "beep"`). It should also be
used when looking up a resource's provider configuration or provider.
This commit fixes a bug with this edge case, where previously we were
looking up the local provider configuration block using the resource's
assigned provider type. Instead, if no provider argument is specified,
we should be using the implied provider type, as that is what binds the
resource to the local provider configuration.
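As an illustration only (a simplified stand-in for the real lookup, not
the actual implementation), the implied local provider name is just the
portion of the resource type before the first underscore:

    package main

    import (
        "fmt"
        "strings"
    )

    // impliedProviderName is a simplified stand-in for the real logic: a
    // resource type's implied provider local name is the portion of the
    // type before the first underscore.
    func impliedProviderName(resourceType string) string {
        if idx := strings.Index(resourceType, "_"); idx != -1 {
            return resourceType[:idx]
        }
        return resourceType
    }

    func main() {
        // "boop_instance" implies the local name "boop", which binds the
        // resource to a `provider "boop"` block even though the provider's
        // source address is foobar/beep.
        fmt.Println(impliedProviderName("boop_instance")) // boop
    }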
When building a context, we read the dependency locks and ensure that
the provider requirements from the configuration can be satisfied.
If the configured requirements change such that the locks need to be
updated, we explain this and recommend running "terraform init".
This check is ignored for any providers which are locally marked as in
development. This includes unmanaged providers and those listed in the
provider installation `dev_overrides` block.
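Very roughly, and only as a hypothetical sketch of the check (the real
code works with version constraints and provider addresses rather than
plain strings), the behavior is:

    package main

    import "fmt"

    // checkLocks is a hypothetical, heavily simplified version of the check:
    // every provider required by the configuration must be present in the
    // dependency locks, unless it is locally marked as in development.
    func checkLocks(required, locked, devOverrides map[string]bool) []string {
        var missing []string
        for provider := range required {
            if devOverrides[provider] {
                continue // development providers are exempt from the check
            }
            if !locked[provider] {
                missing = append(missing, provider)
            }
        }
        return missing
    }

    func main() {
        required := map[string]bool{"registry.terraform.io/foobar/beep": true}
        locked := map[string]bool{}
        dev := map[string]bool{}

        if missing := checkLocks(required, locked, dev); len(missing) > 0 {
            fmt.Printf("locked dependencies do not satisfy %v; run \"terraform init\"\n", missing)
        }
    }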
Prior to Terraform 0.12 these two functions were the only way to construct
literal lists and maps (respectively) in HIL expressions. Terraform 0.12,
by switching to HCL 2, introduced first-class syntax for constructing
tuple and object values, which can then be converted into list and map
values using the tolist and tomap type conversion functions.
We marked both of these functions as deprecated in the Terraform v0.12
release and have since then mentioned in the docs that they will be
removed in a future Terraform version. The "terraform 0.12upgrade" tool
from Terraform v0.12 also included a rule to automatically rewrite uses
of these functions into equivalent new syntax.
The main motivation for removing these now is just to get this change made
prior to Terraform 1.0, as we'll be doing with various other deprecations.
However, a specific reason for these two functions in particular is that
their existence is what caused us to invent the idea of a "type expression"
as a distinct kind of expression in Terraform v0.12, and so removing them
now would allow potentially unifying type expressions with value
expressions in a future release.
We do not have any current specific plans to make that change, but one
potential motivation for doing so would be to take another attempt at a
generalized "convert" function which takes a type as one of its arguments.
Our previous attempt to implement such a function was foiled by the fact
that Terraform's expression validator doesn't have any way to know to
treat one argument of a particular function as special, and so it was
generating incorrect error messages. We won't necessarily do that, but
having these "list" and "map" functions out of the way leaves the option
open.
Core is only using the PrepareProviderConfig call for the validation
part of the method, but we should be re-validating the final config
immediately before Configure.
This change elects not to start using the PreparedConfig here, since
there is no useful reason for it at this point, and doing so would
introduce a functional difference between Terraform releases that can be
avoided.
The validation for ignore_changes was too broad, and made it difficult
to ignore changes to an attribute that the user does not want to set.
While the goal of ignore_changes is to prevent changes in the
configuration alone, we don't intend to break the use-case of ignoring
drift from the provider. Since we cannot easily narrow the validation to
only detect computed attributes at the moment, we can drop this error
altogether for now.
Our diagnostics model allows for optionally annotating an error or warning
with information about the expression and eval context it was generated
from, which the diagnostic renderer for the UI will then use to give the
user some additional hints about what values may have contributed to the
error.
We previously didn't have those annotations on the results of evaluating
for_each expressions though, because in that case we were using the helper
function to evaluate an expression in one shot and thus we didn't ever
have a reference to the EvalContext in order to include it in the
diagnostic values.
Now, at the expense of having to handle the evaluation at a slightly lower
level of abstraction, we'll annotate all of the for_each error messages
with source expression information. This is valuable because we see users
often confused as to how their complex for_each expressions ended up being
invalid, and hopefully giving some information about what the inputs were
will allow more users to self-solve.
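As a hedged approximation of the change (the real code goes through
Terraform's own diagnostics types), HCL diagnostics can carry the source
expression and its evaluation context, and attaching both is what lets
the renderer report the values that contributed to the error:

    package main

    import (
        "fmt"

        "github.com/hashicorp/hcl/v2"
        "github.com/hashicorp/hcl/v2/hclsyntax"
        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        // Parse a for_each-style expression and evaluate it against a context.
        expr, parseDiags := hclsyntax.ParseExpression([]byte("var.widgets"), "main.tf", hcl.InitialPos)
        if parseDiags.HasErrors() {
            panic(parseDiags.Error())
        }

        ctx := &hcl.EvalContext{
            Variables: map[string]cty.Value{
                "var": cty.ObjectVal(map[string]cty.Value{
                    "widgets": cty.StringVal("not-a-map"),
                }),
            },
        }

        val, _ := expr.Value(ctx)
        if !val.Type().IsMapType() && !val.Type().IsObjectType() && !val.Type().IsSetType() {
            diag := &hcl.Diagnostic{
                Severity: hcl.DiagError,
                Summary:  "Invalid for_each argument",
                Detail:   `The given "for_each" argument value is unsuitable.`,
                Subject:  expr.Range().Ptr(),
                // These two fields are what allow the diagnostic renderer to
                // show the values that contributed to the error.
                Expression:  expr,
                EvalContext: ctx,
            }
            fmt.Println(diag.Error())
        }
    }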
This was mostly unused, since we no longer need to interrupt a
series of eval node executions.
The exception was the stopHook, which is still used to halt execution
when there's an interrupt. Since interrupting execution should not
complete successfully, we use a normal opaque error to halt everything,
and return it to the UI.
We can work on coalescing or hiding these if necessary in a separate PR.
Some tests could not handle reading orphaned resources. It also turns
out the ReadResource mock never returned the correct state in the
default case at all.
This forces orphaned resources to be re-read during planning, removing
them from the state if they no longer exist.
This needs to be done for a bare `refresh` execution, since Terraform
should remove instances that don't exist and are not in the
configuration from the state. They should also be removed from state so
there is no Delete change planned, as not all providers will gracefully
handle a delete operation on a resource that does not exist.
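A minimal sketch of the refresh behavior, assuming a hypothetical read
callback and a plain map standing in for state (not the real state types):

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    // refreshInstance is a bare-bones illustration: the prior object is
    // re-read through the provider, and a null result means the remote
    // object is gone, so the instance is dropped from state rather than
    // having a Delete change planned for it.
    func refreshInstance(state map[string]cty.Value, key string, read func(cty.Value) cty.Value) {
        current := read(state[key])
        if current.IsNull() {
            delete(state, key)
            return
        }
        state[key] = current
    }

    func main() {
        objType := cty.Object(map[string]cty.Type{"id": cty.String})
        state := map[string]cty.Value{
            "boop_instance.gone": cty.ObjectVal(map[string]cty.Value{"id": cty.StringVal("i-123")}),
        }

        // A stand-in provider read that reports the object no longer exists.
        read := func(cty.Value) cty.Value { return cty.NullVal(objType) }

        refreshInstance(state, "boop_instance.gone", read)
        fmt.Println(len(state)) // 0: the orphaned instance was removed
    }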
If a change exists for a resource instance, the After value is
returned; however, this value will not have its marks, as it has
been encoded. This marks the return value so that the marks follow
that resource reference.
Because ignore_changes configuration can refer to resource arguments
which are assigned sensitive values, we need to unmark the resource
object before processing.
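A rough sketch of how the marks can follow the value using cty, with the
marked paths captured before encoding and re-applied afterwards
(simplified, using a plain string mark rather than Terraform's own
sensitive mark):

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    func main() {
        // The resource object with a sensitive-marked attribute.
        original := cty.ObjectVal(map[string]cty.Value{
            "name":     cty.StringVal("boop"),
            "password": cty.StringVal("hunter2").Mark("sensitive"),
        })

        // Before the change is encoded, the marks are removed but the marked
        // paths are remembered.
        unmarked, pvm := original.UnmarkDeepWithPaths()

        // The unmarked value is what ends up encoded as the change's After
        // value; when that value is returned for a resource reference, the
        // saved paths are used to re-apply the marks so sensitivity follows
        // the reference. (ignore_changes processing works on the unmarked
        // value, since marked values can't be compared or serialized.)
        after := unmarked
        remarked := after.MarkWithPaths(pvm)

        fmt.Println(remarked.GetAttr("password").IsMarked()) // true
    }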
Use a single log writer instance for all std library logging.
Set up the std log writer in the logging package, and remove boilerplate
from test packages.
If the provisioner configuration includes sensitive values, it's a
reasonable assumption that we should suppress its log output. Obvious
examples where this makes sense include echoing a secret to a file using
local-exec or remote-exec.
This commit adds tests for both logging output from provisioners with
non-sensitive configuration, and suppressing logs for provisioners with
sensitive values in configuration.
Note that we do not suppress logs if connection info contains sensitive
information, as provisioners should not be logging connection
information under any circumstances.
If provisioner configuration or connection info includes sensitive
values, we need to unmark them before calling the provisioner. Failing
to do so causes serialization to error.
Unlike resources, we do not need to capture marked paths here, so we
just discard the marks.
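A minimal sketch of both behaviors, assuming a hypothetical suppressOutput
helper and a plain string mark standing in for Terraform's sensitive mark:

    package main

    import (
        "fmt"

        "github.com/zclconf/go-cty/cty"
    )

    // suppressOutput is a simplified, hypothetical stand-in for the decision
    // described above: if the provisioner configuration contains any marked
    // (sensitive) value, its log output is suppressed wholesale.
    func suppressOutput(config cty.Value) bool {
        return config.ContainsMarked()
    }

    func main() {
        config := cty.ObjectVal(map[string]cty.Value{
            "command": cty.StringVal("echo hunter2 > /etc/secret").Mark("sensitive"),
        })

        if suppressOutput(config) {
            fmt.Println("(output suppressed due to sensitive value in configuration)")
        }

        // Before the provisioner is actually called, the marks are simply
        // discarded (no marked paths are captured, unlike for resources) so
        // the configuration can be serialized.
        unmarked, _ := config.UnmarkDeep()
        fmt.Println(unmarked.ContainsMarked()) // false
    }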