This allows us to remove the manual replace directives for
github.com/dgrijalva/jwt-go and google.golang.org/grpc, so that we can
remove the CVE warnings and update the grpc packages.
While the etcdv3 backend is also marked as deprecated, the changes here
are done in a manner to keep that backend working for the time being.
* Clarify how the ~> version constraint operator works
* add comma, clarify the actor
* verbiage tweak
* language cleanup from PR review
Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com>
* fix line length
Co-authored-by: Laura Pacilio <83350965+laurapacilio@users.noreply.github.com>
By observing the sorts of questions people ask in the community, and the
ways they ask them, we've inferred that many people have been confused
when Terraform reports that a value won't be known until apply, or that a
value is sensitive, as part of an error message that doesn't actually
relate to the known-ness or sensitivity of any value.
Quite reasonably, someone who sees Terraform discussing an unfamiliar
concept like unknown values can assume it must somehow be relevant to the
problem at hand, and so in that sense Terraform's current error messages
give "too much information": information that isn't actually helpful in
understanding the problem, and in the worst case is a distraction from
understanding it.
With that in mind, here we introduce an explicit annotation on diagnostic
objects that are directly about unknown values or sensitive values, and
the diagnostic renderer then reacts to that annotation by avoiding the
terminology "known only after apply" or "sensitive" in the generated
diagnostic annotations unless it's rendering a message that is explicitly
about one of those topics.
This ends up being a bit of a cross-cutting concern, because the code that
generates these diagnostics and the code that renders them live in
separate packages and are not directly aware of each other. The logic for
deciding whether a particular diagnostic is flagged in one of these
special ways therefore lives inside the tfdiags package as an
intermediation point, which both the diagnostic generator (in the core
package) and the diagnostic renderer can depend on.
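As a rough illustration of that intermediation-point idea (using hypothetical names rather than the real tfdiags API), the generator wraps a diagnostic to flag it, and the renderer only ever asks a predicate in the same package:

```go
// Minimal sketch only; names are made up for illustration.
package tfdiagssketch

// Diagnostic is a stand-in for the tfdiags diagnostic interface.
type Diagnostic interface {
	Description() string
}

// diagnosticCausedByUnknown decorates a diagnostic to record that it is
// explicitly about unknown values.
type diagnosticCausedByUnknown struct {
	Diagnostic
}

// DiagnosticCausedByUnknown is what a generator (e.g. Terraform Core) would
// call when building a diagnostic that really is about unknown values.
func DiagnosticCausedByUnknown(d Diagnostic) Diagnostic {
	return diagnosticCausedByUnknown{d}
}

// IsCausedByUnknown is what the renderer would call to decide whether to
// keep the "known only after apply" annotations in its output.
func IsCausedByUnknown(d Diagnostic) bool {
	_, ok := d.(diagnosticCausedByUnknown)
	return ok
}
```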
When an error occurs in a function call, the error message text often
includes references to particular parameters in the function signature.
This commit improves that reporting by also including a summary of the
full function signature as part of the diagnostic context in that case,
so a reader can see which parameter is which; function arguments are
always assigned positionally, so the parameter names do not appear in the
caller's source code.
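To make the idea concrete, here is a rough sketch (not the real implementation, with made-up type names) of how a signature summary might be formatted so that positional arguments can be matched to parameter names:

```go
package funcsigsketch

import (
	"fmt"
	"strings"
)

// param is a simplified stand-in for a function parameter's metadata.
type param struct {
	Name string
	Type string
}

// formatSignature renders something like:
//   templatefile(path string, vars any) string
// which could accompany the error so the reader can see which positional
// argument corresponds to which named parameter.
func formatSignature(name string, params []param, returnType string) string {
	parts := make([]string, len(params))
	for i, p := range params {
		parts[i] = fmt.Sprintf("%s %s", p.Name, p.Type)
	}
	return fmt.Sprintf("%s(%s) %s", name, strings.Join(parts, ", "), returnType)
}
```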
HCL's diagnostic model now includes the idea of "extra information" which
works by attaching an initially-opaque interface value to each diagnostic
and then asking callers to type-assert against that value to sniff for
particular interfaces in order to discover additional machine-readable
context about a certain diagnostic message.
This commit echoes that idea into our tfdiags API, for now only for
diagnostics that are backed by an hcl.Diagnostic. All other implementations
of the diagnostic interface just always return nil, which means they never
carry any "extra information".
As is typical for our wrapping abstraction, this also includes a modified
copy of HCL's helper function for conveniently probing a diagnostic for
information of a particular type, adapted to work with our diagnostic
interface instead of HCL's concrete diagnostic type.
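A minimal sketch of the pattern, assuming simplified interfaces rather than the real tfdiags and HCL types:

```go
package extrasketch

// Diagnostic is a simplified stand-in for the tfdiags diagnostic interface,
// extended with an opaque extra-information value.
type Diagnostic interface {
	Description() string
	ExtraInfo() interface{} // nil for implementations with no extra info
}

// ExtraInfo probes a diagnostic's extra value for a particular interface or
// concrete type, returning the zero value if it isn't present. This mirrors
// the idea of HCL's helper, but against our own diagnostic interface.
func ExtraInfo[T any](diag Diagnostic) T {
	var zero T
	if extra := diag.ExtraInfo(); extra != nil {
		if v, ok := extra.(T); ok {
			return v
		}
	}
	return zero
}
```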
The current implementation does not use the SSH user's default shell to
execute inline commands, which can be confusing to debug. Provide explicit
documentation to explain this; the documentation can be removed if support
for the SSH user's default shell is added.
When validating self-references for resource and data source
preconditions and postconditions, we previously did not nil-check the
block's condition field, which caused a panic when the block had no
condition.
While fixing this I noticed that we were also not validating that there
are no self-references in the error message, so this commit fixes that as
well.
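A sketch of the fix, assuming a simplified check-rule shape (the real field and helper names may differ):

```go
package checksketch

import "github.com/hashicorp/hcl/v2"

// CheckRule is a simplified stand-in for a precondition/postcondition block.
type CheckRule struct {
	Condition    hcl.Expression
	ErrorMessage hcl.Expression
}

// validateNoSelfReferences collects self-references from both expressions,
// guarding against a nil condition (previously the unguarded access
// panicked when the block had no condition).
func validateNoSelfReferences(rule *CheckRule, isSelfRef func(hcl.Traversal) bool) []hcl.Traversal {
	var refs []hcl.Traversal
	for _, expr := range []hcl.Expression{rule.Condition, rule.ErrorMessage} {
		if expr == nil {
			continue // nil-check added to avoid the panic
		}
		for _, traversal := range expr.Variables() {
			if isSelfRef(traversal) {
				refs = append(refs, traversal)
			}
		}
	}
	return refs
}
```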
Combine all plan-time graphs into a single graph builder, because
_everything is a plan_!
Convert the import graph to a plan graph. This should resolve a few edge
cases where things were not properly evaluated during import, and takes a
step towards being able to _plan_ an import.
We originally introduced the idea of language experiments as a way to get
early feedback on not-yet-proven feature ideas, ideally as part of the
initial exploration of the solution space rather than only after a
solution has become relatively clear.
Unfortunately, our tradeoff of making them available in normal releases
behind an explicit opt-in in order to make it easier to participate in the
feedback process had the unintended side-effect of making it feel okay
to use experiments in production and endure the warnings they generate.
This in turn has made us reluctant to make use of the experiments feature
lest experiments become de-facto production features which we then feel
compelled to preserve even though we aren't yet ready to graduate them
to stable features.
In an attempt to tweak that compromise, here we make the availability of
experiments _at all_ a build-time flag which will not be set by default,
and therefore experiments will not be available in most release builds.
The intent (not yet implemented in this PR) is for our release process to
set this flag only when it knows it's building an alpha release or a
development snapshot not destined for release at all, which will therefore
allow us to still use the alpha releases as a vehicle for giving feedback
participants access to a feature (without needing to install a Go
toolchain) but will not encourage pretending that these features are
production-ready before they graduate from experimental.
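One common Go pattern for this kind of build-time switch, shown here only as an illustrative sketch rather than the exact mechanism in this PR, is a package main variable that stays empty unless the release tooling sets it via -ldflags:

```go
// Sketch: go build -ldflags "-X main.experimentsAllowed=yes" would enable
// experiments; ordinary builds leave the variable empty and so disabled.
package main

import "fmt"

var experimentsAllowed string

// ExperimentsAllowed reports whether this build permits opting in to
// experimental features at all.
func ExperimentsAllowed() bool {
	return experimentsAllowed == "yes"
}

func main() {
	fmt.Println("experiments allowed:", ExperimentsAllowed())
}
```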
Only language experiments have an explicit framework for dealing with them
that outlives any particular experiment, so most of the changes here are
to that generalized mechanism. However, the intent is that non-language
experiments, such as experimental CLI commands, would in future also
check Meta.AllowExperimentalFeatures and gate the use of those experiments
too, so that we can be consistent: experimental features will never be
available unless you explicitly choose to use an alpha release or a custom
build from source code.
Since there are already some experiments active at the time of this commit
which were not previously subject to this restriction, we'll pragmatically
leave those as exceptions that will remain generally available for now,
and so this new approach will apply only to new experiments started in the
future. Once those experiments have all concluded, we will be left with
no more exceptions unless we explicitly choose to make an exception for
some reason we've not imagined yet.
It's important that we be able to write tests that rely on experiments
either being available or not being available, so here we're using our
typical approach of making "package main" deal with the global setting
that applies to Terraform CLI executables while making the layers below
all support fine-grain selection of this behavior so that tests with
different needs can run concurrently without trampling on one another.
As a compromise, the integration tests in the terraform package will
run with experiments enabled _by default_ since we commonly need to
exercise experiments in those tests, but they can selectively opt out if
they need to by overriding the loader setting back to false.
When attempting to lock a remote state backend, failure due to an
existing lock should return an instance of LockError. This allows the
wrapping code to retry until the specified timeout, instead of
immediately exiting.
This commit adds a test for this in the TestRemoteLocks test helper,
which is used in many of the remote state backend test suites.
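A sketch of that retry behavior, using simplified stand-ins for the real state manager types:

```go
package locksketch

import (
	"errors"
	"fmt"
	"time"
)

// LockError is a simplified stand-in for the error type that signals an
// existing lock, which callers treat as retryable.
type LockError struct {
	Reason string
}

func (e *LockError) Error() string { return "state locked: " + e.Reason }

// lockWithRetry keeps retrying while the failure is a LockError, and only
// gives up once the timeout elapses or a non-lock error occurs.
func lockWithRetry(lock func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := lock()
		if err == nil {
			return nil
		}
		var lockErr *LockError
		if !errors.As(err, &lockErr) || time.Now().After(deadline) {
			return fmt.Errorf("failed to acquire state lock: %w", err)
		}
		time.Sleep(time.Second) // simplified backoff
	}
}
```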
We introduced the addrs.UniqueKey and addrs.UniqueKeyer mechanics as part
of implementing the ValidateMoves and ApplyMoves functions, as a way to
better encapsulate the solution to the problem that lots of our address
types aren't comparable and so cannot be used directly as map keys.
However, exposing addrs.UniqueKey handling directly in that logic adds
noise to the algorithms and, in particular, obscures the fact that
MoveResults.Changes and MoveResults.Blocked have different map key types.
Here then we'll use the new addrs.Map helper type, which encapsulates the
idea of a map from an addrs.UniqueKeyer type to an arbitrary value type,
using the unique keys as the map keys internally. This does unfortunately
mean that we lose the conventional Go map access syntax and have to use
a method-based API instead, but I (subjectively) think that's an okay
compromise in return for avoiding the need to keep track inline of which
addrs.UniqueKey values correspond with which real addresses.
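Roughly, the call-site difference looks like this (a sketch with illustrative, simplified method names rather than a precise copy of the addrs.Map API):

```go
package addrsmapsketch

// UniqueKey and UniqueKeyer mirror the addrs abstractions described above.
type UniqueKey interface{ uniqueKeySigil() }

type UniqueKeyer interface{ UniqueKey() UniqueKey }

// Map is a simplified stand-in for addrs.Map: it keys the underlying Go map
// by UniqueKey internally, so callers never see that indirection.
type Map[K UniqueKeyer, V any] struct {
	elems map[UniqueKey]V
}

func NewMap[K UniqueKeyer, V any]() Map[K, V] {
	return Map[K, V]{elems: make(map[UniqueKey]V)}
}

func (m Map[K, V]) Put(k K, v V) { m.elems[k.UniqueKey()] = v }

func (m Map[K, V]) GetOk(k K) (V, bool) {
	v, ok := m.elems[k.UniqueKey()]
	return v, ok
}

// Before: changes[addr.UniqueKey()] = change
// After:  changes.Put(addr, change)
```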
This is intended as an entirely-mechanical change, with equivalent
behavior to what it replaced. If anything here is doing something
materially different than what it replaced then that's a mistake.
The addrs.Set type previously snuck in accidentally as part of the work
to add addrs.UniqueKey and addrs.UniqueKeyer. Without support for generic
types, addrs.Set was a bit of a safety hazard because it could not enforce
particular address types at compile time.
However, with Go 1.18 adding support for type parameters we can now turn
addrs.Set into a generic type over any specific addrs.UniqueKeyer type,
and complement it with an addrs.Map type that supports addrs.UniqueKeyer
keys, encapsulating the handling of UniqueKey-keyed maps that we currently
do inline in various other parts of Terraform.
This doesn't yet introduce any callers of these types, but we'll convert
existing users of addrs.UniqueKeyer gradually in subsequent commits.
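A simplified sketch of how such a generic set might look (names and details differ from the real addrs package), showing how the type parameter enforces a single address type at compile time:

```go
package addrssetsketch

// UniqueKey and UniqueKeyer mirror the addrs abstractions described above.
type UniqueKey interface{ uniqueKeySigil() }

type UniqueKeyer interface{ UniqueKey() UniqueKey }

// Set stores at most one element per UniqueKey, and unlike the old
// non-generic addrs.Set it is constrained to one concrete address type K.
type Set[K UniqueKeyer] struct {
	members map[UniqueKey]K
}

func NewSet[K UniqueKeyer](elems ...K) Set[K] {
	s := Set[K]{members: make(map[UniqueKey]K)}
	for _, elem := range elems {
		s.Add(elem)
	}
	return s
}

func (s Set[K]) Add(k K) { s.members[k.UniqueKey()] = k }

func (s Set[K]) Has(k K) bool {
	_, ok := s.members[k.UniqueKey()]
	return ok
}
```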