Because we gather together diagnostics from many different parts of the
codebase, the list often ends up being in a non-ideal order. Here we
define a partial ordering for diagnostics that should hopefully make them
easier to scan when many are present, by grouping together diagnostics
that are of the same severity and belong to the same file.
We use sort.Stable here because we have a partial order and so we need
to make sure that diagnostics that do not have a relative ordering will
remain in their original order.
This sorting is applied just in time before rendering the diagnostics
in command.Meta.showDiagnostics.
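As a rough sketch of the idea, using a simplified stand-in for the real diagnostic type rather than Terraform's actual implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// Diagnostic is a simplified stand-in for Terraform's real diagnostic type.
type Diagnostic struct {
	Severity int    // e.g. 0 = error, 1 = warning
	Filename string // empty when the diagnostic has no source location
	Summary  string
}

// sortedDiagnostics implements sort.Interface, grouping by severity and
// then by filename. Diagnostics that compare equal under this partial
// ordering keep their original relative order because we use sort.Stable.
type sortedDiagnostics []Diagnostic

func (d sortedDiagnostics) Len() int      { return len(d) }
func (d sortedDiagnostics) Swap(i, j int) { d[i], d[j] = d[j], d[i] }
func (d sortedDiagnostics) Less(i, j int) bool {
	if d[i].Severity != d[j].Severity {
		return d[i].Severity < d[j].Severity
	}
	return d[i].Filename < d[j].Filename
}

func main() {
	diags := []Diagnostic{
		{1, "b.tf", "warning in b"},
		{0, "a.tf", "first error in a"},
		{0, "a.tf", "second error in a"},
	}
	sort.Stable(sortedDiagnostics(diags))
	fmt.Println(diags) // errors in a.tf first, still in their original order
}
```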
This doesn't yet include test updates, since there are problems in core
currently blocking these tests from running. The tests will therefore be
updated in a subsequent commit.
Previously we were defaulting the provider configuration selection to a
provider in the root module inferred from the resource type name.
This is close, but not quite right: we need to _start_ with a provider
configuration in the same module as we're importing into, and then our
provider resolution steps during import graph construction will use that
as a starting point for a walk up the tree to find the nearest matching
configuration (which might eventually still be in the root, but not
necessarily).
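A simplified sketch of that resolution walk, with hypothetical types and lookups standing in for the real graph-construction code:

```go
package main

import "fmt"

// nearestProviderConfig starts in the module we're importing into and
// walks up towards the root, returning the closest module path that
// defines a configuration for the named provider. The hasConfig lookup
// is a hypothetical stand-in for the real graph transforms.
func nearestProviderConfig(modulePath []string, provider string, hasConfig func([]string, string) bool) ([]string, bool) {
	path := modulePath
	for {
		if hasConfig(path, provider) {
			return path, true
		}
		if len(path) == 0 {
			return nil, false // no configuration anywhere up the tree
		}
		path = path[:len(path)-1] // step up to the parent module
	}
}

func main() {
	// Only the root module configures "aws" in this toy example.
	hasConfig := func(path []string, provider string) bool {
		return len(path) == 0 && provider == "aws"
	}
	fmt.Println(nearestProviderConfig([]string{"network", "subnets"}, "aws", hasConfig))
}
```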
This now uses the HCL2 parser and evaluator APIs and evaluates in terms
of a new-style *lang.Scope, rather than the old terraform.Interpolator
type that is no longer functional.
The Context.Eval method used here behaves differently than the
Context.Interpolater method used previously: it performs a graph walk
to populate transient values such as input variables, local values, and
output values, and produces its scope in terms of the result of that
graph walk. Because of this, it is a lot more robust than the prior method
when asked to resolve references other than those that are persisted
in the state.
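A rough sketch of the flow described above, using stand-in types; the real code goes through terraform.Context, *lang.Scope, HCL expressions, and cty values, whose exact signatures may differ from these simplified stand-ins:

```go
package main

import "fmt"

// Scope stands in for *lang.Scope: everything needed to evaluate an
// expression once the graph walk has populated the transient values.
type Scope struct {
	values map[string]string // e.g. "var.region" -> "us-west-2"
}

func (s *Scope) EvalExpr(expr string) (string, error) {
	v, ok := s.values[expr]
	if !ok {
		return "", fmt.Errorf("unknown reference %q", expr)
	}
	return v, nil
}

// evalWalk stands in for Context.Eval: it performs a graph walk to
// populate input variables, local values, and output values, and then
// returns a scope built from the results of that walk.
func evalWalk() *Scope {
	return &Scope{values: map[string]string{
		"var.region":   "us-west-2",
		"local.prefix": "app",
	}}
}

func main() {
	scope := evalWalk()
	v, err := scope.EvalExpr("local.prefix")
	fmt.Println(v, err)
}
```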
Previously an empty list of diagnostics would appear as "null" in the JSON output,
since that is how encoding/json serializes a nil slice. It's more
convenient for users of dynamic languages to keep the type consistent
in all cases, since they can then just iterate the list without needing a
special case for when it is null.
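The underlying encoding/json behavior, and the normalization this implies, can be seen in a few lines (the diagnostic type here is just a placeholder):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type diagnostic struct {
	Severity string `json:"severity"`
	Summary  string `json:"summary"`
}

func main() {
	var diags []diagnostic // a nil slice

	out, _ := json.Marshal(diags)
	fmt.Println(string(out)) // null

	if diags == nil {
		diags = []diagnostic{} // normalize so callers always see a list
	}
	out, _ = json.Marshal(diags)
	fmt.Println(string(out)) // []
}
```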
Due to how deeply the configuration types go into Terraform Core, there
isn't a great way to switch out to HCL2 gradually. As a consequence, this
huge commit gets us from the old state to a _compilable_ new state, but
does not yet attempt to fix any tests and has a number of known missing
parts and bugs. We will continue to iterate on this in forthcoming
commits, heading back towards passing tests and making Terraform
fully-functional again.
The three main goals here are:
- Use the configuration models from the "configs" package instead of the
older models in the "config" package, which is now deprecated and
preserved only to help us write our migration tool.
- Do expression inspection and evaluation using the functionality of the
new "lang" package, instead of the Interpolator type and related
functionality in the main "terraform" package.
- Represent addresses of various objects using types in the addrs package,
rather than hand-constructed strings. This is not critical to support
the above, but was a big help during the implementation of these other
points since it made it much more explicit what kind of address is
expected in each context.
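The third point is easiest to see with a small sketch; the type below is a stand-in for the real addrs types, not their actual definitions:

```go
package main

import "fmt"

// ResourceAddr stands in for the typed resource addresses in the addrs
// package: the kind of object is explicit, and the familiar string form
// is only rendered on demand for display.
type ResourceAddr struct {
	Type string
	Name string
}

func (r ResourceAddr) String() string {
	return r.Type + "." + r.Name
}

func main() {
	// Before: hand-constructed strings, easy to get subtly wrong and
	// ambiguous about what kind of address they represent.
	legacy := "aws_instance" + "." + "example"

	// After: a typed value that is unambiguous in function signatures.
	typed := ResourceAddr{Type: "aws_instance", Name: "example"}

	fmt.Println(legacy, typed.String())
}
```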
Since our new packages are built to accommodate some future planned
features that are not yet implemented (e.g. the "for_each" argument on
resources, "count"/"for_each" on modules), and since there's still a fair
amount of functionality still using old-style APIs, there is a moderate
amount of shimming here to connect new assumptions with old, hopefully in
a way that makes it easier to find and eliminate these shims later.
I apologize in advance to the person who inevitably just found this huge
commit while spelunking through the commit history.
For the moment this is just a lightly-adapted copy of
ModuleTreeDependencies named ConfigTreeDependencies, with the goal that
the two can live concurrently while not all callers are yet updated;
we can then drop ModuleTreeDependencies and its helper functions
altogether in a later commit.
This can then be used to make "terraform init" and "terraform providers"
work properly with the HCL2-powered configuration loader.
This is a rather-messy, complex change to get the "command" package
building again against the new backend API that was updated for
the new configuration loader.
A lot of this is mechanical rewriting to the new API, but
meta_config.go and meta_backend.go in particular saw some major
changes to interface with the new loader APIs and to deal with
the change in order of steps in the backend API.
The new config loader requires some steps to happen in a different
order, particularly in regard to knowing the schema in order to
decode the configuration.
Here we lean directly on the configschema package, rather than
on helper/schema.Backend as before, because it's generally
sufficient for our needs here and this prepares us for the
helper/schema package later moving out into its own repository
to seed a "plugin SDK".
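For illustration, the kind of schema description this relies on looks roughly like the following; the types and attribute set are simplified stand-ins for the real configschema.Block, and real decoding goes through HCL rather than plain maps:

```go
package main

import "fmt"

// blockSchema is a simplified stand-in for configschema.Block: just
// enough structure to describe which arguments a backend accepts, so
// the CLI can decode the backend configuration without instantiating
// anything from helper/schema.
type blockSchema struct {
	Attributes map[string]attrSchema
}

type attrSchema struct {
	Type     string // e.g. "string", "bool"
	Required bool
}

// s3BackendSchema sketches how a backend might describe its expected
// configuration; the attribute names here are illustrative only.
func s3BackendSchema() blockSchema {
	return blockSchema{
		Attributes: map[string]attrSchema{
			"bucket": {Type: "string", Required: true},
			"key":    {Type: "string", Required: true},
			"region": {Type: "string", Required: false},
		},
	}
}

func main() {
	schema := s3BackendSchema()
	for name, attr := range schema.Attributes {
		fmt.Printf("%s (%s, required=%v)\n", name, attr.Type, attr.Required)
	}
}
```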
The remote API this talks to will be going away very soon, before our next
major release, and so we'll remove the command altogether in that release.
This also removes the "encodeHCL" function, which was used only for
adding a .tfvars-formatted file to the uploaded archive.
In the long run we'd like to offer machine-readable output for more
commands, but for now we'll just start with a tactical feature in
"terraform validate" since this is useful for automated testing scenarios,
editor integrations, etc., and doesn't include any representations of types
that are expected to have breaking changes in the near future.
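As a sketch of the kind of machine-readable output this enables (the field names here are illustrative, not a committed output format):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// validateOutput uses illustrative field names only; it is not a
// committed output format for "terraform validate -json".
type validateOutput struct {
	Valid       bool         `json:"valid"`
	ErrorCount  int          `json:"error_count"`
	Diagnostics []diagnostic `json:"diagnostics"`
}

type diagnostic struct {
	Severity string `json:"severity"`
	Summary  string `json:"summary"`
}

func main() {
	out := validateOutput{
		Valid:      false,
		ErrorCount: 1,
		Diagnostics: []diagnostic{
			{Severity: "error", Summary: "Unsupported argument"},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(out); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```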
As part of some light reorganization of our commands, this new
implementation no longer does validation of variables and will thus avoid
the need to spin up a fully-valid context. Instead, its focus is on
validating the configuration itself, regardless of any variables, state,
etc.
This change anticipates us later adding a -validate-only flag to
"terraform plan" which will then take over the related use-case of
checking if a particular execution of Terraform is valid, _including_ the
state, variables, etc.
Although leaving variables out of validate feels pretty arbitrary today
while all of the variable sources are local anyway, we have plans to
allow per-workspace variables to be stored in the backend in future and
at that point it will no longer be possible to fully validate variables
without accessing the backend. The "terraform plan" command explicitly
requires access to the backend, while "terraform validate" is now
explicitly for local-only validation of a single module.
In a future commit this will be extended to do basic type checking of
the configuration based on provider schemas, etc.
We need to share a single config loader across all callers because that
allows us to maintain the source code cache we'll use for snippets in
error messages.
Nothing calls this yet. Callers will be gradually updated away from Module
and Config in subsequent commits.
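A minimal sketch of the sharing pattern; the loader type and the package-level singleton are stand-ins for how the real command code holds its config loader:

```go
package main

import (
	"fmt"
	"sync"
)

// loader is a stand-in for the real config loader; the important part
// is that it owns a cache of the source code it has read, keyed by
// filename, which the diagnostic renderer can later use for snippets.
type loader struct {
	sources map[string][]byte
}

var (
	sharedLoader *loader
	loaderOnce   sync.Once
)

// configLoader returns the single shared loader, creating it on first
// use so every caller sees the same source cache.
func configLoader() *loader {
	loaderOnce.Do(func() {
		sharedLoader = &loader{sources: map[string][]byte{}}
	})
	return sharedLoader
}

func main() {
	a, b := configLoader(), configLoader()
	fmt.Println(a == b) // true: both callers share one source cache
}
```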
If we get a diagnostic message that references a source range, and if the
source code for the referenced file is available, we'll show a snippet of
the source code with the source range highlighted.
At the moment we have no cache of source code, so in practice this
codepath can never be visited. Callers to format.Diagnostic will be
gradually updated in subsequent commits.
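A rough sketch of the snippet rendering, assuming a simplified source range made of a line number and start/end columns:

```go
package main

import (
	"fmt"
	"strings"
)

// showSnippet prints the referenced source line with caret markers
// under the highlighted columns. The range representation here is a
// simplified stand-in for the real source-range type.
func showSnippet(src string, line, startCol, endCol int) {
	lines := strings.Split(src, "\n")
	if line < 1 || line > len(lines) {
		return // source not available for this range
	}
	text := lines[line-1]
	fmt.Printf("%4d: %s\n", line, text)
	if endCol <= startCol {
		endCol = startCol + 1
	}
	marker := strings.Repeat(" ", startCol-1) + strings.Repeat("^", endCol-startCol)
	fmt.Printf("      %s\n", marker)
}

func main() {
	src := "resource \"aws_instance\" \"example\" {\n  amy = \"oops\"\n}\n"
	showSnippet(src, 2, 3, 6) // highlight the unknown argument name
}
```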
In some cases this is needed to keep the UX clean and to make sure any remote exit codes are passed through to the local process.
The most obvious example of this is when using the "remote" backend. This backend runs Terraform remotely and streams the output back to the local terminal.
When an error occurs during the remote execution, all the needed error information will already be in the streamed output. So if we then return an error ourselves, users will get the same errors twice.
By allowing the backend to specify the correct exit code, the UX remains the same while preserving the correct exit codes.
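One way to express that, sketched here with a hypothetical error type rather than the real backend API:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// exitCodeError is a hypothetical error type a backend could use to say
// "the remote run already reported its own errors; just exit with this
// code" instead of having the CLI print a second copy of the error.
type exitCodeError struct {
	code int
}

func (e exitCodeError) Error() string {
	return fmt.Sprintf("operation failed with exit code %d", e.code)
}

func run() error {
	// Pretend the remote run failed and its output was already streamed.
	return exitCodeError{code: 1}
}

func main() {
	if err := run(); err != nil {
		var exitErr exitCodeError
		if errors.As(err, &exitErr) {
			os.Exit(exitErr.code) // exit quietly; output was already shown
		}
		fmt.Fprintln(os.Stderr, err) // otherwise report the error locally
		os.Exit(1)
	}
}
```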
Certain backends (currently only the `remote` backend) do not support using both the default and named workspaces at the same time.
To make the migration easier for users that currently use both types of workspaces, this commit adds logic to ask the user for a new workspace name during the migration process.
This commit fixes a bug that (in the case of the `local` backend) would only check if the selected workspace had a state when deciding to perform a migration.
When the selected workspace didn’t have a state (but other existing workspace(s) did), the migration would not be performed and the other workspaces would be ignored.
By adding this method you now only have to pass a `*disco.Disco` object around in order to do discovery and use any configured credentials for the discovered hosts.
Of course you can also still pass around both a `*disco.Disco` and an `auth.CredentialsSource` object if there is a need or a reason for that!
- Fixes #11696
- This change makes `terraform output -json` return '{}' instead of
throwing an error about "no outputs defined" (see the sketch after this
list)
- If `-json` is not set, the user will receive an error as before
- This UX helps new users to understand how outputs are used
- Allows for easier automation of the Terraform CLI, as an empty set of
outputs is usually acceptable, but any other error from `output` is
still raised to the user.
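A sketch of the -json behavior described in this list, using a hypothetical helper rather than the actual command code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// renderOutputs prints outputs as JSON when jsonFlag is set, returning
// an empty object rather than an error when there are no outputs; in
// the non-JSON case the "no outputs defined" error is kept as before.
func renderOutputs(outputs map[string]interface{}, jsonFlag bool) error {
	if jsonFlag {
		if outputs == nil {
			outputs = map[string]interface{}{} // serialize as {} rather than null
		}
		return json.NewEncoder(os.Stdout).Encode(outputs)
	}
	if len(outputs) == 0 {
		return fmt.Errorf("no outputs defined")
	}
	for k, v := range outputs {
		fmt.Printf("%s = %v\n", k, v)
	}
	return nil
}

func main() {
	_ = renderOutputs(nil, true) // prints {}
}
```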
Rather than try to modify all the hundreds of calls to the temp helper
functions, and cleanup the temp files at every call site, have all tests
work within a single temp directory that is removed at the end of
TestMain.
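The pattern is roughly as follows (a sketch, not the exact helper used in the command package):

```go
package command

import (
	"io/ioutil"
	"os"
	"testing"
)

// testTempDir holds the single directory that all tests in the package
// create their temporary files under.
var testTempDir string

func TestMain(m *testing.M) {
	var err error
	testTempDir, err = ioutil.TempDir("", "terraform-test")
	if err != nil {
		panic(err)
	}

	code := m.Run()

	// Remove everything the tests created in one sweep, instead of
	// cleaning up at every call site.
	os.RemoveAll(testTempDir)
	os.Exit(code)
}
```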
The state locking improvements for the regular command had the side
effect of locking the state in the console, import, graph and push
commands. Those commands had been updated to get a state via the
Backend.Context method, which locks the state whenever possible, and now
need to call Unlock directly.
Add Unlock calls to all commands that call Context directly.
Use the new StateLocker field to provide a wrapper for locking the state
during terraform.Context creation in the commands that directly
manipulate the state.
Simplify the use of clistate.Lock by creating a clistate.Locker
instance, which stores the context of locking a state, to allow unlock
to be called without knowledge of how the state was locked.
This allows the backend code to bring the needed UI methods to the point
where the state is locked, and still unlock the state from an outer
scope.
Provide a NoopLocker as well, so that callers can always call Unlock
without verifying the status of the lock.
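In outline, the shape is something like this (a sketch, not the exact clistate API):

```go
package main

import "fmt"

// Locker captures everything needed to unlock a state without knowing
// how it was locked.
type Locker interface {
	Lock(reason string) error
	Unlock() error
}

// noopLocker satisfies Locker without doing anything, so callers can
// always call Unlock without first checking whether a real lock was
// taken.
type noopLocker struct{}

func (noopLocker) Lock(string) error { return nil }
func (noopLocker) Unlock() error     { return nil }

func main() {
	var l Locker = noopLocker{}
	if err := l.Lock("terraform plan"); err != nil {
		fmt.Println("lock failed:", err)
		return
	}
	defer l.Unlock() // safe even when the locker is a no-op
	fmt.Println("state locked for the duration of the operation")
}
```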
Add the StateLocker field to the backend.Operation, so that the state
lock can be carried between the different function scopes of the backend
code. This will allow the backend context to lock the state before it's
read, while allowing the different operations to unlock the state when
they complete.
The error was being silently dropped before.
There is an interpolation error, because the plan is canceled before
some of the resources can be evaluated. There might be a better way to
handle this in the walk cancellation, but the behavior has not changed.
Make the plan and apply shutdown match implementation-wise