* Add viewType to Meta object and use it at the call sites
* Assign viewType passed from flags to state-locking cli commands
* Remove temp files
* Set correct mode for StateLocker depending on json flag passed to commands
* Add StateLocker interface conformance check for StateLockerJSON (see the sketch after this list)
* Remove empty line at end of comment
* Pass correct ViewType to StateLocker from Backend call chain
* Pass viewType to backend migration and initialization functions
* Remove json processing info in process comment
* Restore documentation style of backendMigrateOpts
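Roughly, the shape of these view changes looks like the sketch below. The names are illustrative stand-ins for the real views package rather than copies of it, but the compile-time conformance check and the ViewType-based selection are the two ideas the bullets above describe.

```go
package views

// ViewType mirrors the -json flag decision made by each command.
type ViewType int

const (
	ViewHuman ViewType = iota
	ViewJSON
)

// StateLocker is the view interface the state-locking commands talk to.
type StateLocker interface {
	Locking()
	Unlocking()
}

// Compile-time conformance check for the JSON implementation.
var _ StateLocker = (*StateLockerJSON)(nil)

type StateLockerHuman struct{}
type StateLockerJSON struct{}

func (v *StateLockerHuman) Locking()   {} // print a human-readable "Acquiring state lock..." message
func (v *StateLockerHuman) Unlocking() {} // print a human-readable "Releasing state lock..." message
func (v *StateLockerJSON) Locking()    {} // emit a machine-readable log line instead
func (v *StateLockerJSON) Unlocking()  {} // emit a machine-readable log line instead

// NewStateLocker selects the implementation from the ViewType passed down
// from the command's flags, through Meta, into the backend call chain.
func NewStateLocker(vt ViewType) StateLocker {
	if vt == ViewJSON {
		return &StateLockerJSON{}
	}
	return &StateLockerHuman{}
}
```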
There are no good options for inserting diagnostics into the backend
lookup, or for creating a backend which reports its own removal, because
none of the init or GetSchema functions return any errors.
Keep a registry of the removed backends so that we can at least tell
users that a backend was removed, rather than reporting an invalid name.
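A minimal sketch of that registry idea, assuming a simple name-to-message map consulted on lookup failure (the package, names, and contents here are illustrative, not the actual source):

```go
package backendinit

import "fmt"

// removedBackends maps backend names that once shipped with Terraform to a
// short removal message. (Illustrative contents only.)
var removedBackends = map[string]string{
	"example": "this backend was removed in a previous release; migrate to a supported backend",
}

// lookupError distinguishes a removed backend from a name that never
// existed, since the init/GetSchema path itself returns no errors.
func lookupError(name string) error {
	if msg, removed := removedBackends[name]; removed {
		return fmt.Errorf("backend %q is no longer supported: %s", name, msg)
	}
	return fmt.Errorf("unknown backend type %q", name)
}
```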
A regression introduced in d72a413ef8
The comment explains, but TL;DR: the remote backend actually *depended*
on being able to write its backend state even though an 'error'
occurred (no workspaces).
When going from a local backend to Terraform Cloud, if you skip
`terraform init` and run `terraform apply`, this will give the user
clearer instructions.
When Terraform detects that a user has no workspaces that map to their current configuration, it prompts the user to create a new workspace and enter a name for it. If the user ignores the prompt and exits it, the legacy backend file (terraform.tfstate) will be left in an awkward state:
1. The saved backend config will show a diff for the JSON attributes "serial", "tags" and "hash"
2. `terraform workspace list` will show an empty list
3. `terraform apply` will run successfully using the previous workspace from the previous config, not the one from the current saved backend config
4. The cloud config is not reflective of the current working directory
Solution: If the user exits the prompt, the saved backend config should not be updated, because they did not select a new workspace. They are back at the beginning and are forced to re-run the init command before proceeding with new changes.
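A compressed sketch of the fix, with a hypothetical prompt helper standing in for the real UI plumbing; the important part is returning before the caller persists the saved backend config:

```go
package command

import (
	"errors"
	"fmt"
)

// errPromptAbandoned is an illustrative sentinel for "the user exited the
// workspace prompt without answering".
var errPromptAbandoned = errors.New("no workspace name entered")

// selectInitialWorkspace returns an error when the prompt is abandoned, so
// the caller bails out *before* it writes the new saved backend config and
// the legacy terraform.tfstate is left exactly as it was.
func selectInitialWorkspace(prompt func() (string, error)) (string, error) {
	name, err := prompt()
	if err != nil || name == "" {
		return "", fmt.Errorf("initialization cancelled: %w", errPromptAbandoned)
	}
	return name, nil
}
```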
* Create a function for logic that assigns value to initReason var after changing backend configuration
Create func determineInitReason() for logic block that assigns value to initReason var after changing backend/cloud configuration block or migrating to a different type of backend configuration. Also clarify 'cloud' configuration block message to say 'Terraform Cloud configuration block has changed' instead of 'Terraform Cloud configuration has changed'.
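Something along these lines, where the parameters and case layout are assumptions and only the function name and the "configuration block has changed" wording come from the change itself:

```go
package command

// determineInitReason collapses the "why does the backend need to be
// reinitialized?" branches into a single helper.
func determineInitReason(previousBackendType, currentBackendType string, cloudMode bool) string {
	switch {
	case cloudMode:
		return "Terraform Cloud configuration block has changed"
	case previousBackendType != currentBackendType:
		return "Backend type changed from " + previousBackendType + " to " + currentBackendType
	default:
		return "Backend configuration block has changed"
	}
}
```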
There are a few command line options for "terraform init" which are only
relevant when working with traditional backends; the Cloud integration
previously mostly ignored them, or sometimes misbehaved slightly because
they create an unreasonable situation.
Now we'll catch these and return explicit errors, in order to be clear
that these options are not needed nor supported in Cloud mode.
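For illustration, the validation amounts to something like the following; the exact flag set and error wording are assumptions:

```go
package command

import "fmt"

// checkInitFlagsForCloud rejects init flags that only make sense for
// traditional backends when a cloud block is in use.
func checkInitFlagsForCloud(cloudMode, backendConfigSet, reconfigure bool) error {
	if !cloudMode {
		return nil
	}
	if backendConfigSet {
		return fmt.Errorf("-backend-config is for traditional backends and is not supported in Cloud mode; configure the cloud block instead")
	}
	if reconfigure {
		return fmt.Errorf("-reconfigure is not needed in Cloud mode")
	}
	return nil
}
```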
For Terraform Cloud users using the 'remote' backend, the existing
'pattern' prompt should work just fine - but because their workspaces
are already present in TFC, the 'migration' here is really just
realigning their local workspaces with Terraform Cloud. Instead of
forcing users to do the mental gymnastics of what it means to migrate
from 'prefix' - and because their remote workspaces probably already exist and
already conform to Terraform Cloud's naming concerns - streamline the
process for them and calculate the necessary pattern to migrate as-is,
without any user intervention necessary.
The Meta.backend_C_r_S_unchanged() method was sadly a bit of a mess.
It seems to have originally been used as a method to be called
when the backend is not changing, with an extra assumption that if the
configured backend's hash doesn't match the one in state, surely the
hash should just be updated as an option might have been moved to
command line flags.
However, this function was used throughout this file as 'the method to
load the initialized (but not necessarily configured) backend',
regardless of whether or not it is the same (unchanged). This is in
addition to Meta.backendFromState(), which is used to load the same
thing except in the main codepath of 'init -backend=false'.
These changes separate the concerns of backend_C_r_S_unchanged() by
1) Fetching the saved backend (savedBackend())
2) Updating the hash value in the backend cache when appropriate (either
by leaving it to the caller to do themselves or by calling
updateSavedBackendHash())
This allows migration codepaths to *not* update the hash value until
after a migration has successfully taken place.
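In miniature, the separation looks roughly like this, with simplified types and only the two method names taken from the description above:

```go
package command

// Simplified stand-ins for the real Meta and backend state types.
type backendState struct {
	ConfigHash uint64
}

type meta struct {
	savedState *backendState
}

// savedBackend only loads the already-initialized backend from the saved
// state, with no side effects on the stored hash.
func (m *meta) savedBackend() *backendState {
	return m.savedState
}

// updateSavedBackendHash records the configured backend's hash. Callers now
// invoke it explicitly -- migration codepaths only after the migration has
// actually succeeded.
func (m *meta) updateSavedBackendHash(configHash uint64) {
	m.savedState.ConfigHash = configHash
}
```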
Previously, `terraform init` was throwing an error if you configured the cloud
block with `tags` and there weren't any tagged workspaces yet. Confusing and
alienating, since that's a fairly normal situation! Basically TFC was
handling an empty list of workspaces worse than other backends, because it
doesn't support an unnamed default workspace.
This commit catches that condition during `Meta.selectBackend()` and asks the
user to pick a name for their first tagged workspace. If they cancel out, we
still error, but if we know what name they want, we can handle it the same way
as a nonexistent workspace specified in `name` -- just pass it to
`Meta.SetWorkspace()`, and let the workspace get implicitly created when
`InitCommand.Run()` eventually calls `StateMgr()`.
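A rough sketch of that flow, with hypothetical helpers standing in for the prompt and for Meta.SetWorkspace():

```go
package command

import "fmt"

// chooseFirstTaggedWorkspace asks for a first workspace name when the cloud
// block's tags match nothing yet, then hands it to the same implicit-creation
// path used for `name`.
func chooseFirstTaggedWorkspace(existing []string, prompt func() (string, error), setWorkspace func(string) error) error {
	if len(existing) > 0 {
		return nil // tagged workspaces already exist; nothing to do
	}
	name, err := prompt()
	if err != nil || name == "" {
		return fmt.Errorf("no tagged workspaces found and no name was provided")
	}
	// The workspace itself is created implicitly later, when StateMgr() runs.
	return setWorkspace(name)
}
```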
When using the Terraform Cloud integration - like the 'remote'
backend - resource count output should be suppressed if those counts are
being rendered remotely. This commit generalizes that check to the shared
BackendWithRemoteTerraformVersion interface.
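Conceptually the check looks something like the sketch below; the interface is trimmed down here and the method name should be read as an assumption about the real command-layer interface:

```go
package views

// BackendWithRemoteTerraformVersion is reduced to the one method relevant to
// this change.
type BackendWithRemoteTerraformVersion interface {
	IsLocalOperations() bool
}

// shouldShowResourceCounts reports whether this CLI process should print
// resource counts itself, or leave that to the remotely streamed run output.
func shouldShowResourceCounts(b interface{}) bool {
	if rb, ok := b.(BackendWithRemoteTerraformVersion); ok && !rb.IsLocalOperations() {
		return false // counts are rendered remotely by Terraform Cloud / the remote backend
	}
	return true
}
```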
There are actually a few different ways to get to this message.
1. Blank state — no previous terraform applied. Start with a cloud block.
2. Implicit local — start with no backend specified. This actually goes
through the same code execution path as the first scenario.
3. Explicit local — start with a backend local block that has been
applied, then change from the local backend to a cloud block. This
will recognize the state, and is a different path through the code in
the meta backend.
This commit handles the last case. The messaging has also been tweaked.
End to end test included as well.
This is a replacement declaration for using Terraform Cloud as a remote
backend, leaving the literal backend as an implementation detail and not
a user-level concept.
When running `terraform init` against a backend with multiple
workspaces, none of which are the currently indicated local workspace,
Terraform prompts the user to choose a workspace from the list. In
automation, using the `-input=false` argument should disable asking for
input, but previously would hang instead.
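A minimal sketch of the corrected behaviour, assuming a boolean derived from -input and a hypothetical prompt function:

```go
package command

import "fmt"

// selectWorkspaceForInit consults the -input setting before prompting, so
// automation fails fast with a clear error instead of blocking on stdin.
func selectWorkspaceForInit(workspaces []string, inputEnabled bool, ask func([]string) (string, error)) (string, error) {
	if !inputEnabled {
		return "", fmt.Errorf(
			"currently selected workspace does not exist; select one of %v with `terraform workspace select`, or re-run init with input enabled",
			workspaces,
		)
	}
	return ask(workspaces)
}
```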
When an explicit backend is configured with a configuration which has
not yet been initialized, running `terraform init` performs a state
migration to fetch the remotely stored state in order to operate on it.
Like the previous bug introduced by the recent provider diagnostics
change, this code path was not correctly configured to enable init mode
for the backend, which resulted in a fatal error during init when the
cache dir is deleted.
Setting the `Init` backend option allows this code path to continue
without error when first initializing the backend for state migration.
The new e2e test fails without this change.
The init command needs to initialize a backend, in order to access
state, in turn to derive provider requirements from state. The backend
initialization step requires building provider factories, which
previously would fail if a lockfile was present without a corresponding
local provider cache.
This commit ensures that in this situation only, errors with the
provider factories are temporarily ignored. This allows us to continue
to initialize the backend, fetch providers, and then report any errors
as necessary.
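Sketched out, the init-only tolerance looks something like this; the builder function and logging are illustrative:

```go
package command

import "log"

// providerFactory is a simplified stand-in for the real factory signature.
type providerFactory func() (interface{}, error)

// providerFactoriesForInit tolerates factory-building errors during init
// only: init needs the backend (and therefore state access) before it can
// fetch the very providers whose absence caused the error. The error is
// deferred, not swallowed; later steps re-check and report it.
func providerFactoriesForInit(build func() (map[string]providerFactory, error)) map[string]providerFactory {
	factories, err := build()
	if err != nil {
		// Expected when a lock file exists but the local cache does not yet.
		log.Printf("[WARN] init: deferring provider factory error: %s", err)
	}
	return factories
}
```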
In the original incarnation of Meta.providerFactories we were returning
into a Meta.contextOpts whose signature didn't allow it to return an
error directly, and so we had compromised by making the provider factory
functions themselves return errors once called.
Subsequent work made Meta.contextOpts need to return an error anyway, but
at the time we neglected to update our handling of the providerFactories
result, having it still defer the error handling until we finally
instantiate a provider.
Although that did ultimately get the expected result anyway, the error
ended up being reported from deep in the guts of a Terraform Core graph
walk, in whichever concurrently-visited graph node happened to try to
instantiate the plugin first. This meant that the exact phrasing of the
error message would vary between runs and the reporting codepath didn't
have enough context to give an actionable suggestion on how to proceed.
In this commit we make Meta.contextOpts pass through directly any error
that Meta.providerFactories produces, and then make Meta.providerFactories
produce a special error type so that Meta.Backend can ultimately return
a user-friendly diagnostic message containing a specific suggestion to
run "terraform init", along with a short explanation of what a provider
plugin is.
The reliance here on an implied contract between two functions that are
not directly connected in the callstack is non-ideal, and so hopefully
we'll revisit this further in future work on the overall architecture of
the CLI layer. To try to make this robust in the meantime though, I wrote
it to use the errors.As function to potentially unwrap a wrapped version
of our special error type, in case one of the intervening layers is
changed at some point to wrap the downstream error before returning it.
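A hedged sketch of that pattern, with a made-up error type standing in for the real one; only the use of errors.As and the overall shape come from the description above:

```go
package command

import (
	"errors"
	"fmt"
)

// errUninitializedProvider stands in for the special error type; the real
// name and fields are not shown here.
type errUninitializedProvider struct {
	Addr string
}

func (e errUninitializedProvider) Error() string {
	return fmt.Sprintf("provider %s is not installed", e.Addr)
}

// backendDiagnosticFor turns the low-level factory error into an actionable
// message, using errors.As so that a wrapped error still matches even if an
// intervening layer decorates it before returning.
func backendDiagnosticFor(err error) string {
	var uninit errUninitializedProvider
	if errors.As(err, &uninit) {
		return fmt.Sprintf(
			"Provider plugin %s is not installed. Provider plugins are separate programs Terraform uses to talk to each service; run \"terraform init\" to install the providers this configuration requires.",
			uninit.Addr,
		)
	}
	return err.Error()
}
```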
In historical versions of Terraform the responsibility to check this was
inside the terraform.NewContext function, along with various other
assorted concerns that made that function particularly complicated.
More recently, we reduced the responsibility of the "terraform" package
only to instantiating particular named plugins, assuming that its caller
is responsible for selecting appropriate versions of any providers that
_are_ external. However, until this commit we were just assuming that
"terraform init" had correctly selected appropriate plugins and recorded
them in the lock file, and so nothing was dealing with the problem of
ensuring that there haven't been any changes to the lock file or config
since the most recent "terraform init" which would cause us to need to
re-evaluate those decisions.
Part of the game here is to slightly extend the role of the dependency
locks object to also carry information about a subset of provider
addresses whose lock entries we're intentionally disregarding as part of
the various little edge-case features we have for overriding providers:
dev_overrides, "unmanaged providers", and the testing overrides in our
own unit tests. This is an in-memory-only annotation, never included in
the serialized plan files on disk.
I had originally intended to create a new package to encapsulate all of
this plugin-selection logic, including both the version constraint
checking here and also the handling of the provider factory functions, but
as an interim step I've just made version constraint consistency checks
the responsibility of the backend/local package, which means that we'll
always catch problems as part of preparing for local operations, while
not imposing these additional checks on commands that _don't_ run local
operations, such as "terraform apply" when in remote operations mode.
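Reduced to its essentials, and with plain strings standing in for the real provider address and lock types, the consistency check is roughly:

```go
package local

import "fmt"

// checkProviderLocks: before a local operation, every provider the
// configuration requires must have a lock entry unless it is in the
// intentionally-ignored set (dev_overrides, "unmanaged" providers, test
// overrides).
func checkProviderLocks(required []string, locked map[string]string, ignored map[string]bool) error {
	for _, provider := range required {
		if ignored[provider] {
			continue
		}
		if _, ok := locked[provider]; !ok {
			return fmt.Errorf(
				"provider %s is required by the configuration but has no entry in the lock file; run \"terraform init\" to re-evaluate provider selections",
				provider,
			)
		}
	}
	return nil
}
```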
When initializing a backend, if the currently selected workspace does
not exist, the user is prompted to select from the list of workspaces
the backend provides.
Instead, we should automatically select the only workspace available
_if_ that's all that's there.
Besides being a nice bit of polish, this enables future
improvements with Terraform Cloud in potentially removing the implicit
dependency on always using the 'default' workspace when the current
configuration is mapped to a single TFC workspace.
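The change is small enough to sketch in a few lines; names here are illustrative:

```go
package command

// pickWorkspace only prompts when there is a genuine choice to make; with a
// single workspace available, selecting it is the only sensible answer.
func pickWorkspace(available []string, prompt func([]string) (string, error)) (string, error) {
	if len(available) == 1 {
		return available[0], nil
	}
	return prompt(available)
}
```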
The presence of this field was confusing because in practice the local
backend doesn't use it for anything and the remote backend was using it
only to return an error if it's set to anything other than the default,
under the assumption that it would always match ContextOpts.Parallelism.
The "command" package is the one actually responsible for handling this
option, and it does so by placing it into the partial ContextOpts which it
passes into the backend when preparing for a local operation. To make that
clearer, here we remove Operation.Parallelism and change the few uses of
it to refer to ContextOpts.Parallelism instead, so that everyone is
reading and writing this value from the same place.
This is part of a general effort to move all of Terraform's non-library
package surface under internal in order to reinforce that these are for
internal use within Terraform only.
If you were previously importing packages under this prefix into an
external codebase, you could pin to an earlier release tag as an interim
solution until you've made a plan to achieve the same functionality some
other way.