Use a single log writer instance for all std library logging.
Set up the std log writer in the logging package, and remove boilerplate
from test packages.
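A minimal sketch of the pattern, assuming a logging package that owns the shared writer (the names and the plain os.Stderr writer here are illustrative, not the real implementation):

```go
package logging

import (
	"io"
	"log"
	"os"
	"sync"
)

var (
	logWriter io.Writer
	initOnce  sync.Once
)

// LogOutput returns the single writer used for all standard library logging,
// pointing the "log" package at it exactly once on first use.
func LogOutput() io.Writer {
	initOnce.Do(func() {
		logWriter = os.Stderr // placeholder; the real writer may filter by level
		log.SetOutput(logWriter)
	})
	return logWriter
}
```

Test packages can then call logging.LogOutput() once instead of each repeating their own setup.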
Now that the planning process generates a refreshed state and can
handle changes between the configuration and the state that the refresh
process cannot, we can use the plan for the refresh command.
* remove leftover debug line
* terraform: refactor ProviderEvalTree
This PR refactors the ProviderEvalTree by folding the entire tree into
straight-through code in NodeApplyableProvider's Execute method (formerly
EvalTree). The EvalConfigProvider functions were refactored into
NodeApplyableProvider functions (since that was the only place they were
used).
I also removed the unused node_provider_disabled code.
* revert removal of graphNodeCloseProvider EvalTree, replace with Execute
This also unearthed that the marking must happen
earlier in the eval_diff in order to produce a valid plan
(so that the planned marked value matches the marked config
value)
This new option is intended to address the previous inconsistencies where
some older subcommands supported partially changing the target directory
(with Terraform using the new directory inconsistently) while newer
commands did not support that override at all.
Instead, Terraform will now accept a -chdir option at the start of the
command line (before the subcommand) and will interpret it as a request
to direct all actions that would normally be taken in the current working
directory into the target directory instead. This is similar to options
offered by some other tools, such as the -C option in "make".
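As a rough sketch of how that early handling might look (an illustrative Go snippet, not Terraform's actual main package; the helper name and error wording are invented):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// extractChdirOption consumes a leading -chdir=DIR argument, if present,
// before any subcommand dispatch happens.
func extractChdirOption(args []string) (string, []string, error) {
	if len(args) == 0 || !strings.HasPrefix(args[0], "-chdir=") {
		return "", args, nil
	}
	dir := strings.TrimPrefix(args[0], "-chdir=")
	if dir == "" {
		return "", args, fmt.Errorf("must include a target directory in -chdir=...")
	}
	return dir, args[1:], nil
}

func main() {
	dir, rest, err := extractChdirOption(os.Args[1:])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if dir != "" {
		// Redirect everything that would normally happen in the current
		// working directory into the requested directory instead.
		if err := os.Chdir(dir); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	_ = rest // subcommand dispatch continues with the remaining arguments
}
```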
The new option is only accepted at the start of the command line (before
the subcommand) as a way to reflect that it is a global option (not
specific to a particular subcommand) and that it takes effect _before_
executing the subcommand. This also means it'll be forced to appear before
any other command-specific arguments that take file paths, which hopefully
communicates that those other arguments are interpreted relative to the
overridden path.
As a measure of pragmatism for existing uses, the path.cwd object in
the Terraform language will continue to return the _original_ working
directory (ignoring -chdir), in case that is important in some exceptional
workflows. The path.root object gives the root module directory, which
will always match the overridden working directory unless the user
simultaneously uses one of the legacy directory override arguments, which
is not a pattern we intend to support in the long run.
As a first step down the deprecation path, this commit adjusts the
documentation to de-emphasize the inconsistent old command line arguments,
including specific guidance on what to use instead for the main three
workflow commands, but all of those options remain supported in the same
way as they were before. In a later commit we'll make those arguments
produce a visible deprecation warning in Terraform's output, and then
in an even later commit we'll remove them entirely so that -chdir is the
single supported way to run Terraform from a directory other than the
one containing the root module configuration.
This PR reverts a recent change to backend/local which created two contexts, one with and one without state. Instead I have removed the state entirely from the validate graph (by explicitly passing a states.NewState() to the validate graph builder).
This change caused a test failure, which (thank you so much for the help) @jbardin discovered was inaccurate all along: the test's call to `Validate()` was actually what was removing the output from state. The new expected test output matches Terraform's actual behavior on the command line: if you use -target to destroy a resource, an output that references only that resource is *not* removed from state, even though that test would lead you to believe it was.
This includes two tests to cover the expected behavior:
TestPlan_varsUnset has been updated so it will panic if it gets more than one request to input a variable
TestPlan_providerArgumentUnset covers #26035
Fixes #26035, #26027
A side effect of the various changes to the provider installer included losing the initialization required error message which would occur if a user removed or modified the .terraform directory.
Previously, plugin factories were created after the configuration was loaded, in terraform.NewContext. Terraform would compare the required providers (from config and state) to the available providers and return the aforementioned error if a provider was missing.
Provider factories are now loaded at the beginning of any terraform command, before terraform even loads the configuration, and therefore before terraform has a list of required providers.
This commit replaces the current error when a providers' schema cannot be found in the provider factories with the init error, and adds a command test (to plan tests, for no real reason other than that's what I thought of first).
Back when we first introduced provider versioning in Terraform 0.10, we
did the provider version resolution in terraform.NewContext because we
weren't sure yet how exactly our versioning model was going to play out
(whether different versions could be selected per provider configuration,
for example) and because we were building around the limitations of our
existing filesystem-based plugin discovery model.
However, the new installer codepath is now able to do all of the
selections up front during installation, so we don't need such a heavy
inversion of control abstraction to get this done: the command package can
select the exact provider versions and pass their factories directly
to terraform.NewContext as a simple static map.
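A hedged sketch of what the call site can look like after this change (import paths and the ContextOpts field name are approximations):

```go
package command

import (
	"github.com/hashicorp/terraform/addrs"
	"github.com/hashicorp/terraform/providers"
	"github.com/hashicorp/terraform/terraform"
)

// coreOpts builds the options passed to terraform.NewContext, with provider
// factories already resolved to exact selections by "terraform init".
func coreOpts(factories map[addrs.Provider]providers.Factory) *terraform.ContextOpts {
	return &terraform.ContextOpts{
		// A simple static map: no resolver or version negotiation inside core.
		Providers: factories,
	}
}
```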
The result of this commit is that CLI commands other than "init" are now
able to consume the local cache directory and selections produced by the
installation process in "terraform init", passing all of the selected
providers down to the terraform.NewContext function for use in
implementing the main operations.
This commit is just enough to get the providers passed into
terraform.Context. There's still plenty more to do here, including
repairing all of the tests this change has additionally broken.
missingPlugins was hard-coded to work only with provider plugins, so I
renamed it to clarify the usage.
Also renamed a test provider from greater_than to greater-than as the
underscore is an invalid provider name character and this will become a
hard error in the near future.
This is not used yet, but in future commits will be used as a
"blackboard" to centrally aggregate the information pertaining to
expansion of resources and modules (using "count" or "for_each") to help
ensure consistent treatment of the expansion process during a graph walk.
In practice this only really makes sense for the plan walk, because the
apply walk doesn't do any dynamic expansion.
* terraform/context: use new addrs.Provider as map key in provider factories
* added NewLegacyProviderType and LegacyString funcs to make it explicit that these are temporary placeholders
This PR introduces a new concept, provider fully-qualified name (FQN), encapsulated by the `addrs.Provider` struct.
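A rough sketch of the shape of the new type and the temporary legacy helpers (simplified; the real definition lives in the addrs package and uses a richer hostname type):

```go
package addrs

// Provider is a provider fully-qualified name (FQN), shown here simplified.
type Provider struct {
	Type      string
	Namespace string
	Hostname  string // e.g. "registry.terraform.io"
}

// NewLegacyProviderType wraps a bare name like "aws" in a placeholder FQN
// until the rest of the codebase carries full provider addresses.
func NewLegacyProviderType(name string) Provider {
	return Provider{
		Type:      name,
		Namespace: "-", // placeholder namespace marking a legacy provider
		Hostname:  "registry.terraform.io",
	}
}

// LegacyString returns only the bare type name, for code paths that still
// expect the pre-FQN representation.
func (p Provider) LegacyString() string {
	return p.Type
}
```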
During the 0.12 work we intended to move all of the variable value
collection logic into the UI layer (command package and backend packages)
and present them all together as a unified data structure to Terraform
Core. However, we didn't quite succeed because the interactive prompts
for unset required variables were still being handled _after_ calling
into Terraform Core.
Here we complete that earlier work by moving the interactive prompts for
variables out into the UI layer too, thus allowing us to handle final
validation of the variables all together in one place and do so in the UI
layer where we have the most context still available about where all of
these values are coming from.
This allows us to fix a problem where previously disabling input with
-input=false on the command line could cause Terraform Core to receive an
incomplete set of variable values, and fail with a bad error message.
As a consequence of this refactoring, the scope of terraform.Context.Input
is now reduced to only gathering provider configuration arguments. Ideally
that too would move into the UI layer somehow in a future commit, but
that's a problem for another day.
We previously deferred this to Validate, but all of our operations require
a valid set of variables and so checking this up front makes it more
obvious when a call into Terraform Core from the CLI layer isn't
populating variables correctly, instead of having it fail downstream
somewhere.
It's the caller's responsibility to ensure that it has collected values
for all of the variables in some way before calling NewContext, because
in the main case driven by the CLI there are many different places that
variable values can be collected from and so handling the main user-facing
validation in the CLI allows us to return better error messages that take
into account which way a variable is (or is not) being set.
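A hedged, simplified sketch of the kind of up-front check this implies (types pared down; the real check lives in Terraform Core and produces full diagnostics):

```go
package main

import "fmt"

// checkInputVariables is a simplified stand-in for the up-front validation:
// every declared root module variable must already have a caller-provided
// value by the time the context is constructed.
func checkInputVariables(declared []string, values map[string]interface{}) []error {
	var errs []error
	for _, name := range declared {
		if _, defined := values[name]; !defined {
			// In the CLI-driven case this indicates a bug in the caller,
			// since the UI layer is responsible for collecting all values.
			errs = append(errs, fmt.Errorf("input variable %q has not been assigned a value", name))
		}
	}
	return errs
}
```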
The documentation for the -target option warns that it's intended for
exceptional circumstances only and not for routine use, but that's not a
very prominent location for that warning and so some users miss it.
Here we make the warning more prominent by including it directly in the
Terraform output when -target is in use. We first warn during planning
that the plan might be incomplete, and then warn again after apply
concludes and direct the user to run "terraform plan" to make sure that
there are no further changes outstanding. The latter message is intended
to reinforce that -target should only be a one-off operation and that you
should always run without it soon after to ensure that the workspace is
left in a consistent, converged state.
The init error was previously output deep in the backend by detecting a
special ResourceProviderError and formatting it directly to the CLI.
Create some Diagnostics closer to where the problem is detected, and
pass them back through the normal diagnostic flow. While the output
isn't as nice yet, this restores the helpful error message and makes the
code easier to maintain. Better formatting can be handled later.
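A hedged sketch of the diagnostics-based replacement (the summary and detail wording are illustrative, not the exact messages):

```go
package command

import (
	"fmt"

	"github.com/hashicorp/terraform/tfdiags"
)

// providerInitRequiredDiags builds diagnostics for missing provider plugins
// close to where the problem is detected, instead of relying on a special
// error type formatted deep in the backend.
func providerInitRequiredDiags(missing []string) tfdiags.Diagnostics {
	var diags tfdiags.Diagnostics
	for _, name := range missing {
		diags = diags.Append(tfdiags.Sourceless(
			tfdiags.Error,
			"Provider plugin not installed",
			fmt.Sprintf("Provider %q is required but no installed plugin was found. Run \"terraform init\" to install all plugins required by this configuration.", name),
		))
	}
	return diags
}
```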
There are several steps here and a number of them can include reaching out
to remote servers or executing local processes, so it's helpful to have
some trace logs to better narrow down causes of errors and hangs during
this step.
When we're being asked to destroy everything, we ideally want to end up
with a totally empty state. Normally we will conservatively keep around
the "husks" of resources (what's left after all of the instances have been
destroyed) unless they are configured without count or for_each, but in
this special case we'll prune those out.
The implication of this is that in "weird" expression contexts that happen
before the next "terraform plan", such as evaluation in
"terraform console" or expressions in data resources and provider blocks
that get evaluated during the refresh walk, we will see these results
as unknown rather than as empty lists of objects. We accept that weirdness
for now because in a future release we are likely to remove "refresh" as
a separate walk anyway, doing all of that work during the plan walk where
we can ensure that these values are properly re-populated before trying
to use them.
Since stopping is a rather complex mechanism that relies on correct
handling of concurrency, it's handy to have these logs here to debug when
things don't happen in quite the right order.
Prior to our refactoring here, we were relying on a lucky coincidence for
correct behavior of the plan walk following a refresh in the same run:
- The refresh phase created placeholder objects in the state to represent
any resource instance pending creation, to allow the interpolator to
read attributes from them when evaluating "provider" and "data" blocks.
In effect, the refresh walk was creating a partial plan that only covered
creation actions, but it immediately discarded the actual diff entries
and stored only the planned new state.
- It happened that objects pending creation showed up in state with an
empty ID value, since that only gets assigned by the provider during
apply.
- The Refresh function concluded by calling terraform.State.Prune, which
deletes from the state any objects that have an empty ID value, which
therefore prevented these temporary objects from surviving into the
plan phase.
After refactoring, we no longer have this special ID field on instance
object state, and we instead rely on the Status field for tracking such
things. We also no longer have an explicit "prune" step on state, since
the state mutation methods themselves keep the structure pruned.
To address this, here we introduce a new instance object status "planned",
which is equivalent to having an empty ID value in the old world. We also
introduce a new method on states.SyncState that deletes from the state
any planned objects, which therefore replaces that portion of the old
State.prune operation just for this refresh use-case.
Finally, we are now expecting the expression evaluator to pull pending
objects from the planned changeset rather than from the state directly,
and so for correct results these placeholder resource creation changes
must also be reported in a throwaway changeset during the refresh walk.
The addition of states.ObjectPlanned also permits a previously-missing
safety check in the expression evaluator to prevent us from relying on the
incomplete value stored in state for a pending object, in the event that
some bug prevents the real pending object from being written into the
planned changeset.
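A simplified, self-contained sketch of the new status and the cleanup it enables (the real types live in the states package; names here are illustrative):

```go
package main

// ObjectStatus mirrors the idea of an instance object status, with a
// "planned" status replacing the old empty-ID convention.
type ObjectStatus rune

const (
	ObjectReady   ObjectStatus = 'R'
	ObjectTainted ObjectStatus = 'T'
	ObjectPlanned ObjectStatus = 'P'
)

type instanceObject struct {
	Status ObjectStatus
}

type state struct {
	objects map[string]*instanceObject // keyed by resource instance address
}

// removePlannedObjects drops every placeholder created during the refresh
// walk so it cannot leak into the subsequent plan phase, replacing the part
// of the old State.prune behavior that keyed off empty IDs.
func (s *state) removePlannedObjects() {
	for addr, obj := range s.objects {
		if obj.Status == ObjectPlanned {
			delete(s.objects, addr)
		}
	}
}
```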
This is a light adaptation of our earlier prototype of structural diff
rendering, as a starting point for what we'll actually ship. This is not
consistent with the latest mocks, so will need some additional work before
it is ready, but integrating this allows us to at least see the plan
contents while fixing up remaining issues elsewhere.
Change ResourceProvider to providers.Interface starting from the
context, and fix all type errors.
This only replaced some of the method calls directly applicable to the
providers themselves. The resource methods will follow.
Due to how often the state and plan types are referenced throughout
Terraform, there isn't a great way to switch them out gradually. As a
consequence, this huge commit gets us from the old world to a _compilable_
new world, but still has a large number of known test failures due to
key functionality being stubbed out.
The stubs here are for anything that interacts with providers, since we
now need to do the follow-up work to similarly replace the old
terraform.ResourceProvider interface with its replacement in the new
"providers" package. That work, along with work to fix the remaining
failing tests, will follow in subsequent commits.
The aim here was to replace all references to terraform.State and its
downstream types with states.State, terraform.Plan with plans.Plan,
state.State with statemgr.State, and switch to the new implementations of
the state and plan file formats. However, due to the number of times those
types are used, this also ended up affecting numerous other parts of core
such as terraform.Hook, the backend.Backend interface, and most of the CLI
commands.
Just as with 5861dbf3fc49b19587a31816eb06f511ab861bb4 before, I apologize
in advance to the person who inevitably just found this huge commit while
spelunking through the commit history.
Previously we had the evaluate methods accept an addrs.InstanceKey
directly and had our evaluator infer a suitable value for
count.index for it, but that prevents us from setting the index to be
unknown in the validation scenario where we may not be able to predict
the number of instances yet but we still want to be able to check that
the configuration block is type-safe for all possible count values.
To achieve this, we separate the concern of deciding on a value for
count.index from the concern of evaluating it, which then allows for
other implementations of this in future. For the purpose of this commit
there is no change in behavior, with the count.index value being populated
whenever the instance key is a number.
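A small sketch of that separation (the helper name is invented; cty is the value system used for evaluation):

```go
package main

import "github.com/zclconf/go-cty/cty"

// countIndexValue decides what count.index should be for a given instance
// key, leaving the evaluator to simply expose whatever value it is handed.
// A nil key models the validation walk, where the instance count is not yet
// known, so count.index becomes an unknown number.
func countIndexValue(key *int) cty.Value {
	if key == nil {
		return cty.UnknownVal(cty.Number)
	}
	return cty.NumberIntVal(int64(*key))
}
```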
This commit does a little more groundwork for the future implementation
of the for_each feature (which'll support each.key and each.value) but
still doesn't yet implement it, leaving it just stubbed out for the
moment.
Provider input is no longer handled with a graph walk, so the code
related to the input graph and walk is no longer needed.
For now the Input method is retained on the ResourceProvider interface,
but it will never be called. Subsequent work to revamp the provider API
will remove this method.
Now that core has access to the provider configuration schema, our input
logic can be implemented entirely within Context.Input, removing the need
to execute a full graph walk to gather input.
This commit replaces the graph walk call with instead just visiting the
provider configurations (explicit and implied) in the root module, using
the schema to prompt.
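A simplified sketch of that visiting-and-prompting shape (not Terraform's real types; the ask callback stands in for the UI input mechanism):

```go
package main

import "fmt"

// attrSchema is a pared-down stand-in for a provider schema attribute.
type attrSchema struct {
	Name     string
	Required bool
}

// promptForProviderInput walks a provider configuration's schema and asks
// for any required argument that the configuration leaves unset.
func promptForProviderInput(configured map[string]bool, schema []attrSchema, ask func(name string) string) map[string]string {
	vals := make(map[string]string)
	for _, attr := range schema {
		if !attr.Required || configured[attr.Name] {
			continue
		}
		vals[attr.Name] = ask(attr.Name)
	}
	return vals
}

func main() {
	schema := []attrSchema{{Name: "region", Required: true}, {Name: "profile", Required: false}}
	got := promptForProviderInput(map[string]bool{}, schema, func(name string) string {
		return fmt.Sprintf("<interactively entered %s>", name)
	})
	fmt.Println(got)
}
```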
The code to manage the input graph walk is not yet removed by this commit,
and will be cleaned up in a subsequent commit once we've made sure there
aren't any other callers/tests depending on parts of it.
Previously we fetched schemas during the AttachSchemaTransformer,
potentially multiple times as that was re-run for each graph built. Now
we fetch the schemas just once during context construction, passing that
result into each of the graph builders.
This only addresses the schema accesses during graph construction. We're
still separately loading schemas during the main walk for evaluation
purposes. This will be addressed in a later commit.
These transformers both construct temporary graphs using many of the same
transformers used in the apply graph, and properly doing this now requires
access to the providers and provisioners in order to obtain their schemas.
Along with this, we also update the tests here to use the
simpleMockComponentFactory helper to get a mock provider with a schema
already configured, which means we also need to update the test fixtures
and assertions to use the resource type and attributes defined in that
mock factory.
Some of the objects that are referenceable from expressions are transient
values computed only during a graph walk, and not persisted in state. In
order to support arbitrary evaluation of expressions, such as in the
"terraform console" CLI command, it's necessary to be able to evaluate
these values before we start evaluating.
This new Eval method achieves this by performing a special graph walk that
ignores resources (except for dependency resolution) and just focuses on
evaluating all of these transient values, before returning an evaluation
scope that can then resolve expressions in terms of that result.
This replaces the Context.Interpolator method, which was fraught with
various issues due to it not properly priming the state before evaluating.
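A hedged usage sketch from a console-style caller (signatures and import paths are approximate):

```go
package command

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/hashicorp/terraform/addrs"
	"github.com/hashicorp/terraform/terraform"
	"github.com/zclconf/go-cty/cty"
)

// evalConsoleExpr primes the evaluation scope once via the new Eval method,
// then resolves an arbitrary expression string against it.
func evalConsoleExpr(tfCtx *terraform.Context, src string) (cty.Value, error) {
	scope, diags := tfCtx.Eval(addrs.RootModuleInstance)
	if diags.HasErrors() {
		return cty.NilVal, diags.Err()
	}

	expr, hclDiags := hclsyntax.ParseExpression([]byte(src), "<console-input>", hcl.Pos{Line: 1, Column: 1})
	if hclDiags.HasErrors() {
		return cty.NilVal, fmt.Errorf("invalid expression: %s", hclDiags.Error())
	}

	val, evalDiags := scope.EvalExpr(expr, cty.DynamicPseudoType)
	if evalDiags.HasErrors() {
		return cty.NilVal, evalDiags.Err()
	}
	return val, nil
}
```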
I took some missteps here while doing the initial refactor for HCL2 types.
This restores the map of maps that retains all of the variable values, and
then makes it available to the evaluator.
This is a little awkward since we need to instantiate the providers much
earlier than before. To avoid a lot of reshuffling here we just spin each
one up and then immediately shut it down again, letting our existing init
functionality during the graph walk still do the main initialization.
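A hedged sketch of that early start/stop dance (simplified local types stand in for the provider plugin API):

```go
package main

import (
	"fmt"
	"log"
)

// provider is a pared-down stand-in for the provider plugin interface.
type provider interface {
	Close() error
}

type providerFactory func() (provider, error)

// primeProviders starts each provider just long enough to be inspected (for
// example to read its schema), then shuts it down again; the existing init
// logic during the graph walk still performs the real initialization.
func primeProviders(factories map[string]providerFactory) error {
	for name, factory := range factories {
		p, err := factory()
		if err != nil {
			return fmt.Errorf("failed to instantiate provider %q: %s", name, err)
		}
		// ... gather whatever early information is needed here ...
		if err := p.Close(); err != nil {
			log.Printf("[WARN] error closing provider %q: %s", name, err)
		}
	}
	return nil
}
```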