The checks.Checks type aims to encapsulate the tracking of check results
during a run and the reporting of them afterwards, even if the run was
aborted early for some reason.
The intended model here is that each new run starts with an entirely fresh
checks.Checks, with all of the statuses therefore initially unknown, and
gradually populates the check results as we walk the graph in Terraform
Core. This means that even if we don't complete the run due to an error
or due to targeting options we'll still report anything we didn't visit
yet as unknown.
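As a rough sketch of that model (the names and shapes here are simplified
stand-ins rather than the real checks package API):

```go
package main

// Status is a simplified stand-in for the per-object check status.
type Status int

const (
	StatusUnknown Status = iota
	StatusPass
	StatusFail
	StatusError
)

// Checks tracks one status per checkable object, keyed here by a plain
// string address just for illustration.
type Checks struct {
	statuses map[string]Status
}

// NewChecks seeds every expected object with StatusUnknown, so anything
// the walk never reaches still reports as unknown afterwards.
func NewChecks(expected []string) *Checks {
	c := &Checks{statuses: make(map[string]Status)}
	for _, addr := range expected {
		c.statuses[addr] = StatusUnknown
	}
	return c
}

// ReportResult records the outcome for one object as the graph walk
// visits it.
func (c *Checks) ReportResult(addr string, status Status) {
	c.statuses[addr] = status
}
```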
This commit only includes the modeling of checks in the checks package.
For now this is just dead code; we'll wire it into Terraform Core in
subsequent commits.
We previously added methods like this for some of the other types in this
package, including Local in this same file, but apparently haven't needed
these two yet.
Our existing addrs.Checkable represents a particular (possibly-dynamic)
object that can have checks associated with it.
This new addrs.ConfigCheckable represents static configuration objects
that can potentially generate addrs.Checkable objects.
The idea here is to allow us to predict from the configuration a set of
potential checkable object containers and then dynamically associate
the dynamic checkable objects with them as we make progress with planning.
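The intended relationship looks roughly like this (a simplified sketch,
not the exact interface definitions in the addrs package):

```go
package main

// ConfigCheckable is the static configuration object, known before any
// expansion happens, e.g. a whole resource block.
type ConfigCheckable interface {
	String() string
}

// Checkable is a dynamic object produced during planning, e.g. one
// resource instance after count/for_each expansion.
type Checkable interface {
	String() string

	// ConfigCheckable returns the static object this dynamic instance
	// belongs to, which is how we associate instances with their
	// container.
	ConfigCheckable() ConfigCheckable
}
```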
This is intended for our integration of checks into the "terraform test"
testing harness, to be used instead of the weirdo builtin provider we were
using as a placeholder before we had first-class syntax for checks.
Test reporting tools find it helpful to have a consistent set of test
cases from one run to the next so that they can report on trends over
multiple runs. Our ConfigCheckable addresses will therefore serve as the
relatively-static "test cases" that we'll associate the dynamic checks
with, so that we can still talk about objects in the test result report
even if we end up not reaching them due to an upstream condition failure.
This is a complement to "timestamp" and "timeadd", allowing the ordering
of two different timestamps to be established while taking their timezone
offsets into account, which isn't otherwise possible using the existing
primitives in the Terraform language.
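The underlying comparison is roughly as follows, sketched here assuming
RFC 3339 input syntax as used by the existing timestamp functions (this
is an illustration, not the exact implementation):

```go
package main

import "time"

// compareTimestamps orders two timestamps by the instant in time they
// describe: parsing resolves each one's timezone offset, so the result
// doesn't depend on how the offsets happen to be written.
func compareTimestamps(a, b string) (int, error) {
	ta, err := time.Parse(time.RFC3339, a)
	if err != nil {
		return 0, err
	}
	tb, err := time.Parse(time.RFC3339, b)
	if err != nil {
		return 0, err
	}
	switch {
	case ta.Before(tb):
		return -1, nil
	case ta.After(tb):
		return 1, nil
	default:
		return 0, nil
	}
}
```

For example, "2017-11-22T01:00:00+01:00" and "2017-11-22T00:00:00Z"
describe the same instant and so compare as equal even though their
strings differ.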
Go 1.19's "gofmt" has some awareness of the new doc comment formatting
conventions and adjusts the presentation of the source comments to make
it clearer how godoc would interpret them. Therefore this commit includes
various updates made by "go fmt" to achieve that.
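For example, list items in doc comments now get normalized into a form
that godoc recognizes as a real list rather than as running text (this
snippet is illustrative only, not taken from the actual diff):

```go
// Walk visits each vertex in the graph and:
//   - calls the callback for each vertex it reaches
//   - halts early if the callback returns an error
func Walk(cb func() error) error {
	return nil // implementation not relevant to the formatting example
}
```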
In line with our usual convention that we make stylistic/grammar/spelling
tweaks typically only when we're "in the area" changing something else
anyway, I also took this opportunity to review most of the comments this
change touched to see if there were any other opportunities to improve
them.
Previously the cloud backend would only render post-plan run tasks. Now
that pre-plan tasks are in beta, this commit updates the plan phase to
render pre-plan run tasks. This commit also moves some common code to
the common backend as it will be used by other task stages in the
future.
Previously, when applying defaults to an input variable's given value
before type conversion, we would permit `null` attribute values to
override a specified default. This behaviour was inconsistent with the
intent of the type system underlying Terraform, and represented a
divergence from the treatment of `null` as equivalent to unset which
exists in resources. The same behaviour exists in top-level variable
definitions with `nullable = false`, and we consider this to be the
preferred behaviour here too.
This commit slightly changes default value application such that an
explicit `null` attribute value is treated as equivalent to the
attribute being missing. Default values for attributes will now replace
explicit nulls.
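Sketched in terms of cty values (a simplified illustration rather than
the real typeexpr defaults code), the adjusted rule is:

```go
package main

import "github.com/zclconf/go-cty/cty"

// applyAttrDefaults applies a default whenever the attribute is either
// absent from the given object or explicitly set to null.
func applyAttrDefaults(given cty.Value, defaults map[string]cty.Value) cty.Value {
	attrs := given.AsValueMap()
	if attrs == nil {
		attrs = make(map[string]cty.Value)
	}
	for name, defVal := range defaults {
		if v, ok := attrs[name]; !ok || v.IsNull() {
			// An explicit null is treated the same as the attribute
			// being missing, so the default replaces it.
			attrs[name] = defVal
		}
	}
	return cty.ObjectVal(attrs)
}
```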
We can't validate whether data from deprecated nested attributes is used
in the configuration, but we can at least catch the simple case where a
deprecated attribute is referenced directly.
All the code infrastructure was there to support formatting multiple
files already.
This makes `terraform fmt` more flexible and also compliant with the
[treefmt formatter
spec](https://numtide.github.io/treefmt/docs/formatters-spec.html)
The StaticValidateTraversal code was not taking into account nested
structural types. Rather than create more special cases for checking
Type vs NestedType, we move the ImpliedType method up to the Attribute
to ensure both are used to generate the final type spec.
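The approach looks roughly like this, using simplified stand-ins for the
configschema types and ignoring the nesting mode (single, list, set, map)
for brevity:

```go
package main

import "github.com/zclconf/go-cty/cty"

type attribute struct {
	Type       cty.Type
	NestedType *object // nested structural schema, if any
}

type object struct {
	Attributes map[string]*attribute
}

// impliedType lives on the attribute itself, so it can decide between
// the flat Type and the structural NestedType and callers no longer need
// special cases for each.
func (a *attribute) impliedType() cty.Type {
	if a.NestedType != nil {
		atys := make(map[string]cty.Type, len(a.NestedType.Attributes))
		for name, attr := range a.NestedType.Attributes {
			atys[name] = attr.impliedType()
		}
		return cty.Object(atys)
	}
	return a.Type
}
```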
If there are outputs in configuration, a destroy plan will always contain a "delete" change for each of these outputs.
This leads to meaningless delete changes being present for outputs which were not present in state and therefore cannot be deleted. Since there is a change in the plan, this plan will then be considered applyable, and the user will be presented with text instructing them to apply a plan in which there are no actual changes.
This commit stops the above from happening in the case of root module outputs.
Return early from AssertPlanValid for any attribute which is only
computed. We currently fail if there's a config value, but a config value
for a computed-only attribute could only get there because of a bug in
core, not because of the provider.
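Concretely, "only computed" means computed and not also optional;
sketched with an illustrative stand-in type rather than the real schema
struct:

```go
package main

// attrSchema stands in for the schema attribute flags.
type attrSchema struct {
	Optional bool
	Computed bool
}

// computedOnly reports whether an attribute can only ever be set by the
// provider; plan validation can return early for such attributes.
func computedOnly(a attrSchema) bool {
	return a.Computed && !a.Optional
}
```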
Normally, `terraform output` refreshes and reads the entire state in the command package before pulling output values out of it. This doesn't give Terraform Cloud the opportunity to apply the read state outputs org permission and instead applies the read state versions permission.
I decided to expand the state manager interface to provide a separate GetRootOutputValues function in order to give the cloud backend a more nuanced opportunity to fetch just the outputs. This required moving state Refresh/Read code that was previously in the command into the shared backend state as well as the filesystem state packages.
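The shape of the new capability is roughly the following; the exact
method signature and return types in the state manager interfaces may
differ from this sketch:

```go
package main

import "github.com/zclconf/go-cty/cty"

// rootOutputReader sketches the narrower read path: a backend like
// Terraform Cloud can satisfy this under the "read state outputs"
// permission without needing to read whole state versions.
type rootOutputReader interface {
	// GetRootOutputValues returns only the root module's output values.
	GetRootOutputValues() (map[string]cty.Value, error)
}
```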
Previously we tried to early-exit before doing anything at all for any
no-op changes, but that means we also skip some ancillary steps like
evaluating any preconditions/postconditions.
Now we'll skip only the main action itself for plans.NoOp, and still run
through all of the other side-steps.
Since one of those other steps is emitting events through the hooks
interface, this means that no-op actions are now visible to hooks, whereas
before we always filtered them out before calling the hooks at all. I
therefore added some additional logic to filter them out at the UI layer
instead;
the decision for whether or not to report that we visited a particular
object and found no action required seems defensible as a UI-level concern
anyway.
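The UI-layer filter amounts to something like this (a hypothetical
stand-in for the real hook implementation, whose methods take richer
arguments than shown here):

```go
package main

import "fmt"

// Action and NoOp stand in for plans.Action and plans.NoOp.
type Action int

const NoOp Action = iota

type uiHook struct{}

// PreApply shows where the filtering now lives: Core calls hooks even
// for no-op instances (so their conditions still get re-checked), and
// the UI layer simply chooses not to report them.
func (h *uiHook) PreApply(addr string, action Action) {
	if action == NoOp {
		return
	}
	fmt.Printf("%s: applying...\n", addr)
}
```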
We previously would optimize away the graph nodes for any resource
instance without a real change pending, but that means we don't get an
opportunity to re-check any invariants associated with the instance, such
as preconditions and postconditions.
Other upstream changes during apply can potentially decide the outcome of
a condition even if the instance itself isn't being changed, so we do
still need to revisit these during apply, or else we might skip running
certain checks altogether when they yielded unknown results during
planning.
We previously had a special case in the graph transformer for output
values where it would directly create an individual output value node
instead of an "expand" node as we would do for output values in nested
modules.
While it's true that expanding a root module output value will always
produce exactly one instance, treating this case
as special creates the risk of those two codepaths diverging in other
ways.
Instead, we'll let the expand node also deal with root modules and
minimize the special case only to how we look up any changes for the
output values, since the design of plans.Changes is a bit awkward and
requires us to ask the question differently for root module output values.
Otherwise, the behavior will now be consistent across all output values
regardless of module.
The dag package did not previously provide a topological walk of a given
graph. While the existing combination of a transitive reduction with a
depth-first walk appeared to accomplish this, the two are only equivalent
for a simple tree. If there are multiple paths to a node, a
depth-first approach will skip dependencies from alternate paths.
A topological walk was previously only done in Terraform via the
concurrent method used for walking the primary dependency graph in core.
Sometimes, however, we want a dependency ordering without the overhead of
instantiating the concurrent walk with the channel-based edges.
Add TopologicalOrder and ReverseTopologicalOrder to obtain a list of
nodes which can be used to visit each while ensuring that all
dependencies are satisfied.
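Conceptually this is a standard Kahn-style ordering. Here's a minimal
sketch over a plain dependency map; the real dag package works in terms
of its own Graph and Vertex types and may order in the opposite
direction:

```go
package main

// topologicalOrder returns the nodes of deps ordered so that every node
// appears after all of the nodes it depends on. deps maps each node to
// the nodes it depends on.
func topologicalOrder(deps map[string][]string) []string {
	remaining := make(map[string]int)       // unresolved dependency count
	dependents := make(map[string][]string) // reverse edges

	for node, ds := range deps {
		if _, ok := remaining[node]; !ok {
			remaining[node] = 0
		}
		for _, d := range ds {
			remaining[node]++
			dependents[d] = append(dependents[d], node)
			if _, ok := remaining[d]; !ok {
				remaining[d] = 0
			}
		}
	}

	// Start with the nodes whose dependencies are already satisfied.
	var queue, order []string
	for node, n := range remaining {
		if n == 0 {
			queue = append(queue, node)
		}
	}

	for len(queue) > 0 {
		node := queue[0]
		queue = queue[1:]
		order = append(order, node)
		for _, dependent := range dependents[node] {
			remaining[dependent]--
			if remaining[dependent] == 0 {
				queue = append(queue, dependent)
			}
		}
	}
	// If order is shorter than remaining, the graph contains a cycle.
	return order
}
```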
Make DAG walks testable, and add tests for more complex graph ordering.
We also add a breadth-first ordering for comparison, though it's not
currently used in Terraform.