opentofu/internal/tofu/context_apply.go

// Copyright (c) The OpenTofu Authors
// SPDX-License-Identifier: MPL-2.0
// Copyright (c) 2023 HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0

package tofu
import (
	"fmt"
	"log"

	"github.com/zclconf/go-cty/cty"

	"github.com/opentofu/opentofu/internal/addrs"
	"github.com/opentofu/opentofu/internal/configs"
	"github.com/opentofu/opentofu/internal/plans"
	"github.com/opentofu/opentofu/internal/states"
	"github.com/opentofu/opentofu/internal/tfdiags"
)

// Apply performs the actions described by the given Plan object and returns
// the resulting updated state.
//
// The given configuration *must* be the same configuration that was passed
// earlier to Context.Plan in order to create this plan.
//
// Even if the returned diagnostics contains errors, Apply always returns the
// resulting state which is likely to have been partially-updated.
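//
// A typical call sequence, shown for illustration only (assuming ctx is a
// *Context and config, priorState, and planOpts were prepared by the caller;
// those names are illustrative, not part of this package's API):
//
//	plan, planDiags := ctx.Plan(config, priorState, planOpts)
//	if planDiags.HasErrors() {
//		// handle the errors; an errored plan cannot be applied
//	}
//	newState, applyDiags := ctx.Apply(plan, config)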
func (c *Context) Apply(plan *plans.Plan, config *configs.Config) (*states.State, tfdiags.Diagnostics) {
	defer c.acquireRun("apply")()

	log.Printf("[DEBUG] Building and walking apply graph for %s plan", plan.UIMode)

	if plan.Errored {
		var diags tfdiags.Diagnostics
		diags = diags.Append(tfdiags.Sourceless(
			tfdiags.Error,
			"Cannot apply failed plan",
			`The given plan is incomplete due to errors during planning, and so it cannot be applied.`,
		))
		return nil, diags
	}

	for _, rc := range plan.Changes.Resources {
		// Import is a no-op change during an apply (all the real action happens during the plan) but we'd
		// like to show some helpful output that mirrors the way we show other changes.
		if rc.Importing != nil {
			for _, h := range c.hooks {
				// In future, we may need to call PostApplyImport separately elsewhere in the apply
				// operation. For now, though, we'll call Pre and Post hooks together.
				h.PreApplyImport(rc.Addr, *rc.Importing)
				h.PostApplyImport(rc.Addr, *rc.Importing)
			}
		}
	}

	graph, operation, diags := c.applyGraph(plan, config, true)
	if diags.HasErrors() {
		return nil, diags
	}

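	// Walk the graph against a deep copy of the prior state so that the
	// state recorded in the plan itself is left unmodified.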
	workingState := plan.PriorState.DeepCopy()
	walker, walkDiags := c.walk(graph, operation, &graphWalkOpts{
		Config:     config,
		InputState: workingState,
		Changes:    plan.Changes,

		// We need to propagate the check results from the plan phase,
		// because that will tell us which checkable objects we're expecting
		// to see updated results from during the apply step.
		PlanTimeCheckResults: plan.Checks,

		// We also want to propagate the timestamp from the plan file.
		PlanTimeTimestamp: plan.Timestamp,
	})
	diags = diags.Append(walker.NonFatalDiagnostics)
	diags = diags.Append(walkDiags)

	// After the walk is finished, we capture a simplified snapshot of the
	// check result data as part of the new state.
	walker.State.RecordCheckResults(walker.Checks)

	newState := walker.State.Close()
	if plan.UIMode == plans.DestroyMode && !diags.HasErrors() {
		// NOTE: This is a vestigial violation of the rule that we mustn't
		// use plan.UIMode to affect apply-time behavior.
		// We ideally ought to just call newState.PruneResourceHusks
		// unconditionally here, but we historically didn't and haven't yet
		// verified that it'd be safe to do so.
		newState.PruneResourceHusks()
	}

	if len(plan.TargetAddrs) > 0 {
		diags = diags.Append(tfdiags.Sourceless(
			tfdiags.Warning,
			"Applied changes may be incomplete",
			`The plan was created with the -target option in effect, so some changes requested in the configuration may have been ignored and the output values may not be fully updated. Run the following command to verify that no other changes are pending:
    tofu plan

Note that the -target option is not suitable for routine use, and is provided only for exceptional situations such as recovering from errors or mistakes, or when OpenTofu specifically suggests to use it as part of an error message.`,
		))
	}

	// FIXME: we cannot check for an empty plan for refresh-only, because root
	// outputs are always stored as changes. The final condition of the state
	// also depends on some cleanup which happens during the apply walk. It
	// would probably make more sense if applying a refresh-only plan were
	// simply just returning the planned state and checks, but some extra
	// cleanup is going to be needed to make the plan state match what apply
	// would do. For now we can copy the checks over which were overwritten
	// during the apply walk.
	// Despite the intent of UIMode, it must still be used for apply-time
	// differences in destroy plans too, so we can make use of that here as
	// well.
	if plan.UIMode == plans.RefreshOnlyMode {
		newState.CheckResults = plan.Checks.DeepCopy()
	}

	return newState, diags
}
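
// applyGraph builds the graph to be walked when applying the given plan
// against the given configuration, along with the walk operation to use for
// that graph. It decodes the variable values recorded in the plan and fills
// in placeholders for any root module variables the caller left unset, so
// that their defaults are re-evaluated during the graph walk.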
func (c *Context) applyGraph(plan *plans.Plan, config *configs.Config, validate bool) (*Graph, walkOperation, tfdiags.Diagnostics) {
	var diags tfdiags.Diagnostics

	variables := InputValues{}
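	// Variable values are recorded in the plan as serialized dynamic values,
	// so decode each one back into a cty.Value before handing it to the
	// graph builder; a value that fails to decode indicates a damaged or
	// incompatible plan file.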
	for name, dyVal := range plan.VariableValues {
		val, err := dyVal.Decode(cty.DynamicPseudoType)
		if err != nil {
			diags = diags.Append(tfdiags.Sourceless(
				tfdiags.Error,
				"Invalid variable value in plan",
				fmt.Sprintf("Invalid value for variable %q recorded in plan file: %s.", name, err),
			))
			continue
		}

		variables[name] = &InputValue{
			Value:      val,
			SourceType: ValueFromPlan,
		}
	}
	if diags.HasErrors() {
		return nil, walkApply, diags
	}

	// The plan.VariableValues field only records variables that were actually
	// set by the caller in the PlanOpts, so we may need to provide
	// placeholders for any other variables that the user didn't set, in
	// which case OpenTofu will once again use the default value from the
	// configuration when we visit these variables during the graph walk.
	for name := range config.Module.Variables {
		if _, ok := variables[name]; ok {
			continue
		}
		variables[name] = &InputValue{
			Value:      cty.NilVal,
			SourceType: ValueFromPlan,
		}
	}

	operation := walkApply
	if plan.UIMode == plans.DestroyMode {
		// FIXME: Due to differences in how objects must be handled in the
		// graph and evaluated during a complete destroy, we must continue to
		// use plans.DestroyMode to switch on this behavior. If all objects
		// which require special destroy handling can be tracked in the plan,
		// then this switch will no longer be needed and we can remove the
		// walkDestroy operation mode.
		// TODO: Audit that and remove walkDestroy as an operation mode.
		operation = walkDestroy
	}

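	// Build the apply graph from the configuration, the planned change set,
	// and the prior state, rooted at the root module instance.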
core: Handle root and child module input variables consistently Previously we had a significant discrepancy between these two situations: we wrote the raw root module variables directly into the EvalContext and then applied type conversions only at expression evaluation time, while for child modules we converted and validated the values while visiting the variable graph node and wrote only the _final_ value into the EvalContext. This confusion seems to have been the root cause for #29899, where validation rules for root module variables were being applied at the wrong point in the process, prior to type conversion. To fix that bug and also make similar mistakes less likely in the future, I've made the root module variable handling more like the child module variable handling in the following ways: - The "raw value" (exactly as given by the user) lives only in the graph node representing the variable, which mirrors how the _expression_ for a child module variable lives in its graph node. This means that the flow for the two is the same except that there's no expression evaluation step for root module variables, because they arrive as constant values from the caller. - The set of variable values in the EvalContext is always only "final" values, after type conversion is complete. That in turn means we no longer need to do "just in time" conversion in evaluationStateData.GetInputVariable, and can just return the value exactly as stored, which is consistent with how we handle all other references between objects. This diff is noisier than I'd like because of how much it takes to wire a new argument (the raw variable values) through to the plan graph builder, but those changes are pretty mechanical and the interesting logic lives inside the plan graph builder itself, in NodeRootVariable, and the shared helper functions in eval_variable.go. While here I also took the opportunity to fix a historical API wart in EvalContext, where SetModuleCallArguments was built to take a set of variable values all at once but our current caller always calls with only one at a time. That is now just SetModuleCallArgument singular, to match with the new SetRootModuleArgument to deal with root module variables.
2021-11-10 19:29:45 -06:00
graph, moreDiags := (&ApplyGraphBuilder{
Config: config,
Changes: plan.Changes,
State: plan.PriorState,
RootVariableValues: variables,
Plugins: c.plugins,
Targets: plan.TargetAddrs,
ForceReplace: plan.ForceReplaceAddrs,
2022-10-18 14:34:37 -05:00
Operation: operation,
ExternalReferences: plan.ExternalReferences,
core: Functional-style API for terraform.Context Previously terraform.Context was built in an unfortunate way where all of the data was provided up front in terraform.NewContext and then mutated directly by subsequent operations. That made the data flow hard to follow, commonly leading to bugs, and also meant that we were forced to take various actions too early in terraform.NewContext, rather than waiting until a more appropriate time during an operation. This (enormous) commit changes terraform.Context so that its fields are broadly just unchanging data about the execution context (current workspace name, available plugins, etc) whereas the main data Terraform works with arrives via individual method arguments and is returned in return values. Specifically, this means that terraform.Context no longer "has-a" config, state, and "planned changes", instead holding on to those only temporarily during an operation. The caller is responsible for propagating the outcome of one step into the next step so that the data flow between operations is actually visible. However, since that's a change to the main entry points in the "terraform" package, this commit also touches every file in the codebase which interacted with those APIs. Most of the noise here is in updating tests to take the same actions using the new API style, but this also affects the main-code callers in the backends and in the command package. My goal here was to refactor without changing observable behavior, but in practice there are a couple externally-visible behavior variations here that seemed okay in service of the broader goal: - The "terraform graph" command is no longer hooked directly into the core graph builders, because that's no longer part of the public API. However, I did include a couple new Context functions whose contract is to produce a UI-oriented graph, and _for now_ those continue to return the physical graph we use for those operations. There's no exported API for generating the "validate" and "eval" graphs, because neither is particularly interesting in its own right, and so "terraform graph" no longer supports those graph types. - terraform.NewContext no longer has the responsibility for collecting all of the provider schemas up front. Instead, we wait until we need them. However, that means that some of our error messages now have a slightly different shape due to unwinding through a differently-shaped call stack. As of this commit we also end up reloading the schemas multiple times in some cases, which is functionally acceptable but likely represents a performance regression. I intend to rework this to use caching, but I'm saving that for a later commit because this one is big enough already. The proximal reason for this change is to resolve the chicken/egg problem whereby there was previously no single point where we could apply "moved" statements to the previous run state before creating a plan. With this change in place, we can now do that as part of Context.Plan, prior to forking the input state into the three separate state artifacts we use during planning. However, this is at least the third project in a row where the previous API design led to piling more functionality into terraform.NewContext and then working around the incorrect order of operations that produces, so I intend that by paying the cost/risk of this large diff now we can in turn reduce the cost/risk of future projects that relate to our main workflow actions.
2021-08-24 14:06:38 -05:00
}).Build(addrs.RootModuleInstance)
	diags = diags.Append(moreDiags)
	if moreDiags.HasErrors() {
		return nil, walkApply, diags
	}

	return graph, operation, diags
}
// ApplyGraphForUI is a last vestage of graphs in the public interface of
// Context (as opposed to graphs as an implementation detail) intended only for
// use by the "tofu graph" command when asked to render an apply-time
// graph.
//
// The result of this is intended only for rendering to the user as a dot
// graph, and so may change in future in order to make the result more useful
// in that context, even if it drifts away from the physical graph that OpenTofu
// Core currently uses as an implementation detail of planning.
func (c *Context) ApplyGraphForUI(plan *plans.Plan, config *configs.Config) (*Graph, tfdiags.Diagnostics) {
// For now though, this really is just the internal graph, confusing
// implementation details and all.
var diags tfdiags.Diagnostics
graph, _, moreDiags := c.applyGraph(plan, config, false)
diags = diags.Append(moreDiags)
return graph, diags
}
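
// renderApplyGraphForUI is an illustrative sketch (not part of the original
// file) showing how a caller such as the "tofu graph" command might consume
// ApplyGraphForUI: build the UI-oriented graph from an already-loaded plan
// and configuration, then hand the result to whatever DOT renderer the
// caller already uses. The plan and config values are assumed to come from
// the caller's normal plan-loading path, and the function name itself is
// hypothetical.
func renderApplyGraphForUI(c *Context, plan *plans.Plan, config *configs.Config) (*Graph, error) {
	graph, diags := c.ApplyGraphForUI(plan, config)
	if diags.HasErrors() {
		// Collapse the accumulated diagnostics into a single error so a
		// simple caller can report it without depending on tfdiags directly.
		return nil, diags.Err()
	}
	// The returned *Graph is intended only for rendering to the user (e.g.
	// as a DOT graph) and may drift from the physical graph that OpenTofu
	// Core uses internally during planning.
	return graph, nil
}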