opentofu/internal/backend/remote/backend_context.go

package remote

import (
	"context"
"fmt"
"log"
"strings"
tfe "github.com/hashicorp/go-tfe"
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hclsyntax"
"github.com/hashicorp/terraform/internal/backend"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/states/statemgr"
"github.com/hashicorp/terraform/internal/terraform"
"github.com/hashicorp/terraform/internal/tfdiags"
"github.com/zclconf/go-cty/cty"
)

// LocalRun implements backend.Local.
func (b *Remote) LocalRun(op *backend.Operation) (*backend.LocalRun, statemgr.Full, tfdiags.Diagnostics) {
	var diags tfdiags.Diagnostics

	ret := &backend.LocalRun{
		PlanOpts: &terraform.PlanOpts{
			Mode:    op.PlanMode,
			Targets: op.Targets,
		},
	}

	op.StateLocker = op.StateLocker.WithContext(context.Background())

	// Get the remote workspace name.
	remoteWorkspaceName := b.getRemoteWorkspaceName(op.Workspace)

	// Get the latest state.
	log.Printf("[TRACE] backend/remote: requesting state manager for workspace %q", remoteWorkspaceName)
	stateMgr, err := b.StateMgr(op.Workspace)
	if err != nil {
		diags = diags.Append(fmt.Errorf("error loading state: %w", err))
		return nil, nil, diags
	}

	log.Printf("[TRACE] backend/remote: requesting state lock for workspace %q", remoteWorkspaceName)
	if diags := op.StateLocker.Lock(stateMgr, op.Type.String()); diags.HasErrors() {
		return nil, nil, diags
	}

	defer func() {
		// If we're returning with errors, and thus not producing a valid
		// context, we'll want to avoid leaving the remote workspace locked.
		if diags.HasErrors() {
			diags = diags.Append(op.StateLocker.Unlock())
		}
	}()

	log.Printf("[TRACE] backend/remote: reading remote state for workspace %q", remoteWorkspaceName)
	if err := stateMgr.RefreshState(); err != nil {
		diags = diags.Append(fmt.Errorf("error loading state: %w", err))
		return nil, nil, diags
	}

	// Initialize our context options
	var opts terraform.ContextOpts
	if v := b.ContextOpts; v != nil {
		opts = *v
	}

	// Copy set options from the operation
	opts.UIInput = op.UIIn

	// Load the latest state. If we enter contextFromPlanFile below then the
	// state snapshot in the plan file must match this, or else it'll return
	// error diagnostics.
	log.Printf("[TRACE] backend/remote: retrieving remote state snapshot for workspace %q", remoteWorkspaceName)
	ret.InputState = stateMgr.State()

	log.Printf("[TRACE] backend/remote: loading configuration for the current working directory")
	config, configDiags := op.ConfigLoader.LoadConfig(op.ConfigDir)
	diags = diags.Append(configDiags)
	if configDiags.HasErrors() {
		return nil, nil, diags
	}
	ret.Config = config

	if op.AllowUnsetVariables {
		// If we're not going to use the variables in an operation we'll be
		// more lax about them, stubbing out any unset ones as unknown.
		// This gives us enough information to produce a consistent context,
		// but not enough information to run a real operation (plan, apply, etc)
		ret.PlanOpts.SetVariables = stubAllVariables(op.Variables, config.Module.Variables)
	} else {
		// The underlying API expects us to use the opaque workspace id to request
		// variables, so we'll need to look that up using our organization name
		// and workspace name.
		remoteWorkspaceID, err := b.getRemoteWorkspaceID(context.Background(), op.Workspace)
		if err != nil {
			diags = diags.Append(fmt.Errorf("error finding remote workspace: %w", err))
			return nil, nil, diags
		}
		w, err := b.fetchWorkspace(context.Background(), b.organization, op.Workspace)
		if err != nil {
			diags = diags.Append(fmt.Errorf("error loading workspace: %w", err))
			return nil, nil, diags
		}

		if isLocalExecutionMode(w.ExecutionMode) {
			log.Printf("[TRACE] skipping retrieving variables from workspace %s/%s (%s), workspace is in Local Execution mode", remoteWorkspaceName, b.organization, remoteWorkspaceID)
		} else {
			log.Printf("[TRACE] backend/remote: retrieving variables from workspace %s/%s (%s)", remoteWorkspaceName, b.organization, remoteWorkspaceID)
			tfeVariables, err := b.client.Variables.List(context.Background(), remoteWorkspaceID, nil)
			if err != nil && err != tfe.ErrResourceNotFound {
				diags = diags.Append(fmt.Errorf("error loading variables: %w", err))
				return nil, nil, diags
			}

			if tfeVariables != nil {
				if op.Variables == nil {
					op.Variables = make(map[string]backend.UnparsedVariableValue)
				}

				for _, v := range tfeVariables.Items {
					if v.Category == tfe.CategoryTerraform {
						if _, ok := op.Variables[v.Key]; !ok {
							op.Variables[v.Key] = &remoteStoredVariableValue{
								definition: v,
							}
						}
					}
				}
			}
		}

		if op.Variables != nil {
			variables, varDiags := backend.ParseVariableValues(op.Variables, config.Module.Variables)
			diags = diags.Append(varDiags)
			if diags.HasErrors() {
				return nil, nil, diags
			}
			ret.PlanOpts.SetVariables = variables
		}
	}

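	// With the state, configuration, and variables all resolved, construct
	// the core Terraform context that will drive the local operation.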
	tfCtx, ctxDiags := terraform.NewContext(&opts)
	diags = diags.Append(ctxDiags)
	ret.Core = tfCtx

	log.Printf("[TRACE] backend/remote: finished building terraform.Context")

	return ret, stateMgr, diags
}
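
// getRemoteWorkspaceName returns the name of the workspace in the remote
// system that corresponds to the given local workspace name, taking into
// account the configured exact workspace name or workspace prefix.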
func (b *Remote) getRemoteWorkspaceName(localWorkspaceName string) string {
	switch {
	case localWorkspaceName == backend.DefaultStateName:
		// The default workspace name is a special case, for when the backend
		// is configured with an exact remote workspace rather than with
		// a remote workspace _prefix_.
		return b.workspace
	case b.prefix != "" && !strings.HasPrefix(localWorkspaceName, b.prefix):
		return b.prefix + localWorkspaceName
	default:
		return localWorkspaceName
	}
}
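
// getRemoteWorkspace reads the workspace record from the remote API that
// corresponds to the given local workspace name.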
func (b *Remote) getRemoteWorkspace(ctx context.Context, localWorkspaceName string) (*tfe.Workspace, error) {
	remoteWorkspaceName := b.getRemoteWorkspaceName(localWorkspaceName)

log.Printf("[TRACE] backend/remote: looking up workspace for %s/%s", b.organization, remoteWorkspaceName)
remoteWorkspace, err := b.client.Workspaces.Read(ctx, b.organization, remoteWorkspaceName)
	if err != nil {
		return nil, err
	}

	return remoteWorkspace, nil
}
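
// getRemoteWorkspaceID returns the opaque workspace ID that the remote API
// uses for the workspace corresponding to the given local workspace name.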
func (b *Remote) getRemoteWorkspaceID(ctx context.Context, localWorkspaceName string) (string, error) {
	remoteWorkspace, err := b.getRemoteWorkspace(ctx, localWorkspaceName)
	if err != nil {
		return "", err
	}

	return remoteWorkspace.ID, nil
}
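
// stubAllVariables produces a value for every declared variable, using an
// unknown value as a placeholder for any variable that is unset or fails to
// parse. The result is enough to build a consistent context, but not enough
// to run a real operation such as plan or apply.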
func stubAllVariables(vv map[string]backend.UnparsedVariableValue, decls map[string]*configs.Variable) terraform.InputValues {
	ret := make(terraform.InputValues, len(decls))

	for name, cfg := range decls {
		raw, exists := vv[name]
		if !exists {
			ret[name] = &terraform.InputValue{
				Value:      cty.UnknownVal(cfg.Type),
				SourceType: terraform.ValueFromConfig,
			}
			continue
		}

		val, diags := raw.ParseVariableValue(cfg.ParsingMode)
		if diags.HasErrors() {
			ret[name] = &terraform.InputValue{
				Value:      cty.UnknownVal(cfg.Type),
				SourceType: terraform.ValueFromConfig,
			}
			continue
		}
		ret[name] = val
	}

	return ret
}

// remoteStoredVariableValue is a backend.UnparsedVariableValue implementation
// that translates from the go-tfe representation of stored variables into
// the Terraform Core backend representation of variables.
type remoteStoredVariableValue struct {
	definition *tfe.Variable
}
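
// Compile-time assertion that remoteStoredVariableValue satisfies the
// backend.UnparsedVariableValue interface.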
var _ backend.UnparsedVariableValue = (*remoteStoredVariableValue)(nil)
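
// ParseVariableValue implements backend.UnparsedVariableValue. Sensitive
// variables are returned as unknown placeholder values along with a warning,
// HCL-flagged variables are parsed and evaluated as HCL expressions, and all
// other variables are returned as literal strings.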
func (v *remoteStoredVariableValue) ParseVariableValue(mode configs.VariableParsingMode) (*terraform.InputValue, tfdiags.Diagnostics) {
	var diags tfdiags.Diagnostics
	var val cty.Value

	switch {
	case v.definition.Sensitive:
		// If it's marked as sensitive then it's not available for use in
		// local operations. We'll use an unknown value as a placeholder for
		// it so that operations that don't need it might still work, but
		// we'll also produce a warning about it to add context for any
		// errors that might result here.
		val = cty.DynamicVal
		if !v.definition.HCL {
			// If it's not marked as HCL then we at least know that the
			// value must be a string, so we'll set that in case it allows
			// us to do some more precise type checking.
			val = cty.UnknownVal(cty.String)
		}

		diags = diags.Append(tfdiags.Sourceless(
			tfdiags.Warning,
			fmt.Sprintf("Value for var.%s unavailable", v.definition.Key),
			fmt.Sprintf("The value of variable %q is marked as sensitive in the remote workspace. This operation always runs locally, so the value for that variable is not available.", v.definition.Key),
		))
	case v.definition.HCL:
		// If the variable value is marked as being in HCL syntax, we need to
		// parse it the same way as it would be interpreted in a .tfvars
		// file because that is how it would get passed to Terraform CLI for
		// a remote operation and we want to mimic that result as closely as
		// possible.
		var exprDiags hcl.Diagnostics
		expr, exprDiags := hclsyntax.ParseExpression([]byte(v.definition.Value), "<remote workspace>", hcl.Pos{Line: 1, Column: 1})
		if expr != nil {
			var moreDiags hcl.Diagnostics
			val, moreDiags = expr.Value(nil)
			exprDiags = append(exprDiags, moreDiags...)
		} else {
			// We'll have already put some errors in exprDiags above, so we'll
			// just stub out the value here.
			val = cty.DynamicVal
		}

		// We don't have sufficient context to return decent error messages
		// for syntax errors in the remote values, so we'll just return a
		// generic message instead for now.
		// (More complete error messages will still result from true remote
		// operations, because they'll run on the remote system where we've
		// materialized the values into a tfvars file we can report from.)
		if exprDiags.HasErrors() {
			diags = diags.Append(tfdiags.Sourceless(
				tfdiags.Error,
				fmt.Sprintf("Invalid expression for var.%s", v.definition.Key),
				fmt.Sprintf("The value of variable %q is marked in the remote workspace as being specified in HCL syntax, but the given value is not valid HCL. Stored variable values must be valid literal expressions and may not contain references to other variables or calls to functions.", v.definition.Key),
			))
		}
	default:
		// A variable value _not_ marked as HCL is always a string, given
		// literally.
		val = cty.StringVal(v.definition.Value)
	}

	return &terraform.InputValue{
		Value: val,

		// We mark these as "from input" with the rationale that entering
		// variable values into the Terraform Cloud or Enterprise UI is,
		// roughly speaking, a similar idea to entering variable values at
		// the interactive CLI prompts. It's not a perfect correspondence,
		// but it's closer than the other options.
		SourceType: terraform.ValueFromInput,
	}, diags
}