opentofu/internal/command/meta_config.go

// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0

package command

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"sort"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/trace"
"github.com/hashicorp/terraform/internal/configs"
"github.com/hashicorp/terraform/internal/configs/configload"
"github.com/hashicorp/terraform/internal/configs/configschema"
"github.com/hashicorp/terraform/internal/initwd"
"github.com/hashicorp/terraform/internal/registry"
"github.com/hashicorp/terraform/internal/terraform"
"github.com/hashicorp/terraform/internal/tfdiags"
)
// normalizePath normalizes a given path so that it is, if possible, relative
// to the current working directory. This is primarily used to prepare
// paths used to load configuration, because we want to prefer recording
// relative paths in source code references within the configuration.
func (m *Meta) normalizePath(path string) string {
	m.fixupMissingWorkingDir()
	return m.WorkingDir.NormalizePath(path)
}

// loadConfig reads a configuration from the given directory, which should
// contain a root module and already have any required descendent modules
// installed.
func (m *Meta) loadConfig(rootDir string) (*configs.Config, tfdiags.Diagnostics) {
	var diags tfdiags.Diagnostics
	rootDir = m.normalizePath(rootDir)
	loader, err := m.initConfigLoader()
	if err != nil {
		diags = diags.Append(err)
		return nil, diags
	}
	config, hclDiags := loader.LoadConfig(rootDir)
	diags = diags.Append(hclDiags)
	return config, diags
}

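// A minimal usage sketch (illustrative only: the enclosing command method,
// its exit-code convention, and the "." root directory are assumptions, not
// taken from a specific caller):
//
//	cfg, diags := m.loadConfig(".")
//	if diags.HasErrors() {
//		m.showDiagnostics(diags)
//		return 1
//	}
//	// cfg.Module describes the root module; cfg.Children holds the
//	// already-installed descendent modules, keyed by module call name.
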
// loadConfigWithTests matches loadConfig, except it also loads any test files
// into the config alongside the main configuration.
func (m *Meta) loadConfigWithTests(rootDir, testDir string) (*configs.Config, tfdiags.Diagnostics) {
	var diags tfdiags.Diagnostics
	rootDir = m.normalizePath(rootDir)
	loader, err := m.initConfigLoader()
	if err != nil {
		diags = diags.Append(err)
		return nil, diags
	}
	config, hclDiags := loader.LoadConfigWithTests(rootDir, testDir)
	diags = diags.Append(hclDiags)
	return config, diags
}

// loadSingleModule reads configuration from the given directory and returns
// a description of that module only, without attempting to assemble a module
// tree for referenced child modules.
//
// Most callers should use loadConfig. This method exists to support early
// initialization use-cases where the root module must be inspected in order
// to determine what else needs to be installed before the full configuration
// can be used.
func (m *Meta) loadSingleModule(dir string) (*configs.Module, tfdiags.Diagnostics) {
	var diags tfdiags.Diagnostics
	dir = m.normalizePath(dir)
	loader, err := m.initConfigLoader()
	if err != nil {
		diags = diags.Append(err)
		return nil, diags
	}
	module, hclDiags := loader.Parser().LoadConfigDir(dir)
	diags = diags.Append(hclDiags)
	return module, diags
}

// loadSingleModuleWithTests matches loadSingleModule except it also loads any
// tests for the target module.
func (m *Meta) loadSingleModuleWithTests(dir string, testDir string) (*configs.Module, tfdiags.Diagnostics) {
	var diags tfdiags.Diagnostics
	dir = m.normalizePath(dir)
	loader, err := m.initConfigLoader()
	if err != nil {
		diags = diags.Append(err)
		return nil, diags
	}
	module, hclDiags := loader.Parser().LoadConfigDirWithTests(dir, testDir)
	diags = diags.Append(hclDiags)
	return module, diags
}

// dirIsConfigPath checks if the given path is a directory that contains at
// least one Terraform configuration file (.tf or .tf.json), returning true
// if so.
//
// In the unlikely event that the underlying config loader cannot be initialized,
// this function optimistically returns true, assuming that the caller will
// then do some other operation that requires the config loader and get an
// error at that point.
func (m *Meta) dirIsConfigPath(dir string) bool {
	loader, err := m.initConfigLoader()
	if err != nil {
		return true
	}
	return loader.IsConfigDir(dir)
}

// loadBackendConfig reads configuration from the given directory and returns
// the backend configuration defined by that module, if any. Nil is returned
// if the specified module does not have an explicit backend configuration.
//
// This is a convenience method for command code that will delegate to the
// configured backend to do most of its work, since in that case it is the
// backend that will do the full configuration load.
//
// Although this method returns only the backend configuration, at present it
// actually loads and validates the entire configuration first. Therefore errors
// returned may be about other aspects of the configuration. This behavior may
// change in future, so callers must not rely on it. (That is, they must expect
// that a call to loadSingleModule or loadConfig could fail on the same
// directory even if loadBackendConfig succeeded.)
func (m *Meta) loadBackendConfig(rootDir string) (*configs.Backend, tfdiags.Diagnostics) {
	mod, diags := m.loadSingleModule(rootDir)
	// Only return error diagnostics at this point. Any warnings will be caught
	// again later and duplicated in the output.
	if diags.HasErrors() {
		return nil, diags
	}
	if mod.CloudConfig != nil {
		backendConfig := mod.CloudConfig.ToBackendConfig()
		return &backendConfig, nil
	}
	return mod.Backend, nil
}

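// A sketch of the typical caller pattern (illustrative only: the enclosing
// command method and its error handling are assumptions):
//
//	backendConfig, diags := m.loadBackendConfig(".")
//	if diags.HasErrors() {
//		m.showDiagnostics(diags)
//		return 1
//	}
//	if backendConfig == nil {
//		// The root module has neither a backend nor a cloud block, so the
//		// caller decides which default backend to use.
//	}
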
// loadHCLFile reads an arbitrary HCL file and returns the unprocessed body
// representing its toplevel. Most callers should use one of the more
// specialized "load..." methods to get a higher-level representation.
func (m *Meta) loadHCLFile(filename string) (hcl.Body, tfdiags.Diagnostics) {
	var diags tfdiags.Diagnostics
	filename = m.normalizePath(filename)
	loader, err := m.initConfigLoader()
	if err != nil {
		diags = diags.Append(err)
		return nil, diags
	}
	body, hclDiags := loader.Parser().LoadHCLFile(filename)
	diags = diags.Append(hclDiags)
	return body, diags
}

// installModules reads a root module from the given directory and attempts
// recursively to install all of its descendent modules.
//
// The given hooks object will be notified of installation progress, which
// can then be relayed to the end-user. The uiModuleInstallHooks type in
// this package has a reasonable implementation for displaying notifications
// via a provided cli.Ui.
func (m *Meta) installModules(ctx context.Context, rootDir, testsDir string, upgrade bool, hooks initwd.ModuleInstallHooks) (abort bool, diags tfdiags.Diagnostics) {
ctx, span := tracer.Start(ctx, "install modules")
defer span.End()
rootDir = m.normalizePath(rootDir)
err := os.MkdirAll(m.modulesDir(), os.ModePerm)
if err != nil {
diags = diags.Append(fmt.Errorf("failed to create local modules directory: %s", err))
return true, diags
}
loader, err := m.initConfigLoader()
if err != nil {
diags = diags.Append(err)
return true, diags
}
inst := initwd.NewModuleInstaller(m.modulesDir(), loader, m.registryClient())
_, moreDiags := inst.InstallModules(ctx, rootDir, testsDir, upgrade, hooks)
diags = diags.Append(moreDiags)
if ctx.Err() == context.Canceled {
m.showDiagnostics(diags)
m.Ui.Error("Module installation was canceled by an interrupt signal.")
return true, diags
}
return false, diags
}
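// A sketch of how a command might call this (illustrative only: the hook
// value's field names are an assumption about uiModuleInstallHooks, and the
// "tests" directory name and ctx plumbing come from the caller):
//
//	hooks := uiModuleInstallHooks{Ui: m.Ui, ShowLocalPaths: true}
//	abort, diags := m.installModules(ctx, ".", "tests", false, hooks)
//	if abort || diags.HasErrors() {
//		m.showDiagnostics(diags)
//		return 1
//	}
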
// initDirFromModule initializes the given directory (which should be
// pre-verified as empty by the caller) by copying the source code from the
// given module address.
//
// Internally this runs similar steps to installModules.
// The given hooks object will be notified of installation progress, which
// can then be relayed to the end-user. The uiModuleInstallHooks type in
// this package has a reasonable implementation for displaying notifications
// via a provided cli.Ui.
func (m *Meta) initDirFromModule(ctx context.Context, targetDir string, addr string, hooks initwd.ModuleInstallHooks) (abort bool, diags tfdiags.Diagnostics) {
ctx, span := tracer.Start(ctx, "initialize directory from module", trace.WithAttributes(
attribute.String("source_addr", addr),
))
defer span.End()
loader, err := m.initConfigLoader()
if err != nil {
diags = diags.Append(err)
return true, diags
}
targetDir = m.normalizePath(targetDir)
moreDiags := initwd.DirFromModule(ctx, loader, targetDir, m.modulesDir(), addr, m.registryClient(), hooks)
diags = diags.Append(moreDiags)
if ctx.Err() == context.Canceled {
m.showDiagnostics(diags)
m.Ui.Error("Module initialization was canceled by an interrupt signal.")
return true, diags
}
return false, diags
}
// inputForSchema uses interactive prompts to try to populate any
// not-yet-populated required attributes in the given object value to
// comply with the given schema.
//
// An error will be returned if input is disabled for this meta or if
// values cannot be obtained for some other operational reason. Errors are
// not returned for invalid input since the input loop itself will report
// that interactively.
//
// It is not guaranteed that the result will be valid, since certain attribute
// types and nested blocks are not supported for input.
//
// The given value must conform to the given schema. If not, this method will
// panic.
func (m *Meta) inputForSchema(given cty.Value, schema *configschema.Block) (cty.Value, error) {
	if given.IsNull() || !given.IsKnown() {
		// This is not reasonable input, but we'll tolerate it anyway and
		// just pass it through for the caller to handle downstream.
		return given, nil
	}
	retVals := given.AsValueMap()
	names := make([]string, 0, len(schema.Attributes))
	for name, attrS := range schema.Attributes {
		if attrS.Required && retVals[name].IsNull() && attrS.Type.IsPrimitiveType() {
			names = append(names, name)
		}
	}
	sort.Strings(names)
	input := m.UIInput()
	for _, name := range names {
		attrS := schema.Attributes[name]
		for {
			strVal, err := input.Input(context.Background(), &terraform.InputOpts{
				Id:          name,
				Query:       name,
				Description: attrS.Description,
			})
			if err != nil {
				return cty.UnknownVal(schema.ImpliedType()), fmt.Errorf("%s: %s", name, err)
			}
			val := cty.StringVal(strVal)
			val, err = convert.Convert(val, attrS.Type)
			if err != nil {
				m.showDiagnostics(fmt.Errorf("Invalid value: %s", err))
				continue
			}
			retVals[name] = val
			break
		}
	}
	return cty.ObjectVal(retVals), nil
}

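// A small sketch of a conforming call (illustrative only: the "token"
// attribute and the schema shape are invented for this example):
//
//	schema := &configschema.Block{
//		Attributes: map[string]*configschema.Attribute{
//			"token": {Type: cty.String, Required: true},
//		},
//	}
//	given := cty.ObjectVal(map[string]cty.Value{
//		"token": cty.NullVal(cty.String),
//	})
//	val, err := m.inputForSchema(given, schema)
//	// On success, val has "token" populated from the interactive prompt.
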
// configSources returns the source cache from the receiver's config loader,
// which the caller must not modify.
//
// If a config loader has not yet been instantiated then no files could have
// been loaded already, so this method returns a nil map in that case.
func (m *Meta) configSources() map[string][]byte {
	if m.configLoader == nil {
		return nil
	}
	return m.configLoader.Sources()
}

// modulesDir returns the path of the local module cache directory within the
// current working directory's data directory.
func (m *Meta) modulesDir() string {
	return filepath.Join(m.DataDir(), "modules")
}

// registerSynthConfigSource allows commands to add synthetic additional source
// buffers to the config loader's cache of sources (as returned by
// configSources), which is useful when a command is directly parsing something
// from the command line that may produce diagnostics, so that diagnostic
// snippets can still be produced.
//
// If this is called before a configLoader has been initialized then it will
// try to initialize the loader but ignore any initialization failure, turning
// the call into a no-op. (We presume that a caller will later call a different
// function that also initializes the config loader as a side effect, at which
// point those errors can be returned.)
func (m *Meta) registerSynthConfigSource(filename string, src []byte) {
	loader, err := m.initConfigLoader()
	if err != nil || loader == nil {
		return // treated as no-op, since this is best-effort
	}
	loader.Parser().ForceFileSource(filename, src)
}

// initConfigLoader initializes the shared configuration loader if it isn't
// already initialized.
//
// If the loader cannot be created for some reason then an error is returned
// and no loader is created. Subsequent calls will presumably see the same
// error. Loader initialization errors will tend to prevent any further use
// of most Terraform features, so callers should report any error and safely
// terminate.
func (m *Meta) initConfigLoader() (*configload.Loader, error) {
	if m.configLoader == nil {
		loader, err := configload.NewLoader(&configload.Config{
			ModulesDir: m.modulesDir(),
			Services:   m.Services,
		})
		if err != nil {
			return nil, err
		}
		loader.AllowLanguageExperiments(m.AllowExperimentalFeatures)
		m.configLoader = loader
		if m.View != nil {
			m.View.SetConfigSources(loader.Sources)
		}
	}
	return m.configLoader, nil
}

// registryClient instantiates and returns a new Terraform Registry client.
func (m *Meta) registryClient() *registry.Client {
	return registry.NewClient(m.Services, nil)
}

// configValueFromCLI parses a configuration value that was provided in a
// context in the CLI where only strings can be provided, such as on the
// command line or in an environment variable, and returns the resulting
// value.
func configValueFromCLI(synthFilename, rawValue string, wantType cty.Type) (cty.Value, tfdiags.Diagnostics) {
	var diags tfdiags.Diagnostics
	switch {
	case wantType.IsPrimitiveType():
		// Primitive types are handled as conversions from string.
		val := cty.StringVal(rawValue)
		var err error
		val, err = convert.Convert(val, wantType)
		if err != nil {
			diags = diags.Append(tfdiags.Sourceless(
				tfdiags.Error,
				"Invalid backend configuration value",
				fmt.Sprintf("Invalid backend configuration argument %s: %s", synthFilename, err),
			))
			val = cty.DynamicVal // just so we return something valid-ish
		}
		return val, diags
	default:
		// Non-primitives are parsed as HCL expressions
		src := []byte(rawValue)
		expr, hclDiags := hclsyntax.ParseExpression(src, synthFilename, hcl.Pos{Line: 1, Column: 1})
		diags = diags.Append(hclDiags)
		if hclDiags.HasErrors() {
			return cty.DynamicVal, diags
		}
		val, hclDiags := expr.Value(nil)
		diags = diags.Append(hclDiags)
		if hclDiags.HasErrors() {
			val = cty.DynamicVal
		}
		return val, diags
	}
}

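// Two illustrative calls (the synthetic filenames and values are made up,
// and diagnostics are ignored for brevity):
//
//	// Primitive target type: the raw string is converted directly.
//	regionVal, _ := configValueFromCLI("<value for region>", "us-east-1", cty.String)
//
//	// Non-primitive target type: the raw string is parsed and evaluated as
//	// an HCL expression.
//	tagsVal, _ := configValueFromCLI("<value for tags>", `{ env = "dev" }`, cty.Map(cty.String))
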
// rawFlags is a flag.Value implementation that just appends raw flag
// names and values to a slice.
type rawFlags struct {
	flagName string
	items    *[]rawFlag
}

func newRawFlags(flagName string) rawFlags {
	var items []rawFlag
	return rawFlags{
		flagName: flagName,
		items:    &items,
	}
}

func (f rawFlags) Empty() bool {
	if f.items == nil {
		return true
	}
	return len(*f.items) == 0
}

func (f rawFlags) AllItems() []rawFlag {
	if f.items == nil {
		return nil
	}
	return *f.items
}

func (f rawFlags) Alias(flagName string) rawFlags {
	return rawFlags{
		flagName: flagName,
		items:    f.items,
	}
}

func (f rawFlags) String() string {
	return ""
}

func (f rawFlags) Set(str string) error {
	*f.items = append(*f.items, rawFlag{
		Name:  f.flagName,
		Value: str,
	})
	return nil
}

type rawFlag struct {
	Name  string
	Value string
}

func (f rawFlag) String() string {
	return fmt.Sprintf("%s=%q", f.Name, f.Value)
}