opentofu/internal/configs/parser_config.go
Martin Atkins fda0579537 Experiments supported only in alpha/dev builds
We originally introduced the idea of language experiments as a way to get
early feedback on not-yet-proven feature ideas, ideally as part of the
initial exploration of the solution space rather than only after a
solution has become relatively clear.

Unfortunately, our tradeoff of making them available in normal releases
behind an explicit opt-in, in order to make it easier to participate in the
feedback process, had the unintended side-effect of making it feel okay
to use experiments in production and simply endure the warnings they
generate. This in turn has made us reluctant to make use of the
experiments feature at all, lest experiments become de facto production
features which we then feel compelled to preserve even though we aren't
yet ready to graduate them to stable features.

In an attempt to tweak that compromise, here we make the availability of
experiments _at all_ a build-time flag which will not be set by default,
and therefore experiments will not be available in most release builds.

The intent (not yet implemented in this PR) is for our release process to
set this flag only when it knows it's building an alpha release or a
development snapshot not destined for release at all, which will therefore
allow us to still use the alpha releases as a vehicle for giving feedback
participants access to a feature (without needing to install a Go
toolchain) but will not encourage pretending that these features are
production-ready before they graduate from experimental.
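
To make the shape of that flag concrete, here is a minimal sketch of one
way to wire it up, assuming a package-level string set through the Go
linker's -X flag; the names here are illustrative rather than the exact
ones used in this change:

	package main

	import "fmt"

	// experimentsAllowed is empty in ordinary builds; a release process
	// building an alpha or a development snapshot would set it to a
	// non-empty value at link time, for example:
	//
	//	go build -ldflags="-X main.experimentsAllowed=yes"
	var experimentsAllowed string

	// ExperimentsAllowed reports whether this build permits opting in
	// to experimental features.
	func ExperimentsAllowed() bool {
		return experimentsAllowed != ""
	}

	func main() {
		fmt.Println("experiments allowed:", ExperimentsAllowed())
	}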

Only language experiments have an explicit framework for dealing with
them which outlives any particular experiment, so most of the changes here
are to that generalized mechanism. However, the intent is that non-language
experiments, such as experimental CLI commands, would in future also
check Meta.AllowExperimentalFeatures and gate the use of those experiments
too, so that we can be consistent that experimental features will never
be available unless you explicitly choose to use an alpha release or
a custom build from source code.
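
As a sketch of what that gate could look like for a command, where only
the Meta.AllowExperimentalFeatures field name is taken from this change
and everything else is hypothetical:

	package main

	import (
		"errors"
		"fmt"
	)

	// Meta is a stand-in for the CLI's shared command state; only the
	// AllowExperimentalFeatures field is named by this change.
	type Meta struct {
		AllowExperimentalFeatures bool
	}

	// runExperimentalCommand is a hypothetical experimental command
	// that refuses to run unless the build allows experiments.
	func runExperimentalCommand(m Meta) error {
		if !m.AllowExperimentalFeatures {
			return errors.New("this command is experimental and is only available in alpha releases and development builds")
		}
		// ... the experimental behavior would go here ...
		return nil
	}

	func main() {
		err := runExperimentalCommand(Meta{AllowExperimentalFeatures: false})
		fmt.Println(err)
	}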

Since there are already some experiments active at the time of this commit
which were not previously subject to this restriction, we'll pragmatically
leave those as exceptions that will remain generally available for now,
and so this new approach will apply only to new experiments started in the
future. Once those experiments have all concluded, we will be left with
no more exceptions unless we explicitly choose to make an exception for
some reason we've not imagined yet.

It's important that we be able to write tests that rely on experiments
either being available or not being available, so here we're using our
typical approach of making "package main" deal with the global setting
that applies to Terraform CLI executables, while making the layers below
all support fine-grained selection of this behavior so that tests with
different needs can run concurrently without trampling on one another.

As a compromise, the integration tests in the terraform package will
run with experiments enabled _by default_ since we commonly need to
exercise experiments in those tests, but they can selectively opt-out
if they need to by overriding the loader setting back to false again.
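
To illustrate that fine-grained control, here's a sketch of a test in
this package that opts in explicitly; AllowLanguageExperiments is an
assumed name for the parser-level setter behind the p.allowExperiments
field seen in the code below, and the testdata path is hypothetical:

	package configs

	import "testing"

	// TestParseWithExperiments is a sketch: AllowLanguageExperiments
	// is an assumed setter name, and the testdata path is hypothetical.
	func TestParseWithExperiments(t *testing.T) {
		p := NewParser(nil) // a nil filesystem selects the parser's default
		p.AllowLanguageExperiments(true)

		_, diags := p.LoadConfigFile("testdata/experiments.tf")
		if diags.HasErrors() {
			t.Fatalf("unexpected errors: %s", diags.Error())
		}
	}
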
2022-06-17 14:46:07 -07:00

311 lines
8.7 KiB
Go

package configs

import (
	"github.com/hashicorp/hcl/v2"
)

// LoadConfigFile reads the file at the given path and parses it as a config
// file.
//
// If the file cannot be read -- for example, if it does not exist -- then
// a nil *File will be returned along with error diagnostics. Callers may wish
// to disregard the returned diagnostics in this case and instead generate
// their own error message(s) with additional context.
//
// If the returned diagnostics have errors when a non-nil File is returned
// then the File may be incomplete but should be valid enough for careful
// static analysis.
//
// This method wraps LoadHCLFile, and so it inherits the syntax selection
// behaviors documented for that method.
func (p *Parser) LoadConfigFile(path string) (*File, hcl.Diagnostics) {
	return p.loadConfigFile(path, false)
}
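
// An illustrative usage sketch (not part of the original file): a caller
// outside this package might do something like the following, where a nil
// filesystem selects the parser's default:
//
//	p := configs.NewParser(nil)
//	f, diags := p.LoadConfigFile("main.tf")
//	if diags.HasErrors() {
//		// Report the diagnostics; f may still be non-nil and usable
//		// for careful static analysis.
//	}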

// LoadConfigFileOverride is the same as LoadConfigFile except that it relaxes
// certain required attribute constraints in order to interpret the given
// file as an overrides file.
func (p *Parser) LoadConfigFileOverride(path string) (*File, hcl.Diagnostics) {
	return p.loadConfigFile(path, true)
}

func (p *Parser) loadConfigFile(path string, override bool) (*File, hcl.Diagnostics) {
	body, diags := p.LoadHCLFile(path)
	if body == nil {
		return nil, diags
	}

	file := &File{}

	var reqDiags hcl.Diagnostics
	file.CoreVersionConstraints, reqDiags = sniffCoreVersionRequirements(body)
	diags = append(diags, reqDiags...)

	// We'll load the experiments first because other decoding logic in the
	// loop below might depend on these experiments.
	var expDiags hcl.Diagnostics
	file.ActiveExperiments, expDiags = sniffActiveExperiments(body, p.allowExperiments)
	diags = append(diags, expDiags...)

	content, contentDiags := body.Content(configFileSchema)
	diags = append(diags, contentDiags...)

	for _, block := range content.Blocks {
		switch block.Type {

		case "terraform":
			content, contentDiags := block.Body.Content(terraformBlockSchema)
			diags = append(diags, contentDiags...)

			// We ignore the "terraform_version", "language" and "experiments"
			// attributes here because sniffCoreVersionRequirements and
			// sniffActiveExperiments already dealt with those above.

			for _, innerBlock := range content.Blocks {
				switch innerBlock.Type {

				case "backend":
					backendCfg, cfgDiags := decodeBackendBlock(innerBlock)
					diags = append(diags, cfgDiags...)
					if backendCfg != nil {
						file.Backends = append(file.Backends, backendCfg)
					}

				case "cloud":
					cloudCfg, cfgDiags := decodeCloudBlock(innerBlock)
					diags = append(diags, cfgDiags...)
					if cloudCfg != nil {
						file.CloudConfigs = append(file.CloudConfigs, cloudCfg)
					}

				case "required_providers":
					reqs, reqsDiags := decodeRequiredProvidersBlock(innerBlock)
					diags = append(diags, reqsDiags...)
					file.RequiredProviders = append(file.RequiredProviders, reqs)

				case "provider_meta":
					providerCfg, cfgDiags := decodeProviderMetaBlock(innerBlock)
					diags = append(diags, cfgDiags...)
					if providerCfg != nil {
						file.ProviderMetas = append(file.ProviderMetas, providerCfg)
					}

				default:
					// Should never happen because the above cases should be exhaustive
					// for all block type names in our schema.
					continue
				}
			}
case "required_providers":
// required_providers should be nested inside a "terraform" block
diags = append(diags, &hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid required_providers block",
Detail: "A \"required_providers\" block must be nested inside a \"terraform\" block.",
Subject: block.TypeRange.Ptr(),
})
case "provider":
cfg, cfgDiags := decodeProviderBlock(block)
diags = append(diags, cfgDiags...)
if cfg != nil {
file.ProviderConfigs = append(file.ProviderConfigs, cfg)
}
case "variable":
cfg, cfgDiags := decodeVariableBlock(block, override)
diags = append(diags, cfgDiags...)
if cfg != nil {
file.Variables = append(file.Variables, cfg)
}
case "locals":
defs, defsDiags := decodeLocalsBlock(block)
diags = append(diags, defsDiags...)
file.Locals = append(file.Locals, defs...)
case "output":
cfg, cfgDiags := decodeOutputBlock(block, override)
diags = append(diags, cfgDiags...)
if cfg != nil {
file.Outputs = append(file.Outputs, cfg)
}
case "module":
cfg, cfgDiags := decodeModuleBlock(block, override)
diags = append(diags, cfgDiags...)
if cfg != nil {
file.ModuleCalls = append(file.ModuleCalls, cfg)
}
case "resource":
cfg, cfgDiags := decodeResourceBlock(block, override)
diags = append(diags, cfgDiags...)
if cfg != nil {
file.ManagedResources = append(file.ManagedResources, cfg)
}
case "data":
cfg, cfgDiags := decodeDataBlock(block, override)
diags = append(diags, cfgDiags...)
if cfg != nil {
file.DataResources = append(file.DataResources, cfg)
}
case "moved":
cfg, cfgDiags := decodeMovedBlock(block)
diags = append(diags, cfgDiags...)
if cfg != nil {
file.Moved = append(file.Moved, cfg)
}
default:
// Should never happen because the above cases should be exhaustive
// for all block type names in our schema.
continue
}
}
return file, diags
}

// sniffCoreVersionRequirements does minimal parsing of the given body for
// "terraform" blocks with "required_version" attributes, returning the
// requirements found.
//
// This is intended to maximize the chance that we'll be able to read the
// requirements (syntax errors notwithstanding) even if the config file contains
// constructs that might've been added in future Terraform versions.
//
// This is a "best effort" sort of method which will return constraints it is
// able to find, but may return no constraints at all if the given body is
// so invalid that it cannot be decoded at all.
func sniffCoreVersionRequirements(body hcl.Body) ([]VersionConstraint, hcl.Diagnostics) {
	rootContent, _, diags := body.PartialContent(configFileTerraformBlockSniffRootSchema)

	var constraints []VersionConstraint

	for _, block := range rootContent.Blocks {
		content, _, blockDiags := block.Body.PartialContent(configFileVersionSniffBlockSchema)
		diags = append(diags, blockDiags...)

		attr, exists := content.Attributes["required_version"]
		if !exists {
			continue
		}

		constraint, constraintDiags := decodeVersionConstraint(attr)
		diags = append(diags, constraintDiags...)
		if !constraintDiags.HasErrors() {
			constraints = append(constraints, constraint)
		}
	}

	return constraints, diags
}

// configFileSchema is the schema for the top-level of a config file. We use
// the low-level HCL API for this level so we can easily deal with each
// block type separately with its own decoding logic.
var configFileSchema = &hcl.BodySchema{
	Blocks: []hcl.BlockHeaderSchema{
		{
			Type: "terraform",
		},
		{
			// This one is not really valid, but we include it here so we
			// can create a specialized error message hinting the user to
			// nest it inside a "terraform" block.
			Type: "required_providers",
		},
		{
			Type:       "provider",
			LabelNames: []string{"name"},
		},
		{
			Type:       "variable",
			LabelNames: []string{"name"},
		},
		{
			Type: "locals",
		},
		{
			Type:       "output",
			LabelNames: []string{"name"},
		},
		{
			Type:       "module",
			LabelNames: []string{"name"},
		},
		{
			Type:       "resource",
			LabelNames: []string{"type", "name"},
		},
		{
			Type:       "data",
			LabelNames: []string{"type", "name"},
		},
		{
			Type: "moved",
		},
	},
}

// terraformBlockSchema is the schema for a top-level "terraform" block in
// a configuration file.
var terraformBlockSchema = &hcl.BodySchema{
	Attributes: []hcl.AttributeSchema{
		{Name: "required_version"},
		{Name: "experiments"},
		{Name: "language"},
	},
	Blocks: []hcl.BlockHeaderSchema{
		{
			Type:       "backend",
			LabelNames: []string{"type"},
		},
		{
			Type: "cloud",
		},
		{
			Type: "required_providers",
		},
		{
			Type:       "provider_meta",
			LabelNames: []string{"provider"},
		},
	},
}

// configFileTerraformBlockSniffRootSchema is a schema for
// sniffCoreVersionRequirements and sniffActiveExperiments.
var configFileTerraformBlockSniffRootSchema = &hcl.BodySchema{
	Blocks: []hcl.BlockHeaderSchema{
		{
			Type: "terraform",
		},
	},
}

// configFileVersionSniffBlockSchema is a schema for sniffCoreVersionRequirements.
var configFileVersionSniffBlockSchema = &hcl.BodySchema{
	Attributes: []hcl.AttributeSchema{
		{
			Name: "required_version",
		},
	},
}

// configFileExperimentsSniffBlockSchema is a schema for sniffActiveExperiments,
// to decode a single attribute from inside a "terraform" block.
var configFileExperimentsSniffBlockSchema = &hcl.BodySchema{
	Attributes: []hcl.AttributeSchema{
		{Name: "experiments"},
		{Name: "language"},
	},
}