// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package configs
import (
"fmt"
"github.com/hashicorp/hcl/v2"
"github.com/opentofu/opentofu/internal/experiments"
"github.com/opentofu/opentofu/version"
)
// When developing UI for experimental features, you can temporarily disable
// the experiment warning by setting this package-level variable to a non-empty
// value using a link-time flag:
//
// go install -ldflags="-X 'github.com/opentofu/opentofu/internal/configs.disableExperimentWarnings=yes'"
//
// This functionality is for development purposes only and is not a feature we
// are committing to supporting for end users.
var disableExperimentWarnings = ""
// sniffActiveExperiments does minimal parsing of the given body for
// "terraform" blocks with "experiments" attributes, returning the
// experiments found.
//
// This is separate from other processing so that we can be sure that all of
// the experiments are known before we process the rest of the module config,
// and thus we can take into account which experiments are active when deciding
// how to decode.
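//
// As a purely illustrative example (the experiment keyword here is
// hypothetical), a module containing:
//
//	terraform {
//	  experiments = [example_experiment]
//	}
//
// would cause this function to return a set including that experiment,
// assuming the keyword names a known, still-current experiment and that
// experiments are allowed in this build at all.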
func sniffActiveExperiments(body hcl.Body, allowed bool) (experiments.Set, hcl.Diagnostics) {
	rootContent, _, diags := body.PartialContent(configFileTerraformBlockSniffRootSchema)

	ret := experiments.NewSet()

	for _, block := range rootContent.Blocks {
		content, _, blockDiags := block.Body.PartialContent(configFileExperimentsSniffBlockSchema)
		diags = append(diags, blockDiags...)
		if attr, exists := content.Attributes["language"]; exists {
			// We don't yet have a sense of selecting an edition of the
			// language, but we're reserving this syntax for now so that
			// if and when we do this later older versions of Terraform
			// will emit a more helpful error message than just saying
			// this attribute doesn't exist. Handling this as part of
			// experiments is a bit odd for now but justified by the
			// fact that a future fuller implementation of switchable
			// languages would likely use a similar implementation
			// strategy as experiments, and thus would lead to this
			// function being refactored to deal with both concerns at
			// once. We'll see, though!
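			//
			// Purely as an illustration (this syntax is reserved rather than
			// meaningful today), a module selecting an edition might one day
			// write something like:
			//
			//	terraform {
			//	  language = TF2021
			//	}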
			kw := hcl.ExprAsKeyword(attr.Expr)
			currentVersion := version.SemVer.String()
			const firstEdition = "TF2021"
			switch {
			case kw == "": // (the expression wasn't a keyword at all)
				diags = diags.Append(&hcl.Diagnostic{
					Severity: hcl.DiagError,
					Summary:  "Invalid language edition",
					Detail: fmt.Sprintf(
"The language argument expects a bare language edition keyword. OpenTF %s supports only language edition %s, which is the default." ,
						currentVersion, firstEdition,
					),
					Subject: attr.Expr.Range().Ptr(),
				})
			case kw != firstEdition:
				rel := "different"
				if kw > firstEdition { // would be weird for this not to be true, but it's user input so anything goes
					rel = "newer"
				}
				diags = diags.Append(&hcl.Diagnostic{
					Severity: hcl.DiagError,
					Summary:  "Unsupported language edition",
					Detail: fmt.Sprintf(
"OpenTF v%s only supports language edition %s. This module requires a %s version of OpenTF CLI." ,
						currentVersion, firstEdition, rel,
					),
					Subject: attr.Expr.Range().Ptr(),
				})
			}
		}
		attr, exists := content.Attributes["experiments"]
		if !exists {
			continue
		}

		exps, expDiags := decodeExperimentsAttr(attr)
		// Because we concluded this particular experiment in the same
		// release as we made experiments alpha-releases-only, we need to
		// treat it as special to avoid masking the "experiment has concluded"
		// error with the more general "experiments are not available at all"
		// error. Note that this experiment is marked as concluded so this
		// only "allows" showing the different error message that it is
		// concluded, and does not allow actually using the experiment outside
		// of an alpha.
		// NOTE: We should be able to remove this special exception a release
		// or two after v1.3 when folks have had a chance to notice that the
		// experiment has concluded and update their modules accordingly.
		// When we do so, we might also consider changing decodeExperimentsAttr
		// to _not_ include concluded experiments in the returned set, since
		// we're doing that right now only to make this condition work.
		if exps.Has(experiments.ModuleVariableOptionalAttrs) && len(exps) == 1 {
			allowed = true
		}
		if allowed {
			diags = append(diags, expDiags...)
			if !expDiags.HasErrors() {
				ret = experiments.SetUnion(ret, exps)
			}
		} else {
			diags = diags.Append(&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Module uses experimental features",
Detail : "Experimental features are intended only for gathering early feedback on new language designs, and so are available only in alpha releases of OpenTF." ,
				Subject: attr.NameRange.Ptr(),
			})
		}
	}

	return ret, diags
}
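
// decodeExperimentsAttr interprets the given "experiments" attribute value as
// a list of experiment keywords, returning the set of experiments those
// keywords select along with any diagnostics for unknown or concluded
// experiments.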
func decodeExperimentsAttr(attr *hcl.Attribute) (experiments.Set, hcl.Diagnostics) {
	var diags hcl.Diagnostics

	exprs, moreDiags := hcl.ExprList(attr.Expr)
	diags = append(diags, moreDiags...)
	if moreDiags.HasErrors() {
		return nil, diags
	}

	var ret = experiments.NewSet()
	for _, expr := range exprs {
		kw := hcl.ExprAsKeyword(expr)
		if kw == "" {
			diags = diags.Append(&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Invalid experiment keyword",
				Detail:   "Elements of \"experiments\" must all be keywords representing active experiments.",
				Subject:  expr.Range().Ptr(),
			})
			continue
		}

		exp, err := experiments.GetCurrent(kw)
		switch err := err.(type) {
		case experiments.UnavailableError:
			diags = diags.Append(&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Unknown experiment keyword",
				Detail:   fmt.Sprintf("There is no current experiment with the keyword %q.", kw),
				Subject:  expr.Range().Ptr(),
			})
		case experiments.ConcludedError:
			// As a special case we still include the optional attributes
			// experiment if it's present, because our caller treats that
			// as special. See the comment in sniffActiveExperiments for
			// more information, and remove this special case here once the
			// special case up there is also removed.
			if kw == "module_variable_optional_attrs" {
				ret.Add(experiments.ModuleVariableOptionalAttrs)
			}
			diags = diags.Append(&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Experiment has concluded",
				Detail:   fmt.Sprintf("Experiment %q is no longer available. %s", kw, err.Message),
				Subject:  expr.Range().Ptr(),
			})

		case nil:
			// No error at all means it's valid and current.
			ret.Add(exp)
			if disableExperimentWarnings == "" {
				// However, experimental features are subject to breaking changes
				// in future releases, so we'll warn about them to help make sure
				// folks aren't inadvertently using them in places where that'd be
				// inappropriate, particularly if the experiment is active in a
				// shared module they depend on.
				diags = diags.Append(&hcl.Diagnostic{
					Severity: hcl.DiagWarning,
					Summary:  fmt.Sprintf("Experimental feature %q is active", exp.Keyword()),
Detail : "Experimental features are available only in alpha releases of OpenTF and are subject to breaking changes or total removal in later versions, based on feedback. We recommend against using experimental features in production.\n\nIf you have feedback on the design of this feature, please open a GitHub issue to discuss it." ,
					Subject: expr.Range().Ptr(),
				})
			}
		default:
			// This should never happen, because GetCurrent is not documented
			// to return any other error type, but we'll handle it to be robust.
			diags = diags.Append(&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Invalid experiment keyword",
				Detail:   fmt.Sprintf("Could not parse %q as an experiment keyword: %s.", kw, err.Error()),
				Subject:  expr.Range().Ptr(),
			})
		}
	}

	return ret, diags
}
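
// checkModuleExperiments returns error diagnostics if the given module uses
// any features whose experiments it has not opted in to. There are currently
// no experiments requiring such checks, so for now it returns no diagnostics.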
func checkModuleExperiments(m *Module) hcl.Diagnostics {
	var diags hcl.Diagnostics

	// When we have current experiments, this is a good place to check that
	// the features in question can only be used when the experiments are
	// active. Return error diagnostics if a feature is being used without
	// opting in to the feature. For example:
	/*
		if !m.ActiveExperiments.Has(experiments.ResourceForEach) {
			for _, rc := range m.ManagedResources {
				if rc.ForEach != nil {
					diags = append(diags, &hcl.Diagnostic{
						Severity: hcl.DiagError,
						Summary:  "Resource for_each is experimental",
						Detail:   "This feature is currently an opt-in experiment, subject to change in future releases based on feedback.\n\nActivate the feature for this module by adding resource_for_each to the list of active experiments.",
						Subject:  rc.ForEach.Range().Ptr(),
					})
				}
			}
			for _, rc := range m.DataResources {
				if rc.ForEach != nil {
					diags = append(diags, &hcl.Diagnostic{
						Severity: hcl.DiagError,
						Summary:  "Resource for_each is experimental",
						Detail:   "This feature is currently an opt-in experiment, subject to change in future releases based on feedback.\n\nActivate the feature for this module by adding resource_for_each to the list of active experiments.",
						Subject:  rc.ForEach.Range().Ptr(),
					})
				}
			}
		}
	*/

	return diags
}