Merge branch 'main' into update-TF-WORKSPACE-variable

Laura Pacilio 2022-04-20 18:40:26 -04:00 committed by GitHub
commit a0ebb94fb5
46 changed files with 1256 additions and 639 deletions


@ -6,12 +6,17 @@ UPGRADE NOTES:
* When making outgoing HTTPS or other TLS connections as a client, Terraform now requires the server to support TLS v1.2. TLS v1.0 and v1.1 are no longer supported. Any safely up-to-date server should support TLS 1.2, and mainstream web browsers have required it since 2020.
* When making outgoing HTTPS or other TLS connections as a client, Terraform will no longer accept CA certificates signed using the SHA-1 hash function. Publicly trusted Certificate Authorities have not issued SHA-1 certificates since 2015.
(Note: the changes to Terraform's requirements when interacting with TLS servers apply only to requests made by Terraform CLI itself, such as provider/module installation and state storage requests. Terraform provider plugins include their own TLS clients which may have different requirements, and may add new requirements in their own releases, independently of Terraform CLI changes.)
* If you use the [third-party credentials helper plugin terraform-credentials-env](https://github.com/apparentlymart/terraform-credentials-env), you should disable it as part of upgrading to Terraform v1.2 because similar functionality is now built in to Terraform itself.
The new behavior supports the same environment variable naming scheme but has a difference in priority order from the credentials helper: `TF_TOKEN_...` environment variables will now take priority over credentials blocks in CLI configuration and credentials stored automatically by terraform login, which is not true for credentials provided by any credentials helper plugin. If you see Terraform using different credentials after upgrading, check to make sure you do not specify credentials for the same host in multiple locations.
If you use the credentials helper in conjunction with the [hashicorp/tfe](https://registry.terraform.io/providers/hashicorp/tfe) Terraform provider to manage Terraform Cloud or Terraform Enterprise objects with Terraform, you should also upgrade to version 0.31 of that provider, which added the corresponding built-in support for these environment variables.
NEW FEATURES:
* `precondition` and `postcondition` check blocks for resources, data sources, and module output values: module authors can now document assumptions and assertions about configuration and state values. If these conditions are not met, Terraform will report a custom error message to the user and halt further evaluation.
* Terraform now supports [run tasks](https://www.terraform.io/cloud-docs/workspaces/settings/run-tasks), a Terraform Cloud integration for executing remote operations, for the post plan stage of a run.
* You may specify remote network service credentials using an environment variable named after the host name with a `TF_TOKEN_` prefix. For example, the value of a variable named `TF_TOKEN_app_terraform_io` will be used as a bearer authorization token when the CLI makes service requests to the host name "app.terraform.io".
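The host-name-to-variable-name translation (dots become underscores; per the CLI config docs elsewhere in this commit, hyphens may be written as double underscores) can be sketched like this. The helper name is illustrative, not Terraform's internal function:

```go
package main

import (
	"fmt"
	"strings"
)

// envVarForHost sketches the documented naming scheme: hyphens in the host
// name become double underscores and dots become single underscores.
func envVarForHost(host string) string {
	name := strings.ReplaceAll(host, "-", "__")
	name = strings.ReplaceAll(name, ".", "_")
	return "TF_TOKEN_" + name
}

func main() {
	fmt.Println(envVarForHost("app.terraform.io")) // TF_TOKEN_app_terraform_io
	fmt.Println(envVarForHost("xn--caf-dma.fr"))   // TF_TOKEN_xn____caf__dma_fr
}
```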
ENHANCEMENTS:
@ -20,19 +25,23 @@ ENHANCEMENTS:
* Error messages for preconditions, postconditions, and custom variable validations are now evaluated as expressions, allowing interpolation of relevant values into the output. ([#30613](https://github.com/hashicorp/terraform/issues/30613))
* There are some small improvements to the error and warning messages Terraform will emit in the case of invalid provider configuration passing between modules. There are no changes to which situations will produce errors and warnings, but the messages now include additional information intended to clarify what problem Terraform is describing and how to address it. ([#30639](https://github.com/hashicorp/terraform/issues/30639))
* When running `terraform plan`, only show external changes which may have contributed to the current plan ([#30486](https://github.com/hashicorp/terraform/issues/30486))
* Add `TF_ORGANIZATION` environment variable fallback for `organization` in the cloud configuration
* Add `TF_HOSTNAME` environment variable fallback for `hostname` in the cloud configuration
* Add `TF_CLOUD_ORGANIZATION` environment variable fallback for `organization` in the cloud configuration
* Add `TF_CLOUD_HOSTNAME` environment variable fallback for `hostname` in the cloud configuration
* `TF_WORKSPACE` can now be used to configure the `workspaces` attribute in your cloud configuration
* When running on macOS, Terraform will now use platform APIs to validate certificates presented by TLS (HTTPS) servers. This may change exactly which root certificates Terraform will accept as valid. ([#30768](https://github.com/hashicorp/terraform/issues/30768))
* The AzureRM Backend now defaults to using MSAL (and Microsoft Graph) rather than ADAL (and Azure Active Directory Graph) for authentication. ([#30891](https://github.com/hashicorp/terraform/issues/30891))
* Show remote host in error message for clarity when installation of provider fails ([#30810](https://github.com/hashicorp/terraform/issues/30810))
* Terraform now prints a warning when adding an attribute to `ignore_changes` that is managed only by the provider (non-optional computed attribute). ([#30517](https://github.com/hashicorp/terraform/issues/30517))
BUG FIXES:
* Terraform now handles type constraints, nullability, and custom variable validation properly for root module variables. Previously there was an order of operations problem where the nullability and custom variable validation were checked too early, prior to dealing with the type constraints, and thus that logic could potentially "see" an incorrectly-typed value in spite of the type constraint, leading to incorrect errors. ([#29959](https://github.com/hashicorp/terraform/issues/29959))
* `terraform show -json`: JSON plan output now correctly maps aliased providers to their configurations, and includes the full provider source address alongside the short provider name. ([#30138](https://github.com/hashicorp/terraform/issues/30138))
* Terraform now prints a warning when adding an attribute to `ignore_changes` that is managed only by the provider (non-optional computed attribute). ([#30517](https://github.com/hashicorp/terraform/issues/30517))
* Terraform will prioritize local terraform variables over remote terraform variables in operations such as `import`, `plan`, `refresh` and `apply` for workspaces in local execution mode. This behavior applies to both `remote` backend and the `cloud` integration configuration. ([#29972](https://github.com/hashicorp/terraform/issues/29972))
* Terraform now outputs an error when `cidrnetmask()` is called with an IPv6 address. ([#30703](https://github.com/hashicorp/terraform/issues/30703))
* Applying the various type conversion functions like `tostring`, `tonumber`, etc to `null` will now return a null value of the intended type. For example, `tostring(null)` converts from a null value of an unknown type to a null value of string type. Terraform can often handle such conversions automatically when needed, but explicit annotations like this can help Terraform to understand author intent when inferring type conversions for complex-typed values. [GH-30879]
* Terraform now outputs an error when `cidrnetmask()` is called with an IPv6 address, as it was previously documented to do. ([#30703](https://github.com/hashicorp/terraform/issues/30703))
* When performing advanced state management with the `terraform state` commands, Terraform now checks the `required_version` field in the configuration before proceeding. ([#30511](https://github.com/hashicorp/terraform/pull/30511))
* When rendering a diff, Terraform now quotes the name of any object attribute whose string representation is not a valid identifier. ([#30766](https://github.com/hashicorp/terraform/issues/30766))
* Terraform will prioritize local terraform variables over remote terraform variables in operations such as `import`, `plan`, `refresh` and `apply` for workspaces in local execution mode. This behavior applies to both `remote` backend and the `cloud` integration configuration. ([#29972](https://github.com/hashicorp/terraform/issues/29972))
* `terraform show -json`: JSON plan output now correctly maps aliased providers to their configurations, and includes the full provider source address alongside the short provider name. ([#30138](https://github.com/hashicorp/terraform/issues/30138))
UPGRADE NOTES:


@ -145,8 +145,9 @@ func New() backend.Backend {
"use_microsoft_graph": {
Type: schema.TypeBool,
Optional: true,
Deprecated: "This field now defaults to `true` and will be removed in v1.3 of Terraform Core due to the deprecation of ADAL by Microsoft.",
Description: "Should Terraform obtain an MSAL auth token and use Microsoft Graph rather than Azure Active Directory?",
DefaultFunc: schema.EnvDefaultFunc("ARM_USE_MSGRAPH", false),
DefaultFunc: schema.EnvDefaultFunc("ARM_USE_MSGRAPH", true),
},
},
}


@ -41,7 +41,8 @@ func TestBackendConfig(t *testing.T, b Backend, c hcl.Body) Backend {
newObj, valDiags := b.PrepareConfig(obj)
diags = diags.Append(valDiags.InConfigBody(c, ""))
if len(diags) != 0 {
// it's valid for a Backend to have warnings (e.g. a Deprecation) as such we should only raise on errors
if diags.HasErrors() {
t.Fatal(diags.ErrWithWarnings())
}


@ -155,9 +155,9 @@ func (b *Cloud) PrepareConfig(obj cty.Value) (cty.Value, tfdiags.Diagnostics) {
// check if organization is specified in the config.
if val := obj.GetAttr("organization"); val.IsNull() || val.AsString() == "" {
// organization is specified in the config but is invalid, so
// we'll fallback on TF_ORGANIZATION
if val := os.Getenv("TF_ORGANIZATION"); val == "" {
diags = diags.Append(missingConfigAttributeAndEnvVar("organization", "TF_ORGANIZATION"))
// we'll fallback on TF_CLOUD_ORGANIZATION
if val := os.Getenv("TF_CLOUD_ORGANIZATION"); val == "" {
diags = diags.Append(missingConfigAttributeAndEnvVar("organization", "TF_CLOUD_ORGANIZATION"))
}
}
@ -253,30 +253,32 @@ func (b *Cloud) Configure(obj cty.Value) tfdiags.Diagnostics {
return diags
}
cfg := &tfe.Config{
Address: service.String(),
BasePath: service.Path,
Token: token,
Headers: make(http.Header),
RetryLogHook: b.retryLogHook,
}
if b.client == nil {
cfg := &tfe.Config{
Address: service.String(),
BasePath: service.Path,
Token: token,
Headers: make(http.Header),
RetryLogHook: b.retryLogHook,
}
// Set the version header to the current version.
cfg.Headers.Set(tfversion.Header, tfversion.Version)
cfg.Headers.Set(headerSourceKey, headerSourceValue)
// Set the version header to the current version.
cfg.Headers.Set(tfversion.Header, tfversion.Version)
cfg.Headers.Set(headerSourceKey, headerSourceValue)
// Create the TFC/E API client.
b.client, err = tfe.NewClient(cfg)
if err != nil {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Failed to create the Terraform Cloud/Enterprise client",
fmt.Sprintf(
`Encountered an unexpected error while creating the `+
`Terraform Cloud/Enterprise client: %s.`, err,
),
))
return diags
// Create the TFC/E API client.
b.client, err = tfe.NewClient(cfg)
if err != nil {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Failed to create the Terraform Cloud/Enterprise client",
fmt.Sprintf(
`Encountered an unexpected error while creating the `+
`Terraform Cloud/Enterprise client: %s.`, err,
),
))
return diags
}
}
// Check if the organization exists by reading its entitlements.
@ -353,7 +355,7 @@ func (b *Cloud) setConfigurationFields(obj cty.Value) tfdiags.Diagnostics {
var diags tfdiags.Diagnostics
// Get the hostname.
b.hostname = os.Getenv("TF_HOSTNAME")
b.hostname = os.Getenv("TF_CLOUD_HOSTNAME")
if val := obj.GetAttr("hostname"); !val.IsNull() && val.AsString() != "" {
b.hostname = val.AsString()
} else if b.hostname == "" {
@ -361,10 +363,10 @@ func (b *Cloud) setConfigurationFields(obj cty.Value) tfdiags.Diagnostics {
}
// We can have two options, setting the organization via the config
// or using TF_ORGANIZATION. Since PrepareConfig() validates that one of these
// or using TF_CLOUD_ORGANIZATION. Since PrepareConfig() validates that one of these
// values must exist, we'll initially set it to the env var and override it if
// specified in the configuration.
b.organization = os.Getenv("TF_ORGANIZATION")
b.organization = os.Getenv("TF_CLOUD_ORGANIZATION")
// Check if the organization is present and valid in the config.
if val := obj.GetAttr("organization"); !val.IsNull() && val.AsString() != "" {
@ -383,7 +385,7 @@ func (b *Cloud) setConfigurationFields(obj cty.Value) tfdiags.Diagnostics {
var tags []string
err := gocty.FromCtyValue(val, &tags)
if err != nil {
log.Panicf("An unxpected error occurred: %s", err)
log.Panicf("An unexpected error occurred: %s", err)
}
b.WorkspaceMapping.Tags = tags


@ -86,7 +86,7 @@ func TestCloud_PrepareConfig(t *testing.T) {
"tags": cty.NullVal(cty.Set(cty.String)),
}),
}),
expectedErr: `Invalid or missing required argument: "organization" must be set in the cloud configuration or as an environment variable: TF_ORGANIZATION.`,
expectedErr: `Invalid or missing required argument: "organization" must be set in the cloud configuration or as an environment variable: TF_CLOUD_ORGANIZATION.`,
},
"null workspace": {
config: cty.ObjectVal(map[string]cty.Value{
@ -161,7 +161,7 @@ func TestCloud_PrepareConfigWithEnvVars(t *testing.T) {
}),
}),
vars: map[string]string{
"TF_ORGANIZATION": "example-org",
"TF_CLOUD_ORGANIZATION": "example-org",
},
},
"with no organization attribute or env var": {
@ -173,7 +173,7 @@ func TestCloud_PrepareConfigWithEnvVars(t *testing.T) {
}),
}),
vars: map[string]string{},
expectedErr: `Invalid or missing required argument: "organization" must be set in the cloud configuration or as an environment variable: TF_ORGANIZATION.`,
expectedErr: `Invalid or missing required argument: "organization" must be set in the cloud configuration or as an environment variable: TF_CLOUD_ORGANIZATION.`,
},
"null workspace": {
config: cty.ObjectVal(map[string]cty.Value{
@ -190,8 +190,8 @@ func TestCloud_PrepareConfigWithEnvVars(t *testing.T) {
"workspaces": cty.NullVal(cty.String),
}),
vars: map[string]string{
"TF_ORGANIZATION": "hashicorp",
"TF_WORKSPACE": "my-workspace",
"TF_CLOUD_ORGANIZATION": "hashicorp",
"TF_WORKSPACE": "my-workspace",
},
},
}
@ -223,6 +223,7 @@ func TestCloud_PrepareConfigWithEnvVars(t *testing.T) {
func TestCloud_configWithEnvVars(t *testing.T) {
cases := map[string]struct {
setup func(b *Cloud)
config cty.Value
vars map[string]string
expectedOrganization string
@ -241,7 +242,7 @@ func TestCloud_configWithEnvVars(t *testing.T) {
}),
}),
vars: map[string]string{
"TF_ORGANIZATION": "hashicorp",
"TF_CLOUD_ORGANIZATION": "hashicorp",
},
expectedOrganization: "hashicorp",
},
@ -256,7 +257,7 @@ func TestCloud_configWithEnvVars(t *testing.T) {
}),
}),
vars: map[string]string{
"TF_ORGANIZATION": "we-should-not-see-this",
"TF_CLOUD_ORGANIZATION": "we-should-not-see-this",
},
expectedOrganization: "hashicorp",
},
@ -271,13 +272,13 @@ func TestCloud_configWithEnvVars(t *testing.T) {
}),
}),
vars: map[string]string{
"TF_HOSTNAME": "app.terraform.io",
"TF_CLOUD_HOSTNAME": "private.hashicorp.engineering",
},
expectedHostname: "app.terraform.io",
expectedHostname: "private.hashicorp.engineering",
},
"with hostname and env var specified": {
config: cty.ObjectVal(map[string]cty.Value{
"hostname": cty.StringVal("app.terraform.io"),
"hostname": cty.StringVal("private.hashicorp.engineering"),
"token": cty.NullVal(cty.String),
"organization": cty.StringVal("hashicorp"),
"workspaces": cty.ObjectVal(map[string]cty.Value{
@ -286,13 +287,13 @@ func TestCloud_configWithEnvVars(t *testing.T) {
}),
}),
vars: map[string]string{
"TF_HOSTNAME": "mycool.tfe-host.io",
"TF_CLOUD_HOSTNAME": "mycool.tfe-host.io",
},
expectedHostname: "app.terraform.io",
expectedHostname: "private.hashicorp.engineering",
},
"an invalid workspace env var": {
config: cty.ObjectVal(map[string]cty.Value{
"hostname": cty.StringVal("app.terraform.io"),
"hostname": cty.NullVal(cty.String),
"token": cty.NullVal(cty.String),
"organization": cty.StringVal("hashicorp"),
"workspaces": cty.NullVal(cty.Object(map[string]cty.Type{
@ -307,25 +308,98 @@ func TestCloud_configWithEnvVars(t *testing.T) {
},
"workspaces and env var specified": {
config: cty.ObjectVal(map[string]cty.Value{
"hostname": cty.StringVal("app.terraform.io"),
"hostname": cty.NullVal(cty.String),
"token": cty.NullVal(cty.String),
"organization": cty.StringVal("hashicorp"),
"organization": cty.StringVal("mordor"),
"workspaces": cty.ObjectVal(map[string]cty.Value{
"name": cty.StringVal("prod"),
"name": cty.StringVal("mt-doom"),
"tags": cty.NullVal(cty.Set(cty.String)),
}),
}),
vars: map[string]string{
"TF_WORKSPACE": "mt-doom",
"TF_WORKSPACE": "shire",
},
expectedWorkspaceName: "prod",
expectedWorkspaceName: "mt-doom",
},
"env var workspace does not have specified tag": {
setup: func(b *Cloud) {
b.client.Organizations.Create(context.Background(), tfe.OrganizationCreateOptions{
Name: tfe.String("mordor"),
})
b.client.Workspaces.Create(context.Background(), "mordor", tfe.WorkspaceCreateOptions{
Name: tfe.String("shire"),
})
},
config: cty.ObjectVal(map[string]cty.Value{
"hostname": cty.NullVal(cty.String),
"token": cty.NullVal(cty.String),
"organization": cty.StringVal("mordor"),
"workspaces": cty.ObjectVal(map[string]cty.Value{
"name": cty.NullVal(cty.String),
"tags": cty.SetVal([]cty.Value{
cty.StringVal("cloud"),
}),
}),
}),
vars: map[string]string{
"TF_WORKSPACE": "shire",
},
expectedErr: "Terraform failed to find workspace \"shire\" with the tags specified in your configuration:\n[cloud]",
},
"env var workspace has specified tag": {
setup: func(b *Cloud) {
b.client.Organizations.Create(context.Background(), tfe.OrganizationCreateOptions{
Name: tfe.String("mordor"),
})
b.client.Workspaces.Create(context.Background(), "mordor", tfe.WorkspaceCreateOptions{
Name: tfe.String("shire"),
Tags: []*tfe.Tag{
{
Name: "hobbity",
},
},
})
},
config: cty.ObjectVal(map[string]cty.Value{
"hostname": cty.NullVal(cty.String),
"token": cty.NullVal(cty.String),
"organization": cty.StringVal("mordor"),
"workspaces": cty.ObjectVal(map[string]cty.Value{
"name": cty.NullVal(cty.String),
"tags": cty.SetVal([]cty.Value{
cty.StringVal("hobbity"),
}),
}),
}),
vars: map[string]string{
"TF_WORKSPACE": "shire",
},
expectedWorkspaceName: "", // No error is raised, but workspace is not set
},
"with everything set as env vars": {
config: cty.ObjectVal(map[string]cty.Value{
"hostname": cty.NullVal(cty.String),
"token": cty.NullVal(cty.String),
"organization": cty.NullVal(cty.String),
"workspaces": cty.NullVal(cty.String),
}),
vars: map[string]string{
"TF_CLOUD_ORGANIZATION": "mordor",
"TF_WORKSPACE": "mt-doom",
"TF_CLOUD_HOSTNAME": "mycool.tfe-host.io",
},
expectedOrganization: "mordor",
expectedWorkspaceName: "mt-doom",
expectedHostname: "mycool.tfe-host.io",
},
}
for name, tc := range cases {
t.Run(name, func(t *testing.T) {
s := testServer(t)
b := New(testDisco(s))
b, cleanup := testUnconfiguredBackend(t)
t.Cleanup(cleanup)
for k, v := range tc.vars {
os.Setenv(k, v)
@ -342,6 +416,10 @@ func TestCloud_configWithEnvVars(t *testing.T) {
t.Fatalf("%s: unexpected validation result: %v", name, valDiags.Err())
}
if tc.setup != nil {
tc.setup(b)
}
diags := b.Configure(tc.config)
if (diags.Err() != nil || tc.expectedErr != "") &&
(diags.Err() == nil || !strings.Contains(diags.Err().Error(), tc.expectedErr)) {
@ -369,18 +447,6 @@ func TestCloud_config(t *testing.T) {
confErr string
valErr string
}{
"with_a_nonexisting_organization": {
config: cty.ObjectVal(map[string]cty.Value{
"hostname": cty.NullVal(cty.String),
"organization": cty.StringVal("nonexisting"),
"token": cty.NullVal(cty.String),
"workspaces": cty.ObjectVal(map[string]cty.Value{
"name": cty.StringVal("prod"),
"tags": cty.NullVal(cty.Set(cty.String)),
}),
}),
confErr: "organization \"nonexisting\" at host app.terraform.io not found",
},
"with_an_unknown_host": {
config: cty.ObjectVal(map[string]cty.Value{
"hostname": cty.StringVal("nonexisting.local"),
@ -466,8 +532,8 @@ func TestCloud_config(t *testing.T) {
}
for name, tc := range cases {
s := testServer(t)
b := New(testDisco(s))
b, cleanup := testUnconfiguredBackend(t)
t.Cleanup(cleanup)
// Validate
_, valDiags := b.PrepareConfig(tc.config)


@ -0,0 +1,264 @@
package main
import (
"context"
"fmt"
"testing"
"github.com/hashicorp/go-tfe"
)
func Test_cloud_organization_env_var(t *testing.T) {
t.Parallel()
skipIfMissingEnvVar(t)
ctx := context.Background()
org, cleanup := createOrganization(t)
t.Cleanup(cleanup)
cases := testCases{
"with TF_CLOUD_ORGANIZATION set": {
operations: []operationSets{
{
prep: func(t *testing.T, orgName, dir string) {
remoteWorkspace := "cloud-workspace"
tfBlock := terraformConfigCloudBackendOmitOrg(remoteWorkspace)
writeMainTF(t, tfBlock, dir)
},
commands: []tfCommand{
{
command: []string{"init"},
expectedCmdOutput: `Terraform Cloud has been successfully initialized!`,
},
{
command: []string{"apply", "-auto-approve"},
postInputOutput: []string{`Apply complete!`},
},
},
},
},
validations: func(t *testing.T, orgName string) {
expectedName := "cloud-workspace"
ws, err := tfeClient.Workspaces.Read(ctx, org.Name, expectedName)
if err != nil {
t.Fatal(err)
}
if ws == nil {
t.Fatalf("Expected workspace %s to be present, but is not.", expectedName)
}
},
},
}
testRunner(t, cases, 0, fmt.Sprintf("TF_CLOUD_ORGANIZATION=%s", org.Name))
}
func Test_cloud_workspace_name_env_var(t *testing.T) {
t.Parallel()
skipIfMissingEnvVar(t)
org, orgCleanup := createOrganization(t)
t.Cleanup(orgCleanup)
wk := createWorkspace(t, org.Name, tfe.WorkspaceCreateOptions{
Name: tfe.String("cloud-workspace"),
})
validCases := testCases{
"a workspace that exists": {
operations: []operationSets{
{
prep: func(t *testing.T, orgName, dir string) {
tfBlock := terraformConfigCloudBackendOmitWorkspaces(org.Name)
writeMainTF(t, tfBlock, dir)
},
commands: []tfCommand{
{
command: []string{"init"},
expectedCmdOutput: `Terraform Cloud has been successfully initialized!`,
},
{
command: []string{"apply", "-auto-approve"},
postInputOutput: []string{`Apply complete!`},
},
},
},
{
prep: func(t *testing.T, orgName, dir string) {
tfBlock := terraformConfigCloudBackendOmitWorkspaces(org.Name)
writeMainTF(t, tfBlock, dir)
},
commands: []tfCommand{
{
command: []string{"init"},
expectedCmdOutput: `Terraform Cloud has been successfully initialized!`,
},
{
command: []string{"workspace", "show"},
expectedCmdOutput: wk.Name,
},
},
},
},
},
}
errCases := testCases{
"a workspace that doesn't exist": {
operations: []operationSets{
{
prep: func(t *testing.T, orgName, dir string) {
tfBlock := terraformConfigCloudBackendOmitWorkspaces(org.Name)
writeMainTF(t, tfBlock, dir)
},
commands: []tfCommand{
{
command: []string{"init"},
expectError: true,
},
},
},
},
},
}
testRunner(t, validCases, 0, fmt.Sprintf(`TF_WORKSPACE=%s`, wk.Name))
testRunner(t, errCases, 0, fmt.Sprintf(`TF_WORKSPACE=%s`, "the-fires-of-mt-doom"))
}
func Test_cloud_workspace_tags_env_var(t *testing.T) {
t.Parallel()
skipIfMissingEnvVar(t)
org, orgCleanup := createOrganization(t)
t.Cleanup(orgCleanup)
wkValid := createWorkspace(t, org.Name, tfe.WorkspaceCreateOptions{
Name: tfe.String("cloud-workspace"),
Tags: []*tfe.Tag{
{Name: "cloud"},
},
})
// this will be a workspace that won't have a tag listed in our test configuration
wkInvalid := createWorkspace(t, org.Name, tfe.WorkspaceCreateOptions{
Name: tfe.String("cloud-workspace-2"),
})
validCases := testCases{
"a workspace with valid tag": {
operations: []operationSets{
{
prep: func(t *testing.T, orgName, dir string) {
tfBlock := terraformConfigCloudBackendTags(org.Name, wkValid.TagNames[0])
writeMainTF(t, tfBlock, dir)
},
commands: []tfCommand{
{
command: []string{"init"},
expectedCmdOutput: `Terraform Cloud has been successfully initialized!`,
},
{
command: []string{"apply", "-auto-approve"},
postInputOutput: []string{`Apply complete!`},
},
},
},
{
prep: func(t *testing.T, orgName, dir string) {
tfBlock := terraformConfigCloudBackendTags(org.Name, wkValid.TagNames[0])
writeMainTF(t, tfBlock, dir)
},
commands: []tfCommand{
{
command: []string{"init"},
expectedCmdOutput: `Terraform Cloud has been successfully initialized!`,
},
{
command: []string{"workspace", "show"},
expectedCmdOutput: wkValid.Name,
},
},
},
},
},
}
errCases := testCases{
"a workspace not specified by tags": {
operations: []operationSets{
{
prep: func(t *testing.T, orgName, dir string) {
tfBlock := terraformConfigCloudBackendTags(org.Name, wkValid.TagNames[0])
writeMainTF(t, tfBlock, dir)
},
commands: []tfCommand{
{
command: []string{"init"},
expectError: true,
},
},
},
},
},
}
testRunner(t, validCases, 0, fmt.Sprintf(`TF_WORKSPACE=%s`, wkValid.Name))
testRunner(t, errCases, 0, fmt.Sprintf(`TF_WORKSPACE=%s`, wkInvalid.Name))
}
func Test_cloud_null_config(t *testing.T) {
t.Parallel()
skipIfMissingEnvVar(t)
org, cleanup := createOrganization(t)
t.Cleanup(cleanup)
wk := createWorkspace(t, org.Name, tfe.WorkspaceCreateOptions{
Name: tfe.String("cloud-workspace"),
})
cases := testCases{
"with all env vars set": {
operations: []operationSets{
{
prep: func(t *testing.T, orgName, dir string) {
tfBlock := terraformConfigCloudBackendOmitConfig()
writeMainTF(t, tfBlock, dir)
},
commands: []tfCommand{
{
command: []string{"init"},
expectedCmdOutput: `Terraform Cloud has been successfully initialized!`,
},
{
command: []string{"apply", "-auto-approve"},
postInputOutput: []string{`Apply complete!`},
},
},
},
{
prep: func(t *testing.T, orgName, dir string) {
tfBlock := terraformConfigCloudBackendOmitConfig()
writeMainTF(t, tfBlock, dir)
},
commands: []tfCommand{
{
command: []string{"init"},
expectedCmdOutput: `Terraform Cloud has been successfully initialized!`,
},
{
command: []string{"workspace", "show"},
expectedCmdOutput: wk.Name,
},
},
},
},
},
}
testRunner(t, cases, 1,
fmt.Sprintf(`TF_CLOUD_ORGANIZATION=%s`, org.Name),
fmt.Sprintf(`TF_CLOUD_HOSTNAME=%s`, tfeHostname),
fmt.Sprintf(`TF_WORKSPACE=%s`, wk.Name))
}


@ -191,6 +191,51 @@ output "val" {
`, tfeHostname, org, name)
}
func terraformConfigCloudBackendOmitOrg(workspaceName string) string {
return fmt.Sprintf(`
terraform {
cloud {
hostname = "%s"
workspaces {
name = "%s"
}
}
}
output "val" {
value = "${terraform.workspace}"
}
`, tfeHostname, workspaceName)
}
func terraformConfigCloudBackendOmitWorkspaces(orgName string) string {
return fmt.Sprintf(`
terraform {
cloud {
hostname = "%s"
organization = "%s"
}
}
output "val" {
value = "${terraform.workspace}"
}
`, tfeHostname, orgName)
}
func terraformConfigCloudBackendOmitConfig() string {
return `
terraform {
cloud {}
}
output "val" {
value = "${terraform.workspace}"
}
`
}
func writeMainTF(t *testing.T, block string, dir string) {
f, err := os.Create(fmt.Sprintf("%s/main.tf", dir))
if err != nil {


@ -98,11 +98,15 @@ func testRunner(t *testing.T, cases testCases, orgCount int, tfEnvFlags ...strin
var orgName string
for index, op := range tc.operations {
if orgCount == 1 {
switch orgCount {
case 0:
orgName = ""
case 1:
orgName = orgNames[0]
} else {
default:
orgName = orgNames[index]
}
op.prep(t, orgName, tf.WorkDir())
for _, tfCmd := range op.commands {
cmd := tf.Cmd(tfCmd.command...)


@ -178,6 +178,44 @@ func testBackend(t *testing.T, obj cty.Value) (*Cloud, func()) {
return b, s.Close
}
// testUnconfiguredBackend is used for testing the configuration of the backend
// with the mock client
func testUnconfiguredBackend(t *testing.T) (*Cloud, func()) {
s := testServer(t)
b := New(testDisco(s))
// Normally, the client is created during configuration, but the configuration uses the
// client to read entitlements.
var err error
b.client, err = tfe.NewClient(&tfe.Config{
Token: "fake-token",
})
if err != nil {
t.Fatal(err)
}
// Get a new mock client.
mc := NewMockClient()
// Replace the services we use with our mock services.
b.CLI = cli.NewMockUi()
b.client.Applies = mc.Applies
b.client.ConfigurationVersions = mc.ConfigurationVersions
b.client.CostEstimates = mc.CostEstimates
b.client.Organizations = mc.Organizations
b.client.Plans = mc.Plans
b.client.PolicyChecks = mc.PolicyChecks
b.client.Runs = mc.Runs
b.client.StateVersions = mc.StateVersions
b.client.Variables = mc.Variables
b.client.Workspaces = mc.Workspaces
// Set local to a local test backend.
b.local = testLocalBackend(t, b)
return b, s.Close
}
func testLocalBackend(t *testing.T, cloud *Cloud) backend.Enhanced {
b := backendLocal.NewWithBackend(cloud)


@ -114,45 +114,77 @@ func (c *Config) credentialsSource(helperType string, helper svcauth.Credentials
}
}
func collectCredentialsFromEnv() map[svchost.Hostname]string {
const prefix = "TF_TOKEN_"
ret := make(map[svchost.Hostname]string)
for _, ev := range os.Environ() {
eqIdx := strings.Index(ev, "=")
if eqIdx < 0 {
continue
}
name := ev[:eqIdx]
value := ev[eqIdx+1:]
if !strings.HasPrefix(name, prefix) {
continue
}
rawHost := name[len(prefix):]
// We accept double underscores in place of hyphens because hyphens are not valid
// identifiers in most shells and are therefore hard to set.
// This is unambiguous with replacing single underscores below because
// hyphens are not allowed at the beginning or end of a label and therefore
// odd numbers of underscores will not appear together in a valid variable name.
rawHost = strings.ReplaceAll(rawHost, "__", "-")
// We accept underscores in place of dots because dots are not valid
// identifiers in most shells and are therefore hard to set.
// Underscores are not valid in hostnames, so this is unambiguous for
// valid hostnames.
rawHost = strings.ReplaceAll(rawHost, "_", ".")
// Because environment variables are often set indirectly by OS
// libraries that might interfere with how they are encoded, we'll
// be tolerant of them being given either directly as UTF-8 IDNs
// or in Punycode form, normalizing to Punycode form here because
// that is what the Terraform credentials helper protocol will
// use in its requests.
//
// Using ForDisplay first here makes this more liberal than Terraform
// itself would usually be in that it will tolerate pre-punycoded
// hostnames that Terraform normally rejects in other contexts in order
// to ensure stored hostnames are human-readable.
dispHost := svchost.ForDisplay(rawHost)
hostname, err := svchost.ForComparison(dispHost)
if err != nil {
// Ignore invalid hostnames
continue
}
ret[hostname] = value
}
return ret
}
// hostCredentialsFromEnv returns a token credential by searching for a hostname-specific
// environment variable. The host parameter is expected to be in the "comparison" form,
// for example, hostnames containing non-ASCII characters like "café.fr"
// should be expressed as "xn--caf-dma.fr". If the variable based on the hostname is not
// defined, nil is returned.
//
// Hyphen and period characters are allowed in environment variable names, but are not valid POSIX
// variable names. However, it's still possible to set variable names with these characters using
// utilities like env or docker. Variable names may have periods translated to underscores and
// hyphens translated to double underscores in the variable name.
// For the example "café.fr", you may use the variable names "TF_TOKEN_xn____caf__dma_fr",
// "TF_TOKEN_xn--caf-dma_fr", or "TF_TOKEN_xn--caf-dma.fr".
func hostCredentialsFromEnv(host svchost.Hostname) svcauth.HostCredentials {
	if len(host) == 0 {
		return nil
	}
	token, ok := collectCredentialsFromEnv()[host]
	if !ok {
		return nil
	}
	return svcauth.HostCredentialsToken(token)
}
// CredentialsSource is an implementation of svcauth.CredentialsSource


@ -156,6 +156,56 @@ func TestCredentialsForHost(t *testing.T) {
t.Errorf("wrong result\ngot: %s\nwant: %s", got, expectedToken)
}
})
t.Run("periods are ok", func(t *testing.T) {
envName := "TF_TOKEN_configured.example.com"
expectedToken := "configured-by-env"
t.Cleanup(func() {
os.Unsetenv(envName)
})
os.Setenv(envName, expectedToken)
hostname, _ := svchost.ForComparison("configured.example.com")
creds, err := credSrc.ForHost(hostname)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if creds == nil {
t.Fatal("no credentials found")
}
if got := creds.Token(); got != expectedToken {
t.Errorf("wrong result\ngot: %s\nwant: %s", got, expectedToken)
}
})
t.Run("casing is insensitive", func(t *testing.T) {
envName := "TF_TOKEN_CONFIGUREDUPPERCASE_EXAMPLE_COM"
expectedToken := "configured-by-env"
os.Setenv(envName, expectedToken)
t.Cleanup(func() {
os.Unsetenv(envName)
})
hostname, _ := svchost.ForComparison("configureduppercase.example.com")
creds, err := credSrc.ForHost(hostname)
if err != nil {
t.Fatalf("unexpected error: %s", err)
}
if creds == nil {
t.Fatal("no credentials found")
}
if got := creds.Token(); got != expectedToken {
t.Errorf("wrong result\ngot: %s\nwant: %s", got, expectedToken)
}
})
}
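A minimal sketch of why the uppercase variable name in the test above still matches: hostname comparison forms are lowercase, so the translated name is lowercased before lookup. (In Terraform this normalization is performed by `svchost.ForComparison`; the helper below is purely illustrative.)

```go
package main

import (
	"fmt"
	"strings"
)

// comparisonForm translates a TF_TOKEN_ variable-name suffix to a hostname
// and lowercases it, approximating the svchost comparison form.
func comparisonForm(rawEnvSuffix string) string {
	host := strings.ReplaceAll(rawEnvSuffix, "_", ".")
	return strings.ToLower(host)
}

func main() {
	fmt.Println(comparisonForm("CONFIGUREDUPPERCASE_EXAMPLE_COM"))
	// configureduppercase.example.com
}
```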
func TestCredentialsStoreForget(t *testing.T) {


@ -437,7 +437,7 @@ func (c *registryClient) getFile(url *url.URL) ([]byte, error) {
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("%s returned from %s", resp.Status, resp.Request.Host)
}
data, err := ioutil.ReadAll(resp.Body)
@ -478,7 +478,7 @@ func maxRetryErrorHandler(resp *http.Response, err error, numTries int) (*http.R
// both response and error.
var errMsg string
if resp != nil {
errMsg = fmt.Sprintf(": %s returned from %s", resp.Status, resp.Request.Host)
} else if err != nil {
errMsg = fmt.Sprintf(": %s", err)
}


@ -30,9 +30,10 @@ func MakeToFunc(wantTy cty.Type) function.Function {
// messages to be more appropriate for an explicit type
// conversion, whereas the cty function system produces
// messages aimed at _implicit_ type conversions.
Type:             cty.DynamicPseudoType,
AllowNull:        true,
AllowMarked:      true,
AllowDynamicType: true,
},
},
Type: func(args []cty.Value) (cty.Type, error) {


@ -33,6 +33,16 @@ func TestTo(t *testing.T) {
cty.NullVal(cty.String),
``,
},
{
// This test case represents evaluating the expression tostring(null)
// from HCL, since null in HCL is cty.NullVal(cty.DynamicPseudoType).
// The result in that case should still be null, but a null specifically
// of type string.
cty.NullVal(cty.DynamicPseudoType),
cty.String,
cty.NullVal(cty.String),
``,
},
{
cty.StringVal("a").Mark("boop"),
cty.String,


@ -2963,3 +2963,66 @@ output "a" {
}
}
}
func TestContext2Plan_dataSchemaChange(t *testing.T) {
// We can't decode the prior state when a data source upgrades the schema
// in an incompatible way. Since prior state for data sources is purely
// informational, decoding should be skipped altogether.
m := testModuleInline(t, map[string]string{
"main.tf": `
data "test_object" "a" {
obj {
# args changes from a list to a map
args = {
val = "string"
}
}
}
`,
})
p := new(MockProvider)
p.GetProviderSchemaResponse = getProviderSchemaResponseFromProviderSchema(&ProviderSchema{
DataSources: map[string]*configschema.Block{
"test_object": {
Attributes: map[string]*configschema.Attribute{
"id": {
Type: cty.String,
Computed: true,
},
},
BlockTypes: map[string]*configschema.NestedBlock{
"obj": {
Block: configschema.Block{
Attributes: map[string]*configschema.Attribute{
"args": {Type: cty.Map(cty.String), Optional: true},
},
},
Nesting: configschema.NestingSet,
},
},
},
},
})
p.ReadDataSourceFn = func(req providers.ReadDataSourceRequest) (resp providers.ReadDataSourceResponse) {
resp.State = req.Config
return resp
}
state := states.BuildState(func(s *states.SyncState) {
s.SetResourceInstanceCurrent(mustResourceInstanceAddr(`data.test_object.a`), &states.ResourceInstanceObjectSrc{
AttrsJSON: []byte(`{"id":"old","obj":[{"args":["string"]}]}`),
Status: states.ObjectReady,
}, mustProviderConfig(`provider["registry.terraform.io/hashicorp/test"]`))
})
ctx := testContext2(t, &ContextOpts{
Providers: map[addrs.Provider]providers.Factory{
addrs.NewDefaultProvider("test"): testProviderFuncFixed(p),
},
})
_, diags := ctx.Plan(m, state, DefaultPlanOpts)
assertNoErrors(t, diags)
}


@ -389,10 +389,6 @@ func (n *NodeAbstractResource) readResourceInstanceState(ctx EvalContext, addr a
}
diags = diags.Append(upgradeDiags)
if diags.HasErrors() {
// Note that we don't have any channel to return warnings here. We'll
// accept that for now since warnings during a schema upgrade would
// be pretty weird anyway, since this operation is supposed to seem
// invisible to the user.
return nil, diags
}


@ -1477,7 +1477,7 @@ func (n *NodeAbstractResourceInstance) providerMetas(ctx EvalContext) (cty.Value
// value, but it still matches the previous state, then we can record a NoNop
// change. If the states don't match then we record a Read change so that the
// new value is applied to the state.
func (n *NodeAbstractResourceInstance) planDataSource(ctx EvalContext, checkRuleSeverity tfdiags.Severity) (*plans.ResourceInstanceChange, *states.ResourceInstanceObject, instances.RepetitionData, tfdiags.Diagnostics) {
var diags tfdiags.Diagnostics
var keyData instances.RepetitionData
var configVal cty.Value
@ -1500,9 +1500,6 @@ func (n *NodeAbstractResourceInstance) planDataSource(ctx EvalContext, currentSt
objTy := schema.ImpliedType()
priorVal := cty.NullVal(objTy)
if currentState != nil {
priorVal = currentState.Value
}
forEach, _ := evaluateForEachExpression(config.ForEach, ctx)
keyData = EvalDataForInstanceKey(n.ResourceInstanceAddr().Resource.Key, forEach)
@ -1515,9 +1512,6 @@ func (n *NodeAbstractResourceInstance) planDataSource(ctx EvalContext, currentSt
)
diags = diags.Append(checkDiags)
if diags.HasErrors() {
diags = diags.Append(ctx.Hook(func(h Hook) (HookAction, error) {
return h.PostApply(n.Addr, states.CurrentGen, priorVal, diags.Err())
}))
return nil, nil, keyData, diags // failed preconditions prevent further evaluation
}
@ -1529,9 +1523,6 @@ func (n *NodeAbstractResourceInstance) planDataSource(ctx EvalContext, currentSt
}
unmarkedConfigVal, configMarkPaths := configVal.UnmarkDeepWithPaths()
// We drop marks on the values used here as the result is only
// temporarily used for validation.
unmarkedPriorVal, _ := priorVal.UnmarkDeep()
configKnown := configVal.IsWhollyKnown()
// If our configuration contains any unknown values, or we depend on any
@ -1581,28 +1572,6 @@ func (n *NodeAbstractResourceInstance) planDataSource(ctx EvalContext, currentSt
return nil, nil, keyData, diags
}
// if we have a prior value, we can check for any irregularities in the response
if !priorVal.IsNull() {
// While we don't propose planned changes for data sources, we can
// generate a proposed value for comparison to ensure the data source
// is returning a result following the rules of the provider contract.
proposedVal := objchange.ProposedNew(schema, unmarkedPriorVal, unmarkedConfigVal)
if errs := objchange.AssertObjectCompatible(schema, proposedVal, newVal); len(errs) > 0 {
// Resources have the LegacyTypeSystem field to signal when they are
// using an SDK which may not produce precise values. While data
// sources are read-only, they can still return a value which is not
// compatible with the config+schema. Since we can't detect the legacy
// type system, we can only warn about this for now.
var buf strings.Builder
fmt.Fprintf(&buf, "[WARN] Provider %q produced an unexpected new value for %s.",
n.ResolvedProvider, n.Addr)
for _, err := range errs {
fmt.Fprintf(&buf, "\n - %s", tfdiags.FormatError(err))
}
log.Print(buf.String())
}
}
plannedNewState := &states.ResourceInstanceObject{
Value: newVal,
Status: states.ObjectReady,


@ -264,12 +264,6 @@ func (n *NodeApplyableResourceInstance) managedResourceExecute(ctx EvalContext)
return diags
}
state, readDiags = n.readResourceInstanceState(ctx, n.ResourceInstanceAddr())
diags = diags.Append(readDiags)
if diags.HasErrors() {
return diags
}
diffApply = reducePlan(addr, diffApply, false)
// reducePlan may have simplified our planned change
// into a NoOp if it only requires destroying, since destroying

View File

@ -1,10 +1,13 @@
package terraform
import (
"fmt"
"github.com/hashicorp/terraform/internal/addrs"
"github.com/hashicorp/terraform/internal/plans"
"github.com/hashicorp/terraform/internal/states"
"github.com/hashicorp/terraform/internal/tfdiags"
"github.com/zclconf/go-cty/cty"
)
// NodePlanDestroyableResourceInstance represents a resource that is ready
@ -39,6 +42,19 @@ func (n *NodePlanDestroyableResourceInstance) DestroyAddr() *addrs.AbsResourceIn
func (n *NodePlanDestroyableResourceInstance) Execute(ctx EvalContext, op walkOperation) (diags tfdiags.Diagnostics) {
addr := n.ResourceInstanceAddr()
switch addr.Resource.Resource.Mode {
case addrs.ManagedResourceMode:
return n.managedResourceExecute(ctx, op)
case addrs.DataResourceMode:
return n.dataResourceExecute(ctx, op)
default:
panic(fmt.Errorf("unsupported resource mode %s", n.Config.Mode))
}
}
func (n *NodePlanDestroyableResourceInstance) managedResourceExecute(ctx EvalContext, op walkOperation) (diags tfdiags.Diagnostics) {
addr := n.ResourceInstanceAddr()
// Declare a bunch of variables that are used for state during
// evaluation. These are written to by address in the EvalNodes we
// declare below.
@ -85,3 +101,22 @@ func (n *NodePlanDestroyableResourceInstance) Execute(ctx EvalContext, op walkOp
diags = diags.Append(n.writeChange(ctx, change, ""))
return diags
}
func (n *NodePlanDestroyableResourceInstance) dataResourceExecute(ctx EvalContext, op walkOperation) (diags tfdiags.Diagnostics) {
// We may not be able to read a prior data source from the state if the
// schema was upgraded and we are destroying before ever refreshing that
// data source. Regardless, a data source "destroy" is simply writing a
// null state, which we can do with a null prior state too.
change := &plans.ResourceInstanceChange{
Addr: n.ResourceInstanceAddr(),
PrevRunAddr: n.prevRunAddr(ctx),
Change: plans.Change{
Action: plans.Delete,
Before: cty.NullVal(cty.DynamicPseudoType),
After: cty.NullVal(cty.DynamicPseudoType),
},
ProviderAddr: n.ResolvedProvider,
}
return diags.Append(n.writeChange(ctx, change, ""))
}


@ -71,25 +71,6 @@ func (n *NodePlannableResourceInstance) dataResourceExecute(ctx EvalContext) (di
return diags
}
state, readDiags := n.readResourceInstanceState(ctx, addr)
diags = diags.Append(readDiags)
if diags.HasErrors() {
return diags
}
// We'll save a snapshot of what we just read from the state into the
// prevRunState which will capture the result read in the previous
// run, possibly tweaked by any upgrade steps that
// readResourceInstanceState might've made.
// However, note that we don't have any explicit mechanism for upgrading
// data resource results as we do for managed resources, and so the
// prevRunState might not conform to the current schema if the
// previous run was with a different provider version.
diags = diags.Append(n.writeResourceInstanceState(ctx, state, prevRunState))
if diags.HasErrors() {
return diags
}
diags = diags.Append(validateSelfRef(addr.Resource, config.Config, providerSchema))
if diags.HasErrors() {
return diags
@ -100,7 +81,7 @@ func (n *NodePlannableResourceInstance) dataResourceExecute(ctx EvalContext) (di
checkRuleSeverity = tfdiags.Warning
}
change, state, repeatData, planDiags := n.planDataSource(ctx, checkRuleSeverity)
diags = diags.Append(planDiags)
if diags.HasErrors() {
return diags


@ -21,6 +21,14 @@ import (
// If any errors occur during upgrade, error diagnostics are returned. In that
// case it is not safe to proceed with using the original state object.
func upgradeResourceState(addr addrs.AbsResourceInstance, provider providers.Interface, src *states.ResourceInstanceObjectSrc, currentSchema *configschema.Block, currentVersion uint64) (*states.ResourceInstanceObjectSrc, tfdiags.Diagnostics) {
if addr.Resource.Resource.Mode != addrs.ManagedResourceMode {
// We only do state upgrading for managed resources.
// This was a part of the normal workflow in older versions and
// returned early, so we are only going to log the error for now.
log.Printf("[ERROR] data resource %s should not require state upgrade", addr)
return src, nil
}
// Remove any attributes from state that are not present in the schema.
// This was previously taken care of by the provider, but data sources do
// not go through the UpgradeResourceState process.
@ -32,11 +40,6 @@ func upgradeResourceState(addr addrs.AbsResourceInstance, provider providers.Int
src.AttrsJSON = stripRemovedStateAttributes(src.AttrsJSON, currentSchema.ImpliedType())
}
if addr.Resource.Resource.Mode != addrs.ManagedResourceMode {
// We only do state upgrading for managed resources.
return src, nil
}
stateIsFlatmap := len(src.AttrsJSON) == 0
// TODO: This should eventually use a proper FQN.
@ -127,7 +130,7 @@ func stripRemovedStateAttributes(state []byte, ty cty.Type) []byte {
if err != nil {
// we just log any errors here, and let the normal decode process catch
// invalid JSON.
log.Printf("[ERROR] UpgradeResourceState: stripRemovedStateAttributes: %s", err)
return state
}


@ -271,6 +271,10 @@
"title": "Dynamic Blocks",
"path": "expressions/dynamic-blocks"
},
{
"title": "Custom Condition Checks",
"path": "expressions/custom-conditions"
},
{
"title": "Type Constraints",
"path": "expressions/type-constraints"
@ -278,10 +282,6 @@
{
"title": "Version Constraints",
"path": "expressions/version-constraints"
},
{
"title": "Pre and Postconditions",
"path": "expressions/preconditions-postconditions"
}
]
},


@ -88,11 +88,10 @@ You can use environment variables to configure one or more `cloud` block attribu
Use the following environment variables to configure the `cloud` block:
- `TF_CLOUD_ORGANIZATION` - The name of the organization. Serves as a fallback for `organization`
in the cloud configuration. If both are specified, the configuration takes precedence.
- `TF_CLOUD_HOSTNAME` - The hostname of a Terraform Enterprise installation. Serves as a fallback if `hostname` is not specified in the cloud configuration. If both are specified, the configuration takes precedence.
- `TF_WORKSPACE` - The name of a single Terraform Cloud workspace. If the `workspaces` attribute is not included in your configuration file, the `cloud` block interprets `TF_WORKSPACE` as the `name` value of the `workspaces` attribute. The workspace must exist in the organization specified in the configuration or `TF_ORGANIZATION`. You can set this variable if the `cloud` block in your configuration uses tags. However, Terraform Cloud will return an error if the value of `TF_WORKSPACE` is not included in the set of tags. Refer to [TF_WORKSPACE](https://www.terraform.io/cli/config/environment-variables#tf_workspace) for more details.


@ -117,17 +117,21 @@ Terraform Cloud responds to API calls at both its current hostname
### Environment Variable Credentials
If you would prefer not to store your API tokens directly in the CLI configuration, you may use
a host-specific environment variable. Environment variable names should have the prefix
`TF_TOKEN_` added to the domain name, with periods encoded as underscores. For example, the
value of a variable named `TF_TOKEN_app_terraform_io` will be used as a bearer authorization
token when the CLI makes service requests to the hostname `app.terraform.io`.

You must convert domain names containing non-ASCII characters to their [punycode equivalent](https://www.charset.org/punycode)
with an ACE prefix. For example, token credentials for 例えば.com must be set in a variable
called `TF_TOKEN_xn--r8j3dr99h_com`.

Hyphens are also valid within host names but usually invalid as variable names and
may be encoded as double underscores. For example, you can set a token for the domain name
`café.fr` as `TF_TOKEN_xn--caf-dma.fr`, `TF_TOKEN_xn--caf-dma_fr`, or `TF_TOKEN_xn____caf__dma_fr`.
If multiple variables evaluate to the same hostname, Terraform will choose the one defined last
in the operating system's variable table.
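The naming rules above can be made concrete with a short Go sketch (illustrative only; `envVarCandidates` is a hypothetical helper, not part of Terraform) that derives the acceptable variable names for a hostname already in punycode form:

```go
package main

import (
	"fmt"
	"strings"
)

// envVarCandidates lists the variable names that may carry a token for a
// punycoded hostname: the name as-is, with periods translated to
// underscores, and additionally with hyphens translated to double
// underscores.
func envVarCandidates(host string) []string {
	asIs := "TF_TOKEN_" + host
	dots := "TF_TOKEN_" + strings.ReplaceAll(host, ".", "_")
	hyphens := "TF_TOKEN_" + strings.ReplaceAll(strings.ReplaceAll(host, "-", "__"), ".", "_")
	return []string{asIs, dots, hyphens}
}

func main() {
	// For café.fr (punycode xn--caf-dma.fr), prints the three forms
	// described above.
	for _, name := range envVarCandidates("xn--caf-dma.fr") {
		fmt.Println(name)
	}
}
```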
### Credentials Helpers


@ -122,6 +122,29 @@ referencing the managed resource values through a `local` value.
~> **NOTE:** **In Terraform 0.12 and earlier**, due to the data resource behavior of deferring the read until the apply phase when depending on values that are not yet known, using `depends_on` with `data` resources will force the read to always be deferred to the apply phase, and therefore a configuration that uses `depends_on` with a `data` resource can never converge. Due to this behavior, we do not recommend using `depends_on` with data resources.
## Custom Condition Checks
You can use `precondition` and `postcondition` blocks to specify assumptions and guarantees about how the data source operates. The following example creates a postcondition that checks whether the AMI has the correct tags.
``` hcl
data "aws_ami" "example" {
id = var.aws_ami_id
lifecycle {
# The AMI ID must refer to an existing AMI that has the tag "nomad-server".
postcondition {
condition = self.tags["Component"] == "nomad-server"
error_message = "tags[\"Component\"] must be \"nomad-server\"."
}
}
}
```
Custom conditions can help capture assumptions, helping future maintainers understand the configuration design and intent. They also return useful information about errors earlier and in context, helping consumers more easily diagnose issues in their configurations.
Refer to [Custom Condition Checks](/language/expressions/custom-conditions#preconditions-and-postconditions) for more details.
## Multiple Resource Instances
Data resources support [`count`](/language/meta-arguments/count)


@ -39,6 +39,14 @@ The condition can be any expression that resolves to a boolean value. This will
usually be an expression that uses the equality, comparison, or logical
operators.
### Custom Condition Checks
You can create conditions that produce custom error messages for several types of objects in a configuration. For example, you can add a condition to an input variable that checks whether incoming image IDs are formatted properly.
Custom conditions can help capture assumptions, helping future maintainers understand the configuration design and intent. They also return useful information about errors earlier and in context, helping consumers more easily diagnose issues in their configurations.
Refer to [Custom Condition Checks](/language/expressions/custom-conditions#input-variable-validation) for details.
## Result Types
The two result values may be of any type, but they must both


@ -0,0 +1,357 @@
---
page_title: Custom Condition Checks - Configuration Language
description: >-
Check custom requirements for variables, outputs, data sources, and resources and provide better error messages in context.
---
# Custom Condition Checks
You can create conditions that produce custom error messages for several types of objects in a configuration. For example, you can add a condition to an input variable that checks whether incoming image IDs are formatted properly.
Custom conditions can help capture assumptions, helping future maintainers understand the configuration design and intent. They also return useful information about errors earlier and in context, helping consumers more easily diagnose issues in their configurations.
This page explains the following:
- Creating [validation conditions](#input-variable-validation) for input variables
- Creating [preconditions and postconditions](#preconditions-and-postconditions) for resources, data sources, and outputs
- Writing effective [condition expressions](#condition-expressions) and [error messages](#error-messages)
- When Terraform [evaluates custom conditions](#conditions-checked-only-during-apply) during the plan and apply cycle
## Input Variable Validation
-> **Note:** Input variable validation is available in Terraform CLI v0.13.0 and later.
Add one or more `validation` blocks within the `variable` block to specify custom conditions. Each validation requires a [`condition` argument](#condition-expressions), an expression that must use the value of the variable to return `true` if the value is valid, or `false` if it is invalid. The expression can refer only to the containing variable and must not produce errors.
If the condition evaluates to `false`, Terraform produces an [error message](#error-messages) that includes the result of the `error_message` expression. If you declare multiple validations, Terraform returns error messages for all failed conditions.
The following example checks whether the AMI ID has valid syntax.
```hcl
variable "image_id" {
type = string
description = "The id of the machine image (AMI) to use for the server."
validation {
condition = length(var.image_id) > 4 && substr(var.image_id, 0, 4) == "ami-"
error_message = "The image_id value must be a valid AMI id, starting with \"ami-\"."
}
}
```
If the failure of an expression determines the validation decision, use the [`can` function](/language/functions/can) as demonstrated in the following example.
```hcl
variable "image_id" {
type = string
description = "The id of the machine image (AMI) to use for the server."
validation {
# regex(...) fails if it cannot find a match
condition = can(regex("^ami-", var.image_id))
error_message = "The image_id value must be a valid AMI id, starting with \"ami-\"."
}
}
```
## Preconditions and Postconditions
-> **Note:** Preconditions and postconditions are available in Terraform CLI v1.2.0 and later.
Use `precondition` and `postcondition` blocks to create custom rules for resources, data sources, and outputs.
Terraform checks a precondition _before_ evaluating the object it is associated with and checks a postcondition _after_ evaluating the object. Terraform evaluates custom conditions as early as possible, but must defer conditions that depend on unknown values until the apply phase. Refer to [Conditions Checked Only During Apply](#conditions-checked-only-during-apply) for more details.
### Usage
Each precondition and postcondition requires a [`condition` argument](#condition-expressions). This is an expression that must return `true` if the condition is fulfilled or `false` if it is invalid. The expression can refer to any other objects in the same module, as long as the references do not create cyclic dependencies. Resource postconditions can also use the [`self` object](#self-object) to refer to attributes of each instance of the resource where they are configured.
If the condition evaluates to `false`, Terraform will produce an [error message](#error-messages) that includes the result of the `error_message` expression. If you declare multiple preconditions or postconditions, Terraform returns error messages for all failed conditions.
The following example uses a postcondition to detect if the caller accidentally provided an AMI intended for the wrong system component.
``` hcl
data "aws_ami" "example" {
id = var.aws_ami_id
lifecycle {
# The AMI ID must refer to an existing AMI that has the tag "nomad-server".
postcondition {
condition = self.tags["Component"] == "nomad-server"
error_message = "tags[\"Component\"] must be \"nomad-server\"."
}
}
}
```
#### Resources and Data Sources
The `lifecycle` block inside a `resource` or `data` block can include both `precondition` and `postcondition` blocks.
- Terraform evaluates `precondition` blocks after evaluating existing `count` and `for_each` arguments. This lets Terraform evaluate the precondition separately for each instance and then make `each.key`, `count.index`, etc. available to those conditions. Terraform also evaluates preconditions before evaluating the resource's configuration arguments. Preconditions can take precedence over argument evaluation errors.
- Terraform evaluates `postcondition` blocks after planning and applying changes to a managed resource, or after reading from a data source. Postcondition failures prevent changes to other resources that depend on the failing resource.
#### Outputs
An `output` block can include a `precondition` block.
Preconditions can serve a symmetrical purpose to input variable `validation` blocks. Whereas input variable validation checks assumptions the module makes about its inputs, preconditions check guarantees that the module makes about its outputs. You can use preconditions to prevent Terraform from saving an invalid new output value in the state. You can also use them to preserve the output value from the previous apply, if applicable.
Terraform evaluates output value preconditions before evaluating the `value` expression to finalize the result. Preconditions can take precedence over potential errors in the `value` expression.
### Examples
The following example shows use cases for preconditions and postconditions. The preconditions and postconditions declare the following assumptions and guarantees.
- **The AMI ID must refer to an AMI that contains an operating system for the
`x86_64` architecture.** The precondition would detect if the caller accidentally built an AMI for a different architecture, which may not be able to run the software this virtual machine is intended to host.
- **The EC2 instance must be allocated a private DNS hostname.** In Amazon Web Services, EC2 instances are assigned private DNS hostnames only if they belong to a virtual network configured in a certain way. The postcondition would detect if the selected virtual network is not configured correctly, prompting the user to debug the network settings.
- **The EC2 instance will have an encrypted root volume.** The precondition ensures that the root volume is encrypted, even though the software running in this EC2 instance would probably still operate as expected on an unencrypted volume. This lets Terraform produce an error immediately, before any other components rely on the new EC2 instance.
```hcl
resource "aws_instance" "example" {
instance_type = "t2.micro"
ami = "ami-abc123"
lifecycle {
# The AMI ID must refer to an AMI that contains an operating system
# for the `x86_64` architecture.
precondition {
condition = data.aws_ami.example.architecture == "x86_64"
error_message = "The selected AMI must be for the x86_64 architecture."
}
# The EC2 instance must be allocated a private DNS hostname.
postcondition {
condition = self.private_dns != ""
error_message = "EC2 instance must be in a VPC that has private DNS hostnames enabled."
}
}
}
data "aws_ebs_volume" "example" {
# Use data resources that refer to other resources to
# load extra data that isn't directly exported by a resource.
#
# Read the details about the root storage volume for the EC2 instance
# declared by aws_instance.example, using the exported ID.
filter {
name = "volume-id"
values = [aws_instance.example.root_block_device.volume_id]
}
}
output "api_base_url" {
value = "https://${aws_instance.example.private_dns}:8433/"
# The EC2 instance will have an encrypted root volume.
precondition {
condition = data.aws_ebs_volume.example.encrypted
error_message = "The server's root volume is not encrypted."
}
}
```
### Choosing Between Preconditions and Postconditions
You can often implement a validation check as either a postcondition of the resource producing the data or as a precondition of a resource or output value using the data. To decide which is most appropriate, consider whether the check is representing either an assumption or a guarantee.
#### Use Preconditions for Assumptions
An assumption is a condition that must be true in order for the configuration of a particular resource to be usable. For example, an `aws_instance` configuration can have the assumption that the given AMI will always be configured for the `x86_64` CPU architecture.
We recommend using preconditions for assumptions, so that future maintainers can find them close to the other expressions that rely on that condition. This lets them understand more about what that resource is intended to allow.
#### Use Postconditions for Guarantees
A guarantee is a characteristic or behavior of an object that the rest of the configuration should be able to rely on. For example, an `aws_instance` configuration can have the guarantee that an EC2 instance will be running in a network that assigns it a private DNS record.
We recommend using postconditions for guarantees, so that future maintainers can find them close to the resource configuration that is responsible for implementing those guarantees. This lets them more easily determine which behaviors they should preserve when changing the configuration.
#### Additional Decision Factors
You should also consider the following questions when creating preconditions and postconditions.
- Which resource or output value would be most helpful to report in the error message? Terraform will always report errors in the location where the condition was declared.
- Which approach is more convenient? If a particular resource has many dependencies that all make an assumption about that resource, it can be pragmatic to declare that once as a postcondition of the resource, rather than declaring it many times as preconditions on each of the dependencies.
- Is it helpful to declare the same or similar conditions as both preconditions and postconditions? This can be useful if the postcondition is in a different module than the precondition because it lets the modules verify one another as they evolve independently.
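For example, a producing module can state a guarantee as a postcondition while a consuming module restates the same expectation as a precondition. The sketch below uses hypothetical names: `var.instance_private_dns` is an input variable the consuming module is assumed to declare.
```hcl
# In the producing module: guarantee that the instance gets a
# private DNS hostname.
resource "aws_instance" "example" {
  instance_type = "t2.micro"
  ami           = "ami-abc123"
  lifecycle {
    postcondition {
      condition     = self.private_dns != ""
      error_message = "EC2 instance must have a private DNS hostname."
    }
  }
}
```
```hcl
# In a separate consuming module: restate the expectation as a
# precondition on the resource that relies on it.
resource "aws_route53_record" "example" {
  # (other arguments omitted)
  lifecycle {
    precondition {
      condition     = var.instance_private_dns != ""
      error_message = "The given instance must have a private DNS hostname."
    }
  }
}
```
As the two modules evolve independently, each side's condition documents and enforces the shared expectation.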
## Condition Expressions
Input variable validation, preconditions, and postconditions all require a `condition` argument. This is a boolean expression that should return `true` if the intended assumption or guarantee is fulfilled or `false` if it is not.
You can use any of Terraform's built-in functions or language operators
in a condition as long as the expression is valid and returns a boolean result. The following language features are particularly useful when writing condition expressions.
### Logical Operators
Use the logical operators `&&` (AND), `||` (OR), and `!` (NOT) to combine multiple conditions together.
```hcl
condition = var.name != "" && lower(var.name) == var.name
```
You can also use arithmetic operators (e.g., `a + b`), equality operators (e.g., `a == b`), and comparison operators (e.g., `a < b`). Refer to [Arithmetic and Logical Operators](/language/expressions/operators) for details.
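For example, comparison operators can enforce a numeric range. This is a sketch with a hypothetical `var.disk_size_gb` input variable:
```hcl
condition = var.disk_size_gb >= 8 && var.disk_size_gb <= 1024
```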
### `contains` Function
Use the [`contains` function](/language/functions/contains) to test whether a given value is one of a set of predefined valid values.
```hcl
condition = contains(["STAGE", "PROD"], var.environment)
```
### `length` Function
Use the [`length` function](/language/functions/length) to test a collection's length and require a non-empty list or map.
```hcl
condition = length(var.items) != 0
```
This is a better approach than directly comparing with another collection using `==` or `!=` because the comparison operators can only return `true` if both operands have exactly the same type, which is often ambiguous for empty collections.
### `for` Expressions
Use [`for` expressions](/language/expressions/for) in conjunction with the functions `alltrue` and `anytrue` to test whether a condition holds for all or for any elements of a collection.
```hcl
condition = alltrue([
for v in var.instances : contains(["t2.micro", "m3.medium"], v.type)
])
```
### `can` Function
Use the [`can` function](/language/functions/can) to concisely use the validity of an expression as a condition. It returns `true` if its given expression evaluates successfully and `false` if it returns any error, so you can use various other functions that typically return errors as a part of your condition expressions.
For example, you can use `can` with `regex` to test if a string matches a particular pattern because `regex` returns an error when given a non-matching string.
```hcl
condition = can(regex("^[a-z]+$", var.name))
```
You can also use `can` with the type conversion functions to test whether a value is convertible to a type or type constraint.
```hcl
# This remote output value must have a value that can
# be used as a string, which includes strings themselves
# but also allows numbers and boolean values.
condition = can(tostring(data.terraform_remote_state.example.outputs["name"]))
```
```hcl
# This remote output value must be convertible to a list.
condition = can(tolist(data.terraform_remote_state.example.outputs["items"]))
```
You can also use `can` with attribute access or index operators to test whether a collection or structural value has a particular element or index.
```hcl
# var.example must have an attribute named "foo"
condition = can(var.example.foo)
```
```hcl
# var.example must be a sequence with at least one element
condition = can(var.example[0])
# (although it would typically be clearer to write this as a
# test like length(var.example) > 0 to better represent the
# intent of the condition.)
```
### `self` Object
Use the `self` object in postcondition blocks to refer to attributes of the instance under evaluation.
```hcl
resource "aws_instance" "example" {
instance_type = "t2.micro"
ami = "ami-abc123"
lifecycle {
postcondition {
condition = self.instance_state == "running"
error_message = "EC2 instance must be running."
}
}
}
```
### `each` and `count` Objects
In blocks where [`for_each`](/language/meta-arguments/for_each) or [`count`](/language/meta-arguments/count) are set, use `each` and `count` objects to refer to other resources that are expanded in a chain.
```hcl
variable "vpc_cidrs" {
type = set(string)
}
data "aws_vpc" "example" {
for_each = var.vpc_cidrs
filter {
name = "cidr"
values = [each.key]
}
}
resource "aws_internet_gateway" "example" {
for_each = aws_vpc.example
vpc_id = each.value.id
lifecycle {
precondition {
condition = aws_vpc.example[each.key].state == "available"
error_message = "VPC ${each.key} must be available."
}
}
}
```
## Error Messages
Input variable validations, preconditions, and postconditions all must include the `error_message` argument. This contains the text that Terraform will include as part of error messages when it detects an unmet condition.
```
Error: Resource postcondition failed
with data.aws_ami.example,
on ec2.tf line 19, in data "aws_ami" "example":
72: condition = self.tags["Component"] == "nomad-server"
|----------------
| self.tags["Component"] is "consul-server"
The selected AMI must be tagged with the Component value "nomad-server".
```
The `error_message` argument can be any expression that evaluates to a string.
This includes literal strings, heredocs, and template expressions. Multi-line
error messages are supported, and lines with leading whitespace will not be
word wrapped.
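For example, a multi-line error message can be written with a heredoc. This sketch reuses the postcondition shown earlier on this page:
```hcl
postcondition {
  condition     = self.private_dns != ""
  error_message = <<-EOT
    EC2 instance must be in a VPC that has private DNS hostnames enabled.
    Check the enableDnsHostnames and enableDnsSupport settings on the VPC.
  EOT
}
```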
We recommend writing error messages as one or more full sentences in a
style similar to Terraform's own error messages. Terraform will show the
message alongside the name of the resource that detected the problem and any
external values included in the condition expression.
## Conditions Checked Only During Apply
Terraform evaluates custom conditions as early as possible.
Input variable validations can only refer to the variable value, so Terraform always evaluates them immediately. When Terraform evaluates preconditions and postconditions depends on whether the value(s) associated with the condition are known before or after applying the configuration.
- **Known before apply:** Terraform checks the condition during the planning phase. For example, Terraform can know the value of an image ID during planning as long as it is not generated from another resource.
- **Known after apply:** Terraform delays checking that condition until the apply phase. For example, AWS only assigns the root volume ID when it starts an EC2 instance, so Terraform cannot know this value until apply.
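Both cases can appear in one resource. In the following sketch, based on the examples earlier on this page, the precondition can typically be checked during planning because the AMI's architecture is read from the data source, while the postcondition must wait until apply:
```hcl
resource "aws_instance" "example" {
  instance_type = "t2.micro"
  ami           = "ami-abc123"
  lifecycle {
    # Typically checked during planning: the AMI's architecture is
    # read from the data source before apply.
    precondition {
      condition     = data.aws_ami.example.architecture == "x86_64"
      error_message = "The selected AMI must be for the x86_64 architecture."
    }
    # Checked during apply: AWS assigns private_dns only after the
    # instance is created.
    postcondition {
      condition     = self.private_dns != ""
      error_message = "EC2 instance must be in a VPC that has private DNS hostnames enabled."
    }
  }
}
```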
During the apply phase, a failed _precondition_
will prevent Terraform from implementing planned actions for the associated resource. However, a failed _postcondition_ will halt processing after Terraform has already implemented these actions. The failed postcondition prevents any further downstream actions that rely on the resource, but does not undo the actions Terraform has already taken.
Terraform typically has less information during the initial creation of a
full configuration than when applying subsequent changes. Therefore, Terraform may check conditions during apply for initial creation and then check them during planning for subsequent updates.
@ -1,395 +0,0 @@
---
page_title: Preconditions and Postconditions - Configuration Language
---
# Preconditions and Postconditions
Terraform providers can automatically detect and report problems related to
the remote system they are interacting with, but they typically do so using
language that describes implementation details of the target system, which
can sometimes make it hard to find the root cause of the problem in your
Terraform configuration.
Preconditions and postconditions allow you to optionally describe the
assumptions you are making as a module author, so that Terraform can detect
situations where those assumptions don't hold and potentially return an
error earlier or an error with better context about where the problem
originated.
Preconditions and postconditions both follow a similar structure, and differ
only in when Terraform evaluates them: Terraform checks a precondition prior
to evaluating the object it is associated with, and a postcondition _after_
evaluating the object. That means that preconditions are useful for stating
assumptions about data from elsewhere that the resource configuration relies
on, while postconditions are more useful for stating assumptions about the
result of the resource itself.
The following example shows some different possible uses of preconditions and
postconditions.
```hcl
variable "aws_ami_id" {
type = string
# Input variable validation can check that the AMI ID is syntactically valid.
validation {
condition = can(regex("^ami-", var.aws_ami_id))
error_message = "The AMI ID must have the prefix \"ami-\"."
}
}
data "aws_ami" "example" {
id = var.aws_ami_id
lifecycle {
# A data resource with a postcondition can ensure that the selected AMI
# meets this module's expectations, by reacting to the dynamically-loaded
# AMI attributes.
postcondition {
condition = self.tags["Component"] == "nomad-server"
error_message = "The selected AMI must be tagged with the Component value \"nomad-server\"."
}
}
}
resource "aws_instance" "example" {
instance_type = "t2.micro"
ami = "ami-abc123"
lifecycle {
# A resource with a precondition can ensure that the selected AMI
# is set up correctly to work with the instance configuration.
precondition {
condition = data.aws_ami.example.architecture == "x86_64"
error_message = "The selected AMI must be for the x86_64 architecture."
}
# A resource with a postcondition can react to server-decided values
# during the apply step and halt work immediately if the result doesn't
# meet expectations.
postcondition {
condition = self.private_dns != ""
error_message = "EC2 instance must be in a VPC that has private DNS hostnames enabled."
}
}
}
data "aws_ebs_volume" "example" {
# We can use data resources that refer to other resources in order to
# load extra data that isn't directly exported by a resource.
#
# This example reads the details about the root storage volume for
# the EC2 instance declared by aws_instance.example, using the exported ID.
filter {
name = "volume-id"
values = [aws_instance.example.root_block_device.volume_id]
}
}
output "api_base_url" {
value = "https://${aws_instance.example.private_dns}:8433/"
# An output value with a precondition can check the object that the
# output value is describing to make sure it meets expectations before
# any caller of this module can use it.
precondition {
condition = data.aws_ebs_volume.example.encrypted
error_message = "The server's root volume is not encrypted."
}
}
```
The input variable validation rule, preconditions, and postconditions in the
above example declare explicitly some assumptions and guarantees that the
module developer is making in the design of this module:
* The caller of the module must provide a syntactically-valid AMI ID in the
`aws_ami_id` input variable.
This would detect if the caller accidentally assigned an AMI name to the
argument, instead of an AMI ID.
* The AMI ID must refer to an AMI that exists and that has been tagged as
being intended for the component "nomad-server".
This would detect if the caller accidentally provided an AMI intended for
some other system component, which might otherwise be detected only after
booting the EC2 instance and noticing that the expected network service
isn't running. Terraform can therefore detect that problem earlier and
return a more actionable error message for it.
* The AMI ID must refer to an AMI which contains an operating system for the
`x86_64` architecture.
This would detect if the caller accidentally built an AMI for a different
architecture, which might therefore not be able to run the software this
virtual machine is intended to host.
* The EC2 instance must be allocated a private DNS hostname.
In AWS, EC2 instances are assigned private DNS hostnames only if they
belong to a virtual network configured in a certain way. This would
detect if the selected virtual network is not configured correctly,
giving explicit feedback to prompt the user to debug the network settings.
* The EC2 instance will have an encrypted root volume.
This ensures that the root volume is encrypted even though the software
running in this EC2 instance would probably still operate as expected
on an unencrypted volume. Therefore Terraform can draw attention to the
problem immediately, before any other components rely on the
insecurely-configured component.
Writing explicit preconditions and postconditions is always optional, but it
can be helpful to users and future maintainers of a Terraform module by
capturing assumptions that might otherwise be only implied, and by allowing
Terraform to check those assumptions and halt more quickly if they don't
hold in practice for a particular set of input variables.
## Precondition and Postcondition Locations
Terraform supports preconditions and postconditions in a number of different
locations in a module:
* The `lifecycle` block inside a `resource` or `data` block can include both
`precondition` and `postcondition` blocks associated with the containing
resource.
Terraform evaluates resource preconditions before evaluating the resource's
configuration arguments. Resource preconditions can take precedence over
argument evaluation errors.
Terraform evaluates resource postconditions after planning and after
applying changes to a managed resource, or after reading from a data
resource. Resource postcondition failures will therefore prevent applying
changes to other resources that depend on the failing resource.
* An `output` block declaring an output value can include a `precondition`
block.
Terraform evaluates output value preconditions before evaluating the
`value` expression to finalize the result. Output value preconditions
can take precedence over potential errors in the `value` expression.
Output value preconditions can be particularly useful in a root module,
to prevent saving an invalid new output value in the state and to preserve
the value from the previous apply, if any.
Output value preconditions can serve a symmetrical purpose to input
variable `validation` blocks: whereas input variable validation checks
assumptions the module makes about its inputs, output value preconditions
check guarantees that the module makes about its outputs.
## Condition Expressions
`precondition` and `postcondition` blocks both require an argument named
`condition`, whose value is a boolean expression which should return `true`
if the intended assumption holds or `false` if it does not.
Preconditions and postconditions can both refer to any other objects in the
same module, as long as the references don't create any cyclic dependencies.
Resource postconditions can additionally refer to attributes of each instance
of the resource where they are configured, using the special symbol `self`.
For example, `self.private_dns` refers to the `private_dns` attribute of
each instance of the containing resource.
Condition expressions are otherwise just normal Terraform expressions, and
so you can use any of Terraform's built-in functions or language operators
as long as the expression is valid and returns a boolean result.
### Common Condition Expression Features
Because condition expressions must produce boolean results, they can often
use built-in functions and language features that are less common elsewhere
in the Terraform language. The following language features are particularly
useful when writing condition expressions:
* You can use the built-in function `contains` to test whether a given
value is one of a set of predefined valid values:
```hcl
condition = contains(["STAGE", "PROD"], var.environment)
```
* You can use the boolean operators `&&` (AND), `||` (OR), and `!` (NOT) to
combine multiple simpler conditions together:
```hcl
condition = var.name != "" && lower(var.name) == var.name
```
* You can require a non-empty list or map by testing the collection's length:
```hcl
condition = length(var.items) != 0
```
This is a better approach than directly comparing with another collection
using `==` or `!=`, because the comparison operators can only return `true`
if both operands have exactly the same type, which is often ambiguous
for empty collections.
* You can use `for` expressions which produce lists of boolean results
themselves in conjunction with the functions `alltrue` and `anytrue` to
test whether a condition holds for all or for any elements of a collection:
```hcl
condition = alltrue([
for v in var.instances : contains(["t2.micro", "m3.medium"], v.type)
])
```
* You can use the `can` function to concisely use the validity of an expression
as a condition. It returns `true` if its given expression evaluates
successfully and `false` if it returns any error, so you can use various
other functions that typically return errors as a part of your condition
expressions.
For example, you can use `can` with `regex` to test if a string matches
a particular pattern, because `regex` returns an error when given a
non-matching string:
```hcl
condition = can(regex("^[a-z]+$", var.name))
```
You can also use `can` with the type conversion functions to test whether
a value is convertible to a type or type constraint:
```hcl
# This remote output value must have a value that can
# be used as a string, which includes strings themselves
# but also allows numbers and boolean values.
condition = can(tostring(data.terraform_remote_state.example.outputs["name"]))
```
```hcl
# This remote output value must be convertible to a list.
condition = can(tolist(data.terraform_remote_state.example.outputs["items"]))
```
You can also use `can` with attribute access or index operators to
concisely test whether a collection or structural value has a particular
element or index:
```hcl
# var.example must have an attribute named "foo"
condition = can(var.example.foo)
```
```hcl
# var.example must be a sequence with at least one element
condition = can(var.example[0])
# (although it would typically be clearer to write this as a
# test like length(var.example) > 0 to better represent the
# intent of the condition.)
```
## Early Evaluation
Terraform will evaluate conditions as early as possible.
If the condition expression depends on a resource attribute that won't be known
until the apply phase then Terraform will delay checking the condition until
the apply phase, but Terraform can check all other expressions during the
planning phase, and therefore block applying a plan that would violate the
conditions.
In the earlier example on this page, Terraform would typically be able to
detect invalid AMI tags during the planning phase, as long as `var.aws_ami_id`
is not itself derived from another resource. However, Terraform will not
detect a non-encrypted root volume until the EC2 instance has been created
during the apply step, because that condition depends on the root volume's
assigned ID, which AWS decides only when the EC2 instance is actually started.
For conditions which Terraform must defer to the apply phase, a _precondition_
will prevent taking whatever action was planned for a related resource, whereas
a _postcondition_ will merely halt processing after that action was already
taken, preventing any downstream actions that rely on it but not undoing the
action.
Terraform typically has less information during the initial creation of a
full configuration than when applying subsequent changes to that configuration.
Conditions checked only during apply during initial creation may therefore
be checked during planning on subsequent updates, detecting problems sooner
in that case.
## Error Messages
Each `precondition` or `postcondition` block must include an argument
`error_message`, which provides a custom message that Terraform
will include as part of error messages when it detects an unmet condition.
```
Error: Resource postcondition failed
with data.aws_ami.example,
on ec2.tf line 19, in data "aws_ami" "example":
72: condition = self.tags["Component"] == "nomad-server"
|----------------
| self.tags["Component"] is "consul-server"
The selected AMI must be tagged with the Component value "nomad-server".
```
The `error_message` argument can be any expression which evaluates to a string.
This includes literal strings, heredocs, and template expressions. Multi-line
error messages are supported, and lines with leading whitespace will not be
word wrapped.
Error messages should typically be written as one or more full sentences in a
style similar to Terraform's own error messages. Terraform will show the given
message alongside the name of the resource that detected the problem and any
outside values used as part of the condition expression.
## Preconditions or Postconditions?
Because preconditions can refer to the result attributes of other resources
in the same module, it's typically true that a particular check could be
implemented either as a postcondition of the resource producing the data
or as a precondition of a resource or output value using the data.
To decide which is most appropriate for a particular situation, consider
whether the check is representing either an assumption or a guarantee:
* An _assumption_ is a condition that must be true in order for the
configuration of a particular resource to be usable. In the earlier
example on this page, the `aws_instance` configuration had the _assumption_
that the given AMI will always be for the `x86_64` CPU architecture.
Assumptions should typically be written as preconditions, so that future
maintainers can find them close to the other expressions that rely on
that condition, and thus know more about what different variations that
resource is intended to allow.
* A _guarantee_ is a characteristic or behavior of an object that the rest of
the configuration ought to be able to rely on. In the earlier example on
this page, the `aws_instance` configuration had the _guarantee_ that the
EC2 instance will be running in a network that assigns it a private DNS
record.
Guarantees should typically be written as postconditions, so that
future maintainers can find them close to the resource configuration that
is responsible for implementing those guarantees and more easily see
which behaviors are important to preserve when changing the configuration.
In practice though, the distinction between these two is subjective: is the
AMI being tagged as Component `"nomad-server"` a guarantee about the AMI or
an assumption made by the EC2 instance? To decide, it might help to consider
which resource or output value would be most helpful to report in a resulting
error message, because Terraform will always report errors in the location
where the condition was declared.
The decision between the two may also be a matter of convenience. If a
particular resource has many dependencies that _all_ make an assumption about
that resource then it can be pragmatic to declare that just once as a
postcondition of the resource, rather than many times as preconditions on
each of the dependencies.
It may sometimes be helpful to declare the same or similar conditions as both
preconditions _and_ postconditions, particularly if the postcondition is
in a different module than the precondition, so that they can verify one
another as the two modules evolve independently.
@ -110,6 +110,30 @@ The following arguments can be used within a `lifecycle` block:
Only attributes defined by the resource type can be ignored.
`ignore_changes` cannot be applied to itself or to any other meta-arguments.
## Custom Condition Checks
You can add `precondition` and `postcondition` blocks within a `lifecycle` block to specify assumptions and guarantees about how resources and data sources operate. The following example creates a precondition that checks whether the AMI is properly configured.
```hcl
resource "aws_instance" "example" {
instance_type = "t2.micro"
ami = "ami-abc123"
lifecycle {
# The AMI ID must refer to an AMI that contains an operating system
# for the `x86_64` architecture.
precondition {
condition = data.aws_ami.example.architecture == "x86_64"
error_message = "The selected AMI must be for the x86_64 architecture."
}
}
}
```
Custom conditions can help capture assumptions, helping future maintainers understand the configuration design and intent. They also return useful information about errors earlier and in context, helping consumers more easily diagnose issues in their configurations.
Refer to [Custom Conditions](/language/expressions/custom-conditions#preconditions-and-postconditions) for more details.
## Literal Values Only
The `lifecycle` settings all affect how Terraform constructs and traverses
@ -186,6 +186,29 @@ be given inline as a single resource, but we can also compose together multiple
modules as described elsewhere on this page in situations where the
dependencies themselves are complicated enough to benefit from abstractions.
## Assumptions and Guarantees
Every module has implicit assumptions and guarantees that define what data it expects and what data it produces for consumers.
- **Assumption:** A condition that must be true in order for the configuration of a particular resource to be usable. For example, an `aws_instance` configuration can have the assumption that the given AMI will always be configured for the `x86_64` CPU architecture.
- **Guarantee:** A characteristic or behavior of an object that the rest of the configuration should be able to rely on. For example, an `aws_instance` configuration can have the guarantee that an EC2 instance will be running in a network that assigns it a private DNS record.
We recommend using [custom conditions](/language/expressions/custom-conditions) to help capture and test for assumptions and guarantees. This helps future maintainers understand the configuration design and intent. Custom conditions also return useful information about errors earlier and in context, helping consumers more easily diagnose issues in their configurations.
The following example creates a precondition that checks whether the EC2 instance has an encrypted root volume.
```hcl
output "api_base_url" {
value = "https://${aws_instance.example.private_dns}:8433/"
# The EC2 instance must have an encrypted root volume.
precondition {
condition = data.aws_ebs_volume.example.encrypted
error_message = "The server's root volume is not encrypted."
}
}
```
## Multi-cloud Abstractions
Terraform itself intentionally does not attempt to abstract over similar
@ -131,6 +131,30 @@ The following meta-arguments are documented on separate pages:
- [`lifecycle`, for lifecycle customizations](/language/meta-arguments/lifecycle)
- [`provisioner`, for taking extra actions after resource creation](/language/resources/provisioners/syntax)
## Custom Condition Checks
You can use `precondition` and `postcondition` blocks to specify assumptions and guarantees about how the resource operates. The following example creates a precondition that checks whether the AMI is properly configured.
```hcl
resource "aws_instance" "example" {
instance_type = "t2.micro"
ami = "ami-abc123"
lifecycle {
# The AMI ID must refer to an AMI that contains an operating system
# for the `x86_64` architecture.
precondition {
condition = data.aws_ami.example.architecture == "x86_64"
error_message = "The selected AMI must be for the x86_64 architecture."
}
}
}
```
Custom conditions can help capture assumptions, helping future maintainers understand the configuration design and intent. They also return useful information about errors earlier and in context, helping consumers more easily diagnose issues in their configurations.
Refer to [Custom Condition Checks](/language/expressions/custom-conditions#preconditions-and-postconditions) for more details.
## Operation Timeouts
Some resource types provide a special `timeouts` nested block argument that
@ -9,7 +9,7 @@ Stores the state as a Blob with the given Key within the Blob Container within [
This backend supports state locking and consistency checking with Azure Blob Storage native capabilities.
-> **Note:** By default the Azure Backend uses ADAL for authentication which is deprecated in favour of MSAL - MSAL can be used by setting `use_microsoft_graph` to `true`. **The default for this will change in Terraform 1.2**, so that MSAL authentication is used by default.
-> **Note:** In Terraform 1.2 the Azure Backend uses MSAL (and Microsoft Graph) rather than ADAL (and Azure Active Directory Graph) for authentication by default - you can disable this by setting `use_microsoft_graph` to `false`. **This setting will be removed in Terraform 1.3, due to Microsoft's deprecation of ADAL**.
## Example Configuration
@ -219,15 +219,13 @@ When authenticating using the Managed Service Identity (MSI) - the following fie
* `msi_endpoint` - (Optional) The path to a custom Managed Service Identity endpoint which is automatically determined if not specified. This can also be sourced from the `ARM_MSI_ENDPOINT` environment variable.
* `subscription_id` - (Optional) The Subscription ID in which the Storage Account exists. This can also be sourced from the `ARM_SUBSCRIPTION_ID` environment variable.
* `tenant_id` - (Optional) The Tenant ID in which the Subscription exists. This can also be sourced from the `ARM_TENANT_ID` environment variable.
* `use_microsoft_graph` - (Optional) Should MSAL be used for authentication instead of ADAL, and should Microsoft Graph be used instead of Azure Active Directory Graph? Defaults to `false`.
* `use_microsoft_graph` - (Optional) Should MSAL be used for authentication instead of ADAL, and should Microsoft Graph be used instead of Azure Active Directory Graph? Defaults to `true`.
-> **Note:** By default the Azure Backend uses ADAL for authentication which is deprecated in favour of MSAL - MSAL can be used by setting `use_microsoft_graph` to `true`. **The default for this will change in Terraform 1.2**, so that MSAL authentication is used by default.
-> **Note:** In Terraform 1.2 the Azure Backend uses MSAL (and Microsoft Graph) rather than ADAL (and Azure Active Directory Graph) for authentication by default - you can disable this by setting `use_microsoft_graph` to `false`. **This setting will be removed in Terraform 1.3, due to Microsoft's deprecation of ADAL**.
* `use_msi` - (Optional) Should Managed Service Identity authentication be used? This can also be sourced from the `ARM_USE_MSI` environment variable.
@ -251,9 +249,9 @@ When authenticating using AzureAD Authentication - the following fields are also
-> **Note:** When using AzureAD for Authentication to Storage you also need to ensure the `Storage Blob Data Owner` role is assigned.
* `use_microsoft_graph` - (Optional) Should MSAL be used for authentication instead of ADAL, and should Microsoft Graph be used instead of Azure Active Directory Graph? Defaults to `true`.
-> **Note:** In Terraform 1.2 the Azure Backend uses MSAL (and Microsoft Graph) rather than ADAL (and Azure Active Directory Graph) for authentication by default - you can disable this by setting `use_microsoft_graph` to `false`. **This setting will be removed in Terraform 1.3, due to Microsoft's deprecation of ADAL**.
***
When authenticating using a Service Principal with a Client Certificate - the following fields are also supported:
* `tenant_id` - (Optional) The Tenant ID in which the Subscription exists. This can also be sourced from the `ARM_TENANT_ID` environment variable.
* `use_microsoft_graph` - (Optional) Should MSAL be used for authentication instead of ADAL, and should Microsoft Graph be used instead of Azure Active Directory Graph? Defaults to `true`.
-> **Note:** In Terraform 1.2 the Azure Backend uses MSAL (and Microsoft Graph) rather than ADAL (and Azure Active Directory Graph) for authentication by default - you can disable this by setting `use_microsoft_graph` to `false`. **This setting will be removed in Terraform 1.3, due to Microsoft's deprecation of ADAL**.
***
When authenticating using a Service Principal with a Client Secret - the following fields are also supported:
* `tenant_id` - (Optional) The Tenant ID in which the Subscription exists. This can also be sourced from the `ARM_TENANT_ID` environment variable.
* `use_microsoft_graph` - (Optional) Should MSAL be used for authentication instead of ADAL, and should Microsoft Graph be used instead of Azure Active Directory Graph? Defaults to `true`.
-> **Note:** In Terraform 1.2 the Azure Backend uses MSAL (and Microsoft Graph) rather than ADAL (and Azure Active Directory Graph) for authentication by default - you can disable this by setting `use_microsoft_graph` to `false`. **This setting will be removed in Terraform 1.3, due to Microsoft's deprecation of ADAL**.
We recommend upgrading one major version at a time until you reach Terraform v0.14,
following the upgrade guides of each of those versions, because those earlier
versions include mechanisms to automatically detect necessary changes to your
configuration, and in some cases also automatically edit your configuration
to include those changes. Once you reach Terraform v0.14 you can then skip
directly from there to Terraform v1.0.
The following table summarizes the above recommendations.
In a parent module, outputs of child modules are available in expressions as `module.<MODULE NAME>.<OUTPUT NAME>`. For example, if a child module named `web_server` declared an output named `instance_ip_addr`, you could access that value as `module.web_server.instance_ip_addr`.
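As a sketch of that pattern (the module path and resource names are hypothetical):

```hcl
# modules/web_server/outputs.tf (child module)
output "instance_ip_addr" {
  value = aws_instance.server.private_ip
}

# main.tf (parent module)
module "web_server" {
  source = "./modules/web_server"
}

output "web_server_ip" {
  # Re-export the child module's output from the parent module.
  value = module.web_server.instance_ip_addr
}
```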
## Custom Condition Checks
You can use `precondition` blocks to specify guarantees about output data. The following example creates a precondition that checks whether the EC2 instance has an encrypted root volume.
```hcl
output "api_base_url" {
  value = "https://${aws_instance.example.private_dns}:8433/"

  # The EC2 instance must have an encrypted root volume.
  precondition {
    condition     = data.aws_ebs_volume.example.encrypted
    error_message = "The server's root volume is not encrypted."
  }
}
```
Custom conditions can capture assumptions, helping future maintainers understand the configuration's design and intent. They also surface useful error information earlier and in context, making it easier for consumers to diagnose issues in their configurations.
Refer to [Custom Condition Checks](/language/expressions/custom-conditions#preconditions-and-postconditions) for more details.
## Optional Arguments
`output` blocks can optionally include `description`, `sensitive`, and `depends_on` arguments, which are described in the following sections.
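For example, a sketch combining two of these arguments (the resource and attribute names are illustrative):

```hcl
output "db_password" {
  value       = aws_db_instance.db.password
  description = "The password for logging in to the database."

  # Mask this value in `terraform plan` and `terraform apply` output.
  sensitive = true
}
```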
For commentary for module maintainers, use comments.
-> This feature was introduced in Terraform CLI v0.13.0.
You can specify custom validation rules for a particular variable by adding a `validation` block within the corresponding `variable` block. The example below checks whether the AMI ID has the correct syntax.
```hcl
variable "image_id" {
  type        = string
  description = "The id of the machine image (AMI) to use for the server."

  validation {
    condition     = length(var.image_id) > 4 && substr(var.image_id, 0, 4) == "ami-"
    error_message = "The image_id value must be a valid AMI id, starting with \"ami-\"."
  }
}
```
The `condition` argument is an expression that must use the value of the
variable to return `true` if the value is valid, or `false` if it is invalid.
The expression can refer only to the variable that the condition applies to,
and _must not_ produce errors.
If the failure of an expression is the basis of the validation decision, use
[the `can` function](/language/functions/can) to detect such errors. For example:
```hcl
variable "image_id" {
  type        = string
  description = "The id of the machine image (AMI) to use for the server."

  validation {
    # regex(...) fails if it cannot find a match
    condition     = can(regex("^ami-", var.image_id))
    error_message = "The image_id value must be a valid AMI id, starting with \"ami-\"."
  }
}
```
If `condition` evaluates to `false`, Terraform will produce an error message
that includes the result of the `error_message` expression. The error message
should be at least one full sentence explaining the constraint that failed,
using a sentence structure similar to the above examples.
Error messages can be literal strings, heredocs, or template expressions. The
only valid reference in an error message is the variable under validation.
Multiple `validation` blocks can be declared, in which case error messages
will be returned for _all_ failed conditions.
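For example, a variable can carry two independent rules; this sketch assumes the same `image_id` variable as above, and the fixed-length check is purely illustrative:

```hcl
variable "image_id" {
  type = string

  validation {
    condition     = can(regex("^ami-", var.image_id))
    error_message = "The image_id value must start with \"ami-\"."
  }

  validation {
    # Illustrative assumption: 17-character AMI suffixes (e.g. ami-0abcdef1234567890).
    condition     = length(var.image_id) == 21
    error_message = "The image_id value must be exactly 21 characters long."
  }
}
```

If a supplied value violates both rules, Terraform reports both error messages rather than stopping at the first failure.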
Refer to [Custom Condition Checks](/language/expressions/custom-conditions#input-variable-validation) for more details.
### Suppressing Values in CLI Output