Merge branch 'main' into update-internal/configs

This commit is contained in:
Elbaz 2023-08-23 11:41:13 +03:00
commit 58853d209c
49 changed files with 349 additions and 345 deletions

View File

@@ -11,7 +11,7 @@
# built by the (closed-source) official release process.
FROM docker.mirror.hashicorp.services/golang:alpine
LABEL maintainer="HashiCorp Terraform Team <terraform@hashicorp.com>"
LABEL maintainer="OpenTF Team <opentf@opentf.org>"
RUN apk add --no-cache git bash openssh

View File

@@ -5,7 +5,7 @@ generate:
go generate ./...
# We separate the protobuf generation because most development tasks on
# Terraform do not involve changing protobuf files and protoc is not a
# OpenTF do not involve changing protobuf files and protoc is not a
# go-gettable dependency and so getting it installed can be inconvenient.
#
# If you are working on changes to protobuf interfaces, run this Makefile
@@ -43,4 +43,4 @@ website/build-local:
# under parallel conditions.
.NOTPARALLEL:
.PHONY: fmtcheck importscheck generate protobuf staticcheck website website/local website/build-local

View File

@@ -1,23 +1,23 @@
# Terraform Core Codebase Documentation
# OpenTF Core Codebase Documentation
This directory contains some documentation about the Terraform Core codebase,
This directory contains some documentation about the OpenTF Core codebase,
aimed at readers who are interested in making code contributions.
If you're looking for information on _using_ Terraform, please instead refer
to [the main Terraform CLI documentation](https://www.terraform.io/docs/cli/index.html).
If you're looking for information on _using_ OpenTF, please instead refer
to [the main OpenTF CLI documentation](https://www.terraform.io/docs/cli/index.html).
## Terraform Core Architecture Documents
## OpenTF Core Architecture Documents
* [Terraform Core Architecture Summary](./architecture.md): an overview of the
main components of Terraform Core and how they interact. This is the best
* [OpenTF Core Architecture Summary](./architecture.md): an overview of the
main components of OpenTF Core and how they interact. This is the best
starting point if you are diving in to this codebase for the first time.
* [Resource Instance Change Lifecycle](./resource-instance-change-lifecycle.md):
a description of the steps in validating, planning, and applying a change
to a resource instance, from the perspective of the provider plugin RPC
operations. This may be useful for understanding the various expectations
Terraform enforces about provider behavior, either if you intend to make
changes to those behaviors or if you are implementing a new Terraform plugin
OpenTF enforces about provider behavior, either if you intend to make
changes to those behaviors or if you are implementing a new OpenTF plugin
SDK and so wish to conform to them.
(If you are planning to write a new provider using the _official_ SDK then
@@ -31,10 +31,10 @@ to [the main Terraform CLI documentation](https://www.terraform.io/docs/cli/inde
This documentation is for SDK developers, and is not necessary reading for
those implementing a provider using the official SDK.
* [How Terraform Uses Unicode](./unicode.md): an overview of the various
features of Terraform that rely on Unicode and how to change those features
* [How OpenTF Uses Unicode](./unicode.md): an overview of the various
features of OpenTF that rely on Unicode and how to change those features
to adopt new versions of Unicode.
## Contribution Guides
* [Contributing to Terraform](../.github/CONTRIBUTING.md): a complete guideline for those who want to contribute to this project.
* [Contributing to OpenTF](../.github/CONTRIBUTING.md): a complete guideline for those who want to contribute to this project.

View File

@@ -1,26 +1,26 @@
# Terraform Core Architecture Summary
# OpenTF Core Architecture Summary
This document is a summary of the main components of Terraform Core and how
This document is a summary of the main components of OpenTF Core and how
data and requests flow between these components. It's intended as a primer
to help navigate the codebase to dig into more details.
We assume some familiarity with user-facing Terraform concepts like
configuration, state, CLI workflow, etc. The Terraform website has
We assume some familiarity with user-facing OpenTF concepts like
configuration, state, CLI workflow, etc. The OpenTF website has
documentation on these ideas.
## Terraform Request Flow
## OpenTF Request Flow
The following diagram shows an approximation of how a user command is
executed in Terraform:
executed in OpenTF:
![Terraform Architecture Diagram, described in text below](./images/architecture-overview.png)
![OpenTF Architecture Diagram, described in text below](./images/architecture-overview.png)
Each of the different subsystems (solid boxes) in this diagram is described
in more detail in a corresponding section below.
## CLI (`command` package)
Each time a user runs the `terraform` program, aside from some initial
Each time a user runs the `opentf` program, aside from some initial
bootstrapping in the root package (not shown in the diagram) execution
transfers immediately into one of the "command" implementations in
[the `command` package](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/command).
@@ -29,8 +29,8 @@ their corresponding `command` package types can be found in the `commands.go`
file in the root of the repository.
The full flow illustrated above does not actually apply to _all_ commands,
but it applies to the main Terraform workflow commands `terraform plan` and
`terraform apply`, along with a few others.
but it applies to the main OpenTF workflow commands `opentf plan` and
`opentf apply`, along with a few others.
For these commands, the role of the command implementation is to read and parse
any command line arguments, command line options, and environment variables
@@ -62,18 +62,18 @@ the command-handling code calls `Operation` with the operation it has
constructed, and then the backend is responsible for executing that action.
Backends that execute operations, however, do so as an architectural implementation detail and not a
general feature of backends. That is, the term 'backend' as a Terraform feature is used to refer to
a plugin that determines where Terraform stores its state snapshots - only the default `local`
general feature of backends. That is, the term 'backend' as an OpenTF feature is used to refer to
a plugin that determines where OpenTF stores its state snapshots - only the default `local`
backend and Terraform Cloud's backends (`remote`, `cloud`) perform operations.
Thus, most backends do _not_ implement this interface, and so the `command` package wraps these
backends in an instance of
[`local.Local`](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/backend/local#Local),
causing the operation to be executed locally within the `terraform` process itself.
causing the operation to be executed locally within the `opentf` process itself.
## Backends
A _backend_ determines where Terraform should store its state snapshots.
A _backend_ determines where OpenTF should store its state snapshots.
As described above, the `local` backend also executes operations on behalf of most other
backends. It uses a _state manager_
@@ -86,7 +86,7 @@ initial processing/validation of the configuration specified in the
operation. It then uses these, along with the other settings given in the
operation, to construct a
[`terraform.Context`](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/terraform#Context),
which is the main object that actually performs Terraform operations.
which is the main object that actually performs OpenTF operations.
The `local` backend finally calls an appropriate method on that context to
begin execution of the relevant command, such as
@@ -109,13 +109,13 @@ configuration objects, but the main entry point is in the sub-package
via
[`configload.Loader`](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/configs/configload#Loader).
A loader deals with all of the details of installing child modules
(during `terraform init`) and then locating those modules again when a
(during `opentf init`) and then locating those modules again when a
configuration is loaded by a backend. It takes the path to a root module
and recursively loads all of the child modules to produce a single
[`configs.Config`](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/configs#Config)
representing the entire configuration.
Terraform expects configuration files written in the Terraform language, which
OpenTF expects configuration files written in the OpenTF language, which
is a DSL built on top of
[HCL](https://github.com/hashicorp/hcl). Some parts of the configuration
cannot be interpreted until we build and walk the graph, since they depend
@@ -124,12 +124,12 @@ the configuration remain represented as the low-level HCL types
[`hcl.Body`](https://pkg.go.dev/github.com/hashicorp/hcl/v2/#Body)
and
[`hcl.Expression`](https://pkg.go.dev/github.com/hashicorp/hcl/v2/#Expression),
allowing Terraform to interpret them at a more appropriate time.
allowing OpenTF to interpret them at a more appropriate time.
## State Manager
A _state manager_ is responsible for storing and retrieving snapshots of the
[Terraform state](https://www.terraform.io/docs/language/state/index.html)
[OpenTF state](https://www.terraform.io/docs/language/state/index.html)
for a particular workspace. Each manager is an implementation of
some combination of interfaces in
[the `statemgr` package](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/states/statemgr),
@@ -144,7 +144,7 @@ that does not implement all of `statemgr.Full`.
The implementation
[`statemgr.Filesystem`](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/states/statemgr#Filesystem) is used
by default (by the `local` backend) and is responsible for the familiar
`terraform.tfstate` local file that most Terraform users start with, before
`terraform.tfstate` local file that most OpenTF users start with, before
they switch to [remote state](https://www.terraform.io/docs/language/state/remote.html).
Other implementations of `statemgr.Full` are used to implement remote state.
Each of these saves and retrieves state via a remote network service
@@ -166,12 +166,12 @@ to represent the necessary steps for that operation and the dependency
relationships between them.
In most cases, the
[vertices](https://en.wikipedia.org/wiki/Vertex_(graph_theory)) of Terraform's
[vertices](https://en.wikipedia.org/wiki/Vertex_(graph_theory)) of OpenTF's
graphs each represent a specific object in the configuration, or something
derived from those configuration objects. For example, each `resource` block
in the configuration has one corresponding
[`GraphNodeConfigResource`](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/terraform#GraphNodeConfigResource)
vertex representing it in the "plan" graph. (Terraform Core uses terminology
vertex representing it in the "plan" graph. (OpenTF Core uses terminology
inconsistently, describing graph _vertices_ also as graph _nodes_ in various
places. These both describe the same concept.)
@@ -228,7 +228,7 @@ itself is implemented in
[the low-level `dag` package](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/dag#AcyclicGraph.Walk)
(where "DAG" is short for [_Directed Acyclic Graph_](https://en.wikipedia.org/wiki/Directed_acyclic_graph)), in
[`AcyclicGraph.Walk`](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/dag#AcyclicGraph.Walk).
However, the "interesting" Terraform walk functionality is implemented in
However, the "interesting" OpenTF walk functionality is implemented in
[`terraform.ContextGraphWalker`](https://pkg.go.dev/github.com/placeholderplaceholderplaceholder/opentf/internal/terraform#ContextGraphWalker),
which implements a small set of higher-level operations that are performed
during the graph walk:
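Abstractly, the low-level walk described above can be pictured as a topological traversal: a vertex is visited only after everything it depends on has completed. This is a toy stdlib-only sketch, not the real `dag` package API, and it omits concerns the real implementation handles such as cycle detection and concurrent vertex evaluation:

```go
package main

import "fmt"

// Walk visits every vertex of a dependency graph in dependency order.
// deps maps each vertex to the vertices it depends on, matching the
// inverted edge direction described earlier (edges point at dependencies).
func Walk(deps map[string][]string, visit func(string)) {
	visited := map[string]bool{}
	var dfs func(string)
	dfs = func(v string) {
		if visited[v] {
			return
		}
		visited[v] = true
		for _, d := range deps[v] {
			dfs(d) // visit dependencies before the vertex itself
		}
		visit(v)
	}
	for v := range deps {
		dfs(v)
	}
}

func main() {
	// A linear dependency chain: the instance needs the subnet,
	// which needs the VPC, so the VPC is visited first.
	deps := map[string][]string{
		"aws_instance.web": {"aws_subnet.main"},
		"aws_subnet.main":  {"aws_vpc.main"},
		"aws_vpc.main":     {},
	}
	Walk(deps, func(v string) { fmt.Println(v) })
}
```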
@@ -346,7 +346,7 @@ or
Expression evaluation produces a dynamic value represented as a
[`cty.Value`](https://pkg.go.dev/github.com/zclconf/go-cty/cty#Value).
This Go type represents values from the Terraform language and such values
This Go type represents values from the OpenTF language and such values
are eventually passed to provider plugins.
### Sub-graphs

View File

@@ -1,4 +1,4 @@
# Terraform Core Resource Destruction Notes
# OpenTF Core Resource Destruction Notes
This document intends to describe some of the details and complications
involved in the destruction of resources. It covers the ordering defined for
@@ -8,7 +8,7 @@ all possible combinations of dependency ordering, only to outline the basics
and document some of the more complicated aspects of resource destruction.
The graph diagrams here will continue to use the inverted graph structure used
internally by Terraform, where edges represent dependencies rather than order
internally by OpenTF, where edges represent dependencies rather than order
of operations.
## Simple Resource Creation

View File

@@ -1,6 +1,6 @@
# Planning Behaviors
A key design tenet for Terraform is that any actions with externally-visible
A key design tenet for OpenTF is that any actions with externally-visible
side-effects should be carried out via the standard process of creating a
plan and then applying it. Any new features should typically fit within this
model.
@@ -8,25 +8,25 @@ model.
There are also some historical exceptions to this rule, which we hope to
supplement with plan-and-apply-based equivalents over time.
This document describes the default planning behavior of Terraform in the
This document describes the default planning behavior of OpenTF in the
absence of any special instructions, and also describes the three main
design approaches we can choose from when modelling non-default behaviors that
require additional information from outside of Terraform Core.
require additional information from outside of OpenTF Core.
This document focuses primarily on actions relating to _resource instances_,
because that is Terraform's main concern. However, these design principles can
because that is OpenTF's main concern. However, these design principles can
potentially generalize to other externally-visible objects, if we can describe
their behaviors in a way comparable to the resource instance behaviors.
This is developer-oriented documentation rather than user-oriented
documentation. See
[the main Terraform documentation](https://www.terraform.io/docs) for
[the main OpenTF documentation](https://www.terraform.io/docs) for
information on existing planning behaviors and other behaviors as viewed from
an end-user perspective.
## Default Planning Behavior
When given no explicit information to the contrary, Terraform Core will
When given no explicit information to the contrary, OpenTF Core will
automatically propose taking the following actions in the appropriate
situations:
@@ -52,21 +52,21 @@ situations:
the configuration (in a `resource` block) and recorded in the prior state
_marked as "tainted"_. The special "tainted" status means that the process
of creating the object failed partway through and so the existing object does
not necessarily match the configuration, so Terraform plans to replace it
not necessarily match the configuration, so OpenTF plans to replace it
in order to ensure that the resulting object is complete.
- **Read**, if there is a `data` block in the configuration.
- If possible, Terraform will eagerly perform this action during the planning
- If possible, OpenTF will eagerly perform this action during the planning
phase, rather than waiting until the apply phase.
- If the configuration contains at least one unknown value, or if the
data resource directly depends on a managed resource that has any change
proposed elsewhere in the plan, Terraform will instead delay this action
proposed elsewhere in the plan, OpenTF will instead delay this action
to the apply phase so that it can react to the completion of modification
actions on other objects.
- **No-op**, to explicitly represent that Terraform considered a particular
- **No-op**, to explicitly represent that OpenTF considered a particular
resource instance but concluded that no action was required.
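The default decision table can be sketched as a simple function. This is a hypothetical simplification for illustration only: the real selection also covers the **Create**, **Delete**, and **Update** cases elided by the diff above, and it operates on full configuration and state objects rather than booleans:

```go
package main

import "fmt"

// proposeAction sketches the default decision table: what Core proposes
// given whether a resource instance appears in configuration, whether an
// object for it exists in prior state, whether that object is tainted,
// and whether its desired settings differ from the prior state.
func proposeAction(inConfig, inState, tainted, changed bool) string {
	switch {
	case inConfig && !inState:
		return "Create"
	case !inConfig && inState:
		return "Delete"
	case inConfig && inState && tainted:
		return "Replace" // failed partial create: replace to get a complete object
	case inConfig && inState && changed:
		return "Update"
	default:
		return "No-op"
	}
}

func main() {
	fmt.Println(proposeAction(true, false, false, false)) // Create
	fmt.Println(proposeAction(true, true, true, false))   // Replace
	fmt.Println(proposeAction(true, true, false, false))  // No-op
}
```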
The **Replace** action described above is really a sort of "meta-action", which
Terraform expands into separate **Create** and **Delete** operations. There are
OpenTF expands into separate **Create** and **Delete** operations. There are
two possible orderings, and the first one is the default planning behavior
unless overridden by a special planning behavior as described later. The
two possible lowerings of **Replace** are:
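The decomposition can be sketched as a tiny function, using the `create_before_destroy` behavior described later in this document to select between the two orderings (an illustrative shape, not the actual Core code):

```go
package main

import "fmt"

// Action is one of the primitive plan actions discussed above.
type Action string

const (
	Create Action = "Create"
	Delete Action = "Delete"
)

// lowerReplace expands the Replace meta-action into its two primitive
// operations. The default ordering destroys the existing object first;
// create_before_destroy inverts the ordering.
func lowerReplace(createBeforeDestroy bool) []Action {
	if createBeforeDestroy {
		return []Action{Create, Delete}
	}
	return []Action{Delete, Create}
}

func main() {
	fmt.Println(lowerReplace(false)) // default: [Delete Create]
	fmt.Println(lowerReplace(true))  // inverted: [Create Delete]
}
```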
@@ -81,7 +81,7 @@ two possible lowerings of **Replace** are:
## Special Planning Behaviors
For the sake of this document, a "special" planning behavior is one where
Terraform Core will select a different action than the defaults above,
OpenTF Core will select a different action than the defaults above,
based on explicit instructions given either by a module author, an operator,
or a provider.
@@ -107,27 +107,27 @@ of the following depending on which stakeholder is activating the behavior:
"automatic".
Because these special behaviors are activated by values in the provider's
response to the planning request from Terraform Core, behaviors of this
response to the planning request from OpenTF Core, behaviors of this
sort will typically represent "tweaks" to or variants of the default
planning behaviors, rather than entirely different behaviors.
- [Single-run Behaviors](#single-run-behaviors) are activated by explicitly
setting additional "plan options" when calling Terraform Core's plan
setting additional "plan options" when calling OpenTF Core's plan
operation.
This design pattern is good for situations where the direct operator of
Terraform needs to do something exceptional or one-off, such as when the
OpenTF needs to do something exceptional or one-off, such as when the
configuration is correct but the real system has become degraded or damaged
in a way that Terraform cannot automatically understand.
in a way that OpenTF cannot automatically understand.
However, this design pattern has the disadvantage that each new single-run
behavior type requires custom work in every wrapping UI or automaton around
Terraform Core, in order provide the user of that wrapper some way
OpenTF Core, in order provide the user of that wrapper some way
to directly activate the special option, or to offer an "escape hatch" to
use Terraform CLI directly and bypass the wrapping automation for a
use OpenTF CLI directly and bypass the wrapping automation for a
particular change.
We've also encountered use-cases that seem to call for a hybrid between these
different patterns. For example, a configuration construct might cause Terraform
different patterns. For example, a configuration construct might cause OpenTF
Core to _invite_ a provider to activate a special behavior, but let the
provider make the final call about whether to do it. Or conversely, a provider
might advertise the possibility of a special behavior but require the user to
@@ -153,36 +153,36 @@ configuration-driven behaviors, selected to illustrate some different variations
that might be useful inspiration for new designs:
- The `ignore_changes` argument inside `resource` block `lifecycle` blocks
tells Terraform that if there is an existing object bound to a particular
resource instance address then Terraform should ignore the configured value
tells OpenTF that if there is an existing object bound to a particular
resource instance address then OpenTF should ignore the configured value
for a particular argument and use the corresponding value from the prior
state instead.
This can therefore potentially cause what would've been an **Update** to be
a **No-op** instead.
- The `replace_triggered_by` argument inside `resource` block `lifecycle`
blocks can use a proposed change elsewhere in a module to force Terraform
blocks can use a proposed change elsewhere in a module to force OpenTF
to propose one of the two **Replace** variants for a particular resource.
- The `create_before_destroy` argument inside `resource` block `lifecycle`
blocks only takes effect if a particular resource instance has a proposed
**Replace** action. If not set or set to `false`, Terraform will decompose
it to **Destroy** then **Create**, but if set to `true` Terraform will use
**Replace** action. If not set or set to `false`, OpenTF will decompose
it to **Destroy** then **Create**, but if set to `true` OpenTF will use
the inverted ordering.
Because Terraform Core will never select a **Replace** action automatically
Because OpenTF Core will never select a **Replace** action automatically
by itself, this is an example of a hybrid design where the config-driven
`create_before_destroy` combines with any other behavior (config-driven or
otherwise) that might cause **Replace** to customize exactly what that
**Replace** will mean.
- Top-level `moved` blocks in a module activate a special behavior during the
planning phase, where Terraform will first try to change the bindings of
planning phase, where OpenTF will first try to change the bindings of
existing objects in the prior state to attach to new addresses before running
the normal planning process. This therefore allows a module author to
document certain kinds of refactoring so that Terraform can update the
document certain kinds of refactoring so that OpenTF can update the
state automatically once users upgrade to a new version of the module.
This special behavior is interesting because it doesn't _directly_ change
what actions Terraform will propose, but instead it adds an extra
what actions OpenTF will propose, but instead it adds an extra
preparation step before the typical planning process which changes the
addresses that the planning process will consider. It can therefore
_indirectly_ cause different proposed actions for affected resource
@@ -201,13 +201,13 @@ Providers get an opportunity to activate some special behaviors for a particular
resource instance when they respond to the `PlanResourceChange` function of
the provider plugin protocol.
When Terraform Core executes this RPC, it has already selected between
When OpenTF Core executes this RPC, it has already selected between
**Create**, **Delete**, or **Update** actions for the particular resource
instance, and so the special behaviors a provider may activate will typically
serve as modifiers or tweaks to that base action, and will not allow
the provider to select another base action altogether. The provider wire
protocol does not talk about the action types explicitly, and instead only
implies them via other content of the request and response, with Terraform Core
implies them via other content of the request and response, with OpenTF Core
making the final decision about how to react to that information.
The following is a non-exhaustive list of existing examples of
@@ -218,7 +218,7 @@ that might be useful inspiration for new designs:
more paths to attributes which have changes that the provider cannot
implement as an in-place update due to limitations of the remote system.
In that case, Terraform Core will replace the **Update** action with one of
In that case, OpenTF Core will replace the **Update** action with one of
the two **Replace** variants, which means that from the provider's
perspective the apply phase will really be two separate calls for the
decomposed **Create** and **Delete** actions (in either order), rather
@@ -232,31 +232,31 @@ that might be useful inspiration for new designs:
remote system.
If all of those, taken together, cause the new object to match the prior
state, Terraform Core will treat the update as a **No-op** instead.
state, OpenTF Core will treat the update as a **No-op** instead.
Of the three genres of special behaviors, provider-driven behaviors is the one
we've made the least use of historically but one that seems to have a lot of
opportunities for future exploration. Provider-driven behaviors can often be
ideal because their effects appear as if they are built in to Terraform so
that "it just works", with Terraform automatically deciding and explaining what
ideal because their effects appear as if they are built in to OpenTF so
that "it just works", with OpenTF automatically deciding and explaining what
needs to happen and why, without any special effort on the user's part.
### Single-run Behaviors
Terraform Core's "plan" operation takes a set of arguments that we collectively
call "plan options", that can modify Terraform's planning behavior on a per-run
OpenTF Core's "plan" operation takes a set of arguments that we collectively
call "plan options", that can modify OpenTF's planning behavior on a per-run
basis without any configuration changes or special provider behaviors.
As noted above, this particular genre of designs is the most burdensome to
implement because any wrapping software that can ask Terraform Core to create
implement because any wrapping software that can ask OpenTF Core to create
a plan must ideally offer some way to set all of the available planning options,
or else some part of Terraform's functionality won't be available to anyone
or else some part of OpenTF's functionality won't be available to anyone
using that wrapper.
However, we've seen various situations where single-run behaviors really are the
most appropriate way to handle a particular use-case, because the need for the
behavior originates in some process happening outside of the scope of any
particular Terraform module or provider.
particular OpenTF module or provider.
The following is a non-exhaustive list of existing examples of
single-run behaviors, selected to illustrate some different variations
@@ -265,25 +265,25 @@ that might be useful inspiration for new designs:
- The "replace" planning option specifies zero or more resource instance
addresses.
For any resource instance specified, Terraform Core will transform any
For any resource instance specified, OpenTF Core will transform any
**Update** or **No-op** action for that instance into one of the
**Replace** actions, thereby allowing an operator to respond to something
having become degraded in a way that Terraform and providers cannot
automatically detect and force Terraform to replace that object with
having become degraded in a way that OpenTF and providers cannot
automatically detect and force OpenTF to replace that object with
a new one that will hopefully function correctly.
- The "refresh only" planning mode ("planning mode" is a single planning option
that selects between a few mutually-exclusive behaviors) forces Terraform
that selects between a few mutually-exclusive behaviors) forces OpenTF
to treat every resource instance as **No-op**, regardless of what is bound
to that address in state or present in the configuration.
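These two single-run behaviors can be sketched as a post-processing step over the action Core would otherwise propose. The types and field names here are hypothetical simplifications; the real plan options carry more information (target addresses, forced variable values, and so on):

```go
package main

import "fmt"

// PlanOpts is a simplified stand-in for the plan options described above.
type PlanOpts struct {
	RefreshOnly  bool
	ForceReplace map[string]bool // resource instance addresses from the "replace" option
}

// applyPlanOpts adjusts the action Core would otherwise propose for a
// single resource instance, mirroring the two behaviors listed above.
func applyPlanOpts(addr, proposed string, opts PlanOpts) string {
	if opts.RefreshOnly {
		return "No-op" // refresh-only mode forces No-op everywhere
	}
	if opts.ForceReplace[addr] && (proposed == "Update" || proposed == "No-op") {
		return "Replace"
	}
	return proposed
}

func main() {
	opts := PlanOpts{ForceReplace: map[string]bool{"aws_instance.web": true}}
	fmt.Println(applyPlanOpts("aws_instance.web", "No-op", opts)) // Replace
	fmt.Println(applyPlanOpts("aws_instance.db", "Update", opts)) // Update
}
```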
## Legacy Operations
Some of the legacy operations Terraform CLI offers that _aren't_ integrated
Some of the legacy operations OpenTF CLI offers that _aren't_ integrated
with the plan and apply flow could be thought of as various degenerate kinds
of single-run behaviors. Most don't offer any opportunity to preview an effect
before applying it, but do meet a similar set of use-cases where an operator
needs to take some action to respond to changes to the context Terraform is
in rather than to the Terraform configuration itself.
needs to take some action to respond to changes to the context OpenTF is
in rather than to the OpenTF configuration itself.
Most of these legacy operations could therefore most readily be translated to
single-run behaviors, but before doing so it's worth researching whether people

View File

@@ -1,7 +1,7 @@
# Terraform Plugin Protocol
# OpenTF Plugin Protocol
This directory contains documentation about the physical wire protocol that
Terraform Core uses to communicate with provider plugins.
OpenTF Core uses to communicate with provider plugins.
Most providers are not written directly against this protocol. Instead, prefer
to use an SDK that implements this protocol and write the provider against
@@ -9,35 +9,35 @@ the SDK's API.
----
**If you want to write a plugin for Terraform, please refer to
[Extending Terraform](https://www.terraform.io/docs/extend/index.html) instead.**
**If you want to write a plugin for OpenTF, please refer to
[Extending OpenTF](https://www.terraform.io/docs/extend/index.html) instead.**
This documentation is for those who are developing _Terraform SDKs_, rather
This documentation is for those who are developing _OpenTF SDKs_, rather
than those implementing plugins.
----
From Terraform v0.12.0 onwards, Terraform's plugin protocol is built on
From OpenTF v0.12.0 onwards, OpenTF's plugin protocol is built on
[gRPC](https://grpc.io/). This directory contains `.proto` definitions of
different versions of Terraform's protocol.
different versions of OpenTF's protocol.
Only `.proto` files published as part of Terraform release tags are actually
Only `.proto` files published as part of OpenTF release tags are actually
official protocol versions. If you are reading this directory on the `main`
branch or any other development branch then it may contain protocol definitions
that are not yet finalized and that may change before final release.
## RPC Plugin Model
Terraform plugins are normal executable programs that, when launched, expose
gRPC services on a server accessed via the loopback interface. Terraform Core
OpenTF plugins are normal executable programs that, when launched, expose
gRPC services on a server accessed via the loopback interface. OpenTF Core
discovers and launches plugins, waits for a handshake to be printed on the
plugin's `stdout`, and then connects to the indicated port number as a
gRPC client.
For this reason, we commonly refer to Terraform Core itself as the plugin
For this reason, we commonly refer to OpenTF Core itself as the plugin
"client" and the plugin program itself as the plugin "server". Both of these
processes run locally, with the server process appearing as a child process
of the client. Terraform Core controls the lifecycle of these server processes
of the client. OpenTF Core controls the lifecycle of these server processes
and will terminate them when they are no longer required.
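A rough sketch of the client side of that startup sequence: the client reads one line from the server's `stdout` and dials the address it names. The pipe-separated layout below is borrowed from hashicorp/go-plugin's handshake convention for illustration; as noted below, the actual protocol is not formally documented, so treat the exact field layout as an assumption:

```go
package main

import (
	"fmt"
	"strings"
)

// Handshake holds the fields a plugin server prints on stdout at startup.
// The field layout assumed here (core version | protocol version |
// network | address | transport) follows go-plugin's convention.
type Handshake struct {
	CoreVersion  string
	ProtoVersion string
	Network      string // e.g. "tcp" or "unix"
	Addr         string
	Transport    string // e.g. "grpc"
}

func parseHandshake(line string) (Handshake, error) {
	parts := strings.Split(strings.TrimSpace(line), "|")
	if len(parts) < 5 {
		return Handshake{}, fmt.Errorf("malformed handshake: %q", line)
	}
	return Handshake{parts[0], parts[1], parts[2], parts[3], parts[4]}, nil
}

func main() {
	h, err := parseHandshake("1|5|tcp|127.0.0.1:12345|grpc\n")
	if err != nil {
		panic(err)
	}
	// A real client would now open a gRPC connection to h.Addr.
	fmt.Printf("dial %s %s using %s\n", h.Network, h.Addr, h.Transport)
}
```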
The startup and handshake protocol is not currently documented. We hope to
@@ -51,7 +51,7 @@ more significant breaking changes from time to time while allowing old and
new plugins to be used together for some period.
The versioning strategy described below was introduced with protocol version
5.0 in Terraform v0.12. Prior versions of Terraform and prior protocol versions
5.0 in OpenTF v0.12. Prior versions of OpenTF and prior protocol versions
do not follow this strategy.
The authoritative definition for each protocol version is in this directory
@@ -64,11 +64,11 @@ is the minor version.
The minor version increases for each change introducing optional new
functionality that can be ignored by implementations of prior versions. For
example, if a new field were added to a response message, it could be a minor
release as long as Terraform Core can provide some default behavior when that
release as long as OpenTF Core can provide some default behavior when that
field is not populated.
The major version increases for any significant change to the protocol where
compatibility is broken. However, Terraform Core and an SDK may both choose
compatibility is broken. However, OpenTF Core and an SDK may both choose
to support multiple major versions at once: the plugin handshake includes a
negotiation step where client and server can work together to select a
mutually-supported major version.
@@ -84,9 +84,9 @@ features.
## Version compatibility for Core, SDK, and Providers
A particular version of OpenTF Core has both a minimum minor version it
requires and a maximum major version that it supports. A particular version of
OpenTF Core may also be able to optionally use a newer minor version when
available, but fall back on older behavior when that functionality is not
available.
The compatible versions for a provider are a list of major and minor version
pairs, such as "4.0", "5.2", which indicates that the provider supports the
baseline features of major version 4 and supports major version 5 including
the enhancements from both minor versions 1 and 2. This provider would
therefore be compatible with an OpenTF Core release that supports only
protocol version 5.0, since major version 5 is supported and the optional
5.1 and 5.2 enhancements will be ignored.
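As an illustration (not OpenTF Core's actual implementation), checking whether a provider's advertised version pairs are compatible with a given core release reduces to a check on major versions, because minor versions only add optional features:

```python
def provider_compatible(core_major, provider_versions):
    """True if the provider advertises support for the core's protocol
    major version. Minor versions carry only optional enhancements and
    do not affect compatibility. Illustrative sketch only."""
    majors = {int(v.split(".")[0]) for v in provider_versions}
    return core_major in majors

# A provider advertising "4.0", "5.1", "5.2" against a core speaking 5.0:
print(provider_compatible(5, ["4.0", "5.1", "5.2"]))  # -> True
print(provider_compatible(6, ["4.0", "5.1", "5.2"]))  # -> False
```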
If OpenTF Core and the plugin do not have at least one mutually-supported
major version, OpenTF Core will return an error from `opentf init`
during plugin installation:
```
Provider "aws" v1.0.0 is not compatible with OpenTF v0.12.0.
Provider version v2.0.0 is the earliest compatible version.
Select it with the following version constraint:
Select it with the following version constraint:
```
```
Provider "aws" v3.0.0 is not compatible with OpenTF v0.12.0.
Provider version v2.34.0 is the latest compatible version. Select
it with the following constraint:
version = "~> 2.34.0"
Alternatively, upgrade to the latest version of OpenTF for compatibility with newer provider releases.
```
The above messages are for plugins installed via `opentf init` from a
OpenTF registry, where the registry API allows OpenTF Core to recognize
the protocol compatibility for each provider release. For plugins that are
installed manually to a local plugin directory, OpenTF Core has no way to
suggest specific versions to upgrade or downgrade to, and so the error message
is more generic:
```
The installed version of provider "example" is not compatible with OpenTF v0.12.0.
This provider was loaded from:
/usr/local/bin/terraform-provider-example_v0.1.0
of the plugin in ways that affect its semver-based version numbering:
For this reason, SDK developers must be clear in their release notes about
the addition and removal of support for major versions.
OpenTF Core also makes an assumption about major version support when
it produces actionable error messages for users about incompatibilities:
a particular protocol major version is supported for a single consecutive
range of provider releases, with no "gaps".
## Using the protobuf specifications in an SDK
If you wish to build an SDK for OpenTF plugins, an early step will be to
copy one or more `.proto` files from this directory into your own repository
(depending on which protocol versions you intend to support) and use the
`protoc` protocol buffers compiler (with gRPC extensions) to generate suitable
You can find out more about the tool usage for each target language in
[the gRPC Quick Start guides](https://grpc.io/docs/quickstart/).
The protobuf specification for a version is immutable after it has been
included in at least one OpenTF release. Any changes will be documented in
a new `.proto` file establishing a new protocol version.
The protocol buffer compiler will produce some sort of library object appropriate
and copy the relevant `.proto` file into it, creating a separate set of stubs
that can in principle allow your SDK to support both major versions at the
same time. We recommend supporting both the previous and current major versions
together for a while across a major version upgrade so that users can avoid
having to upgrade both OpenTF Core and all of their providers at the same
time, but you can delete the previous major version stubs once you remove
support for that version.

# Wire Format for OpenTF Objects and Associated Values
The provider wire protocol (as of major version 5) includes a protobuf message
type `DynamicValue` which OpenTF uses to represent values from the OpenTF
Language type system, which result from evaluating the content of `resource`,
`data`, and `provider` blocks, based on a schema defined by the corresponding
provider.
Because the structure of these values is determined at runtime, `DynamicValue`
uses one of two possible dynamic serialization formats for the values
themselves: MessagePack or JSON. OpenTF most commonly uses MessagePack,
because it offers a compact binary representation of a value. However, a server
implementation of the provider protocol should fall back to JSON if the
MessagePack field is not populated, in order to support both formats.
The remainder of this document describes how OpenTF translates from its own
type system into the type system of the two supported serialization formats.
A server implementation of the OpenTF provider protocol can use this
information to decode `DynamicValue` values from incoming messages into
whatever representation is convenient for the provider implementation.
A server implementation must also be able to _produce_ `DynamicValue` messages
as part of various response messages. When doing so, servers should always
use MessagePack encoding, because OpenTF does not consistently support
JSON responses across all request types and all OpenTF versions.
Both the MessagePack and JSON serializations are driven by information the
provider previously returned in a `Schema` message. OpenTF will encode each
value depending on the type constraint given for it in the corresponding schema,
using the closest possible MessagePack or JSON type to the OpenTF language
type. Therefore a server implementation can decode a serialized value using a
standard MessagePack or JSON library and assume it will conform to the
serialization rules described below.
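A minimal sketch of that decoding choice, using only the standard library. The `msgpack_bytes`/`json_bytes` pair mirrors the two payload fields of the `DynamicValue` message; a real SDK would call a MessagePack library in the first branch:

```python
import json

def decode_dynamic_value(msgpack_bytes, json_bytes):
    """Decode a DynamicValue, preferring MessagePack and falling back
    to JSON when the msgpack payload is unpopulated. Sketch only: the
    MessagePack branch is a placeholder for a real library call."""
    if msgpack_bytes:
        raise NotImplementedError("decode with a MessagePack library here")
    if json_bytes:
        return json.loads(json_bytes)
    raise ValueError("DynamicValue carries neither msgpack nor json payload")

print(decode_dynamic_value(b"", b'{"ami": "ami-abc123"}'))
```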
The MessagePack types referenced in this section are those defined in
[The MessagePack type system specification](https://github.com/msgpack/msgpack/blob/master/spec.md#type-system).
Note that MessagePack defines several possible serialization formats for each
type, and OpenTF may choose any of the formats of a specified type.
The exact serialization chosen for a given value may vary between OpenTF
versions, but the types given here are contractual.
Conversely, server implementations that are _producing_ MessagePack-encoded
the value without a loss of range.
### `Schema.Block` Mapping Rules for MessagePack
To represent the content of a block as MessagePack, OpenTF constructs a
MessagePack map that contains one key-value pair per attribute and one
key-value pair per distinct nested block described in the `Schema.Block` message.
The key-value pairs representing nested block types have values based on
The MessagePack serialization of an attribute value depends on the value of the
`type` field of the corresponding `Schema.Attribute` message. The `type` field is
a compact JSON serialization of a
[OpenTF type constraint](https://www.terraform.io/docs/configuration/types.html),
which consists either of a single
string value (for primitive types) or a two-element array giving a type kind
and a type argument.
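For illustration, decoding that compact type constraint with a standard JSON library might look like this (the function name is hypothetical):

```python
import json

def parse_type_constraint(type_json):
    """Parse the compact JSON type constraint from a Schema.Attribute's
    type field. Returns (kind, argument), where argument is None for
    primitive types. Illustrative sketch only."""
    t = json.loads(type_json)
    if isinstance(t, str):   # primitive: "string", "number", or "bool"
        return t, None
    kind, arg = t            # compound: ["list", T], ["object", ATTRS], ...
    return kind, arg

print(parse_type_constraint('"string"'))           # -> ('string', None)
print(parse_type_constraint('["list","number"]'))  # -> ('list', 'number')
```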
in the table below, regardless of type:
| `"number"` | Either MessagePack integer, MessagePack float, or MessagePack string representing the number. If a number is represented as a string then the string contains a decimal representation of the number which may have a larger mantissa than can be represented by a 64-bit float. |
| `"bool"` | A MessagePack boolean value corresponding to the value. |
| `["list",T]` | A MessagePack array with the same number of elements as the list value, each of which is represented by the result of applying these same mapping rules to the nested type `T`. |
| `["set",T]` | Identical in representation to `["list",T]`, but the order of elements is undefined because OpenTF sets are unordered. |
| `["map",T]` | A MessagePack map with one key-value pair per element of the map value, where the element key is serialized as the map key (always a MessagePack string) and the element value is represented by a value constructed by applying these same mapping rules to the nested type `T`. |
| `["object",ATTRS]` | A MessagePack map with one key-value pair per attribute defined in the `ATTRS` object. The attribute name is serialized as the map key (always a MessagePack string) and the attribute value is represented by a value constructed by applying these same mapping rules to each attribute's own type. |
| `["tuple",TYPES]` | A MessagePack array with one element per element described by the `TYPES` array. The element values are constructed by applying these same mapping rules to the corresponding element of `TYPES`. |
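To make the byte-level contract concrete, here are a few of these mappings hand-encoded per the MessagePack specification, without any library. This is a sketch covering only the short `fixstr`/`fixarray` forms; a real producer may choose any valid format of the required type:

```python
def encode_bool(b):
    # MessagePack booleans: 0xc2 = false, 0xc3 = true
    return b"\xc3" if b else b"\xc2"

def encode_str(s):
    # fixstr covers strings up to 31 bytes: 0b101XXXXX length prefix
    data = s.encode("utf-8")
    assert len(data) < 32, "sketch handles fixstr only"
    return bytes([0xA0 | len(data)]) + data

def encode_list(encoded_elems):
    # fixarray covers up to 15 elements: 0b1001XXXX length prefix
    assert len(encoded_elems) < 16, "sketch handles fixarray only"
    return bytes([0x90 | len(encoded_elems)]) + b"".join(encoded_elems)

# A ["list","string"] value ["a","b"] per the table above:
print(encode_list([encode_str("a"), encode_str("b")]).hex())  # -> 92a161a162
```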
values.
The older encoding is for unrefined unknown values and uses an extension
code of zero, with the extension value payload completely ignored.
Newer OpenTF versions can produce "refined" unknown values which carry some
additional information that constrains the possible range of the final value.
Refined unknown values have extension code 12 and then the extension object's
payload is a MessagePack-encoded map using integer keys to represent different
by applying
to the block's contents based on the `block` field, producing what we'll call
a _block value_ in the table below.
The `nesting` value then in turn defines how OpenTF will collect all of the
individual block values together to produce a single property value representing
the nested block type. For all `nesting` values other than `MAP`, blocks may
not have any labels. For the `nesting` value `MAP`, blocks must have exactly
one label, which is a string we'll call a _block label_ in the table below.
| `LIST` | A MessagePack array of all of the block values, preserving the order of definition of the blocks in the configuration. |
| `SET` | A MessagePack array of all of the block values in no particular order. |
| `MAP` | A MessagePack map with one key-value pair per block value, where the key is the block label and the value is the block value. |
| `GROUP` | The same as with `SINGLE`, except that if there is no block of that type OpenTF will synthesize a block value by pretending that all of the declared attributes are null and that there are zero blocks of each declared block type. |
For the `LIST` and `SET` nesting modes, OpenTF guarantees that the
MessagePack array will have a number of elements between the `min_items` and
`max_items` values given in the schema, _unless_ any of the block values contain
nested unknown values. When unknown values are present, OpenTF considers
the value to be potentially incomplete and so OpenTF defers validation of
the number of blocks. For example, if the configuration includes a `dynamic`
block whose `for_each` argument is unknown then the final number of blocks is
not predictable until the apply phase.
_current_ version of that provider.
### `Schema.Block` Mapping Rules for JSON
To represent the content of a block as JSON, OpenTF constructs a
JSON object that contains one property per attribute and one property per
distinct nested block described in the `Schema.Block` message.
The properties representing nested block types have property values based on
The JSON serialization of an attribute value depends on the value of the `type`
field of the corresponding `Schema.Attribute` message. The `type` field is
a compact JSON serialization of a
[OpenTF type constraint](https://www.terraform.io/docs/configuration/types.html),
which consists either of a single
string value (for primitive types) or a two-element array giving a type kind
and a type argument.
table regardless of type:
| `type` Pattern | JSON Representation |
|---|---|
| `"string"` | A JSON string containing the Unicode characters from the string value. |
| `"number"` | A JSON number representing the number value. OpenTF numbers are arbitrary-precision floating point, so the value may have a larger mantissa than can be represented by a 64-bit float. |
| `"bool"` | Either JSON `true` or JSON `false`, depending on the boolean value. |
| `["list",T]` | A JSON array with the same number of elements as the list value, each of which is represented by the result of applying these same mapping rules to the nested type `T`. |
| `["set",T]` | Identical in representation to `["list",T]`, but the order of elements is undefined because OpenTF sets are unordered. |
| `["map",T]` | A JSON object with one property per element of the map value, where the element key is serialized as the property name string and the element value is represented by a property value constructed by applying these same mapping rules to the nested type `T`. |
| `["object",ATTRS]` | A JSON object with one property per attribute defined in the `ATTRS` object. The attribute name is serialized as the property name string and the attribute value is represented by a property value constructed by applying these same mapping rules to each attribute's own type. |
| `["tuple",TYPES]` | A JSON array with one element per element described by the `TYPES` array. The element values are constructed by applying these same mapping rules to the corresponding element of `TYPES`. |
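A sketch of a producer for a few of these patterns, using only the standard `json` module. The function name and the subset of handled patterns are illustrative; note that Python integers keep the full mantissa, matching the arbitrary-precision requirement for numbers:

```python
import json

def encode_attribute(value, type_constraint):
    """Serialize a value per the JSON table above. Sketch: handles only
    the string, number, list, and set patterns."""
    if type_constraint in ("string", "number"):
        # json.dumps of a Python int preserves arbitrary precision.
        return json.dumps(value)
    if isinstance(type_constraint, list) and type_constraint[0] in ("list", "set"):
        elem_type = type_constraint[1]
        return "[" + ",".join(encode_attribute(v, elem_type) for v in value) + "]"
    raise NotImplementedError(type_constraint)

# A number with a mantissa wider than a 64-bit float can represent exactly:
print(encode_attribute(2**70 + 1, "number"))
print(encode_attribute(["a", "b"], ["set", "string"]))  # -> ["a","b"]
```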
by applying
to the block's contents based on the `block` field, producing what we'll call
a _block value_ in the table below.
The `nesting` value then in turn defines how OpenTF will collect all of the
individual block values together to produce a single property value representing
the nested block type. For all `nesting` values other than `MAP`, blocks may
not have any labels. For the `nesting` value `MAP`, blocks must have exactly
one label, which is a string we'll call a _block label_ in the table below.
| `LIST` | A JSON array of all of the block values, preserving the order of definition of the blocks in the configuration. |
| `SET` | A JSON array of all of the block values in no particular order. |
| `MAP` | A JSON object with one property per block value, where the property name is the block label and the value is the block value. |
| `GROUP` | The same as with `SINGLE`, except that if there is no block of that type OpenTF will synthesize a block value by pretending that all of the declared attributes are null and that there are zero blocks of each declared block type. |
For the `LIST` and `SET` nesting modes, OpenTF guarantees that the JSON
array will have a number of elements between the `min_items` and `max_items`
values given in the schema.
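The collection step for the JSON case can be sketched as follows; the `(label, value)` pair representation of individual blocks is an assumption for illustration:

```python
def collect_nested_blocks(nesting, blocks):
    """Collect individual block values into a single JSON-ready property
    value per the nesting table above. `blocks` is a list of
    (label, block_value) pairs; labels matter only for MAP. Sketch only."""
    if nesting == "SINGLE":
        return blocks[0][1] if blocks else None
    if nesting in ("LIST", "SET"):
        return [value for _, value in blocks]
    if nesting == "MAP":
        return {label: value for label, value in blocks}
    raise NotImplementedError(nesting)

blocks = [("a", {"port": 80}), ("b", {"port": 443})]
print(collect_nested_blocks("MAP", blocks))   # -> {'a': {'port': 80}, 'b': {'port': 443}}
print(collect_nested_blocks("LIST", blocks))  # -> [{'port': 80}, {'port': 443}]
```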

# Releasing a New Version of the Protocol
OpenTF's plugin protocol is the contract between OpenTF's plugins and
OpenTF, and as such releasing a new version requires some coordination
between those pieces. This document is intended to be a checklist to consult
when adding a new major version of the protocol (X in X.Y) to ensure that
everything that needs to be is aware of it.
## New Protobuf File
The protocol is defined in protobuf files that live in the opentffoundation/opentf
repository. Adding a new version of the protocol involves creating a new
`.proto` file in that directory. It is recommended that you copy the latest
protocol file, and modify it accordingly.
The
[hashicorp/terraform-plugin-go](https://github.com/hashicorp/terraform-plugin-go)
repository serves as the foundation for OpenTF's plugin ecosystem. It needs
to know about the new major protocol version. Either open an issue in that repo
to have the Plugin SDK team add the new package, or if you would like to
contribute it yourself, open a PR. It is recommended that you copy the package
for the latest protocol version and modify it accordingly.
## Update the Registry's List of Allowed Versions
The OpenTF Registry validates the protocol versions a provider advertises
support for when ingesting providers. Providers will not be able to advertise
support for the new protocol version until it is added to that list.
## Update OpenTF's Version Constraints
OpenTF only downloads providers that speak protocol versions it is
compatible with from the Registry during `opentf init`. When adding support
for a new protocol, you need to tell OpenTF it knows that protocol version.
Modify the `SupportedPluginProtocols` variable in opentffoundation/opentf's
`internal/getproviders/registry_client.go` file to include the new protocol.
## Test Running a Provider With the Test Framework
Use the provider test framework to test a provider written with the new
protocol. This end-to-end test ensures that providers written with the new
protocol work correctly with the test framework, especially in communicating
the protocol version between the test framework and OpenTF.
## Test Retrieving and Running a Provider From the Registry
Publish a provider, either to the public registry or to the staging registry,
and test running `opentf init` and `opentf apply`, along with exercising
any of the new functionality the protocol version introduces. This end-to-end
test ensures that all the pieces needing to be updated before practitioners can
use providers built with the new protocol have been updated.

# OpenTF Resource Instance Change Lifecycle
This document describes the relationships between the different operations
called on an OpenTF Provider to handle a change to a resource instance.
![](https://user-images.githubusercontent.com/20180/172506401-777597dc-3e6e-411d-9580-b192fd34adba.png)
The various object values used in different parts of this process are:
* **Prior State**: The provider's representation of the current state of the
remote object at the time of the most recent read.
* **Proposed New State**: OpenTF Core uses some built-in logic to perform
an initial basic merger of the **Configuration** and the **Prior State**
which a provider may use as a starting point for its planning operation.
The built-in logic primarily deals with the expected behavior for attributes
marked in the schema as "computed". If an attribute is only "computed",
OpenTF expects the value to only be chosen by the provider and it will
preserve any Prior State. If an attribute is marked as "computed" and
"optional", this means that the user may either set it or may leave it
unset to allow the provider to choose a value.
OpenTF Core therefore constructs the proposed new state by taking the
attribute value from Configuration if it is non-null, and then using the
Prior State as a fallback otherwise, thereby helping a provider to
preserve its previously-chosen value for the attribute where appropriate.
must mark these by including unknown values in the state objects.
The distinction between the _Initial_ and _Final_ planned states is that
the initial one is created during OpenTF Core's planning phase based
on a possibly-incomplete configuration, whereas the final one is created
during the apply step once all of the dependencies have already been
updated and so the configuration should then be wholly known.
actual state of the system, rather than a hypothetical future state.
* **Previous Run State** is the same object as the **New State** from
the previous run of OpenTF. This is exactly what the provider most
recently returned, and so it will not take into account any changes that
may have been made outside of OpenTF in the meantime, and it may conform
to an earlier version of the resource type schema and therefore be
incompatible with the _current_ schema.
provider-specified logic to upgrade the existing data to the latest schema.
However, it still represents the remote system as it was at the end of the
last run, and so still doesn't take into account any changes that may have
been made outside of OpenTF.
* The **Import ID** and **Import Stub State** are both details of the special
process of importing pre-existing objects into an OpenTF state, and so
we'll wait to discuss those in a later section on importing.
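The proposed-new-state merger described above can be sketched as follows. This is a simplification: the real logic is schema-driven and handles nested structures, not just a flat attribute map:

```python
def proposed_new_state(config, prior_state):
    """Sketch of the built-in merger for computed+optional attributes:
    take each attribute from the configuration when it is non-null,
    otherwise fall back to the prior state, preserving values the
    provider chose on an earlier run. Illustrative only."""
    prior = prior_state or {}
    return {
        name: config.get(name) if config.get(name) is not None else prior.get(name)
        for name in sorted(set(config) | set(prior))
    }

config = {"ami": "ami-abc123", "id": None}          # "id" is computed, left unset
prior = {"ami": "ami-abc123", "id": "i-0123456"}    # provider chose "id" earlier
print(proposed_new_state(config, prior))  # -> {'ami': 'ami-abc123', 'id': 'i-0123456'}
```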
## Provider Protocol API Functions
The following sections describe the three provider API functions that are
called to plan and apply a change, including the expectations OpenTF Core
enforces for each.
For historical reasons, the original OpenTF SDK is exempt from error
messages produced when certain assumptions are violated, but violating them
will often cause downstream errors nonetheless, because OpenTF's workflow
depends on these contracts being met.
The following section uses the word "attribute" to refer to the named
expressed via schema alone.
In principle a provider can make any rule it wants here, although in practice
providers should typically avoid reporting errors for values that are unknown.
OpenTF Core will call this function multiple times at different phases
of evaluation, and guarantees to _eventually_ call with a wholly-known
configuration so that the provider will have an opportunity to belatedly catch
problems related to values that are initially unknown during planning.
modify the user's supplied configuration.
### PlanResourceChange
The purpose of `PlanResourceChange` is to predict the approximate effect of
a subsequent apply operation, allowing OpenTF to render the plan for the
user and to propagate the predictable subset of results downstream through
expressions in the configuration.
following constraints:
`PlanResourceChange` is actually called twice per run for each resource type.
The first call is during the planning phase, before OpenTF prints out a
diff to the user for confirmation. Because no changes at all have been applied
at that point, the given **Configuration** may contain unknown values as
placeholders for the results of expressions that derive from unknown values
of other resource instances. The result of this initial call is the
**Initial Planned State**.
If the user accepts the plan, OpenTF will call `PlanResourceChange` a
second time during the apply step, and that call is guaranteed to have a
wholly-known **Configuration** with any values from upstream dependencies
taken into account already. The result of this second call is the
**Final Planned State**.
OpenTF Core compares the final with the initial planned state, enforcing
the following additional constraints along with those listed above:
* Any attribute that had a known value in the **Initial Planned State** must
After calling `ApplyResourceChange` for each resource instance in the plan,
and dealing with any other bookkeeping to return the results to the user,
a single OpenTF run is complete. OpenTF Core saves the **New State**
in a state snapshot for the entire configuration, so it'll be preserved for
use on the next run.
When the user subsequently runs OpenTF again, the **New State** becomes
the **Previous Run State** verbatim, and passes into `UpgradeResourceState`.
### UpgradeResourceState
Because the state values for a particular resource instance persist in a
saved state snapshot from one run to the next, OpenTF Core must deal with
the possibility that the user has upgraded to a newer version of the provider
since the last run, and that the new provider version has an incompatible
schema for the relevant resource type.
OpenTF Core therefore begins by calling `UpgradeResourceState` and passing
the **Previous Run State** in a _raw_ form, which in current protocol versions
is the raw JSON data structure as was stored in the state snapshot. OpenTF
Core doesn't have access to the previous schema versions for a provider's
resource types, so the provider itself must handle the data decoding in this
upgrade function.
The provider can then use whatever logic is appropriate to update the shape
of the data to conform to the current schema for the resource type. Although
OpenTF Core has no way to enforce it, a provider should only change the
shape of the data structure and should _not_ change the meaning of the data.
In particular, it should not try to update the state data to capture any
changes made to the corresponding remote object outside of OpenTF.
This function then returns the **Upgraded State**, which captures the same
information as the **Previous Run State** but does so in a way that conforms
to the current version of the resource type schema, which therefore allows
OpenTF Core to interact with the data fully for subsequent steps.
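A hypothetical upgrade function for a resource type whose version 0 schema named an attribute `instance_type` and whose version 1 schema renamed it to `type`. All names here are invented for illustration; only the shape of the data changes, never its meaning:

```python
import json

def upgrade_resource_state(raw_json, schema_version):
    """Hypothetical provider-side upgrade: version 0 stored the
    attribute as "instance_type"; version 1 renamed it to "type".
    The data's meaning must not change, only its shape."""
    state = json.loads(raw_json)
    if schema_version == 0:
        state["type"] = state.pop("instance_type")
    return state  # the Upgraded State, conforming to the current schema

prev_run = '{"id": "i-0123456", "instance_type": "t2.micro"}'
print(upgrade_resource_state(prev_run, 0))  # -> {'id': 'i-0123456', 'type': 't2.micro'}
```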
### ReadResource
Although OpenTF typically expects to have exclusive control over any remote
object that is bound to a resource instance, in practice users may make changes
to those objects outside of OpenTF, causing OpenTF's records of the
object to become stale.
The `ReadResource` function asks the provider to make a best effort to detect
any such external changes and describe them so that OpenTF Core can use
an up-to-date **Prior State** as the input to the next `PlanResourceChange`
call.
@ -266,7 +266,7 @@ a provider might not be able to detect certain changes. For example:
* There may be new features of the underlying API which the current provider
version doesn't know how to ask about.
Terraform Core expects a provider to carefully distinguish between the
OpenTF Core expects a provider to carefully distinguish between the
following two situations for each attribute:
* **Normalization**: the remote API has returned some data in a different form
than was recorded in the **Previous Run State**, but the meaning is unchanged.
@ -282,8 +282,8 @@ following two situations for each attribute:
In this case, the provider should return the value from the remote system,
thereby discarding the value from the **Previous Run State**. When a
provider does this, Terraform _may_ report it to the user as a change
made outside of Terraform, if Terraform Core determined that the detected
provider does this, OpenTF _may_ report it to the user as a change
made outside of OpenTF, if OpenTF Core determined that the detected
change was a possible cause of another planned action for a downstream
resource instance.
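The per-attribute decision this implies can be sketched as follows. The helper below is hypothetical, not provider framework code; it illustrates the rule with letter case standing in for any meaning-preserving difference in form:

```go
package main

import (
	"fmt"
	"strings"
)

// reconcileAttr sketches the decision ReadResource must make per attribute:
// if the remote value differs only in form (here: letter case), keep the
// previous run state's recorded value; otherwise report the remote value
// as a change made outside of OpenTF.
func reconcileAttr(prev, remote string) string {
	if strings.EqualFold(prev, remote) {
		return prev // normalization only: preserve the recorded form
	}
	return remote // real change: discard the stale recorded value
}

func main() {
	fmt.Println(reconcileAttr("Web-Server", "web-server")) // normalization
	fmt.Println(reconcileAttr("t2.micro", "t3.micro"))     // real change
}
```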
@ -296,7 +296,7 @@ over again.
Nested blocks are a configuration-only construct and so the number of blocks
cannot be changed on the fly during planning or during apply: each block
represented in the configuration must have a corresponding nested object in
the planned new state and new state, or Terraform Core will raise an error.
the planned new state and new state, or OpenTF Core will raise an error.
If a provider wishes to report about new instances of the sub-object type
represented by nested blocks that are created implicitly during the apply
@ -315,12 +315,12 @@ follow the same rules as for a nested block type of the same nesting mode.
## Import Behavior
The main resource instance change lifecycle is concerned with objects whose
entire lifecycle is driven through Terraform, including the initial creation
entire lifecycle is driven through OpenTF, including the initial creation
of the object.
As an aid to those who are adopting Terraform as a replacement for existing
processes or software, Terraform also supports adopting pre-existing objects
to bring them under Terraform's management without needing to recreate them
As an aid to those who are adopting OpenTF as a replacement for existing
processes or software, OpenTF also supports adopting pre-existing objects
to bring them under OpenTF's management without needing to recreate them
first.
When using this facility, the user provides the address of the resource
@ -331,7 +331,7 @@ by the provider on a per-resource-type basis, which we'll call the
The import process trades the user's **Import ID** for a special
**Import Stub State**, which behaves as a placeholder for the
**Previous Run State** pretending as if a previous Terraform run is what had
**Previous Run State** pretending as if a previous OpenTF run is what had
created the object.
### ImportResourceState
@ -340,7 +340,7 @@ The `ImportResourceState` operation takes the user's given **Import ID** and
uses it to verify that the given object exists and, if so, to retrieve enough
data about it to produce the **Import Stub State**.
Terraform Core will always pass the returned **Import Stub State** to the
OpenTF Core will always pass the returned **Import Stub State** to the
normal `ReadResource` operation after `ImportResourceState` returns it, so
in practice the provider may populate only the minimal subset of attributes
that `ReadResource` will need to do its work, letting the normal function
@ -348,7 +348,7 @@ deal with populating the rest of the data to match what is currently set in
the remote system.
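That division of labour can be sketched like this. The names and shapes are illustrative, not the real plugin API: the import step returns only a minimal stub carrying the ID, and the regular read step fills in the rest.

```go
package main

import "fmt"

// importResourceState is a sketch: trade the user's import ID for a
// minimal stub state. Only "id" is populated; readResource will fetch
// everything else from the remote system.
func importResourceState(importID string) map[string]string {
	return map[string]string{"id": importID}
}

// readResource stands in for the normal refresh step, which populates
// the remaining attributes from the (here: faked) remote API.
func readResource(stub map[string]string) map[string]string {
	state := map[string]string{}
	for k, v := range stub {
		state[k] = v
	}
	state["display_name"] = "imported-object" // pretend remote lookup
	return state
}

func main() {
	stub := importResourceState("i-abc123")
	fmt.Println(readResource(stub))
}
```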
For the same reasons that `ReadResource` is only a _best effort_ at detecting
changes outside of Terraform, a provider may not be able to fully support
changes outside of OpenTF, a provider may not be able to fully support
importing for all resource types. In that case, the provider developer must
choose between the following options:
@ -364,9 +364,9 @@ choose between the following options:
* Return an error explaining why importing isn't possible.
This is a last resort because of course it will then leave the user unable
to bring the existing object under Terraform's management. However, if a
to bring the existing object under OpenTF's management. However, if a
particular object's design doesn't suit importing then it can be a better
user experience to be clear and honest that the user must replace the object
as part of adopting Terraform, rather than to perform an import that will
leave the object in a situation where Terraform cannot meaningfully manage
as part of adopting OpenTF, rather than to perform an import that will
leave the object in a situation where OpenTF cannot meaningfully manage
it.
@ -1,15 +1,15 @@
# How Terraform Uses Unicode
# How OpenTF Uses Unicode
The Terraform language uses the Unicode standards as the basis of various
The OpenTF language uses the Unicode standards as the basis of various
different features. The Unicode Consortium publishes new versions of those
standards periodically, and we aim to adopt those new versions in new
minor releases of Terraform in order to support additional characters added
minor releases of OpenTF in order to support additional characters added
in those new versions.
Unfortunately due to those features being implemented by relying on a number
of external libraries, adopting a new version of Unicode is not as simple as
just updating a version number somewhere. This document aims to describe the
various steps required to adopt a new version of Unicode in Terraform.
various steps required to adopt a new version of Unicode in OpenTF.
We typically aim to be consistent across all of these dependencies as to which
major version of Unicode we currently conform to. The usual initial driver
@ -21,7 +21,7 @@ upgrading to a new Go version.
## Unicode tables in the Go standard library
Several Terraform language features are implemented in terms of functions in
Several OpenTF language features are implemented in terms of functions in
[the Go `strings` package](https://pkg.go.dev/strings),
[the Go `unicode` package](https://pkg.go.dev/unicode), and other supporting
packages in the Go standard library.
@ -32,13 +32,13 @@ particular Go version is available in
[`unicode.Version`](https://pkg.go.dev/unicode#Version).
We adopt a new version of Go by editing the `.go-version` file in the root
of this repository. Although it's typically possible to build Terraform with
of this repository. Although it's typically possible to build OpenTF with
other versions of Go, that file documents the version we intend to use for
official releases and thus the primary version we use for development and
testing. Adopting a new Go version typically also implies other behavior
changes inherited from the Go standard library, so it's important to review the
relevant version changelog(s) to note any behavior changes we'll need to pass
on to our own users via the Terraform changelog.
on to our own users via the OpenTF changelog.
The other subsystems described below should always be set up to match
`unicode.Version`. In some cases those libraries automatically try to align
@ -55,7 +55,7 @@ HCL uses a superset of that specification for its own identifier tokenization
rules, and so it includes some code derived from the TR31 data tables that
describe which characters belong to the "ID_Start" and "ID_Continue" classes.
Since Terraform is the primary user of HCL, it's typically Terraform's adoption
Since OpenTF is the primary user of HCL, it's typically OpenTF's adoption
of a new Unicode version which drives HCL to adopt one. To update the Unicode
tables to a new version:
* Edit `hclsyntax/generate.go`'s line which runs `unicode2ragel.rb` to specify
@ -67,7 +67,7 @@ tables to a new version:
order to complete this step.)
* Run all the tests to check for regressions: `go test ./...`
* If all looks good, commit all of the changes and open a PR to HCL.
* Once that PR is merged and released, update Terraform to use the new version
* Once that PR is merged and released, update OpenTF to use the new version
of HCL.
## Unicode Text Segmentation
@ -76,7 +76,7 @@ _Text Segmentation_ (TR29) is a Unicode standards annex which describes
algorithms for breaking strings into smaller units such as sentences, words,
and grapheme clusters.
Several Terraform language features make use of the _grapheme cluster_
Several OpenTF language features make use of the _grapheme cluster_
algorithm in particular, because it provides a practical definition of
individual visible characters, taking into account combining sequences such
as Latin letters with separate diacritics or Emoji characters with gender
@ -108,27 +108,27 @@ are needed.
Once a new Unicode version is included, the maintainer of that library will
typically publish a new major version that we can depend on. Two different
codebases included in Terraform all depend directly on the `go-textseg` module
codebases included in OpenTF both depend directly on the `go-textseg` module
for parts of their functionality:
* [`hashicorp/hcl`](https://github.com/hashicorp/hcl) uses text
segmentation as part of producing visual column offsets in source ranges
returned by the tokenizer and parser. Terraform in turn uses that library
for the underlying syntax of the Terraform language, and so it passes on
returned by the tokenizer and parser. OpenTF in turn uses that library
for the underlying syntax of the OpenTF language, and so it passes on
those source ranges to the end-user as part of diagnostic messages.
* The third-party module [`github.com/zclconf/go-cty`](https://github.com/zclconf/go-cty)
provides several of the Terraform language built in functions, including
provides several of the OpenTF language built-in functions, including
functions like `substr` and `length` which need to count grapheme clusters
as part of their implementation.
As part of upgrading Terraform's Unicode support we therefore typically also
As part of upgrading OpenTF's Unicode support we therefore typically also
open pull requests against these other codebases, and then adopt the new
versions that produces. Terraform work often drives the adoption of new Unicode
versions that result. OpenTF work often drives the adoption of new Unicode
versions in those codebases, with other dependencies following along when they
next upgrade.
At the time of writing Terraform itself doesn't _directly_ depend on
`go-textseg`, and so there are no specific changes required in this Terraform
At the time of writing OpenTF itself doesn't _directly_ depend on
`go-textseg`, and so there are no specific changes required in this OpenTF
codebase aside from the `go.sum` file update that always follows from
changes to transitive dependencies.
@ -5,14 +5,14 @@ package main
// experimentsAllowed can be set to any non-empty string using Go linker
// arguments in order to enable the use of experimental features for a
// particular Terraform build:
// particular OpenTF build:
//
// go install -ldflags="-X 'main.experimentsAllowed=yes'"
//
// By default this variable is initialized as empty, in which case
// experimental features are not available.
//
// The Terraform release process should arrange for this variable to be
// The OpenTF release process should arrange for this variable to be
// set for alpha releases and development snapshots, but _not_ for
// betas, release candidates, or final releases.
//
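The pattern the comment describes can be reproduced as a standalone sketch: a package-level string that stays empty by default and is only set when the linker is told to overwrite it via `-ldflags "-X ..."`. This is an illustration of the mechanism, not the actual release tooling:

```go
package main

import "fmt"

// experimentsAllowed mirrors the linker-variable pattern: empty unless
// set at link time, e.g.
//
//	go build -ldflags="-X 'main.experimentsAllowed=yes'"
var experimentsAllowed string

// experimentsEnabled reports whether this build permits experiments.
func experimentsEnabled() bool {
	return experimentsAllowed != ""
}

func main() {
	fmt.Println("experiments enabled:", experimentsEnabled())
}
```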
@ -13,7 +13,7 @@ import (
"github.com/mitchellh/cli"
)
// helpFunc is a cli.HelpFunc that can be used to output the help CLI instructions for Terraform.
// helpFunc is a cli.HelpFunc that can be used to output the help CLI instructions for OpenTF.
func helpFunc(commands map[string]cli.CommandFactory) string {
// Determine the maximum key length, and classify based on type
var otherCommands []string
@ -125,7 +125,7 @@ func applyDataStoreResourceChange(req providers.ApplyResourceChangeRequest) (res
if !req.PlannedState.GetAttr("id").IsKnown() {
idString, err := uuid.GenerateUUID()
// Terraform would probably never get this far without a good random
// OpenTF would probably never get this far without a good random
// source, but catch the error anyway.
if err != nil {
diag := tfdiags.AttributeValue(
@ -281,7 +281,7 @@ func (b *Cloud) Configure(obj cty.Value) tfdiags.Diagnostics {
// Return an error if we still don't have a token at this point.
if token == "" {
loginCommand := "terraform login"
loginCommand := "opentf login"
if b.hostname != defaultHostname {
loginCommand = loginCommand + " " + b.hostname
}
@ -741,7 +741,7 @@ func (b *Cloud) StateMgr(name string) (statemgr.Full, error) {
// Explicitly ignore the pseudo-version "latest" here, as it will cause
// plan and apply to always fail.
if remoteTFVersion != tfversion.String() && remoteTFVersion != "latest" {
return nil, fmt.Errorf("Remote workspace Terraform version %q does not match local Terraform version %q", remoteTFVersion, tfversion.String())
return nil, fmt.Errorf("Remote workspace TF version %q does not match local OpenTF version %q", remoteTFVersion, tfversion.String())
}
}
@ -785,7 +785,7 @@ func (b *Cloud) Operation(ctx context.Context, op *backend.Operation) (*backend.
case backend.OperationTypeApply:
f = b.opApply
case backend.OperationTypeRefresh:
// The `terraform refresh` command has been deprecated in favor of `terraform apply -refresh-state`.
// The `opentf refresh` command has been deprecated in favor of `opentf apply -refresh-state`.
// Rather than respond with an error telling the user to run the other command we can just run
// that command instead. We will tell the user what we are doing, and then do it.
if b.CLI != nil {
@ -965,8 +965,8 @@ func (b *Cloud) VerifyWorkspaceTerraformVersion(workspaceName string) tfdiags.Di
remoteConstraint, err := version.NewConstraint(workspace.TerraformVersion)
if err != nil {
message := fmt.Sprintf(
"The remote workspace specified an invalid Terraform version or constraint (%s), "+
"and it isn't possible to determine whether the local Terraform version (%s) is compatible.",
"The remote workspace specified an invalid TF version or constraint (%s), "+
"and it isn't possible to determine whether the local OpenTF version (%s) is compatible.",
workspace.TerraformVersion,
tfversion.String(),
)
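The gate above relies on `go-version` constraint parsing and comparison. As a rough stdlib-only sketch of the underlying numeric comparison (not the actual `go-version` API, and deliberately ignoring prereleases and constraint syntax):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// versionLess compares two dotted release versions numerically, segment
// by segment, so that "1.10" sorts after "1.2" (unlike a string compare).
func versionLess(a, b string) bool {
	as, bs := strings.Split(a, "."), strings.Split(b, ".")
	for i := 0; i < len(as) || i < len(bs); i++ {
		ai, bi := 0, 0
		if i < len(as) {
			ai, _ = strconv.Atoi(as[i])
		}
		if i < len(bs) {
			bi, _ = strconv.Atoi(bs[i])
		}
		if ai != bi {
			return ai < bi
		}
	}
	return false
}

func main() {
	fmt.Println(versionLess("0.13.5", "0.14.0")) // older than
	fmt.Println(versionLess("1.10", "1.2"))      // 10 > 2, not less
}
```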
@ -1018,7 +1018,7 @@ func (b *Cloud) VerifyWorkspaceTerraformVersion(workspaceName string) tfdiags.Di
}
message := fmt.Sprintf(
"The local Terraform version (%s) does not meet the version requirements for remote workspace %s/%s (%s).",
"The local OpenTF version (%s) does not meet the version requirements for remote workspace %s/%s (%s).",
tfversion.String(),
b.organization,
workspace.Name,
@ -1148,7 +1148,7 @@ func (b *Cloud) validWorkspaceEnvVar(ctx context.Context, organization, workspac
return tfdiags.Sourceless(
tfdiags.Error,
"Invalid workspace selection",
fmt.Sprintf(`Terraform failed to find workspace %q in organization %s.`, workspace, organization),
fmt.Sprintf(`OpenTF failed to find workspace %q in organization %s.`, workspace, organization),
)
}
@ -1185,7 +1185,7 @@ func (b *Cloud) validWorkspaceEnvVar(ctx context.Context, organization, workspac
tfdiags.Error,
"Invalid workspace selection",
fmt.Sprintf(
"Terraform failed to find workspace %q with the tags specified in your configuration:\n[%s]",
"OpenTF failed to find workspace %q with the tags specified in your configuration:\n[%s]",
workspace,
strings.ReplaceAll(opts.Tags, ",", ", "),
),
@ -1245,7 +1245,7 @@ func generalError(msg string, err error) error {
// The newline in this error is to make it look good in the CLI!
const initialRetryError = `
[reset][yellow]There was an error connecting to Terraform Cloud. Please do not exit
Terraform to prevent data loss! Trying to restore the connection...
OpenTF to prevent data loss! Trying to restore the connection...
[reset]
`
@ -1261,10 +1261,10 @@ const operationNotCanceled = `
[reset][red]The remote operation was not cancelled.[reset]
`
const refreshToApplyRefresh = `[bold][yellow]Proceeding with 'terraform apply -refresh-only -auto-approve'.[reset]`
const refreshToApplyRefresh = `[bold][yellow]Proceeding with 'opentf apply -refresh-only -auto-approve'.[reset]`
const unavailableTerraformVersion = `
[reset][yellow]The local Terraform version (%s) is not available in Terraform Cloud, or your
[reset][yellow]The local OpenTF version (%s) is not available in Terraform Cloud, or your
organization does not have access to it. The new workspace will use %s. You can
change this later in the workspace settings.[reset]`
@ -1272,11 +1272,11 @@ const cloudIntegrationUsedInUnsupportedTFE = `
This version of Terraform Cloud/Enterprise does not support the state mechanism
attempting to be used by the platform. This should never happen.
Please reach out to HashiCorp Support to resolve this issue.`
Please reach out to OpenTF Support to resolve this issue.`
var (
workspaceConfigurationHelp = fmt.Sprintf(
`The 'workspaces' block configures how Terraform CLI maps its workspaces for this single
`The 'workspaces' block configures how OpenTF CLI maps its workspaces for this single
configuration to workspaces within a Terraform Cloud organization. Two strategies are available:
[bold]tags[reset] - %s
@ -1289,12 +1289,12 @@ for use with Terraform Cloud.`
schemaDescriptionOrganization = `The name of the organization containing the targeted workspace(s).`
schemaDescriptionToken = `The token used to authenticate with Terraform Cloud/Enterprise. Typically this argument should not
be set, and 'terraform login' used instead; your credentials will then be fetched from your CLI
be set, and 'opentf login' used instead; your credentials will then be fetched from your CLI
configuration file or configured credential helper.`
schemaDescriptionTags = `A set of tags used to select remote Terraform Cloud workspaces to be used for this single
configuration. New workspaces will automatically be tagged with these tag values. Generally, this
is the primary and recommended strategy to use. This option conflicts with "name".`
is the primary and recommended strategy to use. This option conflicts with "name".`
schemaDescriptionName = `The name of a single Terraform Cloud workspace to be used with this configuration.
When configured, only the specified workspace can be used. This option conflicts with "tags".`
@ -71,7 +71,7 @@ func (b *Cloud) opApply(stopCtx, cancelCtx context.Context, op *backend.Operatio
"No configuration files found",
`Apply requires configuration to be present. Applying without a configuration `+
`would mark everything for destruction, which is normally not what is desired. `+
`If you would like to destroy everything, please run 'terraform destroy' which `+
`If you would like to destroy everything, please run 'opentf destroy' which `+
`does not require any configuration files.`,
))
}
@ -158,11 +158,11 @@ func (b *Cloud) opApply(stopCtx, cancelCtx context.Context, op *backend.Operatio
if op.PlanMode == plans.DestroyMode {
opts.Query = "\nDo you really want to destroy all resources in workspace \"" + op.Workspace + "\"?"
opts.Description = "Terraform will destroy all your managed infrastructure, as shown above.\n" +
opts.Description = "OpenTF will destroy all your managed infrastructure, as shown above.\n" +
"There is no undo. Only 'yes' will be accepted to confirm."
} else {
opts.Query = "\nDo you want to perform these actions in workspace \"" + op.Workspace + "\"?"
opts.Description = "Terraform will perform the actions described above.\n" +
opts.Description = "OpenTF will perform the actions described above.\n" +
"Only 'yes' will be accepted to approve."
}
@ -1948,7 +1948,7 @@ func TestCloud_applyVersionCheck(t *testing.T) {
}
const applySuccessOneResourceAdded = `
Terraform v0.11.10
OpenTF v0.11.10
Initializing plugins and modules...
null_resource.hello: Creating...
@ -147,7 +147,7 @@ func (b *Cloud) LocalRun(op *backend.Operation) (*backend.LocalRun, statemgr.Ful
diags = diags.Append(ctxDiags)
ret.Core = tfCtx
log.Printf("[TRACE] cloud: finished building terraform.Context")
log.Printf("[TRACE] cloud: finished building opentf.Context")
return ret, stateMgr, diags
}
@ -74,7 +74,7 @@ func (b *Cloud) opPlan(stopCtx, cancelCtx context.Context, op *backend.Operation
`would mark everything for destruction, which is normally not what is desired. `+
`If you would like to destroy everything, please run plan with the "-destroy" `+
`flag or create a single empty configuration file. Otherwise, please create `+
`a Terraform configuration file in the path being executed and try again.`,
`an OpenTF configuration file in the path being executed and try again.`,
))
}
@ -160,7 +160,7 @@ func (b *Cloud) plan(stopCtx, cancelCtx context.Context, op *backend.Operation,
The remote workspace is configured to work with configuration at
%s relative to the target repository.
Terraform will upload the contents of the following directory,
OpenTF will upload the contents of the following directory,
excluding files or directories as defined by a .terraformignore file
at %s/.terraformignore (if it is present),
in order to capture the filesystem context the remote workspace expects:
@ -400,7 +400,7 @@ func (b *Cloud) AssertImportCompatible(config *configs.Config) error {
// First, check the remote API version is high enough.
currentAPIVersion, err := version.NewVersion(b.client.RemoteAPIVersion())
if err != nil {
return fmt.Errorf("Error parsing remote API version. To proceed, please remove any import blocks from your config. Please report the following error to the Terraform team: %s", err)
return fmt.Errorf("Error parsing remote API version. To proceed, please remove any import blocks from your config. Please report the following error to the OpenTF team: %s", err)
}
desiredAPIVersion, _ := version.NewVersion("2.6")
if currentAPIVersion.LessThan(desiredAPIVersion) {
@ -410,11 +410,11 @@ func (b *Cloud) AssertImportCompatible(config *configs.Config) error {
// Second, check the agent version is high enough.
agentEnv, isSet := os.LookupEnv("TFC_AGENT_VERSION")
if !isSet {
return fmt.Errorf("Error reading TFC agent version. To proceed, please remove any import blocks from your config. Please report the following error to the Terraform team: TFC_AGENT_VERSION not present.")
return fmt.Errorf("Error reading TFC agent version. To proceed, please remove any import blocks from your config. Please report the following error to the OpenTF team: TFC_AGENT_VERSION not present.")
}
currentAgentVersion, err := version.NewVersion(agentEnv)
if err != nil {
return fmt.Errorf("Error parsing TFC agent version. To proceed, please remove any import blocks from your config. Please report the following error to the Terraform team: %s", err)
return fmt.Errorf("Error parsing TFC agent version. To proceed, please remove any import blocks from your config. Please report the following error to the OpenTF team: %s", err)
}
desiredAgentVersion, _ := version.NewVersion("1.10")
if currentAgentVersion.LessThan(desiredAgentVersion) {
@ -71,7 +71,7 @@ func TestCloud_refreshBasicActuallyRunsApplyRefresh(t *testing.T) {
}
output := b.CLI.(*cli.MockUi).OutputWriter.String()
if !strings.Contains(output, "Proceeding with 'terraform apply -refresh-only -auto-approve'") {
if !strings.Contains(output, "Proceeding with 'opentf apply -refresh-only -auto-approve'") {
t.Fatalf("expected TFC header in output: %s", output)
}
@ -31,7 +31,7 @@ func (b *Cloud) ShowPlanForRun(ctx context.Context, runID, runHostname string, r
// Get run and plan
r, err := b.client.Runs.ReadWithOptions(ctx, runID, &tfe.RunReadOptions{Include: []tfe.RunIncludeOpt{tfe.RunPlan, tfe.RunWorkspace}})
if err == tfe.ErrResourceNotFound {
return nil, fmt.Errorf("couldn't read information for cloud run %s; make sure you've run `terraform login` and that you have permission to view the run", runID)
return nil, fmt.Errorf("couldn't read information for cloud run %s; make sure you've run `opentf login` and that you have permission to view the run", runID)
} else if err != nil {
return nil, fmt.Errorf("couldn't read information for cloud run %s: %w", runID, err)
}
@ -67,9 +67,9 @@ func (b *Cloud) ShowPlanForRun(ctx context.Context, runID, runHostname string, r
}
if err == tfe.ErrResourceNotFound {
if redacted {
return nil, fmt.Errorf("couldn't read plan data for cloud run %s; make sure you've run `terraform login` and that you have permission to view the run", runID)
return nil, fmt.Errorf("couldn't read plan data for cloud run %s; make sure you've run `opentf login` and that you have permission to view the run", runID)
} else {
return nil, fmt.Errorf("couldn't read unredacted JSON plan data for cloud run %s; make sure you've run `terraform login` and that you have admin permissions on the workspace", runID)
return nil, fmt.Errorf("couldn't read unredacted JSON plan data for cloud run %s; make sure you've run `opentf login` and that you have admin permissions on the workspace", runID)
}
} else if err != nil {
return nil, fmt.Errorf("couldn't read plan data for cloud run %s: %w", runID, err)
@ -39,7 +39,7 @@ func TestCloud_showMissingRun(t *testing.T) {
absentRunID := "run-WwwwXxxxYyyyZzzz"
_, err := b.ShowPlanForRun(context.Background(), absentRunID, "app.terraform.io", true)
if !strings.Contains(err.Error(), "terraform login") {
if !strings.Contains(err.Error(), "opentf login") {
t.Fatalf("expected error message to suggest checking your login status, instead got: %s", err)
}
}
@ -356,7 +356,7 @@ func WithEnvVars(t *testing.T) {
vars: map[string]string{
"TF_WORKSPACE": "i-dont-exist-in-org",
},
expectedErr: `Invalid workspace selection: Terraform failed to find workspace "i-dont-exist-in-org" in organization hashicorp`,
expectedErr: `Invalid workspace selection: OpenTF failed to find workspace "i-dont-exist-in-org" in organization hashicorp`,
},
"workspaces and env var specified": {
config: cty.ObjectVal(map[string]cty.Value{
@ -399,7 +399,7 @@ func WithEnvVars(t *testing.T) {
vars: map[string]string{
"TF_WORKSPACE": "shire",
},
expectedErr: "Terraform failed to find workspace \"shire\" with the tags specified in your configuration:\n[cloud]",
expectedErr: "OpenTF failed to find workspace \"shire\" with the tags specified in your configuration:\n[cloud]",
},
"env var workspace has specified tag": {
setup: func(b *Cloud) {
@ -610,7 +610,7 @@ func TestCloud_config(t *testing.T) {
"project": cty.NullVal(cty.String),
}),
}),
confErr: "terraform login localhost",
confErr: "opentf login localhost",
},
"with_tags": {
config: cty.ObjectVal(map[string]cty.Value{
@ -808,7 +808,7 @@ func TestCloud_setUnavailableTerraformVersion(t *testing.T) {
_, err = b.StateMgr(workspaceName)
if err != nil {
t.Fatalf("expected no error from StateMgr, despite not being able to set remote Terraform version: %#v", err)
t.Fatalf("expected no error from StateMgr, despite not being able to set remote TF version: %#v", err)
}
// Make sure the workspace was created:
workspace, err := b.client.Workspaces.Read(context.Background(), b.organization, workspaceName)
@ -822,7 +822,7 @@ func TestCloud_setUnavailableTerraformVersion(t *testing.T) {
tfe.WorkspaceUpdateOptions{TerraformVersion: tfe.String("1.1.0")},
)
if err == nil {
t.Fatalf("the mocks aren't emulating a nonexistent remote Terraform version correctly, so this test isn't trustworthy anymore")
t.Fatalf("the mocks aren't emulating a nonexistent remote TF version correctly, so this test isn't trustworthy anymore")
}
}
@ -1089,7 +1089,7 @@ func TestCloud_StateMgr_versionCheck(t *testing.T) {
}
// This should fail
want := `Remote workspace Terraform version "0.13.5" does not match local Terraform version "0.14.0"`
want := `Remote workspace TF version "0.13.5" does not match local OpenTF version "0.14.0"`
if _, err := b.StateMgr(testBackendSingleWorkspaceName); err.Error() != want {
t.Fatalf("wrong error\n got: %v\nwant: %v", err.Error(), want)
}
@ -1203,7 +1203,7 @@ func TestCloud_VerifyWorkspaceTerraformVersion(t *testing.T) {
if len(diags) != 1 {
t.Fatal("expected diag, but none returned")
}
if got := diags.Err().Error(); !strings.Contains(got, "Incompatible Terraform version") {
if got := diags.Err().Error(); !strings.Contains(got, "Incompatible TF version") {
t.Fatalf("unexpected error: %s", got)
}
} else {
@ -1252,7 +1252,7 @@ func TestCloud_VerifyWorkspaceTerraformVersion_workspaceErrors(t *testing.T) {
if len(diags) != 1 {
t.Fatal("expected diag, but none returned")
}
if got := diags.Err().Error(); !strings.Contains(got, "Incompatible Terraform version: The remote workspace specified") {
if got := diags.Err().Error(); !strings.Contains(got, "Incompatible TF version: The remote workspace specified") {
t.Fatalf("unexpected error: %s", got)
}
}
@ -1304,10 +1304,10 @@ func TestCloud_VerifyWorkspaceTerraformVersion_ignoreFlagSet(t *testing.T) {
if got, want := diags[0].Severity(), tfdiags.Warning; got != want {
t.Errorf("wrong severity: got %#v, want %#v", got, want)
}
if got, want := diags[0].Description().Summary, "Incompatible Terraform version"; got != want {
if got, want := diags[0].Description().Summary, "Incompatible TF version"; got != want {
t.Errorf("wrong summary: got %s, want %s", got, want)
}
wantDetail := "The local Terraform version (0.14.0) does not meet the version requirements for remote workspace hashicorp/app-prod (0.13.5)."
wantDetail := "The local OpenTF version (0.14.0) does not meet the version requirements for remote workspace hashicorp/app-prod (0.13.5)."
if got := diags[0].Description().Detail; got != wantDetail {
t.Errorf("wrong summary: got %s, want %s", got, wantDetail)
}
@ -6,7 +6,7 @@ TFE_TOKEN=<token> TFE_HOSTNAME=<hostname> TF_ACC=1 go test ./internal/cloud/e2e
```
Required flags
* `TF_ACC=1`. This variable is used as part of terraform for tests that make
* `TF_ACC=1`. This variable gates opentf tests that make
external network calls. This is needed to run these tests. Without it, the
tests do not run.
* `TFE_TOKEN=<admin token>` and `TFE_HOSTNAME=<hostname>`. The helpers
@ -16,9 +16,9 @@ for these tests require admin access to a TFC/TFE instance.
### Flags
* Use the `-v` flag for normal verbose mode.
* Use the `-tfoutput` flag to print the terraform output to standard out.
* Use the `-tfoutput` flag to print the opentf output to standard out.
* Use `-ldflags` to change the version Prerelease to match a version
available remotely. Some behaviors rely on the exact local version Terraform
available remotely. Some behaviors rely on the exact local OpenTF version
being available in TFC/TFE, and manipulating the Prerelease during build is
often the only way to ensure this.
[(More on `-ldflags`.)](https://www.digitalocean.com/community/tutorials/using-ldflags-to-set-version-information-for-go-applications)
@ -40,7 +40,7 @@ var (
)
)
const ignoreRemoteVersionHelp = "If you're sure you want to upgrade the state, you can force Terraform to continue using the -ignore-remote-version flag. This may result in an unusable workspace."
const ignoreRemoteVersionHelp = "If you're sure you want to upgrade the state, you can force OpenTF to continue using the -ignore-remote-version flag. This may result in an unusable workspace."
func missingConfigAttributeAndEnvVar(attribute string, envVar string) tfdiags.Diagnostic {
detail := strings.TrimSpace(fmt.Sprintf("\"%s\" must be set in the cloud configuration or as an environment variable: %s.\n", attribute, envVar))
@ -59,5 +59,5 @@ func incompatibleWorkspaceTerraformVersion(message string, ignoreVersionConflict
suggestion = ""
}
description := strings.TrimSpace(fmt.Sprintf("%s\n\n%s", message, suggestion))
return tfdiags.Sourceless(severity, "Incompatible Terraform version", description)
return tfdiags.Sourceless(severity, "Incompatible OpenTF version", description)
}

View File

@ -76,7 +76,7 @@ type State struct {
var ErrStateVersionUnauthorizedUpgradeState = errors.New(strings.TrimSpace(`
You are not authorized to read the full state version containing outputs.
State versions created by terraform v1.3.0 and newer do not require this level
State versions created by opentf v1.3.0 and newer do not require this level
of authorization and therefore this error can usually be fixed by upgrading the
remote state version.
`))
@ -338,7 +338,7 @@ func (s *State) Lock(info *statemgr.LockInfo) (string, error) {
// Lock the workspace.
_, err := s.tfeClient.Workspaces.Lock(ctx, s.workspace.ID, tfe.WorkspaceLockOptions{
Reason: tfe.String("Locked by Terraform"),
Reason: tfe.String("Locked by OpenTF"),
})
if err != nil {
if err == tfe.ErrWorkspaceLocked {

View File

@ -83,7 +83,7 @@ func ParseApply(args []string) (*Apply, tfdiags.Diagnostics) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Plan file or auto-approve required",
"Terraform cannot ask for interactive approval when -json is set. You can either apply a saved plan file, or enable the -auto-approve option.",
"OpenTF cannot ask for interactive approval when -json is set. You can either apply a saved plan file, or enable the -auto-approve option.",
))
}
@ -100,13 +100,13 @@ func ParseApply(args []string) (*Apply, tfdiags.Diagnostics) {
}
// ParseApplyDestroy is a special case of ParseApply that deals with the
// "terraform destroy" command, which is effectively an alias for
// "terraform apply -destroy".
// "opentf destroy" command, which is effectively an alias for
// "opentf apply -destroy".
func ParseApplyDestroy(args []string) (*Apply, tfdiags.Diagnostics) {
apply, diags := ParseApply(args)
// So far ParseApply was using the command line options like -destroy
// and -refresh-only to determine the plan mode. For "terraform destroy"
// and -refresh-only to determine the plan mode. For "opentf destroy"
// we expect neither of those arguments to be set, and so the plan mode
// should currently be set to NormalMode, which we'll replace with
// DestroyMode here. If it's already set to something else then that
@ -121,13 +121,13 @@ func ParseApplyDestroy(args []string) (*Apply, tfdiags.Diagnostics) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Invalid mode option",
"The -destroy option is not valid for \"terraform destroy\", because this command always runs in destroy mode.",
"The -destroy option is not valid for \"opentf destroy\", because this command always runs in destroy mode.",
))
case plans.RefreshOnlyMode:
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Invalid mode option",
"The -refresh-only option is not valid for \"terraform destroy\".",
"The -refresh-only option is not valid for \"opentf destroy\".",
))
default:
// This is a non-ideal error message for if we forget to handle a
@ -136,7 +136,7 @@ func ParseApplyDestroy(args []string) (*Apply, tfdiags.Diagnostics) {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Invalid mode option",
fmt.Sprintf("The \"terraform destroy\" command doesn't support %s.", apply.Operation.PlanMode),
fmt.Sprintf("The \"opentf destroy\" command doesn't support %s.", apply.Operation.PlanMode),
))
}
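The mode-replacement logic above can be condensed into a small sketch. The `planMode` names and the `toDestroyMode` helper below are hypothetical stand-ins for `plans.Mode` and the real parsing code, and the error text is abbreviated:

```go
package main

import (
	"errors"
	"fmt"
)

// planMode stands in for plans.Mode; illustrative only.
type planMode int

const (
	normalMode planMode = iota
	destroyMode
	refreshOnlyMode
)

// toDestroyMode mirrors the switch described above: "opentf destroy"
// expects the parsed mode to still be NormalMode and replaces it with
// DestroyMode; any other mode is a usage error.
func toDestroyMode(m planMode) (planMode, error) {
	switch m {
	case normalMode:
		return destroyMode, nil
	case destroyMode:
		return m, errors.New(`the -destroy option is not valid for "opentf destroy"`)
	case refreshOnlyMode:
		return m, errors.New(`the -refresh-only option is not valid for "opentf destroy"`)
	default:
		return m, fmt.Errorf(`"opentf destroy" doesn't support mode %d`, m)
	}
}

func main() {
	m, err := toDestroyMode(normalMode)
	fmt.Println(m == destroyMode, err == nil) // true true
	_, err = toDestroyMode(refreshOnlyMode)
	fmt.Println(err != nil) // true
}
```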

View File

@ -10,16 +10,17 @@ import (
"github.com/hashicorp/hcl/v2"
"github.com/hashicorp/hcl/v2/hclsyntax"
"github.com/placeholderplaceholderplaceholder/opentf/internal/addrs"
"github.com/placeholderplaceholderplaceholder/opentf/internal/plans"
"github.com/placeholderplaceholderplaceholder/opentf/internal/tfdiags"
)
// DefaultParallelism is the limit Terraform places on total parallel
// DefaultParallelism is the limit OpenTF places on total parallel
// operations as it walks the dependency graph.
const DefaultParallelism = 10
// State describes arguments which are used to define how Terraform interacts
// State describes arguments which are used to define how OpenTF interacts
// with state.
type State struct {
// Lock controls whether or not the state manager is used to lock state
@ -46,7 +47,7 @@ type State struct {
BackupPath string
}
// Operation describes arguments which are used to configure how a Terraform
// Operation describes arguments which are used to configure how an OpenTF
// operation such as a plan or apply executes.
type Operation struct {
// PlanMode selects one of the mutually-exclusive planning modes that
@ -54,7 +55,7 @@ type Operation struct {
// only for an operation that produces a plan.
PlanMode plans.Mode
// Parallelism is the limit Terraform places on total parallel operations
// Parallelism is the limit OpenTF places on total parallel operations
// as it walks the dependency graph.
Parallelism int
@ -66,7 +67,7 @@ type Operation struct {
// their dependencies.
Targets []addrs.Targetable
// ForceReplace addresses cause Terraform to force a particular set of
// ForceReplace addresses cause OpenTF to force a particular set of
// resource instances to generate "replace" actions in any plan where they
// would normally have generated "no-op" or "update" actions.
//
@ -169,7 +170,7 @@ func (o *Operation) Parse() tfdiags.Diagnostics {
diags = diags.Append(tfdiags.Sourceless(
tfdiags.Error,
"Incompatible refresh options",
"It doesn't make sense to use -refresh-only at the same time as -refresh=false, because Terraform would have nothing to do.",
"It doesn't make sense to use -refresh-only at the same time as -refresh=false, because OpenTF would have nothing to do.",
))
}
default:

View File

@ -25,7 +25,7 @@ type Plan struct {
// OutPath contains an optional path to store the plan file
OutPath string
// GenerateConfigPath tells Terraform that config should be generated for
// GenerateConfigPath tells OpenTF that config should be generated for
// unmatched import target paths and which path the generated file should
// be written to.
GenerateConfigPath string

View File

@ -18,7 +18,7 @@ type Validate struct {
// Path.
TestDirectory string
// NoTests indicates that Terraform should not validate any test files
// NoTests indicates that OpenTF should not validate any test files
// included with the module.
NoTests bool

View File

@ -6,7 +6,7 @@
//
// The CLI config is a small collection of settings that a user can override via
// some files in their home directory or, in some cases, via environment
// variables. The CLI config is not the same thing as a Terraform configuration
// variables. The CLI config is not the same thing as an OpenTF configuration
// written in the Terraform language; the logic for those lives in the top-level
// directory "configs".
package cliconfig
@ -24,15 +24,16 @@ import (
"github.com/hashicorp/hcl"
svchost "github.com/hashicorp/terraform-svchost"
"github.com/placeholderplaceholderplaceholder/opentf/internal/tfdiags"
)
const pluginCacheDirEnvVar = "TF_PLUGIN_CACHE_DIR"
const pluginCacheMayBreakLockFileEnvVar = "TF_PLUGIN_CACHE_MAY_BREAK_DEPENDENCY_LOCK_FILE"
// Config is the structure of the configuration for the Terraform CLI.
// Config is the structure of the configuration for the OpenTF CLI.
//
// This is not the configuration for Terraform itself. That is in the
// This is not the configuration for OpenTF itself. That is in the
// "config" package.
type Config struct {
Providers map[string]string
@ -49,9 +50,9 @@ type Config struct {
// those who wish to use the Plugin Cache Dir even in cases where doing so
// will cause the dependency lock file to be incomplete.
//
// This is likely to become a silent no-op in future Terraform versions but
// This is likely to become a silent no-op in future OpenTF versions but
// is here in recognition of the fact that the dependency lock file is not
// yet a good fit for all Terraform workflows and folks in that category
// yet a good fit for all OpenTF workflows and folks in that category
// would prefer the plugin cache dir's behavior to take priority
// over the requirements of the dependency lock file.
PluginCacheMayBreakDependencyLockFile bool `hcl:"plugin_cache_may_break_dependency_lock_file"`
@ -94,7 +95,7 @@ func ConfigFile() (string, error) {
return configFile()
}
// ConfigDir returns the configuration directory for Terraform.
// ConfigDir returns the configuration directory for OpenTF.
func ConfigDir() (string, error) {
return configDir()
}
@ -122,7 +123,7 @@ func LoadConfig() (*Config, tfdiags.Diagnostics) {
// in the config directory. We skip the config directory when source
// file override is set because we interpret the environment variable
// being set as an intention to ignore the default set of CLI config
// files because we're doing something special, like running Terraform
// files because we're doing something special, like running OpenTF
// in automation with a locally-customized configuration.
if cliConfigFileOverride() == "" {
if configDir, err := ConfigDir(); err == nil {

View File

@ -18,6 +18,7 @@ import (
svchost "github.com/hashicorp/terraform-svchost"
svcauth "github.com/hashicorp/terraform-svchost/auth"
"github.com/placeholderplaceholderplaceholder/opentf/internal/configs/hcl2shim"
pluginDiscovery "github.com/placeholderplaceholderplaceholder/opentf/internal/plugin/discovery"
"github.com/placeholderplaceholderplaceholder/opentf/internal/replacefile"
@ -150,12 +151,12 @@ func collectCredentialsFromEnv() map[svchost.Hostname]string {
// libraries that might interfere with how they are encoded, we'll
// be tolerant of them being given either directly as UTF-8 IDNs
// or in Punycode form, normalizing to Punycode form here because
// that is what the Terraform credentials helper protocol will
// that is what the OpenTF credentials helper protocol will
// use in its requests.
//
// Using ForDisplay first here makes this more liberal than Terraform
// Using ForDisplay first here makes this more liberal than OpenTF
// itself would usually be in that it will tolerate pre-punycoded
// hostnames that Terraform normally rejects in other contexts in order
// hostnames that OpenTF normally rejects in other contexts in order
// to ensure stored hostnames are human-readable.
dispHost := svchost.ForDisplay(rawHost)
hostname, err := svchost.ForComparison(dispHost)

View File

@ -9,6 +9,7 @@ import (
"github.com/hashicorp/hcl"
hclast "github.com/hashicorp/hcl/hcl/ast"
"github.com/placeholderplaceholderplaceholder/opentf/internal/addrs"
"github.com/placeholderplaceholderplaceholder/opentf/internal/getproviders"
"github.com/placeholderplaceholderplaceholder/opentf/internal/tfdiags"
@ -30,7 +31,7 @@ type ProviderInstallation struct {
// This is _not_ intended for "production" use because it bypasses the
// usual version selection and checksum verification mechanisms for
// the providers in question. To make that intent/effect clearer, some
// Terraform commands emit warnings when overrides are present. Local
// OpenTF commands emit warnings when overrides are present. Local
// mirror directories are a better way to distribute "released"
// providers, because they are still subject to version constraints and
// checksum verification.

View File

@ -6,7 +6,7 @@ credentials "example.com" {
credentials "example.net" {
# Username and password are not currently supported, but we want to tolerate
# unknown keys in case future versions add new keys when both old and new
# versions of Terraform are installed on a system, sharing the same
# versions of OpenTF are installed on a system, sharing the same
# CLI config.
username = "foo"
password = "baz"

View File

@ -23,18 +23,18 @@ const (
LockThreshold = 400 * time.Millisecond
LockErrorMessage = `Error message: %s
Terraform acquires a state lock to protect the state from being written
OpenTF acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.`
UnlockErrorMessage = `Error message: %s
Terraform acquires a lock when accessing your state to prevent others
running Terraform to potentially modify the state at the same time. An
OpenTF acquires a lock when accessing your state to prevent others
running OpenTF from modifying the state at the same time. An
error occurred while releasing this lock. This could mean that the lock
did or did not release properly. If the lock didn't release properly,
Terraform may not be able to run future commands since it'll appear as if
OpenTF may not be able to run future commands since it'll appear as if
the lock is held.
In this scenario, please call the "force-unlock" command to unlock the

View File

@ -1,16 +1,16 @@
# jsonformat
This package contains functionality around formatting and displaying the JSON
structured output produced by adding the `-json` flag to various Terraform
structured output produced by adding the `-json` flag to various OpenTF
commands.
## Terraform Structured Plan Renderer
## OpenTF Structured Plan Renderer
As of January 2023, this package contains only a single structure: the
`Renderer`.
The renderer accepts the JSON structured output produced by the
`terraform show <plan-file> -json` command and writes it in a human-readable
`opentf show <plan-file> -json` command and writes it in a human-readable
format.
Implementation details and decisions for the `Renderer` are discussed in the
@ -30,9 +30,9 @@ concerned with the complex diff calculations.
#### The `differ` package
The `differ` package operates on `Change` objects. These are produced from
`jsonplan.Change` objects (which are produced by the `terraform show` command).
`jsonplan.Change` objects (which are produced by the `opentf show` command).
Each `jsonplan.Change` object represents a single resource within the overall
Terraform configuration.
OpenTF configuration.
The `differ` package will iterate through the `Change` objects and produce a
single `Diff` that represents a processed summary of the changes described by

View File

@ -2,7 +2,7 @@
// SPDX-License-Identifier: MPL-2.0
// Package computed contains types that represent the computed diffs for
// Terraform blocks, attributes, and outputs.
// OpenTF blocks, attributes, and outputs.
//
// Each Diff struct is made up of a renderer, an action, and a boolean
// describing the diff. The renderer internally holds child diffs or concrete

View File

@ -2743,7 +2743,7 @@ func TestSpecificCases(t *testing.T) {
}, nil, nil, nil, nil, plans.Create, false),
},
// The following tests are from issue 33472. Basically Terraform allows
// The following tests are from issue 33472. Basically OpenTF allows
// callers to treat numbers as strings in references and expects us
// to coerce the strings into numbers. For example the following are
// equivalent.

View File

@ -26,7 +26,7 @@ func computeAttributeDiffAsList(change structured.Change, elementType cty.Type)
// we just treat all children of a relevant list or set as also
// relevant.
//
// Interestingly the terraform plan builder also agrees with this, and
// Interestingly the opentf plan builder also agrees with this, and
// never sets relevant attributes beneath lists or sets. We're just
// going to enforce this logic here as well. If the collection is
// relevant (decided elsewhere), then every element in the collection is

View File

@ -90,7 +90,7 @@ func processSet(change structured.Change, process func(value structured.Change))
// we just treat all children of a relevant list or set as also
// relevant.
//
// Interestingly the terraform plan builder also agrees with this, and
// Interestingly the opentf plan builder also agrees with this, and
// never sets relevant attributes beneath lists or sets. We're just
// going to enforce this logic here as well. If the collection is
// relevant (decided elsewhere), then every element in the collection is

View File

@ -108,7 +108,7 @@ func (opts JsonOpts) processArray(change structured.ChangeSlice) computed.Diff {
// we just treat all children of a relevant list as also relevant, so we
// ignore the relevant attributes field.
//
// Interestingly the terraform plan builder also agrees with this, and
// Interestingly the opentf plan builder also agrees with this, and
// never sets relevant attributes beneath lists or sets. We're just
// going to enforce this logic here as well. If the list is relevant
// (decided elsewhere), then every element in the list is also relevant.

View File

@ -85,7 +85,7 @@ type Renderer struct {
func (renderer Renderer) RenderHumanPlan(plan Plan, mode plans.Mode, opts ...plans.Quality) {
if incompatibleVersions(jsonplan.FormatVersion, plan.PlanFormatVersion) || incompatibleVersions(jsonprovider.FormatVersion, plan.ProviderFormatVersion) {
renderer.Streams.Println(format.WordWrap(
renderer.Colorize.Color("\n[bold][red]Warning:[reset][bold] This plan was generated using a different version of Terraform, the diff presented here may be missing representations of recent features."),
renderer.Colorize.Color("\n[bold][red]Warning:[reset][bold] This plan was generated using a different version of OpenTF, the diff presented here may be missing representations of recent features."),
renderer.Streams.Stdout.Columns()))
}
@ -95,7 +95,7 @@ func (renderer Renderer) RenderHumanPlan(plan Plan, mode plans.Mode, opts ...pla
func (renderer Renderer) RenderHumanState(state State) {
if incompatibleVersions(jsonstate.FormatVersion, state.StateFormatVersion) || incompatibleVersions(jsonprovider.FormatVersion, state.ProviderFormatVersion) {
renderer.Streams.Println(format.WordWrap(
renderer.Colorize.Color("\n[bold][red]Warning:[reset][bold] This state was retrieved using a different version of Terraform, the state presented here maybe missing representations of recent features."),
renderer.Colorize.Color("\n[bold][red]Warning:[reset][bold] This state was retrieved using a different version of OpenTF, the state presented here may be missing representations of recent features."),
renderer.Streams.Stdout.Columns()))
}

View File

@ -177,13 +177,13 @@ func (p *PathMatcher) GetChildWithIndex(index int) Matcher {
continue
}
// Terraform actually allows user to provide strings into indexes as
// OpenTF actually allows users to provide strings as indexes, as
// long as the string can be interpreted as a number. For example, the
// following are equivalent and we need to support them.
// - test_resource.resource.list[0].attribute
// - test_resource.resource.list["0"].attribute
//
// Note, that Terraform will raise a validation error if the string
// Note, that OpenTF will raise a validation error if the string
// can't be coerced into a number, so we will panic here if anything
// goes wrong safe in the knowledge the validation should stop this from
// happening.
@ -196,13 +196,13 @@ func (p *PathMatcher) GetChildWithIndex(index int) Matcher {
case string:
f, err := strconv.ParseFloat(val, 64)
if err != nil {
panic(fmt.Errorf("found invalid type within path (%v:%T), the validation shouldn't have allowed this to happen; this is a bug in Terraform, please report it", val, val))
panic(fmt.Errorf("found invalid type within path (%v:%T), the validation shouldn't have allowed this to happen; this is a bug in OpenTF, please report it", val, val))
}
if int(f) == index {
child.Paths = append(child.Paths, path[1:])
}
default:
panic(fmt.Errorf("found invalid type within path (%v:%T), the validation shouldn't have allowed this to happen; this is a bug in Terraform, please report it", val, val))
panic(fmt.Errorf("found invalid type within path (%v:%T), the validation shouldn't have allowed this to happen; this is a bug in OpenTF, please report it", val, val))
}
}
return child
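The string-to-index coercion described in the comments above can be sketched as a standalone helper (`indexMatches` is a hypothetical name, not the actual `PathMatcher` code, and it returns an error where the real code panics after validation):

```go
package main

import (
	"fmt"
	"strconv"
)

// indexMatches reports whether a path step matches the given list
// index. A step may arrive as a float64 (how JSON numbers decode) or
// as a string that must be coercible to a number, mirroring the
// list[0] / list["0"] equivalence described above.
func indexMatches(step interface{}, index int) (bool, error) {
	switch val := step.(type) {
	case float64:
		return int(val) == index, nil
	case string:
		f, err := strconv.ParseFloat(val, 64)
		if err != nil {
			return false, fmt.Errorf("invalid index %q", val)
		}
		return int(f) == index, nil
	default:
		return false, fmt.Errorf("invalid type within path (%v:%T)", val, val)
	}
}

func main() {
	ok, _ := indexMatches("0", 0) // list["0"] matches list[0]
	fmt.Println(ok)               // true
	ok, _ = indexMatches(float64(3), 3)
	fmt.Println(ok) // true
	_, err := indexMatches("abc", 0)
	fmt.Println(err != nil) // true: non-numeric strings are rejected
}
```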

View File

@ -179,7 +179,7 @@ func marshalVertexID(v Vertex) string {
return VertexName(v)
// we could try harder by attempting to read the arbitrary value from the
// interface, but we shouldn't get here from terraform right now.
// interface, but we shouldn't get here from OpenTF right now.
}
// check for a Subgrapher, and return the underlying *Graph.

View File

@ -30,7 +30,7 @@ func init() {
registerConcludedExperiment(SuppressProviderSensitiveAttrs, "Provider-defined sensitive attributes are now redacted by default, without enabling an experiment.")
registerConcludedExperiment(ConfigDrivenMove, "Declarations of moved resource instances using \"moved\" blocks can now be used by default, without enabling an experiment.")
registerConcludedExperiment(PreconditionsPostconditions, "Condition blocks can now be used by default, without enabling an experiment.")
registerConcludedExperiment(ModuleVariableOptionalAttrs, "The final feature corresponding to this experiment differs from the experimental form and is available in the Terraform language from Terraform v1.3.0 onwards.")
registerConcludedExperiment(ModuleVariableOptionalAttrs, "The final feature corresponding to this experiment differs from the experimental form and is available in the OpenTF language from OpenTF v1.3.0 onwards.")
}
// GetCurrent takes an experiment name and returns the experiment value

View File

@ -218,7 +218,7 @@ func (i *ModuleInstaller) moduleInstallWalker(ctx context.Context, manifest mods
Severity: hcl.DiagError,
Summary: "Failed to remove local module cache",
Detail: fmt.Sprintf(
"Terraform tried to remove %s in order to reinstall this module, but encountered an error: %s",
"OpenTF tried to remove %s in order to reinstall this module, but encountered an error: %s",
instPath, err,
),
})
@ -463,7 +463,7 @@ func (i *ModuleInstaller) installRegistryModule(ctx context.Context, req *config
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Invalid response from remote module registry",
Detail: fmt.Sprintf("The registry at %s returned an invalid response when Terraform requested available versions for module %q (%s:%d).", hostname, req.Name, req.CallRange.Filename, req.CallRange.Start.Line),
Detail: fmt.Sprintf("The registry at %s returned an invalid response when OpenTF requested available versions for module %q (%s:%d).", hostname, req.Name, req.CallRange.Filename, req.CallRange.Start.Line),
Subject: req.CallRange.Ptr(),
})
return nil, nil, diags
@ -482,7 +482,7 @@ func (i *ModuleInstaller) installRegistryModule(ctx context.Context, req *config
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagWarning,
Summary: "Invalid response from remote module registry",
Detail: fmt.Sprintf("The registry at %s returned an invalid version string %q for module %q (%s:%d), which Terraform ignored.", hostname, mv.Version, req.Name, req.CallRange.Filename, req.CallRange.Start.Line),
Detail: fmt.Sprintf("The registry at %s returned an invalid version string %q for module %q (%s:%d), which OpenTF ignored.", hostname, mv.Version, req.Name, req.CallRange.Filename, req.CallRange.Start.Line),
Subject: req.CallRange.Ptr(),
})
continue
@ -658,7 +658,7 @@ func (i *ModuleInstaller) installRegistryModule(ctx context.Context, req *config
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unreadable module directory",
Detail: fmt.Sprintf("The directory %s could not be read. This is a bug in Terraform and should be reported.", modDir),
Detail: fmt.Sprintf("The directory %s could not be read. This is a bug in OpenTF and should be reported.", modDir),
})
} else if vDiags := mod.CheckCoreVersionRequirements(req.Path, req.SourceAddr); vDiags.HasErrors() {
// If the core version requirements are not met, we drop any other
@ -759,7 +759,7 @@ func (i *ModuleInstaller) installGoGetterModule(ctx context.Context, req *config
diags = diags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,
Summary: "Unreadable module directory",
Detail: fmt.Sprintf("The directory %s could not be read. This is a bug in Terraform and should be reported.", modDir),
Detail: fmt.Sprintf("The directory %s could not be read. This is a bug in OpenTF and should be reported.", modDir),
})
} else if vDiags := mod.CheckCoreVersionRequirements(req.Path, req.SourceAddr); vDiags.HasErrors() {
// If the core version requirements are not met, we drop any other
@ -891,7 +891,7 @@ func maybeImproveLocalInstallError(req *configs.ModuleRequest, diags hcl.Diagnos
// treats relative paths as local, so if it seems like that's
// what the user was doing then we'll add an additional note
// about it.
suggestion = "\n\nTerraform treats absolute filesystem paths as external modules which establish a new module package. To treat this directory as part of the same package as its caller, use a local path starting with either \"./\" or \"../\"."
suggestion = "\n\nOpenTF treats absolute filesystem paths as external modules which establish a new module package. To treat this directory as part of the same package as its caller, use a local path starting with either \"./\" or \"../\"."
}
newDiags = newDiags.Append(&hcl.Diagnostic{
Severity: hcl.DiagError,

View File

@ -1,6 +1,6 @@
#!/usr/bin/env bash
# This is a helper script to launch Terraform inside the "dlv" debugger,
# This is a helper script to launch OpenTF inside the "dlv" debugger,
# configured to await a remote debugging connection on port 2345. You can
# then connect to it using the following command, or its equivalent in your
# debugging frontend of choice:

View File

@ -1,24 +1,24 @@
# terraform-bundle
`terraform-bundle` was a solution intended to help with the problem
of distributing Terraform providers to environments where direct registry
access is impossible or undesirable, created in response to the Terraform v0.10
change to distribute providers separately from Terraform CLI.
of distributing OpenTF providers to environments where direct registry
access is impossible or undesirable, created in response to the OpenTF v0.10
change to distribute providers separately from OpenTF CLI.
The Terraform v0.13 series introduced our intended longer-term solutions
The OpenTF v0.13 series introduced our intended longer-term solutions
to this need:
* [Alternative provider installation methods](https://www.terraform.io/docs/cli/config/config-file.html#provider-installation),
including the possibility of running a server containing a local mirror of
providers you intend to use which Terraform can then use instead of the
providers you intend to use which OpenTF can then use instead of the
origin registry.
* [The `terraform providers mirror` command](https://www.terraform.io/docs/cli/commands/providers/mirror.html),
built in to Terraform v0.13.0 and later, can automatically construct a
built in to OpenTF v0.13.0 and later, can automatically construct a
suitable directory structure to serve from a local mirror based on your
current Terraform configuration, serving a similar (though not identical)
current OpenTF configuration, serving a similar (though not identical)
purpose to the one `terraform-bundle` had served.
For those using Terraform CLI alone, without Terraform Cloud, we recommend
For those using OpenTF CLI alone, without OpenTF Cloud, we recommend
planning to transition to the above features instead of using
`terraform-bundle`.
@ -26,14 +26,14 @@ planning to transition to the above features instead of using
However, if you need to continue using `terraform-bundle`
during a transitional period then you can use the version of the tool included
in the Terraform v0.15 branch to build bundles compatible with
Terraform v0.13.0 and later.
in the OpenTF v0.15 branch to build bundles compatible with
OpenTF v0.13.0 and later.
If you have a working toolchain for the Go programming language, you can
build a `terraform-bundle` executable as follows:
* `git clone --single-branch --branch=v0.15 --depth=1 https://github.com/hashicorp/terraform.git`
* `cd terraform`
* `git clone --single-branch --branch=v0.15 --depth=1 https://github.com/opentffoundation/opentf.git`
* `cd opentf`
* `go build -o ../terraform-bundle ./tools/terraform-bundle`
After running these commands, your original working directory will have an
@ -42,19 +42,19 @@ executable named `terraform-bundle`, which you can then run.
For information
on how to use `terraform-bundle`, see
[the README from the v0.15 branch](https://github.com/hashicorp/terraform/blob/v0.15/tools/terraform-bundle/README.md).
[the README from the v0.15 branch](https://github.com/opentffoundation/opentf/blob/v0.15/tools/terraform-bundle/README.md).
You can follow a similar principle to build a `terraform-bundle` release
compatible with Terraform v0.12 by using `--branch=v0.12` instead of
`--branch=v0.15` in the command above. Terraform CLI versions prior to
compatible with OpenTF v0.12 by using `--branch=v0.12` instead of
`--branch=v0.15` in the command above. OpenTF CLI versions prior to
v0.13 have different expectations for plugin packaging due to them predating
Terraform v0.13's introduction of automatic third-party provider installation.
OpenTF v0.13's introduction of automatic third-party provider installation.
## Terraform Enterprise Users
If you use Terraform Enterprise, the self-hosted distribution of
Terraform Cloud, you can use `terraform-bundle` as described above to build
custom Terraform packages with bundled provider plugins.
custom OpenTF packages with bundled provider plugins.
For more information, see
[Installing a Bundle in Terraform Enterprise](https://github.com/hashicorp/terraform/blob/v0.15/tools/terraform-bundle/README.md#installing-a-bundle-in-terraform-enterprise).
[Installing a Bundle in Terraform Enterprise](https://github.com/opentffoundation/opentf/blob/v0.15/tools/terraform-bundle/README.md#installing-a-bundle-in-terraform-enterprise).

View File

@ -55,7 +55,7 @@ You should preview all of your changes locally before creating a pull request. T
**Launch Site Locally**
1. Navigate into your local `terraform` top-level directory and run `make website`.
1. Navigate into your local `opentf` top-level directory and run `make website`.
1. Open `http://localhost:3000` in your web browser. While the preview is running, you can edit pages and Next.js automatically rebuilds them.
1. Press `ctrl-C` in your terminal to stop the server and end the preview.
@ -63,7 +63,7 @@ You should preview all of your changes locally before creating a pull request. T
Merging a PR to `main` queues up documentation changes for the next minor product release. Your changes are not immediately available on the website.
The website generates versioned documentation by pointing to the HEAD of the release branch for that version. For example, the `v1.2.x` documentation on the website points to the HEAD of the `v1.2` release branch in the `terraform` repository. To update existing documentation versions, you must also backport your changes to that release branch. Backported changes become live on the site within one hour.
The website generates versioned documentation by pointing to the HEAD of the release branch for that version. For example, the `v1.2.x` documentation on the website points to the HEAD of the `v1.2` release branch in the `opentf` repository. To update existing documentation versions, you must also backport your changes to that release branch. Backported changes become live on the site within one hour.
### Backporting