Instead of keeping the current version in two different places that
must be manually kept in sync, let's use the single source that the
release process already uses (version/VERSION).
Local builds will remain tagged with -dev by default, and we'll
disable this behavior with a linker flag at release time.
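As a rough sketch of the mechanism (the variable name and -ldflags path
below are placeholders, not the exact ones used in the version package),
a package-level variable defaults to the -dev behavior and the release
build overrides it through the Go linker:

    package main

    import "fmt"

    // dev defaults to "yes" so that local builds report a "-dev"
    // prerelease suffix; a release build could pass something like
    //   go build -ldflags="-X main.dev=no"
    // to disable the suffix without editing any source file.
    var dev = "yes"

    func main() {
        base := "1.6.0" // in Terraform this comes from version/VERSION
        if dev == "yes" {
            base += "-dev"
        }
        fmt.Println(base)
    }
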
Several times over the years we've considered adding tracing
instrumentation to Terraform, since even when running in isolation as a
CLI program it has a "distributed system-like" structure, with lots of
concurrent internal work and also some work delegated to provider plugins
that are essentially temporarily running microservices.
However, it's always felt a bit overwhelming to do it because much of
Terraform predates the Go context.Context idiom and so it's tough to get
a clean chain of context.Context values all the way down the stack without
disturbing a lot of existing APIs.
This commit aims to just get that process started by establishing how a
context can propagate from "package main" into the command package,
focusing initially on "terraform init" and some other commands that share
some underlying functions with that command.
OpenTelemetry has emerged as a de facto industry standard and so this uses
its API directly, without any attempt to hide it behind an abstraction.
The OpenTelemetry API is itself already an adapter layer, so we should be
able to swap in any backend that uses comparable concepts. For now we just
discard the tracing reports by default, and allow users to opt in to
delivering traces over OTLP by setting an environment variable when
running Terraform (the environment variable was established in an earlier
commit, so this commit builds on that).
When tracing collection is enabled, every Terraform CLI run will generate
at least one overall span representing the command that was run. Some
commands might also create child spans, but most currently do not.
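As a rough sketch of the pattern (the tracer name, span name, and
function here are illustrative, not the exact ones in the command
package), the context handed down from "package main" is what lets a
command open its overall span and pass it on to any child work:

    package main

    import (
        "context"

        "go.opentelemetry.io/otel"
    )

    // runInit stands in for a command implementation that now accepts a
    // context from "package main".
    func runInit(ctx context.Context) error {
        // With no SDK configured this returns a no-op tracer, so spans
        // are simply discarded by default.
        tracer := otel.Tracer("terraform-cli")
        ctx, span := tracer.Start(ctx, "terraform init")
        defer span.End()

        // The command's real work happens here, passing ctx along so
        // that any child spans attach to this overall span.
        _ = ctx
        return nil
    }

    func main() {
        _ = runInit(context.Background())
    }
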
Terraform CLI is sometimes used as part of a larger distributed system, in
which case it would be helpful to be able to gather telemetry from it
as part of the larger request it's being run in response to.
We'll now support optionally enabling an OTLP exporter by setting the
environment variable OTEL_TRACES_EXPORTER=otlp (a standard OpenTelemetry
convention). As of this commit there isn't actually anything emitting
traces to the specified collector, but we'll gradually add tracing
instrumentation to parts of Terraform CLI and Core in later commits.
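For illustration, the opt-in wiring can look roughly like this; the
helper name is invented for this sketch and the real setup in
"package main" differs in detail:

    package main

    import (
        "context"
        "os"

        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
        sdktrace "go.opentelemetry.io/otel/sdk/trace"
    )

    // initTracing (hypothetical) leaves tracing disabled by default and
    // only installs an OTLP exporter when the standard environment
    // variable opts in.
    func initTracing(ctx context.Context) (func(context.Context) error, error) {
        if os.Getenv("OTEL_TRACES_EXPORTER") != "otlp" {
            return func(context.Context) error { return nil }, nil
        }

        // The exporter reads the usual OTEL_EXPORTER_OTLP_* variables
        // for its endpoint and credentials.
        exporter, err := otlptracegrpc.New(ctx)
        if err != nil {
            return nil, err
        }
        tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
        otel.SetTracerProvider(tp)
        return tp.Shutdown, nil
    }

    func main() {
        ctx := context.Background()
        shutdown, err := initTracing(ctx)
        if err != nil {
            return
        }
        defer shutdown(ctx)
        // ... run the CLI with tracing configured ...
    }
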
* [testing framework] prepare for beta phase of development
* [Testing Framework] Add module block to test run blocks
* [testing framework] allow tests to define and override providers
* testing framework: introduce interrupts for stopping tests
* remove panic handling, will do it properly later
This is to upgrade past the vulnerability described here:
https://github.com/advisories/GHSA-cfgp-2977-2fmm
Terraform does not seem to be significantly affected by it, since our use
is primarily between Terraform Core and provider plugins, where at worst
a provider could only cause its own connection to Terraform to malfunction.
However, this also appears to be a relatively low-risk upgrade.
This does force upgrading some of the Google Cloud Platform dependencies,
which the "gcs" (Google Cloud Storage) backend depends on, so there is
some minor risk to that backend, but the upstream changes to those
dependencies do not seem to be significant.
* main: disambiguate arg ordering test
Make it extra clear what order of args we are asserting.
* command: fix plan -refresh=false test
The test for plan -refresh=false was not functioning, since ReadResource will not be called if the resource is not in the prior state.
Add a new fixture directory with state, and also test the converse, to prevent regression.
* command: add test for refresh flag precedence
A consumer relies on the fact that running terraform plan -refresh=false -refresh=true gives the same result as terraform plan -refresh=true.
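That "last flag wins" behavior is what the new test pins down. As a
generic illustration with the standard flag package (not Terraform's
own CLI flag handling):

    package main

    import (
        "flag"
        "fmt"
    )

    func main() {
        fs := flag.NewFlagSet("plan", flag.ExitOnError)
        refresh := fs.Bool("refresh", true, "refresh state before planning")

        // The later flag overrides the earlier one, so this is
        // equivalent to passing just -refresh=true.
        _ = fs.Parse([]string{"-refresh=false", "-refresh=true"})
        fmt.Println(*refresh) // true
    }
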
Use the global providers.SchemaCache and update all schema access to use
providers.Schemas, except where the provider.GetProviderSchemaResponse
type name would be expected.
Some tests that reuse provider factories needed a little more careful
handling. Change the fixed func to only reset the provider on the first
call.
Add a single global schema cache for providers. This allows multiple
provider instances to share a single copy of the schema, and prevents
loading the schema multiple times for a given provider type during a
single command.
This does not currently work with some provider releases, which use
GetProviderSchema to trigger certain initializations. A new server
capability will be introduced so that their schemas can still be reloaded
without storing duplicate results.
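The shape of the cache is roughly as follows; the type and method
names are illustrative, not the actual providers.SchemaCache API:

    package main

    import "sync"

    // Schema stands in for the real provider schema response type.
    type Schema struct{}

    type schemaCache struct {
        mu      sync.Mutex
        schemas map[string]Schema // keyed by provider address
    }

    // Get returns the cached schema for addr, calling load at most once
    // so that every provider instance of that type shares a single copy.
    func (c *schemaCache) Get(addr string, load func() Schema) Schema {
        c.mu.Lock()
        defer c.mu.Unlock()
        if s, ok := c.schemas[addr]; ok {
            return s
        }
        s := load() // e.g. one GetProviderSchema call per provider type
        c.schemas[addr] = s
        return s
    }

    func main() {
        cache := &schemaCache{schemas: map[string]Schema{}}
        _ = cache.Get("registry.terraform.io/hashicorp/random",
            func() Schema { return Schema{} })
    }
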
A module output is generally not used during destroy; however, it must be
evaluated when its value is used by a provider for configuration,
because that configuration is not stored between walks.
There was an oversight in the output expansion node where the output
node was not created because the operation was destroy, and module
outputs have nothing to destroy. This, however, skipped evaluation when
the output is needed by a provider, as mentioned above. Because of the
way an implied plan is stored internally when executing `terraform
destroy`, this went unnoticed by the test.
Allowing the output to be evaluated during destroy fixes the issue, and
should be acceptable because an output is classified as temporary in the
graph, and will be pruned when not actually needed.
Update the existing test to serialize the plan, which triggers the
failure.