Add tofu test command definition (#886)

Signed-off-by: Dmitry Kisler <admin@dkisler.com>
Dmitry Kisler 2023-11-17 18:40:22 +01:00 committed by GitHub
parent 0a0787edf2
commit d077a33939
2 changed files with 295 additions and 303 deletions


@@ -1,12 +1,301 @@
---
description: >-
  The tofu test command performs integration tests of OpenTofu modules.
---
# Command: test
The `tofu test` command runs integration tests to verify the intended behavior of your OpenTofu configuration. It creates real
infrastructure, asserts conditional checks against the attributes of provisioned resources, and attempts
to clean up the testing infrastructure upon completion.
## Usage
Usage: `tofu test [options]`.
The `test` command executes automated integration tests in the following steps:
1. [Define](#how-to-define-a-test-suite) a test `suite` by reading [test files](#how-to-define-a-test-file) `*.tftest.hcl`.
2. Read all global input [variables](#variables) for the entire test `suite`.
3. Establish execution order by sorting test files alphabetically.
4. Sequentially execute all test files in the `suite`:
1. Sequentially execute all test [`runs`](#run-block) defined in a single test file:
1. Prepare and validate the `run` configuration;
2. Perform the OpenTofu operation: a `plan` only, or a plan followed by an `apply`, depending on the run's `command`;
3. Check for expected failures;
4. Verify assertion conditions;
5. Update the `suite` _status_.
2. Print a summary for the entire test file.
3. Print a summary for every `run`.
4. Sequentially destroy the resources provisioned by every `run` in the test file, in reverse order,
starting from the last executed `run`.
5. Print an overall summary of the test `suite` execution.
6. Terminate with the status code `0` if all tests passed; `1` otherwise.
:::info
Monitor the test output to verify successful cleanup: execution continues even if some destroy operations fail.
:::
## General Options
* `-test-directory=path` Set the test directory (default: "tests"). OpenTofu will search for test files in the specified directory
as well as in the directory in which `tofu test` was executed.
:::warning
The path should be relative to the directory in which `tofu test` is called.
:::
* `-filter=testfile` Specify an individual test file to use for testing.
The option can be used multiple times to specify several files.
:::warning
The path should be relative to the directory in which `tofu test` is called.
:::
* `-var 'foo=bar'` Set the input variables of the root module.
The option can be used multiple times to set several variables.
* `-var-file=filename` Set the path to a file with variable values, in addition to the default
files `terraform.tfvars` and `*.auto.tfvars`. The option can be used multiple times to specify several files.
* `-json` Format the test output as JSON.
* `-no-color` Disable color codes in the command output.
* `-verbose` Print the plan or state for each test run block as it executes.
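For example, a hypothetical invocation combining several of these options (the file name and variable are placeholders):
```shellsession
$ tofu test -test-directory=tests -filter=tests/foo.tftest.hcl -var 'environment=staging' -no-color
```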
## How to define a test suite
This section illustrates how test files are selected when defining a test `suite`.
### Flat structure
Assuming the OpenTofu module structure below
```commandline
.
├── LICENSE
├── README.md
├── main.tf
├── main.tftest.hcl
├── foo.tf
├── foo.tftest.hcl
├── bar.tf
├── bar.tftest.hcl
├── outputs.tf
└── variables.tf
```
- When you execute `tofu test` in the project directory, the test `suite` will consist of the files:
`bar.tftest.hcl`, `foo.tftest.hcl`, `main.tftest.hcl`.
- To select specific files from the `suite`, use the `-filter` option.
For example, when you execute `tofu test -filter=foo.tftest.hcl -filter=bar.tftest.hcl` in the project directory,
only the files `foo.tftest.hcl` and `bar.tftest.hcl` will be selected.
### Nested structure
Assuming the OpenTofu module structure below
```commandline
.
├── LICENSE
├── README.md
├── main.tf
├── main.tftest.hcl
├── outputs.tf
├── variables.tf
├── tests
│   └── foo.tftest.hcl
└── submodule
    ├── foo.tf
    ├── foo.tftest.hcl
    ├── main.tf
    ├── main.tftest.hcl
    ├── outputs.tf
    ├── variables.tf
    └── tests
        └── foo.tftest.hcl
```
- When you execute `tofu test` in the project directory, the test `suite` will consist of the files: `main.tftest.hcl`
and `tests/foo.tftest.hcl`.
- To specify a directory other than the default `tests`, use the `-test-directory` option.
For example, when you execute `tofu test -test-directory=submodule` in the project directory,
the test `suite` will consist of the files: `main.tftest.hcl`, `submodule/foo.tftest.hcl` and `submodule/main.tftest.hcl`.
- You can combine the options `-test-directory` and `-filter` to select specific test files.
For example, when you execute `tofu test -test-directory=submodule -filter=submodule/foo.tftest.hcl` in the project directory,
only the file `submodule/foo.tftest.hcl` will be selected.
:::warning
Make sure that the path is specified relative to the directory in which `tofu test` is called.
For example, when you execute `tofu test -test-directory=submodule -filter=foo.tftest.hcl` in the project directory,
no test files will be found.
:::
## How to define a test file
Tests are defined using three kinds of blocks. Every test file
- _shall_ contain _at least one_ `run` block to define a single test case;
- _may_ contain _at most one_ `variables` block to define "global" variables shared by all `runs` defined in a single file;
- _may_ contain _multiple_ `provider` blocks to configure providers shared by all `runs` in a single file.
:::info
Note that if a test file contains several `run` blocks, OpenTofu will execute them sequentially from the top to
the bottom of the file.
:::
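Putting these rules together, below is a minimal sketch of a test file (all names and values are hypothetical):
```hcl
# example.tftest.hcl

# "global" variables shared by every run in this file
variables {
  environment = "test"
}

# executed first (runs are executed top to bottom)
run "validate_environment" {
  assert {
    condition     = var.environment == "test"
    error_message = "unexpected environment"
  }
}

# executed second; only plans, nothing is applied
run "plan_only" {
  command = plan
}
```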
### `run` block
<!-- details: /internal/configs/test_file.go:239 decodeTestRunBlock -->
The `run` block defines a _single test_. It specifies the operation which OpenTofu will execute,
sets mocks for providers,
and defines the checks used to assess the state after the OpenTofu operation has been executed.
The following non-mutually exclusive blocks and attributes can be defined within the scope of the `run` block to configure the test.
| Name | Type | Multiple Allowed | Description |
|:------------------|:-------|:-----------------|:-------------------------------------------------------------------------------------------------------------------------------------|
| `assert` | block | true | Defines the state assessment criteria and the assertion message. |
| `expect_failures` | list | false | Defines the check for the "unhappy path" by listing resources and variables which are expected to fail as a result of the OpenTofu operation. |
| `variables` | block | false | Defines "local" variables as [key-value pairs](/docs/language/expressions/types/#mapsobjects). |
| `command` | string | false | Defines the command which OpenTofu will execute, _plan_ or _apply_. Defaults to _apply_. |
| `plan_options` | block | false | Defines options for the _plan_ operation. |
| `module` | block | false | Defines the module used during execution of the OpenTofu operation. |
| `providers` | object | false | Defines aliases to mock providers. |
A detailed description of every listed attribute follows below.
### `run.assert`
The `assert` block is the key configuration which defines how to assess OpenTofu configuration behavior against
a reference using two required attributes:
- `condition`: check [condition](https://opentofu.org/docs/language/expressions/custom-conditions#condition-expressions)
as a boolean expression.
- `error_message`: assertion message which will be printed if the test fails.
```hcl
run "test" {
  variables {
    want_attr0 = "test-value"
    want_attr1 = 1
  }

  assert {
    condition     = resource_type.resource_name.attr0 == var.want_attr0
    error_message = "failed validation of attr0"
  }

  assert {
    condition     = resource_type.resource_name.attr1 == var.want_attr1
    error_message = "failed validation of attr1"
  }
}
```
:::info
If several assertions are present in a single `run`, they will be executed sequentially, and if any of them fail,
the total number of failed tests will be reported as 1.
Conversely, 1 successful test will be reported if all checks pass.
:::
### `run.expect_failures`
The `expect_failures` attribute lists the resources or variables which are expected to fail during the test.
```hcl
run "unhappy-path" {
  variables {
    input0 = "faulty-input"
  }

  expect_failures = [
    # we expect the module to validate the input, and the validation
    # to fail given the wrong input value defined in the variables block
    var.input0,
  ]
}
```
### `run.plan_options`
The `plan_options` block can be used to define how OpenTofu will execute the `plan` command during the test.
It can be configured by the following optional attributes.
| Name | Type | Default | Description |
|:--------|:-------|:---------|:-----------------------------------------------------------------------|
| mode | string | "normal" | The planning mode to run. Allowed values: _normal_ or _refresh-only_. |
| refresh | bool   | true     | Analogous to `tofu plan -refresh`.                                      |
| replace | list   | null     | Analogous to `tofu plan -replace=ADDRESS`.                              |
| target  | list   | null     | Analogous to `tofu plan -target=ADDRESS`.                               |
:::warning
If the `mode` is set to _refresh-only_, the `refresh` parameter cannot be set to _false_.
:::
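For illustration, here is a sketch of a `run` which only plans against a single resource (the resource address is a placeholder):
```hcl
run "targeted-plan" {
  # plan only; nothing is applied in this run
  command = plan

  plan_options {
    # restrict planning to a single placeholder resource address
    target = [
      resource_type.resource_name,
    ]
  }
}
```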
### `run.module`
The `module` block defines a module used during the execution of the OpenTofu operation within the scope of the test.
It is useful for mocking providers required by the tested OpenTofu configuration.
The `module` block is defined by two attributes:
- `source` (required): module [source](/docs/language/modules/sources).
- `version` (optional): module version.
```hcl
run "setup-module" {
  module {
    source = "./modules/my_module"
  }
}
```
:::info
If a module is specified in the `run` block, `tofu init` execution will be required before running `tofu test`.
:::
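In practice, that means initializing the configuration before testing, for example:
```shellsession
$ tofu init
$ tofu test
```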
### `run.providers`
The `providers` object allows you to map mock providers to the `run` being executed; a combined example appears at the end of the next subsection.
### `provider` block
<!-- details: /internal/command/testing/test_provider.go -->
The `provider` block defines a single provider shared by all `runs` defined in a single test file.
It is defined using the following attributes:
| Name | Type | Description |
|:----------------|:-------|:-----------------------------------------------------------------------------------------------------|
| alias           | string | The provider's alias within the scope of a given test file.                                           |
| resource_prefix | string | The OpenTofu configuration tag word used to recognize a [resource](/docs/language/resources/syntax/). |
| data_prefix | string | The OpenTofu configuration tag word used to recognize a [data source](/docs/language/data-sources/). |
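A sketch combining the two, assuming a hypothetical provider named `mock` which is aliased at the file level and then mapped into a `run`:
```hcl
# file-level provider shared by all runs in this file
provider "mock" {
  alias           = "localhost"
  resource_prefix = "mock"
  data_prefix     = "mock"
}

run "test" {
  # map the provider name expected by the configuration
  # to the aliased provider defined above
  providers = {
    mock = mock.localhost
  }
}
```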
## Variables
Tests can be defined using variables specified in several ways.
The variables will be loaded in the following order, with later sources taking precedence over earlier ones:
| Order | Source | Scope |
|:------:|:-------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| 1 | Environment variables with the prefix `TF_VAR_`; | All test files |
| 2      | tfvars files: `terraform.tfvars` and `*.auto.tfvars`;                                                                            | All test files   |
| 3      | Command-line variables defined using the flag `-var`, and the variables defined in the files specified by the flag `-var-file`;  | All test files   |
| 4 | The variables from the `variables` block in a test file. | Single test file |
| 5 | The variables from the `variables` block in `run` block. | Single test |
:::info
The variable value defined by the flags `-var` and `-var-file` will depend on their order: the last value found will be used.
For example, **given** that the file `test.tfvars` contains the value `my_var="bar"`,
**when** `tofu test -var=my_var="foo" -var-file=test.tfvars -var=my_var="qux"` is executed,
**then** the variable `my_var` will be assigned the value `"qux"`.
:::
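The last two rows of the table can be illustrated with a short sketch (names and values are hypothetical):
```hcl
# file-level "variables" block: applies to every run in this file
variables {
  my_var = "file-level"
}

run "precedence" {
  # run-level variables take precedence over all other sources
  variables {
    my_var = "run-level"
  }

  assert {
    condition     = var.my_var == "run-level"
    error_message = "run-level variables should take precedence"
  }
}
```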


@ -1,297 +0,0 @@
---
description: Part of the ongoing design research for module integration testing.
---
# Module Testing Experiment
This page is about some experimental features available in recent versions of
OpenTofu CLI related to integration testing of shared modules.
The OpenTofu team is aiming to use these features to gather feedback as part
of ongoing research into different strategies for testing modules.
These features are likely to change significantly in future releases based on
feedback.
## Current Research Goals
Our initial area of research is into the question of whether it's helpful and
productive to write module integration tests in the OpenTofu language itself,
or whether it's better to handle that as a separate concern orchestrated by
code written in other languages.
Some existing efforts have piloted both approaches:
* [Terratest](https://terratest.gruntwork.io/) and
[kitchen-terraform](https://github.com/newcontext-oss/kitchen-terraform)
both pioneered the idea of writing tests for modules with explicit
orchestration written in the Go and Ruby programming languages, respectively.
* The OpenTofu provider
[`apparentlymart/testing`](https://registry.terraform.io/providers/apparentlymart/testing/latest)
introduced the idea of writing module tests in the OpenTofu
language itself, using a special provider that can evaluate assertions
and fail `tofu apply` if they don't pass.
Each of these approaches has advantages and disadvantages, and so it's
likely that both will coexist for different situations. The community
efforts have already explored the external-language testing model quite deeply,
while the OpenTofu-integrated testing model has not yet been widely trialled.
For that reason, the current iteration of the module testing experiment is
aimed at trying to make the OpenTofu-integrated approach more accessible so
that more module authors can hopefully try it and share their experiences.
## Current Experimental Features
Our current area of interest is in what sorts of tests can and cannot be
written using features integrated into the OpenTofu language itself. As a
means to investigate that without invasive, cross-cutting changes to OpenTofu
Core we're using a special built-in OpenTofu provider as a placeholder for
potential new features.
If this experiment is successful then we expect to run a second round of
research and design about exactly what syntax is most ergonomic for writing
tests, but for the moment we're interested less in the specific syntax and more
in the capabilities of this approach.
The temporary extensions to OpenTofu for this experiment consist of the
following parts:
* A temporary experimental provider `terraform.io/builtin/test`, which acts as
a placeholder for potential new language features related to test assertions.
* A `tofu test` command for more conveniently running multiple tests in
a single action.
* An experimental convention of placing test configurations in subdirectories
of a `tests` directory within your module, which `tofu test` will then
discover and run.
We would like to invite adventurous module authors to try writing integration
tests for their modules using these mechanisms, and ideally also share the
tests you write (in a temporary VCS branch, if necessary) so we can see what
you were able to test, along with anything you felt unable to test in this way.
If you're interested in giving this a try, see the following sections for
usage details. Because these features are temporary experimental extensions,
there's some boilerplate required to activate and make use of it which would
likely not be required in a final design.
### Writing Tests for a Module
For the purposes of the current experiment, module tests are arranged into
_test suites_, each of which is a root OpenTofu module which includes a
`module` block calling the module under test, and ideally also a number of
test assertions to verify that the module outputs match expectations.
In the same directory where you keep your module's `.tf` and/or `.tf.json`
source files, create a subdirectory called `tests`. Under that directory,
make another directory which will serve as your first test suite, with a
directory name that concisely describes what the suite is aiming to test.
Here's an example directory structure of a typical module directory layout
with the addition of a test suite called `defaults`:
```
main.tf
outputs.tf
providers.tf
variables.tf
versions.tf
tests/
  defaults/
    test_defaults.tf
```
The `tests/defaults/test_defaults.tf` file will contain a call to the
main module with a suitable set of arguments and hopefully also one or more
resources that will, for the sake of the experiment, serve as the temporary
syntax for defining test assertions. For example:
```hcl
terraform {
  required_providers {
    # Because we're currently using a built-in provider as
    # a substitute for dedicated OpenTofu language syntax
    # for now, test suite modules must always declare a
    # dependency on this provider. This provider is only
    # available when running tests, so you shouldn't use it
    # in non-test modules.
    test = {
      source = "terraform.io/builtin/test"
    }

    # This example also uses the "http" data source to
    # verify the behavior of the hypothetical running
    # service, so we should declare that too.
    http = {
      source = "hashicorp/http"
    }
  }
}

module "main" {
  # source is always ../.. for test suite configurations,
  # because they are placed two subdirectories deep under
  # the main module directory.
  source = "../.."

  # This test suite is aiming to test the "defaults" for
  # this module, so it doesn't set any input variables
  # and just lets their default values be selected instead.
}

# As with all OpenTofu modules, we can use local values
# to do any necessary post-processing of the results from
# the module in preparation for writing test assertions.
locals {
  # This expression also serves as an implicit assertion
  # that the base URL uses URL syntax; the test suite
  # will fail if this function fails.
  api_url_parts = regex(
    "^(?:(?P<scheme>[^:/?#]+):)?(?://(?P<authority>[^/?#]*))?",
    module.main.api_url,
  )
}

# The special test_assertions resource type, which belongs
# to the test provider we required above, is a temporary
# syntax for writing out explicit test assertions.
resource "test_assertions" "api_url" {
  # "component" serves as a unique identifier for this
  # particular set of assertions in the test results.
  component = "api_url"

  # equal and check blocks serve as the test assertions.
  # the labels on these blocks are unique identifiers for
  # the assertions, to allow more easily tracking changes
  # in success between runs.
  equal "scheme" {
    description = "default scheme is https"
    got         = local.api_url_parts.scheme
    want        = "https"
  }

  check "port_number" {
    description = "default port number is 8080"
    condition   = can(regex(":8080$", local.api_url_parts.authority))
  }
}

# We can also use data resources to respond to the
# behavior of the real remote system, rather than
# just to values within the OpenTofu configuration.
data "http" "api_response" {
  depends_on = [
    # make sure the syntax assertions run first, so
    # we'll be sure to see if it was URL syntax errors
    # that led to this data resource also failing.
    test_assertions.api_url,
  ]

  url = module.main.api_url
}

resource "test_assertions" "api_response" {
  component = "api_response"

  check "valid_json" {
    description = "base URL responds with valid JSON"
    condition   = can(jsondecode(data.http.api_response.body))
  }
}
```
If you like, you can create additional directories alongside
the `defaults` directory to define additional test suites that
pass different variable values into the main module, and
then include assertions that verify that the result has changed
in the expected way.
### Running Your Tests
The `tofu test` command aims to make it easier to exercise all of your
defined test suites at once, and see only the output related to any test
failures or errors.
The current experimental incarnation of this command expects to be run from
your main module directory. In our example directory structure above,
that was the directory containing `main.tf` etc., and _not_ the specific test
suite directory containing `test_defaults.tf`.
Because these test suites are integration tests rather than unit tests, you'll
need to set up any credentials files or environment variables needed by the
providers your module uses before running `tofu test`. The test command
will, for each suite:
* Install the providers and any external modules the test configuration depends
on.
* Create an execution plan to create the objects declared in the module.
* Apply that execution plan to create the objects in the real remote system.
* Collect all of the test results from the apply step, which would also have
"created" the `test_assertions` resources.
* Destroy all of the objects recorded in the temporary test state, as if running
`tofu destroy` against the test configuration.
```shellsession
$ tofu test
─── Failed: defaults.api_url.scheme (default scheme is https) ───────────────
wrong value
got: "http"
want: "https"
─────────────────────────────────────────────────────────────────────────────
```
In this case, it seems like the module returned an `http` rather than an
`https` URL in the default case, and so the `defaults.api_url.scheme`
assertion failed, and the `tofu test` command detected and reported it.
The `test_assertions` resource captures any assertion failures but does not
return an error, because that can then potentially allow downstream
assertions to also run and thus capture as much context as possible.
However, if OpenTofu encounters any _errors_ while processing the test
configuration it will halt processing, which may cause some of the test
assertions to be skipped.
## Known Limitations
The design above is very much a prototype aimed at gathering more experience
with the possibilities of testing inside the OpenTofu language. We know it's
currently somewhat non-ergonomic, and hope to improve on that in later phases
of research and design, but the main focus of this iteration is on available
functionality and so with that in mind there are some specific possibilities
that we know the current prototype doesn't support well:
* Testing of subsequent updates to an existing deployment of a module.
Tests written in this way can only exercise the create and destroy
behaviors.
* Assertions about expected errors. For a module that includes variable
validation rules and data resources that function as assertion checks,
the current prototype doesn't have any way to express that a particular
set of inputs is _expected_ to produce an error, and thus report a test
failure if it doesn't. We'll hopefully be able to improve on this in a future
iteration with the test assertions better integrated into the language.
* Capturing context about failures. Due to this prototype using a provider as
an approximation for new assertion syntax, the `tofu test` command is
limited in how much context it's able to gather about failures. A design
more integrated into the language could potentially capture the source
expressions and input values to give better feedback about what went wrong,
similar to what OpenTofu typically returns from expression evaluation errors
in the main language.
* Unit testing without creating real objects. Although we do hope to spend more
time researching possibilities for unit testing against fake test doubles in
the future, we've decided to focus on integration testing to start because
it feels like the better-defined problem.
## Sending Feedback
The sort of feedback we'd most like to see at this stage of the experiment is
to see the source code of any tests you've written against real modules using
the features described above, along with notes about anything that you
attempted to test but were blocked from doing so by limitations of the above
features. The most ideal way to share that would be to share a link to a
version control branch where you've added such tests, if your module is open
source.
If you've previously written or attempted to write tests in an external
language, using a system like Terratest or kitchen-terraform, we'd also be
interested to hear about comparative differences between the two: what worked
well in each and what didn't work so well.
Our ultimate goal is to work towards an integration testing methodology which
strikes the best compromise between the capabilities of these different
approaches, ideally avoiding a hard requirement on any particular external
language and fitting well into the OpenTofu workflow.
Any feedback you'd like to share would be very welcome!