mirror of https://github.com/opentofu/opentofu.git
synced 2025-02-25 18:45:20 -06:00

Merge branch 'main' into bm/azure-backend
This change is contained in commit 99ebbe75a9

.github/ISSUE_TEMPLATE/bug_report.yml (vendored, 9 changes)
@@ -23,7 +23,14 @@ body:
        * Set defaults on (or omit) any variables. The person reproducing it should not need to invent variable settings
        * If multiple steps are required, such as running tofu twice, consider scripting it in a simple shell script. Providing a script can be easier than explaining what changes to make to the config between runs.
        * Omit any unneeded complexity: remove variables, conditional statements, functions, modules, providers, and resources that are not needed to trigger the bug
  - type: textarea
    id: community-note
    attributes:
      label: Community note
      description: Please leave this note unchanged.
      value: |
        > [!TIP]
        > 👋 Hi there, OpenTofu community! The OpenTofu team prioritizes issues based on upvotes. Please make sure to upvote this issue and describe how it affects you in detail in the comments to show your support.
  - type: textarea
    id: tf-version
    attributes:
.github/ISSUE_TEMPLATE/feature_request.yml (vendored, 25 changes)
@@ -11,7 +11,20 @@ body:
    attributes:
      value: |
        # Thank you for opening a feature request.

        In order to make your feature request a success, here are some simple tips to follow:

        1. Try to describe what you need to achieve rather than how you would like OpenTofu to change.
        2. Be as specific as possible. Overarching large changes to OpenTofu have a lower chance of getting accepted than specific changes.
        3. Describe how it affects your current project specifically. Try to support it with specific code and describe why the current situation is unsatisfactory.
  - type: textarea
    id: community-note
    attributes:
      label: Community note
      description: Please leave this note unchanged.
      value: |
        > [!TIP]
        > 👋 Hi there, OpenTofu community! The OpenTofu team prioritizes issues based on upvotes. Please make sure to upvote this issue and describe how it affects you in detail in the comments to show your support.
  - type: textarea
    id: tf-version
    attributes:
@@ -60,6 +73,16 @@ body:
    validations:
      required: false

  - type: textarea
    id: tf-workarounds
    attributes:
      label: Workarounds and Alternatives
      description: |
        What workarounds and alternatives have you tried? What worked and what didn't? How would this proposal make life easier compared to these solutions?
      placeholder:
      value:
    validations:
      required: true
  - type: textarea
    id: tf-references
    attributes:
.gitignore (vendored, 2 changes)
@@ -27,6 +27,8 @@ vendor/
.vscode/launch.json

# Coverage
coverage.html
coverage.out
coverage.txt

# GoReleaser build directory
@@ -18,6 +18,7 @@ ENHANCEMENTS:
* The `element` function now accepts negative indices, which extends the existing "wrapping" model into the negative direction. In particular, choosing element `-1` selects the final element in the sequence. ([#2371](https://github.com/opentofu/opentofu/pull/2371))
* `moved` blocks now support moving between different resource types ([#2370](https://github.com/opentofu/opentofu/pull/2370))
* A `moved` block can now be used to migrate from the `null_resource` to the `terraform_data` resource. ([#2481](https://github.com/opentofu/opentofu/pull/2481))
* Warn on implicit references of providers without a `required_providers` entry. ([#2084](https://github.com/opentofu/opentofu/issues/2084))

BUG FIXES:
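The negative-index behavior of `element` described in the changelog can be illustrated with a small configuration sketch; the `subnets` list and the output name are invented for illustration only:

```hcl
locals {
  subnets = ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]
}

output "last_subnet" {
  # Index -1 wraps backwards to the final element, "10.0.2.0/24".
  # Positive indices past the end keep wrapping forwards, so index 3
  # would select the first element again.
  value = element(local.subnets, -1)
}
```

This avoids the older `element(local.subnets, length(local.subnets) - 1)` idiom for selecting the last element.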
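The `moved` enhancements in the changelog can likewise be sketched; the resource name `bootstrap` here is a hypothetical example of migrating state from `null_resource` to `terraform_data`:

```hcl
# The configuration previously declared:
#   resource "null_resource" "bootstrap" {}
resource "terraform_data" "bootstrap" {
}

# Carries the existing state entry across resource types, so the
# object is renamed in state rather than destroyed and recreated
# on the next apply.
moved {
  from = null_resource.bootstrap
  to   = terraform_data.bootstrap
}
```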
@@ -8,6 +8,59 @@ The Technical Steering Committee is a group comprised of people from companies a
- Wojciech Barczynski ([@wojciech12](https://github.com/wojciech12)) representing Spacelift Inc.
- Zach Goldberg ([@ZachGoldberg](https://github.com/ZachGoldberg)) representing Gruntwork, Inc.

## 2025-01-28
- Christian Mesh ([@cam72cam](https://github.com/cam72cam)) (OpenTofu Tech Lead)
- Roger Simms ([@allofthesepeople](https://github.com/allofthesepeople))
- Zach Goldberg ([@ZachGoldberg](https://github.com/ZachGoldberg))
- Igor Savchenko ([@DicsyDel](https://github.com/DicsyDel))
- Roni Frantchi ([@roni-frantchi](https://github.com/roni-frantchi))

### Agenda
- Discussed OpenTofu Charter updates. Vote: all present members voted yes to submitting the updated charter to the Linux Foundation.
- CNCF application review: continuing to find ways to communicate with them.

### Discussion
- Actively planning OpenTofu Day at CNCF London; Roger, James, and Christian are planning to attend in person.
- Moving forward with interviews for a candidate core team member sponsored by Gruntwork.
- The OCI survey was published and has ~100 results so far.
- [Make the Switch to OpenTofu](https://blog.gruntwork.io/make-the-switch-to-opentofu-6904ba95e799) was published by Gruntwork.
- Discussed the status of stacks and the need to gather requirements from the community on how/if OpenTofu should be doing anything here.

## 2025-01-14

- Christian Mesh ([@cam72cam](https://github.com/cam72cam)) (OpenTofu Tech Lead)
- Roger Simms ([@allofthesepeople](https://github.com/allofthesepeople))
- Zach Goldberg ([@ZachGoldberg](https://github.com/ZachGoldberg))
- Igor Savchenko ([@DicsyDel](https://github.com/DicsyDel))

### Agenda
- Release process discussion. Guiding principle decided: "The policy should be guided by the need to balance the desire to provide assurance to adopters with the resourcing required to maintain older versions; we're open to feedback."
- No formal votes.

### Discussion
- Discussed the release process, and that we think it's important for enterprise support that we provide patches for every major version going back at least one year. Christian agreed to discuss with the core team.
- Regardless of the final policy, we want to be explicit about which versions are supported, e.g. with an actual table on opentofu.org.
- Discussed how much traction we're seeing, especially post-release. Set up a download-tracker spreadsheet to track GitHub release download counts; we don't do much else on Reddit, LinkedIn, etc.
- Discussed getting feedback from TACOs on OpenTofu: "Make it faster" - add OpenTelemetry.
- Discussed the CNCF application and what steps are needed to continue advancing it and to gain an exception to the Apache license policy.

## 2025-01-07

- Christian Mesh ([@cam72cam](https://github.com/cam72cam)) (OpenTofu Tech Lead)
- Roger Simms ([@allofthesepeople](https://github.com/allofthesepeople))
- Wojciech Barczynski ([@wojciech12](https://github.com/wojciech12))
- Zach Goldberg ([@ZachGoldberg](https://github.com/ZachGoldberg))
- Roni Frantchi ([@roni-frantchi](https://github.com/roni-frantchi))

### Agenda

- Vote on ephemeral values. Results: Roger, Roni, Oleksandr, Zach, and Woj voted for pushing ephemeral values to after 1.10.

### Discussion
- Timing of the 1.9 release; confirmed it's happening this week.

## 2024-12-10

### Attendees
go.mod (76 changes)
@@ -17,12 +17,12 @@ require (
 	github.com/apparentlymart/go-userdirs v0.0.0-20200915174352-b0c018a67c13
 	github.com/apparentlymart/go-versions v1.0.2
 	github.com/armon/circbuf v0.0.0-20190214190532-5111143e8da2
-	github.com/aws/aws-sdk-go-v2 v1.23.2
-	github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.6
-	github.com/aws/aws-sdk-go-v2/service/dynamodb v1.25.5
-	github.com/aws/aws-sdk-go-v2/service/kms v1.26.5
-	github.com/aws/aws-sdk-go-v2/service/s3 v1.46.0
-	github.com/aws/smithy-go v1.17.0
+	github.com/aws/aws-sdk-go-v2 v1.32.7
+	github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.22
+	github.com/aws/aws-sdk-go-v2/service/dynamodb v1.39.1
+	github.com/aws/aws-sdk-go-v2/service/kms v1.37.6
+	github.com/aws/aws-sdk-go-v2/service/s3 v1.72.1
+	github.com/aws/smithy-go v1.22.1
 	github.com/bgentry/speakeasy v0.1.0
 	github.com/bmatcuk/doublestar/v4 v4.6.0
 	github.com/chzyer/readline v1.5.1
@@ -34,7 +34,7 @@ require (
 	github.com/google/go-cmp v0.6.0
 	github.com/google/uuid v1.6.0
 	github.com/googleapis/gax-go/v2 v2.12.0
-	github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.43
+	github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.60
 	github.com/hashicorp/consul/api v1.13.0
 	github.com/hashicorp/consul/sdk v0.8.0
 	github.com/hashicorp/copywrite v0.16.3
@@ -86,17 +86,17 @@ require (
 	github.com/zclconf/go-cty-debug v0.0.0-20240509010212-0d6042c53940
 	github.com/zclconf/go-cty-yaml v1.1.0
 	go.opentelemetry.io/contrib/exporters/autoexport v0.0.0-20230703072336-9a582bd098a2
-	go.opentelemetry.io/otel v1.21.0
-	go.opentelemetry.io/otel/sdk v1.21.0
-	go.opentelemetry.io/otel/trace v1.21.0
+	go.opentelemetry.io/otel v1.33.0
+	go.opentelemetry.io/otel/sdk v1.33.0
+	go.opentelemetry.io/otel/trace v1.33.0
 	go.uber.org/mock v0.4.0
-	golang.org/x/crypto v0.31.0
+	golang.org/x/crypto v0.32.0
 	golang.org/x/exp v0.0.0-20230905200255-921286631fa9
 	golang.org/x/mod v0.17.0
-	golang.org/x/net v0.33.0
+	golang.org/x/net v0.34.0
 	golang.org/x/oauth2 v0.16.0
-	golang.org/x/sys v0.28.0
-	golang.org/x/term v0.27.0
+	golang.org/x/sys v0.29.0
+	golang.org/x/term v0.28.0
 	golang.org/x/text v0.21.0
 	golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d
 	google.golang.org/api v0.155.0
@@ -138,23 +138,24 @@ require (
 	github.com/armon/go-radix v1.0.0 // indirect
 	github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef // indirect
 	github.com/aws/aws-sdk-go v1.44.122 // indirect
-	github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.1 // indirect
-	github.com/aws/aws-sdk-go-v2/config v1.25.8 // indirect
-	github.com/aws/aws-sdk-go-v2/credentials v1.16.6 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.5 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.5 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/ini v1.7.1 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.5 // indirect
-	github.com/aws/aws-sdk-go-v2/service/iam v1.27.5 // indirect
-	github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.1 // indirect
-	github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.5 // indirect
-	github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery v1.8.5 // indirect
-	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.5 // indirect
-	github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.5 // indirect
-	github.com/aws/aws-sdk-go-v2/service/sqs v1.28.4 // indirect
-	github.com/aws/aws-sdk-go-v2/service/sso v1.17.5 // indirect
-	github.com/aws/aws-sdk-go-v2/service/ssooidc v1.20.3 // indirect
-	github.com/aws/aws-sdk-go-v2/service/sts v1.25.6 // indirect
+	github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.7 // indirect
+	github.com/aws/aws-sdk-go-v2/config v1.28.8 // indirect
+	github.com/aws/aws-sdk-go-v2/credentials v1.17.49 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.26 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.26 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.26 // indirect
+	github.com/aws/aws-sdk-go-v2/service/iam v1.38.3 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.7 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery v1.10.7 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.7 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.7 // indirect
+	github.com/aws/aws-sdk-go-v2/service/sns v1.33.7 // indirect
+	github.com/aws/aws-sdk-go-v2/service/sqs v1.37.4 // indirect
+	github.com/aws/aws-sdk-go-v2/service/sso v1.24.8 // indirect
+	github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.7 // indirect
+	github.com/aws/aws-sdk-go-v2/service/sts v1.33.4 // indirect
 	github.com/bgentry/go-netrc v0.0.0-20140422174119-9fd32a8b3d3d // indirect
 	github.com/bradleyfalzon/ghinstallation/v2 v2.1.0 // indirect
 	github.com/cenkalti/backoff/v3 v3.0.0 // indirect
@@ -165,11 +166,11 @@ require (
 	github.com/cloudflare/circl v1.3.7 // indirect
 	github.com/creack/pty v1.1.18 // indirect
 	github.com/dylanmei/iso8601 v0.1.0 // indirect
-	github.com/fatih/color v1.16.0 // indirect
+	github.com/fatih/color v1.18.0 // indirect
 	github.com/felixge/httpsnoop v1.0.4 // indirect
 	github.com/fsnotify/fsnotify v1.5.4 // indirect
 	github.com/go-jose/go-jose/v3 v3.0.3 // indirect
-	github.com/go-logr/logr v1.3.0 // indirect
+	github.com/go-logr/logr v1.4.2 // indirect
 	github.com/go-logr/stdr v1.2.2 // indirect
 	github.com/go-openapi/errors v0.20.2 // indirect
 	github.com/go-openapi/strfmt v0.21.3 // indirect
@@ -238,13 +239,14 @@ require (
 	github.com/vmihailenco/tagparser/v2 v2.0.0 // indirect
 	go.mongodb.org/mongo-driver v1.11.6 // indirect
 	go.opencensus.io v0.24.0 // indirect
-	go.opentelemetry.io/contrib/instrumentation/github.com/aws/aws-sdk-go-v2/otelaws v0.46.1 // indirect
+	go.opentelemetry.io/auto/sdk v1.1.0 // indirect
+	go.opentelemetry.io/contrib/instrumentation/github.com/aws/aws-sdk-go-v2/otelaws v0.58.0 // indirect
 	go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.46.1 // indirect
 	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1 // indirect
 	go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 // indirect
 	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 // indirect
 	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0 // indirect
-	go.opentelemetry.io/otel/metric v1.21.0 // indirect
+	go.opentelemetry.io/otel/metric v1.33.0 // indirect
 	go.opentelemetry.io/proto/otlp v1.0.0 // indirect
 	golang.org/x/exp/typeparams v0.0.0-20221208152030-732eee02a75a // indirect
 	golang.org/x/sync v0.10.0 // indirect
@@ -264,6 +266,8 @@ require (
 	software.sslmate.com/src/go-pkcs12 v0.4.0 // indirect
 )

-go 1.22
+go 1.22.0
+
+toolchain go1.22.8

 replace github.com/hashicorp/hcl/v2 v2.20.1 => github.com/opentofu/hcl/v2 v2.0.0-20240814143621-8048794c5c52
go.sum (152 changes)
@ -301,61 +301,63 @@ github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef/go.mod h1:W
|
||||
github.com/aws/aws-sdk-go v1.44.122 h1:p6mw01WBaNpbdP2xrisz5tIkcNwzj/HysobNoaAHjgo=
|
||||
github.com/aws/aws-sdk-go v1.44.122/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
|
||||
github.com/aws/aws-sdk-go-v2 v1.9.2/go.mod h1:cK/D0BBs0b/oWPIcX/Z/obahJK1TT7IPVjy53i/mX/4=
|
||||
github.com/aws/aws-sdk-go-v2 v1.23.2 h1:UoTll1Y5b88x8h53OlsJGgOHwpggdMr7UVnLjMb3XYg=
|
||||
github.com/aws/aws-sdk-go-v2 v1.23.2/go.mod h1:i1XDttT4rnf6vxc9AuskLc6s7XBee8rlLilKlc03uAA=
|
||||
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.1 h1:ZY3108YtBNq96jNZTICHxN1gSBSbnvIdYwwqnvCV4Mc=
|
||||
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.1/go.mod h1:t8PYl/6LzdAqsU4/9tz28V/kU+asFePvpOMkdul0gEQ=
|
||||
github.com/aws/aws-sdk-go-v2 v1.32.7 h1:ky5o35oENWi0JYWUZkB7WYvVPP+bcRF5/Iq7JWSb5Rw=
|
||||
github.com/aws/aws-sdk-go-v2 v1.32.7/go.mod h1:P5WJBrYqqbWVaOxgH0X/FYYD47/nooaPOZPlQdmiN2U=
|
||||
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.7 h1:lL7IfaFzngfx0ZwUGOZdsFFnQ5uLvR0hWqqhyE7Q9M8=
|
||||
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.7/go.mod h1:QraP0UcVlQJsmHfioCrveWOC1nbiWUl3ej08h4mXWoc=
|
||||
github.com/aws/aws-sdk-go-v2/config v1.8.3/go.mod h1:4AEiLtAb8kLs7vgw2ZV3p2VZ1+hBavOc84hqxVNpCyw=
|
||||
github.com/aws/aws-sdk-go-v2/config v1.25.8 h1:CHr7PIzyfevjNiqL9rU6xoqHZKCO2ldY6LmvRDfpRuI=
|
||||
github.com/aws/aws-sdk-go-v2/config v1.25.8/go.mod h1:zefIy117FDPOVU0xSOFG8mx9kJunuVopzI639tjYXc0=
|
||||
github.com/aws/aws-sdk-go-v2/config v1.28.8 h1:4nUeC9TsZoHm9GHlQ5tnoIklNZgISXXVGPKP5/CS0fk=
|
||||
github.com/aws/aws-sdk-go-v2/config v1.28.8/go.mod h1:2C+fhFxnx1ymomFjj5NBUc/vbjyIUR7mZ/iNRhhb7BU=
|
||||
github.com/aws/aws-sdk-go-v2/credentials v1.4.3/go.mod h1:FNNC6nQZQUuyhq5aE5c7ata8o9e4ECGmS4lAXC7o1mQ=
|
||||
github.com/aws/aws-sdk-go-v2/credentials v1.16.6 h1:TimIpn1p4v44i0sJMKsnpby1P9sP1ByKLsdm7bvOmwM=
|
||||
github.com/aws/aws-sdk-go-v2/credentials v1.16.6/go.mod h1:+CLPlYf9FQLeXD8etOYiZxpLQqc3GL4EikxjkFFp1KA=
|
||||
github.com/aws/aws-sdk-go-v2/credentials v1.17.49 h1:+7u6eC8K6LLGQwWMYKHSsHAPQl+CGACQmnzd/EPMW0k=
|
||||
github.com/aws/aws-sdk-go-v2/credentials v1.17.49/go.mod h1:0SgZcTAEIlKoYw9g+kuYUwbtUUVjfxnR03YkCOhMbQ0=
|
||||
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.6.0/go.mod h1:gqlclDEZp4aqJOancXK6TN24aKhT0W0Ae9MHk3wzTMM=
|
||||
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.6 h1:pPs23/JLSOlwnmSRNkdbt3upmBeF6QL/3MHEb6KzTyo=
|
||||
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.6/go.mod h1:jsoDHV44SxWv00wlbx0yA5M7n5rmE5rGk+OGA0suXSw=
|
||||
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.5 h1:16Z1XuMUv63fcyW5bIUno6AFcX4drsrE0gof+xue6g4=
|
||||
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.5/go.mod h1:pRvFacV2qbRKy34ZFptHZW4wpauJA445bqFbvA6ikSo=
|
||||
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.5 h1:RxpMuBgzP3Dj1n5CZY6droLFcsn5gc7QsrIcaGQoeCs=
|
||||
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.5/go.mod h1:dO8Js7ym4Jzg/wcjTgCRVln/jFn3nI82XNhsG2lWbDI=
|
||||
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.22 h1:kqOrpojG71DxJm/KDPO+Z/y1phm1JlC8/iT+5XRmAn8=
|
||||
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.22/go.mod h1:NtSFajXVVL8TA2QNngagVZmUtXciyrHOt7xgz4faS/M=
|
||||
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.26 h1:I/5wmGMffY4happ8NOCuIUEWGUvvFp5NSeQcXl9RHcI=
|
||||
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.26/go.mod h1:FR8f4turZtNy6baO0KJ5FJUmXH/cSkI9fOngs0yl6mA=
|
||||
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.26 h1:zXFLuEuMMUOvEARXFUVJdfqZ4bvvSgdGRq/ATcrQxzM=
|
||||
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.26/go.mod h1:3o2Wpy0bogG1kyOPrgkXA8pgIfEEv0+m19O9D5+W8y8=
|
||||
github.com/aws/aws-sdk-go-v2/internal/ini v1.2.4/go.mod h1:ZcBrrI3zBKlhGFNYWvju0I3TR93I7YIgAfy82Fh4lcQ=
|
||||
github.com/aws/aws-sdk-go-v2/internal/ini v1.7.1 h1:uR9lXYjdPX0xY+NhvaJ4dD8rpSRz5VY81ccIIoNG+lw=
|
||||
github.com/aws/aws-sdk-go-v2/internal/ini v1.7.1/go.mod h1:6fQQgfuGmw8Al/3M2IgIllycxV7ZW7WCdVSqfBeUiCY=
|
||||
github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.5 h1:CesTZ0o3+/7N7pDHyoEuS/zL0mD652uRsYCelV08ABU=
|
||||
github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.5/go.mod h1:Srr966fyoo72fJ/Hkz3ij6WQiZBX0RMO7w0jyzEwDyo=
|
||||
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 h1:VaRN3TlFdd6KxX1x3ILT5ynH6HvKgqdiXoTxAF4HQcQ=
|
||||
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1/go.mod h1:FbtygfRFze9usAadmnGJNc8KsP346kEe+y2/oyhGAGc=
|
||||
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.26 h1:GeNJsIFHB+WW5ap2Tec4K6dzcVTsRbsT1Lra46Hv9ME=
|
||||
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.26/go.mod h1:zfgMpwHDXX2WGoG84xG2H+ZlPTkJUU4YUvx2svLQYWo=
|
||||
github.com/aws/aws-sdk-go-v2/service/appconfig v1.4.2/go.mod h1:FZ3HkCe+b10uFZZkFdvf98LHW21k49W8o8J366lqVKY=
|
||||
github.com/aws/aws-sdk-go-v2/service/dynamodb v1.25.5 h1:NfKXRrQTesomlTgmum5kTrd5ywuU4XRmA3bNrXnJ5yk=
|
||||
github.com/aws/aws-sdk-go-v2/service/dynamodb v1.25.5/go.mod h1:k4O1PkdCW+6ZUQGZjEZUkCT+8jmDmneKgLQ0mmmeT8s=
|
||||
github.com/aws/aws-sdk-go-v2/service/iam v1.27.5 h1:4v1TyMBPGMOeagieS9TFnPaHaqs0pZFu1DXgFecsvwo=
|
||||
github.com/aws/aws-sdk-go-v2/service/iam v1.27.5/go.mod h1:2Q4GJi6OAgj3bLPGUbA4VkKseAlvnICEtCnKAN6hSQo=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.1 h1:rpkF4n0CyFcrJUG/rNNohoTmhtWlFTRI4BsZOh9PvLs=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.1/go.mod h1:l9ymW25HOqymeU2m1gbUQ3rUIsTwKs8gYHXkqDQUhiI=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.5 h1:OK4q/3E4Kr1bWgcTqSaxmCE5x463TFtSQrF6mQTqMrw=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.5/go.mod h1:T4RMdi6FqSEFaUMLe/YKTD+tj0l+Uz+mxfT7QxljEIA=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery v1.8.5 h1:nt18vYu0XdigeMdoDHJnOQxcCLcAPEeMat18LZUe68I=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery v1.8.5/go.mod h1:6a+eoGEovMG1U+gJ9IkjSCSHg2lIaBsr39auD9kW1xA=
|
||||
github.com/aws/aws-sdk-go-v2/service/dynamodb v1.39.1 h1:SOJ3xkgrw8W0VQgyBUeep74yuf8kWALToFxNNwlHFvg=
|
||||
github.com/aws/aws-sdk-go-v2/service/dynamodb v1.39.1/go.mod h1:J8xqRbx7HIc8ids2P8JbrKx9irONPEYq7Z1FpLDpi3I=
|
||||
github.com/aws/aws-sdk-go-v2/service/iam v1.38.3 h1:2sFIoFzU1IEL9epJWubJm9Dhrn45aTNEJuwsesaCGnk=
|
||||
github.com/aws/aws-sdk-go-v2/service/iam v1.38.3/go.mod h1:KzlNINwfr/47tKkEhgk0r10/OZq3rjtyWy0txL3lM+I=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1 h1:iXtILhvDxB6kPvEXgsDhGaZCSC6LQET5ZHSdJozeI0Y=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1/go.mod h1:9nu0fVANtYiAePIBh2/pFUSwtJ402hLnp854CNoDOeE=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.7 h1:tB4tNw83KcajNAzaIMhkhVI2Nt8fAZd5A5ro113FEMY=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.7/go.mod h1:lvpyBGkZ3tZ9iSsUIcC2EWp+0ywa7aK3BLT+FwZi+mQ=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery v1.10.7 h1:EqGlayejoCRXmnVC6lXl6phCm9R2+k35e0gWsO9G5DI=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/endpoint-discovery v1.10.7/go.mod h1:BTw+t+/E5F3ZnDai/wSOYM54WUVjSdewE7Jvwtb7o+w=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.3.2/go.mod h1:72HRZDLMtmVQiLG2tLfQcaWLCssELvGl+Zf2WVxMmR8=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.5 h1:F+XafeiK7Uf4YwTZfe/JLt+3cB6je9sI7l0TY4f2CkY=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.5/go.mod h1:NlZuvlkyu6l/F3+qIBsGGtYLL2Z71tCf5NFoNAaG1NY=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.5 h1:ow5dalHqYM8IbzXFCL86gQY9UJUtZsLyBHUd6OKep9M=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.5/go.mod h1:AcvGHLN2pTXdx1oVFSzcclBvfY2VbBg0AfOE/XjA7oo=
|
||||
github.com/aws/aws-sdk-go-v2/service/kms v1.26.5 h1:MRNoQVbEtjzhYFeKVMifHae4K5q4FuK9B7tTDskIF/g=
|
||||
github.com/aws/aws-sdk-go-v2/service/kms v1.26.5/go.mod h1:gfe6e+rOxaiz/gr5Myk83ruBD6F9WvM7TZbLjcTNsDM=
|
||||
github.com/aws/aws-sdk-go-v2/service/s3 v1.46.0 h1:RaXPp86CLxTKDwCwSTmTW7FvTfaLPXhN48mPtQ881bA=
|
||||
github.com/aws/aws-sdk-go-v2/service/s3 v1.46.0/go.mod h1:x7gN1BRfTWXdPr/cFGM/iz+c87gRtJ+JMYinObt/0LI=
|
||||
github.com/aws/aws-sdk-go-v2/service/sqs v1.28.4 h1:Hy1cUZGuZRHe3HPxw7nfA9BFUqdWbyI0JLLiqENgucc=
|
||||
github.com/aws/aws-sdk-go-v2/service/sqs v1.28.4/go.mod h1:xlxN+2XHAmoRFFkGFZcrmVYQfXSlNpEuqEpN0GZMmaI=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.7 h1:8eUsivBQzZHqe/3FE+cqwfH+0p5Jo8PFM/QYQSmeZ+M=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.7/go.mod h1:kLPQvGUmxn/fqiCrDeohwG33bq2pQpGeY62yRO6Nrh0=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.7 h1:Hi0KGbrnr57bEHWM0bJ1QcBzxLrL/k2DHvGYhb8+W1w=
|
||||
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.7/go.mod h1:wKNgWgExdjjrm4qvfbTorkvocEstaoDl4WCvGfeCy9c=
|
||||
github.com/aws/aws-sdk-go-v2/service/kms v1.37.6 h1:CZImQdb1QbU9sGgJ9IswhVkxAcjkkD1eQTMA1KHWk+E=
|
||||
github.com/aws/aws-sdk-go-v2/service/kms v1.37.6/go.mod h1:YJDdlK0zsyxVBxGU48AR/Mi8DMrGdc1E3Yij4fNrONA=
|
||||
github.com/aws/aws-sdk-go-v2/service/s3 v1.72.1 h1:+IrM0EXV6ozLqJs3Kq2iwQGJBWmgRiYBXWETQQUMZRY=
|
||||
github.com/aws/aws-sdk-go-v2/service/s3 v1.72.1/go.mod h1:r+xl5yzMk9083rMR+sJ5TYj9Tihvf/l1oxzZXDgGj2Q=
|
||||
github.com/aws/aws-sdk-go-v2/service/sns v1.33.7 h1:N3o8mXK6/MP24BtD9sb51omEO9J9cgPM3Ughc293dZc=
|
||||
github.com/aws/aws-sdk-go-v2/service/sns v1.33.7/go.mod h1:AAHZydTB8/V2zn3WNwjLXBK1RAcSEpDNmFfrmjvrJQg=
|
||||
github.com/aws/aws-sdk-go-v2/service/sqs v1.37.4 h1:WpoMCoS4+qOkkuWQommvDRboKYzK91En6eXO/k5dXr0=
|
||||
github.com/aws/aws-sdk-go-v2/service/sqs v1.37.4/go.mod h1:171mrsbgz6DahPMnLJzQiH3bXXrdsWhpE9USZiM19Lk=
|
||||
github.com/aws/aws-sdk-go-v2/service/sso v1.4.2/go.mod h1:NBvT9R1MEF+Ud6ApJKM0G+IkPchKS7p7c2YPKwHmBOk=
|
||||
github.com/aws/aws-sdk-go-v2/service/sso v1.17.5 h1:kuK22ZsITfzaZEkxEl5H/lhy2k3G4clBtcQBI93RbIc=
|
||||
github.com/aws/aws-sdk-go-v2/service/sso v1.17.5/go.mod h1:/tLqstwPfJLHYGBB5/c8P1ITI82pcGs7cJQuXku2pOg=
|
||||
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.20.3 h1:l5d5nrTFMhiUWNoLnV7QNI4m42/3WVSXqSyqVy+elGk=
|
||||
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.20.3/go.mod h1:30gKZp2pHQJq3yTmVy+hJKDFynSoYzVqYaxe4yPi+xI=
|
||||
github.com/aws/aws-sdk-go-v2/service/sso v1.24.8 h1:CvuUmnXI7ebaUAhbJcDy9YQx8wHR69eZ9I7q5hszt/g=
|
||||
github.com/aws/aws-sdk-go-v2/service/sso v1.24.8/go.mod h1:XDeGv1opzwm8ubxddF0cgqkZWsyOtw4lr6dxwmb6YQg=
|
||||
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.7 h1:F2rBfNAL5UyswqoeWv9zs74N/NanhK16ydHW1pahX6E=
|
||||
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.7/go.mod h1:JfyQ0g2JG8+Krq0EuZNnRwX0mU0HrwY/tG6JNfcqh4k=
|
||||
github.com/aws/aws-sdk-go-v2/service/sts v1.7.2/go.mod h1:8EzeIqfWt2wWT4rJVu3f21TfrhJ8AEMzVybRNSb/b4g=
|
||||
github.com/aws/aws-sdk-go-v2/service/sts v1.25.6 h1:39dJNBt35p8dFSnQdoy+QbDaPenTxFqqDQFOb1GDYpE=
|
||||
github.com/aws/aws-sdk-go-v2/service/sts v1.25.6/go.mod h1:6DKEi+8OnUrqEEh6OCam16AYQHWAOyNgRiUGnHoh7Cg=
|
||||
github.com/aws/aws-sdk-go-v2/service/sts v1.33.4 h1:EzofOvWNMtG9ELt9mPOJjLYh1hz6kN4f5hNCyTtS7Hg=
|
||||
github.com/aws/aws-sdk-go-v2/service/sts v1.33.4/go.mod h1:5Gn+d+VaaRgsjewpMvGazt0WfcFO+Md4wLOuBfGR9Bc=
|
||||
github.com/aws/smithy-go v1.8.0/go.mod h1:SObp3lf9smib00L/v3U2eAKG8FyQ7iLrJnQiAmR5n+E=
|
||||
github.com/aws/smithy-go v1.17.0 h1:wWJD7LX6PBV6etBUwO0zElG0nWN9rUhp0WdYeHSHAaI=
|
||||
github.com/aws/smithy-go v1.17.0/go.mod h1:NukqUGpCZIILqqiV0NIjeFh24kd/FAa4beRb6nbIUPE=
|
||||
github.com/aws/smithy-go v1.22.1 h1:/HPHZQ0g7f4eUeK6HKglFz8uwVfZKgoI25rb/J+dnro=
|
||||
github.com/aws/smithy-go v1.22.1/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
|
||||
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
|
||||
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
|
||||
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
|
||||
@ -444,8 +446,8 @@ github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQL
|
||||
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
|
||||
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
|
||||
github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
|
||||
github.com/fatih/color v1.16.0 h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM=
|
||||
github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE=
|
||||
github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM=
|
||||
github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU=
|
||||
github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M=
|
||||
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
|
||||
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
|
||||
@ -475,8 +477,8 @@ github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7
|
||||
github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
|
||||
github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
|
||||
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
|
||||
github.com/go-logr/logr v1.3.0 h1:2y3SDp0ZXuc6/cjLSZ+Q3ir+QB9T/iG5yYRXqsagWSY=
|
||||
github.com/go-logr/logr v1.3.0/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
|
||||
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
|
||||
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
|
||||
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
|
||||
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
|
||||
github.com/go-openapi/errors v0.20.2 h1:dxy7PGTqEh94zj2E3h1cUmQQWiM1+aeCROfAr02EmK8=
|
||||
@ -634,8 +636,8 @@ github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0 h1:YBftPWNWd4WwGqtY2yeZL2ef8rH
|
||||
github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0/go.mod h1:YN5jB8ie0yfIUg6VvR9Kz84aCaG7AsGZnLjhHbUqwPg=
|
||||
github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542 h1:2VTzZjLZBgl62/EtslCrtky5vbi9dd7HrQPQIx6wqiw=
|
||||
github.com/h2non/parth v0.0.0-20190131123155-b4df798d6542/go.mod h1:Ow0tF8D4Kplbc8s8sSb3V2oUCygFHVp8gC3Dn6U4MNI=
|
||||
github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.43 h1:IHnW2UNo8CnKJCKN90Osq+ViH/RzfxeRUBRLzZOA4C0=
|
||||
github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.43/go.mod h1:vahmnnIdr7LCswcRr+9z5YCTiytyV5qYIYmw7b4QyUE=
|
||||
github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.60 h1:zh3v/n0DillXuE9iMXqFsZjfMicNCVNB1+leYCjZrQw=
|
||||
github.com/hashicorp/aws-sdk-go-base/v2 v2.0.0-beta.60/go.mod h1:npXAOu74D/9TTX1no1ooctXrq6hyWNRIwHrEu2zeVUo=
|
||||
github.com/hashicorp/consul/api v1.13.0 h1:2hnLQ0GjQvw7f3O61jMO8gbasZviZTrt9R8WzgiirHc=
|
||||
github.com/hashicorp/consul/api v1.13.0/go.mod h1:ZlVrynguJKcYr54zGaDbaL3fOvKC9m72FhPvA8T35KQ=
|
||||
github.com/hashicorp/consul/sdk v0.8.0 h1:OJtKBtEjboEZvG6AOUdh4Z1Zbyu0WcxQ0qatRrZHTVU=
|
||||
@@ -957,8 +959,8 @@ github.com/rivo/uniseg v0.2.0 h1:S1pD9weZBuJdFmowNwbpi7BJ8TNftyUImj/0WQi72jY=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
@@ -1004,8 +1006,8 @@ github.com/stretchr/testify v1.7.4/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.194/go.mod h1:7sCQWVkxcsR38nffDW057DRGk8mUjK1Ing/EFOK8s8Y=
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.588 h1:DYtBXB7sVc3EOW5horg8j55cLZynhsLYhHrvQ/jXKKM=
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.588/go.mod h1:7sCQWVkxcsR38nffDW057DRGk8mUjK1Ing/EFOK8s8Y=
@@ -1066,16 +1068,18 @@ go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/exporters/autoexport v0.0.0-20230703072336-9a582bd098a2 h1:RRYaicUVPzisz2POp/snLfPetL3eBCHlMqtiiNXPnLY=
go.opentelemetry.io/contrib/exporters/autoexport v0.0.0-20230703072336-9a582bd098a2/go.mod h1:mYbddca6uQGV5E5Xzd5LWxzqnNG0SmplGiOKYMBL/S8=
go.opentelemetry.io/contrib/instrumentation/github.com/aws/aws-sdk-go-v2/otelaws v0.46.1 h1:PGmSzEMllKQwBQHe9SERAsCytvgLhsb8OrRLeW+40xw=
go.opentelemetry.io/contrib/instrumentation/github.com/aws/aws-sdk-go-v2/otelaws v0.46.1/go.mod h1:h0dNRrQsnnlMonPE/+FXrXtDYZEyZSTaIOfs+n8P/RQ=
go.opentelemetry.io/contrib/instrumentation/github.com/aws/aws-sdk-go-v2/otelaws v0.58.0 h1:g2rorZw2f1qnyfLOC7FP99argIWsN708Fjs2Zwz6SOk=
go.opentelemetry.io/contrib/instrumentation/github.com/aws/aws-sdk-go-v2/otelaws v0.58.0/go.mod h1:QzTypGPlQn4NselMPALVKGwm/p3XKLVCB/UG2Dq3PxQ=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.46.1 h1:SpGay3w+nEwMpfVnbqOLH5gY52/foP8RE8UzTZ1pdSE=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.46.1/go.mod h1:4UoMYEZOC0yN/sPGH76KPkkU7zgiEWYWL9vwmbnTJPE=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1 h1:aFJWCqJMNjENlcleuuOkGAPH82y0yULBScfXcIEdS24=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1/go.mod h1:sEGXWArGqc3tVa+ekntsN65DmVbVeW+7lTKTjZF3/Fo=
go.opentelemetry.io/otel v1.21.0 h1:hzLeKBZEL7Okw2mGzZ0cc4k/A7Fta0uoPgaJCr8fsFc=
go.opentelemetry.io/otel v1.21.0/go.mod h1:QZzNPQPm1zLX4gZK4cMi+71eaorMSGT3A4znnUvNNEo=
go.opentelemetry.io/otel v1.33.0 h1:/FerN9bax5LoK51X/sI0SVYrjSE0/yUL7DpxW4K3FWw=
go.opentelemetry.io/otel v1.33.0/go.mod h1:SUUkR6csvUQl+yjReHu5uM3EtVV7MBm5FHKRlNx4I8I=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 h1:cl5P5/GIfFh4t6xyruOgJP5QiA1pw4fYYdv6nc6CBWw=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0/go.mod h1:zgBdWWAu7oEEMC06MMKc5NLbA/1YDXV1sMpSqEeLQLg=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 h1:tIqheXEFWAZ7O8A7m+J0aPTmpJN3YQ7qetUAdkkkKpk=
@@ -1084,12 +1088,12 @@ go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0 h1:digkE
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0/go.mod h1:/OpE/y70qVkndM0TrxT4KBoN3RsFZP0QaofcfYrj76I=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.16.0 h1:+XWJd3jf75RXJq29mxbuXhCXFDG3S3R4vBUeSI2P7tE=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.16.0/go.mod h1:hqgzBPTf4yONMFgdZvL/bK42R/iinTyVQtiWihs3SZc=
go.opentelemetry.io/otel/metric v1.21.0 h1:tlYWfeo+Bocx5kLEloTjbcDwBuELRrIFxwdQ36PlJu4=
go.opentelemetry.io/otel/metric v1.21.0/go.mod h1:o1p3CA8nNHW8j5yuQLdc1eeqEaPfzug24uvsyIEJRWM=
go.opentelemetry.io/otel/sdk v1.21.0 h1:FTt8qirL1EysG6sTQRZ5TokkU8d0ugCj8htOgThZXQ8=
go.opentelemetry.io/otel/sdk v1.21.0/go.mod h1:Nna6Yv7PWTdgJHVRD9hIYywQBRx7pbox6nwBnZIxl/E=
go.opentelemetry.io/otel/trace v1.21.0 h1:WD9i5gzvoUPuXIXH24ZNBudiarZDKuekPqi/E8fpfLc=
go.opentelemetry.io/otel/trace v1.21.0/go.mod h1:LGbsEB0f9LGjN+OZaQQ26sohbOmiMR+BaslueVtS/qQ=
go.opentelemetry.io/otel/metric v1.33.0 h1:r+JOocAyeRVXD8lZpjdQjzMadVZp2M4WmQ+5WtEnklQ=
go.opentelemetry.io/otel/metric v1.33.0/go.mod h1:L9+Fyctbp6HFTddIxClbQkjtubW6O9QS3Ann/M82u6M=
go.opentelemetry.io/otel/sdk v1.33.0 h1:iax7M131HuAm9QkZotNHEfstof92xM+N8sr3uHXc2IM=
go.opentelemetry.io/otel/sdk v1.33.0/go.mod h1:A1Q5oi7/9XaMlIWzPSxLRWOI8nG3FnzHJNbiENQuihM=
go.opentelemetry.io/otel/trace v1.33.0 h1:cCJuF7LRjUFso9LPnEAHJDB2pqzp+hbO8eu1qqW2d/s=
go.opentelemetry.io/otel/trace v1.33.0/go.mod h1:uIcdVUZMpTAmz0tI1z04GoVSezK37CbGV4fr1f2nBck=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.opentelemetry.io/proto/otlp v1.0.0 h1:T0TX0tmXU8a3CbNXzEKGeU5mIVOdf0oykP+u2lIVU/I=
go.opentelemetry.io/proto/otlp v1.0.0/go.mod h1:Sy6pihPLfYHkr3NkUbEhGHFhINUSI/v80hjKIs5JXpM=
@@ -1123,8 +1127,8 @@ golang.org/x/crypto v0.3.1-0.20221117191849-2c476679df9a/go.mod h1:hebNnKkNXi2Uz
golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.31.0 h1:ihbySMvVjLAeSH1IbfcRTkD/iNscyz8rGzjF/E5hV6U=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.32.0 h1:euUpcYgM8WcP71gNpTqQCn6rC2t6ULUPiOzfWaXVVfc=
golang.org/x/crypto v0.32.0/go.mod h1:ZnnJkOaASj8g0AjIduWNlq2NRxL0PlBrbKVyZ6V/Ugc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -1233,8 +1237,8 @@ golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.33.0 h1:74SYHlV8BIgHIFC/LrYkOGIwL19eTYXQ5wc6TBuO36I=
golang.org/x/net v0.33.0/go.mod h1:HXLR5J+9DxmrqMwG9qjGCxZ+zKXxBru04zlTvWlWuN4=
golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0=
golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1382,8 +1386,8 @@ golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0 h1:Fksou7UEQUWlKvIdsqzJmUmCX3cZuD2+P3XyyzwMhlA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210503060354-a79de5458b56/go.mod h1:tfny5GFUkzUvx4ps4ajbZsCe5lw1metzhBm9T3x7oIY=
golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@@ -1394,8 +1398,8 @@ golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.27.0 h1:WP60Sv1nlK1T6SupCHbXzSaN0b9wUmsPoRS9b61A23Q=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/term v0.28.0 h1:/Ts8HFuMR2E6IP/jlo7QVLZHggjKQbhu/7H0LJFr3Gg=
golang.org/x/term v0.28.0/go.mod h1:Sw/lC2IAUZ92udQNf3WodGtn4k/XoLyZoh8v/8uiwek=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -9,6 +9,7 @@ import (
    "testing"
)

//nolint:cyclop // The complexity of this test naturally scales by the number of test conditions, and would be less readable/maintainable if broken into smaller parts.
func TestMap(t *testing.T) {
    variableName := InputVariable{Name: "name"}
    localHello := LocalValue{Name: "hello"}
@@ -98,32 +98,22 @@ func ParseRefFromTestingScope(traversal hcl.Traversal) (*Reference, tfdiags.Diag

    switch root {
    case "output":
        name, rng, remain, outputDiags := parseSingleAttrRef(traversal)
        reference = &Reference{
            Subject:     OutputValue{Name: name},
            SourceRange: tfdiags.SourceRangeFromHCL(rng),
            Remaining:   remain,
        }
        diags = outputDiags
        reference, diags = parseSingleAttrRef(traversal, func(name string) Referenceable {
            return OutputValue{Name: name}
        })
    case "check":
        name, rng, remain, checkDiags := parseSingleAttrRef(traversal)
        reference = &Reference{
            Subject:     Check{Name: name},
            SourceRange: tfdiags.SourceRangeFromHCL(rng),
            Remaining:   remain,
        }
        diags = checkDiags
        reference, diags = parseSingleAttrRef(traversal, func(name string) Referenceable {
            return Check{Name: name}
        })
    default:
        // If it's not an output or a check block, then just parse it as normal.
        return ParseRef(traversal)
    }

    if reference != nil {
        if len(reference.Remaining) == 0 {
            reference.Remaining = nil
        }
        return reference, diags
    if reference != nil && len(reference.Remaining) == 0 {
        reference.Remaining = nil
    }

    // If it's not an output or a check block, then just parse it as normal.
    return ParseRef(traversal)
    return reference, diags
}

// ParseRefStr is a helper wrapper around ParseRef that takes a string
@@ -178,23 +168,14 @@ func parseRef(traversal hcl.Traversal) (*Reference, tfdiags.Diagnostics) {
    rootRange := traversal[0].SourceRange()

    switch root {

    case "count":
        name, rng, remain, diags := parseSingleAttrRef(traversal)
        return &Reference{
            Subject:     CountAttr{Name: name},
            SourceRange: tfdiags.SourceRangeFromHCL(rng),
            Remaining:   remain,
        }, diags

        return parseSingleAttrRef(traversal, func(name string) Referenceable {
            return CountAttr{Name: name}
        })
    case "each":
        name, rng, remain, diags := parseSingleAttrRef(traversal)
        return &Reference{
            Subject:     ForEachAttr{Name: name},
            SourceRange: tfdiags.SourceRangeFromHCL(rng),
            Remaining:   remain,
        }, diags

        return parseSingleAttrRef(traversal, func(name string) Referenceable {
            return ForEachAttr{Name: name}
        })
    case "data":
        if len(traversal) < 3 {
            diags = diags.Append(&hcl.Diagnostic{
@@ -207,7 +188,6 @@ func parseRef(traversal hcl.Traversal) (*Reference, tfdiags.Diagnostics) {
        }
        remain := traversal[1:] // trim off "data" so we can use our shared resource reference parser
        return parseResourceRef(DataResourceMode, rootRange, remain)

    case "resource":
        // This is an alias for the normal case of just using a managed resource
        // type as a top-level symbol, which will serve as an escape mechanism
@@ -228,126 +208,34 @@ func parseRef(traversal hcl.Traversal) (*Reference, tfdiags.Diagnostics) {
        }
        remain := traversal[1:] // trim off "resource" so we can use our shared resource reference parser
        return parseResourceRef(ManagedResourceMode, rootRange, remain)

    case "local":
        name, rng, remain, diags := parseSingleAttrRef(traversal)
        return &Reference{
            Subject:     LocalValue{Name: name},
            SourceRange: tfdiags.SourceRangeFromHCL(rng),
            Remaining:   remain,
        }, diags

    case "module":
        callName, callRange, remain, diags := parseSingleAttrRef(traversal)
        if diags.HasErrors() {
            return nil, diags
        }

        // A traversal starting with "module" can either be a reference to an
        // entire module, or to a single output from a module instance,
        // depending on what we find after this introducer.
        callInstance := ModuleCallInstance{
            Call: ModuleCall{
                Name: callName,
            },
            Key: NoKey,
        }

        if len(remain) == 0 {
            // Reference to an entire module. Might alternatively be a
            // reference to a single instance of a particular module, but the
            // caller will need to deal with that ambiguity since we don't have
            // enough context here.
            return &Reference{
                Subject:     callInstance.Call,
                SourceRange: tfdiags.SourceRangeFromHCL(callRange),
                Remaining:   remain,
            }, diags
        }

        if idxTrav, ok := remain[0].(hcl.TraverseIndex); ok {
            var err error
            callInstance.Key, err = ParseInstanceKey(idxTrav.Key)
            if err != nil {
                diags = diags.Append(&hcl.Diagnostic{
                    Severity: hcl.DiagError,
                    Summary:  "Invalid index key",
                    Detail:   fmt.Sprintf("Invalid index for module instance: %s.", err),
                    Subject:  &idxTrav.SrcRange,
                })
                return nil, diags
            }
            remain = remain[1:]

            if len(remain) == 0 {
                // Also a reference to an entire module instance, but we have a key
                // now.
                return &Reference{
                    Subject:     callInstance,
                    SourceRange: tfdiags.SourceRangeFromHCL(hcl.RangeBetween(callRange, idxTrav.SrcRange)),
                    Remaining:   remain,
                }, diags
            }
        }

        if attrTrav, ok := remain[0].(hcl.TraverseAttr); ok {
            remain = remain[1:]
            return &Reference{
                Subject: ModuleCallInstanceOutput{
                    Name: attrTrav.Name,
                    Call: callInstance,
                },
                SourceRange: tfdiags.SourceRangeFromHCL(hcl.RangeBetween(callRange, attrTrav.SrcRange)),
                Remaining:   remain,
            }, diags
        }

        diags = diags.Append(&hcl.Diagnostic{
            Severity: hcl.DiagError,
            Summary:  "Invalid reference",
            Detail:   "Module instance objects do not support this operation.",
            Subject:  remain[0].SourceRange().Ptr(),
        return parseSingleAttrRef(traversal, func(name string) Referenceable {
            return LocalValue{Name: name}
        })
        return nil, diags

    case "module":
        return parseModuleCallRef(traversal)
    case "path":
        name, rng, remain, diags := parseSingleAttrRef(traversal)
        return &Reference{
            Subject:     PathAttr{Name: name},
            SourceRange: tfdiags.SourceRangeFromHCL(rng),
            Remaining:   remain,
        }, diags

        return parseSingleAttrRef(traversal, func(name string) Referenceable {
            return PathAttr{Name: name}
        })
    case "self":
        return &Reference{
            Subject:     Self,
            SourceRange: tfdiags.SourceRangeFromHCL(rootRange),
            Remaining:   traversal[1:],
        }, diags

    case "terraform":
        name, rng, remain, diags := parseSingleAttrRef(traversal)
        return &Reference{
            Subject:     NewTerraformAttr(IdentTerraform, name),
            SourceRange: tfdiags.SourceRangeFromHCL(rng),
            Remaining:   remain,
        }, diags

        return parseSingleAttrRef(traversal, func(name string) Referenceable {
            return NewTerraformAttr(IdentTerraform, name)
        })
    case "tofu":
        name, rng, remain, parsedDiags := parseSingleAttrRef(traversal)
        return &Reference{
            Subject:     NewTerraformAttr(IdentTofu, name),
            SourceRange: tfdiags.SourceRangeFromHCL(rng),
            Remaining:   remain,
        }, parsedDiags

        return parseSingleAttrRef(traversal, func(name string) Referenceable {
            return NewTerraformAttr(IdentTofu, name)
        })
    case "var":
        name, rng, remain, diags := parseSingleAttrRef(traversal)
        return &Reference{
            Subject:     InputVariable{Name: name},
            SourceRange: tfdiags.SourceRangeFromHCL(rng),
            Remaining:   remain,
        }, diags
        return parseSingleAttrRef(traversal, func(name string) Referenceable {
            return InputVariable{Name: name}
        })
    case "template", "lazy", "arg":
        // These names are all pre-emptively reserved in the hope of landing
        // some version of "template values" or "lazy expressions" feature
@@ -359,7 +247,6 @@ func parseRef(traversal hcl.Traversal) (*Reference, tfdiags.Diagnostics) {
            Subject:  rootRange.Ptr(),
        })
        return nil, diags

    default:
        function := ParseFunction(root)
        if function.IsNamespace(FunctionNamespaceProvider) {
@@ -475,12 +362,102 @@ func parseResourceRef(mode ResourceMode, startRange hcl.Range, traversal hcl.Tra
    }, diags
}

func parseSingleAttrRef(traversal hcl.Traversal) (string, hcl.Range, hcl.Traversal, tfdiags.Diagnostics) {
func parseModuleCallRef(traversal hcl.Traversal) (*Reference, tfdiags.Diagnostics) {
    // The following is a little circuitous just so we can reuse parseSingleAttrRef
    // for this slightly-odd case while keeping it relatively simple for all of the
    // other cases that use it: we first get the information we need wrapped up
    // in a *Reference and then unpack it to perform further work below.
    callRef, diags := parseSingleAttrRef(traversal, func(name string) Referenceable {
        return ModuleCallInstance{
            Call: ModuleCall{
                Name: name,
            },
            Key: NoKey,
        }
    })
    if diags.HasErrors() {
        return nil, diags
    }

    // A traversal starting with "module" can either be a reference to an
    // entire module, or to a single output from a module instance,
    // depending on what we find after this introducer.
    callInstance := callRef.Subject.(ModuleCallInstance) //nolint:errcheck // This was constructed directly above by call to parseSingleAttrRef
    callRange := callRef.SourceRange
    remain := callRef.Remaining

    if len(remain) == 0 {
        // Reference to an entire module. Might alternatively be a
        // reference to a single instance of a particular module, but the
        // caller will need to deal with that ambiguity since we don't have
        // enough context here.
        return &Reference{
            Subject:     callInstance.Call,
            SourceRange: callRange,
            Remaining:   remain,
        }, diags
    }

    if idxTrav, ok := remain[0].(hcl.TraverseIndex); ok {
        var err error
        callInstance.Key, err = ParseInstanceKey(idxTrav.Key)
        if err != nil {
            diags = diags.Append(&hcl.Diagnostic{
                Severity: hcl.DiagError,
                Summary:  "Invalid index key",
                Detail:   fmt.Sprintf("Invalid index for module instance: %s.", err),
                Subject:  &idxTrav.SrcRange,
            })
            return nil, diags
        }
        remain = remain[1:]

        if len(remain) == 0 {
            // Also a reference to an entire module instance, but we have a key
            // now.
            return &Reference{
                Subject:     callInstance,
                SourceRange: tfdiags.SourceRangeFromHCL(hcl.RangeBetween(callRange.ToHCL(), idxTrav.SrcRange)),
                Remaining:   remain,
            }, diags
        }
    }

    if attrTrav, ok := remain[0].(hcl.TraverseAttr); ok {
        remain = remain[1:]
        return &Reference{
            Subject: ModuleCallInstanceOutput{
                Name: attrTrav.Name,
                Call: callInstance,
            },
            SourceRange: tfdiags.SourceRangeFromHCL(hcl.RangeBetween(callRange.ToHCL(), attrTrav.SrcRange)),
            Remaining:   remain,
        }, diags
    }

    diags = diags.Append(&hcl.Diagnostic{
        Severity: hcl.DiagError,
        Summary:  "Invalid reference",
        Detail:   "Module instance objects do not support this operation.",
        Subject:  remain[0].SourceRange().Ptr(),
    })
    return nil, diags
}

func parseSingleAttrRef(traversal hcl.Traversal, makeAddr func(name string) Referenceable) (*Reference, tfdiags.Diagnostics) {
    var diags tfdiags.Diagnostics

    root := traversal.RootName()
    rootRange := traversal[0].SourceRange()

    // NOTE: In a previous version of this file parseSingleAttrRef only returned the component parts
    // of a *Reference and then the callers assembled them, which caused the main parseRef function
    // to return a non-nil result (with mostly-garbage field values) even in the error cases.
    // We've preserved that oddity for now because our code complexity refactoring efforts should
    // not change the externally-observable behavior, but to guarantee that we'd need to review
    // all uses of parseRef to make sure that they aren't depending on getting a non-nil *Reference
    // along with error diagnostics. :(

    if len(traversal) < 2 {
        diags = diags.Append(&hcl.Diagnostic{
            Severity: hcl.DiagError,
@@ -488,10 +465,15 @@ func parseSingleAttrRef(traversal hcl.Traversal) (string, hcl.Range, hcl.Travers
            Detail:   fmt.Sprintf("The %q object cannot be accessed directly. Instead, access one of its attributes.", root),
            Subject:  &rootRange,
        })
        return "", hcl.Range{}, nil, diags
        return &Reference{Subject: makeAddr("")}, diags
    }
    if attrTrav, ok := traversal[1].(hcl.TraverseAttr); ok {
        return attrTrav.Name, hcl.RangeBetween(rootRange, attrTrav.SrcRange), traversal[2:], diags
        subjectAddr := makeAddr(attrTrav.Name)
        return &Reference{
            Subject:     subjectAddr,
            SourceRange: tfdiags.SourceRangeFromHCL(hcl.RangeBetween(rootRange, attrTrav.SrcRange)),
            Remaining:   traversal[2:],
        }, diags
    }
    diags = diags.Append(&hcl.Diagnostic{
        Severity: hcl.DiagError,
@@ -499,5 +481,5 @@ func parseSingleAttrRef(traversal hcl.Traversal) (string, hcl.Range, hcl.Travers
        Detail:   fmt.Sprintf("The %q object does not support this operation.", root),
        Subject:  traversal[1].SourceRange().Ptr(),
    })
    return "", hcl.Range{}, nil, diags
    return &Reference{Subject: makeAddr("")}, diags
}
@@ -772,9 +772,9 @@ func (b *Backend) Configure(obj cty.Value) tfdiags.Diagnostics {
    }

    if value := obj.GetAttr("assume_role"); !value.IsNull() {
        cfg.AssumeRole = configureNestedAssumeRole(obj)
        cfg.AssumeRole = []awsbase.AssumeRole{configureNestedAssumeRole(obj)}
    } else if value := obj.GetAttr("role_arn"); !value.IsNull() {
        cfg.AssumeRole = configureAssumeRole(obj)
        cfg.AssumeRole = []awsbase.AssumeRole{configureAssumeRole(obj)}
    }

    if val := obj.GetAttr("assume_role_with_web_identity"); !val.IsNull() {
@@ -885,7 +885,7 @@ func getS3Config(obj cty.Value) func(options *s3.Options) {
    }
}

func configureNestedAssumeRole(obj cty.Value) *awsbase.AssumeRole {
func configureNestedAssumeRole(obj cty.Value) awsbase.AssumeRole {
    assumeRole := awsbase.AssumeRole{}

    obj = obj.GetAttr("assume_role")
@@ -922,10 +922,10 @@ func configureNestedAssumeRole(obj cty.Value) *awsbase.AssumeRole {
        assumeRole.TransitiveTagKeys = val
    }

    return &assumeRole
    return assumeRole
}

func configureAssumeRole(obj cty.Value) *awsbase.AssumeRole {
func configureAssumeRole(obj cty.Value) awsbase.AssumeRole {
    assumeRole := awsbase.AssumeRole{}

    assumeRole.RoleARN = stringAttr(obj, "role_arn")
@@ -944,7 +944,7 @@ func configureAssumeRole(obj cty.Value) *awsbase.AssumeRole {
        assumeRole.TransitiveTagKeys = val
    }

    return &assumeRole
    return assumeRole
}

func configureAssumeRoleWithWebIdentity(obj cty.Value) *awsbase.AssumeRoleWithWebIdentity {
@@ -116,7 +116,7 @@ func TestBackendConfig_InvalidRegion(t *testing.T) {
                tfdiags.AttributeValue(
                    tfdiags.Error,
                    "Invalid region value",
                    `Invalid AWS Region: nonesuch`,
                    `invalid AWS Region: nonesuch`,
                    cty.Path{cty.GetAttrStep{Name: "region"}},
                ),
            },
@@ -502,6 +502,68 @@ func TestInitProviderNotFound(t *testing.T) {
            t.Errorf("wrong output:\n%s", cmp.Diff(stripAnsi(stderr), expectedErr))
        }
    })

    t.Run("implicit provider resource and data not found", func(t *testing.T) {
        implicitFixturePath := filepath.Join("testdata", "provider-implicit-ref-not-found/implicit-by-resource-and-data")
        tf := e2e.NewBinary(t, tofuBin, implicitFixturePath)
        stdout, _, err := tf.Run("init")
        if err == nil {
            t.Fatal("expected error, got success")
        }

        // Check that the warning written to the user contains the resource address from which the provider
        // was registered to be downloaded
        expectedContentInOutput := []string{
            `(and one more similar warning elsewhere)`,
            `
╷
│ Warning: Automatically-inferred provider dependency
│
│   on main.tf line 2:
│    2: resource "nonexistingProv_res" "test1" {
│
│ Due to the prefix of the resource type name OpenTofu guessed that you
│ intended to associate nonexistingProv_res.test1 with a provider whose local
│ name is "nonexistingprov", but that name is not declared in this module's
│ required_providers block. OpenTofu therefore guessed that you intended to
│ use hashicorp/nonexistingprov, but that provider does not exist.
│
│ Make at least one of the following changes to tell OpenTofu which provider
│ to use:
│
│  - Add a declaration for local name "nonexistingprov" to this module's
│    required_providers block, specifying the full source address for the
│    provider you intended to use.
│  - Verify that "nonexistingProv_res" is the correct resource type name to
│    use. Did you omit a prefix which would imply the correct provider?
│  - Use a "provider" argument within this resource block to override
│    OpenTofu's automatic selection of the local name "nonexistingprov".
│`}
        for _, expectedOutput := range expectedContentInOutput {
            if cleanOut := strings.TrimSpace(stripAnsi(stdout)); !strings.Contains(cleanOut, expectedOutput) {
                t.Errorf("wrong output.\n\toutput:\n%s\n\n\tdoes not contain:\n%s", cleanOut, expectedOutput)
            }
        }
    })

    t.Run("resource pointing to a not configured provider does not warn on implicit reference", func(t *testing.T) {
        implicitFixturePath := filepath.Join("testdata", "provider-implicit-ref-not-found/resource-with-provider-attribute")
        tf := e2e.NewBinary(t, tofuBin, implicitFixturePath)
        stdout, _, err := tf.Run("init")
        if err == nil {
            t.Fatal("expected error, got success")
        }

        // Ensure that the output does not contain the warning, since the resource already points to a
        // specific provider (even though it is misspelled)
        expectedOutput := `Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/asw...`
        if cleanOut := strings.TrimSpace(stripAnsi(stdout)); cleanOut != expectedOutput {
            t.Errorf("wrong output:\n%s", cmp.Diff(cleanOut, expectedOutput))
        }
    })
}

// The following test is temporarily removed until the OpenTofu registry returns a deprecation warning
@@ -0,0 +1,10 @@
# This tests that implicitly defined providers cannot be fetched and that the user is informed of the root cause
resource "nonexistingProv_res" "test1" {
}

data "nonexistingProv2_data" "test2" {
}

module "testmod" {
  source = "./mod"
}
@@ -0,0 +1,2 @@
resource "nonexistingProv_res" "test2" {
}
@@ -0,0 +1,6 @@
// When a resource points to a provider that is missing a required_providers definition, tofu does not show
// the warning about an implicit provider reference
resource "aws_iam_role" "test" {
  assume_role_policy = "test"
  provider           = asw.test
}
@@ -566,7 +566,7 @@ func (c *InitCommand) getProviders(ctx context.Context, config *configs.Config,

    // First we'll collect all the provider dependencies we can see in the
    // configuration and the state.
    reqs, hclDiags := config.ProviderRequirements()
    reqs, qualifs, hclDiags := config.ProviderRequirements()
    diags = diags.Append(hclDiags)
    if hclDiags.HasErrors() {
        return false, true, diags
@@ -712,6 +712,9 @@ func (c *InitCommand) getProviders(ctx context.Context, config *configs.Config,
        suggestion += "\n\nIf you believe this provider is missing from the registry, please submit an issue on the OpenTofu Registry https://github.com/opentofu/registry/issues/new/choose"
    }

    warnDiags := warnOnFailedImplicitProvReference(provider, qualifs)
    diags = diags.Append(warnDiags)

    diags = diags.Append(tfdiags.Sourceless(
        tfdiags.Error,
        "Failed to query available provider packages",
@@ -1039,6 +1042,43 @@ version control system if they represent changes you intended to make.`))
	return true, false, diags
}

// warnOnFailedImplicitProvReference returns a warning diagnostic when the downloader fails to fetch a provider that is implicitly referenced.
// In other words, if the provider that failed to download has no required_providers entry, this function gives the user
// more information on the source of the issue, along with instructions on how to fix it.
func warnOnFailedImplicitProvReference(provider addrs.Provider, qualifs *getproviders.ProvidersQualification) tfdiags.Diagnostics {
	if _, ok := qualifs.Explicit[provider]; ok {
		return nil
	}
	refs, ok := qualifs.Implicit[provider]
	if !ok || len(refs) == 0 {
		// If there is no implicit reference for that provider, do not emit the warning; just let the error be returned.
		return nil
	}

	// NOTE: if needed, in the future we can use the rest of the "refs" to print all the culprits, or at least to give
	// a hint on how many resources are causing this.
	ref := refs[0]
	if ref.ProviderAttribute {
		return nil
	}
	details := fmt.Sprintf(
		implicitProviderReferenceBody,
		ref.CfgRes.String(),
		provider.Type,
		provider.ForDisplay(),
		provider.Type,
		ref.CfgRes.Resource.Type,
		provider.Type,
	)
	return tfdiags.Diagnostics{}.Append(
		&hcl.Diagnostic{
			Severity: hcl.DiagWarning,
			Subject:  ref.Ref.ToHCL().Ptr(),
			Summary:  implicitProviderReferenceHead,
			Detail:   details,
		})
}

// backendConfigOverrideBody interprets the raw values of -backend-config
// arguments into an hcl Body that should override the backend settings given
// in the configuration.
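The gating logic in `warnOnFailedImplicitProvReference` can be sketched in isolation. The `provider` and `providersQualification` types below are simplified stand-ins for OpenTofu's internal `addrs.Provider` and `getproviders.ProvidersQualification` types, not the real API:

```go
package main

import "fmt"

// Simplified stand-ins for addrs.Provider and getproviders.ProvidersQualification;
// the real types live in the OpenTofu internals and carry much more detail.
type provider string

type providersQualification struct {
	Explicit map[provider]struct{}
	Implicit map[provider][]string // provider -> resource addresses that implied it
}

// shouldWarnImplicit mirrors the gating logic: warn only when the provider was
// never declared in required_providers but was implied by at least one resource.
func shouldWarnImplicit(q providersQualification, p provider) bool {
	if _, ok := q.Explicit[p]; ok {
		return false // explicitly declared: the download error alone is enough
	}
	refs, ok := q.Implicit[p]
	if !ok || len(refs) == 0 {
		return false // nothing implied this provider; nothing extra to explain
	}
	return true
}

func main() {
	q := providersQualification{
		Explicit: map[provider]struct{}{"aws": {}},
		Implicit: map[provider][]string{"nonexistingProv": {"nonexistingProv_res.test1"}},
	}
	fmt.Println(shouldWarnImplicit(q, "aws"))             // declared: no extra warning
	fmt.Println(shouldWarnImplicit(q, "nonexistingProv")) // implied only: warn
}
```

This matches the fixture above: `nonexistingProv_res.test1` implies a provider that has no `required_providers` entry, so the extra warning fires only for it.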
@@ -1379,3 +1419,14 @@ The current .terraform.lock.hcl file only includes checksums for %s, so OpenTofu
To calculate additional checksums for another platform, run:
  tofu providers lock -platform=linux_amd64
(where linux_amd64 is the platform to generate)`

const implicitProviderReferenceHead = `Automatically-inferred provider dependency`

const implicitProviderReferenceBody = `Due to the prefix of the resource type name, OpenTofu guessed that you intended to associate %s with a provider whose local name is "%s", but that name is not declared in this module's required_providers block. OpenTofu therefore guessed that you intended to use %s, but that provider does not exist.

Make at least one of the following changes to tell OpenTofu which provider to use:

- Add a declaration for local name "%s" to this module's required_providers block, specifying the full source address for the provider you intended to use.
- Verify that "%s" is the correct resource type name to use. Did you omit a prefix which would imply the correct provider?
- Use a "provider" argument within this resource block to override OpenTofu's automatic selection of the local name "%s".
`
@@ -124,7 +124,7 @@ func (c *ProvidersLockCommand) Run(args []string) int {

	config, confDiags := c.loadConfig(".")
	diags = diags.Append(confDiags)
	reqs, hclDiags := config.ProviderRequirements()
	reqs, _, hclDiags := config.ProviderRequirements()
	diags = diags.Append(hclDiags)

	// If we have explicit provider selections on the command line then
@@ -83,7 +83,7 @@ func (c *ProvidersMirrorCommand) Run(args []string) int {

	config, confDiags := c.loadConfig(".")
	diags = diags.Append(confDiags)
	reqs, moreDiags := config.ProviderRequirements()
	reqs, _, moreDiags := config.ProviderRequirements()
	diags = diags.Append(moreDiags)

	// Read lock file
@@ -7,7 +7,6 @@ package json

import (
	"bufio"
	"bytes"
	"fmt"
	"sort"
	"strings"
@@ -16,9 +15,10 @@
	"github.com/hashicorp/hcl/v2/hcled"
	"github.com/hashicorp/hcl/v2/hclparse"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"

	"github.com/opentofu/opentofu/internal/lang/marks"
	"github.com/opentofu/opentofu/internal/tfdiags"
	"github.com/zclconf/go-cty/cty"
)

// These severities map to the tfdiags.Severity values, plus an explicit
@@ -132,7 +132,8 @@ type DiagnosticFunctionCall struct {
}

// NewDiagnostic takes a tfdiags.Diagnostic and a map of configuration sources,
// and returns a Diagnostic struct.
// and returns a [Diagnostic] object as a "UI-flavored" representation of the
// diagnostic.
func NewDiagnostic(diag tfdiags.Diagnostic, sources map[string]*hcl.File) *Diagnostic {
	var sev string
	switch diag.Severity() {
@@ -144,269 +145,340 @@ func NewDiagnostic(diag tfdiags.Diagnostic, sources map[string]*hcl.File) *Diagn
		sev = DiagnosticSeverityUnknown
	}

	desc := diag.Description()
	sourceRefs := diag.Source()
	highlightRange, snippetRange := prepareDiagnosticRanges(sourceRefs.Subject, sourceRefs.Context)

	diagnostic := &Diagnostic{
	// If the diagnostic has source location information then we will try to construct a snippet
	// showing a relevant portion of the source code.
	snippet := newDiagnosticSnippet(snippetRange, highlightRange, sources)
	if snippet != nil {
		// We might be able to annotate the snippet with some dynamic-expression-related information,
		// if this is a suitably-enriched diagnostic. These are not strictly part of the "snippet",
		// but we return them all together because the human-readable UI presents this information
		// all together as one UI element.
		snippet.Values = newDiagnosticExpressionValues(diag)
		snippet.FunctionCall = newDiagnosticSnippetFunctionCall(diag)
	}

	desc := diag.Description()
	return &Diagnostic{
		Severity: sev,
		Summary:  desc.Summary,
		Detail:   desc.Detail,
		Address:  desc.Address,
		Range:    newDiagnosticRange(highlightRange),
		Snippet:  snippet,
	}
	sourceRefs := diag.Source()
	if sourceRefs.Subject != nil {
		// We'll borrow HCL's range implementation here, because it has some
		// handy features to help us produce a nice source code snippet.
		highlightRange := sourceRefs.Subject.ToHCL()

		// Some diagnostic sources fail to set the end of the subject range.
		if highlightRange.End == (hcl.Pos{}) {
			highlightRange.End = highlightRange.Start
		}

		snippetRange := highlightRange
		if sourceRefs.Context != nil {
			snippetRange = sourceRefs.Context.ToHCL()
		}

		// Make sure the snippet includes the highlight. This should be true
		// for any reasonable diagnostic, but we'll make sure.
		snippetRange = hcl.RangeOver(snippetRange, highlightRange)

		// Empty ranges result in odd diagnostic output, so extend the end to
		// ensure there's at least one byte in the snippet or highlight.
		if snippetRange.Empty() {
			snippetRange.End.Byte++
			snippetRange.End.Column++
		}
		if highlightRange.Empty() {
			highlightRange.End.Byte++
			highlightRange.End.Column++
		}

		diagnostic.Range = &DiagnosticRange{
			Filename: highlightRange.Filename,
			Start: Pos{
				Line:   highlightRange.Start.Line,
				Column: highlightRange.Start.Column,
				Byte:   highlightRange.Start.Byte,
			},
			End: Pos{
				Line:   highlightRange.End.Line,
				Column: highlightRange.End.Column,
				Byte:   highlightRange.End.Byte,
			},
		}

		var src []byte
		if sources != nil {
			if f, ok := sources[highlightRange.Filename]; ok {
				src = f.Bytes
			}
		}

		// If we have a source file for the diagnostic, we can emit a code
		// snippet.
		if src != nil {
			diagnostic.Snippet = &DiagnosticSnippet{
				StartLine: snippetRange.Start.Line,

				// Ensure that the default Values struct is an empty array, as this
				// makes consuming the JSON structure easier in most languages.
				Values: []DiagnosticExpressionValue{},
			}

			file, offset := parseRange(src, highlightRange)

			// Some diagnostics may have a useful top-level context to add to
			// the code snippet output.
			contextStr := hcled.ContextString(file, offset-1)
			if contextStr != "" {
				diagnostic.Snippet.Context = &contextStr
			}

			// Build the string of the code snippet, tracking at which byte of
			// the file the snippet starts.
			var codeStartByte int
			sc := hcl.NewRangeScanner(src, highlightRange.Filename, bufio.ScanLines)
			var code strings.Builder
			for sc.Scan() {
				lineRange := sc.Range()
				if lineRange.Overlaps(snippetRange) {
					if codeStartByte == 0 && code.Len() == 0 {
						codeStartByte = lineRange.Start.Byte
					}
					code.Write(lineRange.SliceBytes(src))
					code.WriteRune('\n')
				}
			}
			codeStr := strings.TrimSuffix(code.String(), "\n")
			diagnostic.Snippet.Code = codeStr

			// Calculate the start and end byte of the highlight range relative
			// to the code snippet string.
			start := highlightRange.Start.Byte - codeStartByte
			end := start + (highlightRange.End.Byte - highlightRange.Start.Byte)

			// We can end up with some quirky results here in edge cases like
			// when a source range starts or ends at a newline character,
			// so we'll cap the results at the bounds of the highlight range
			// so that consumers of this data don't need to contend with
			// out-of-bounds errors themselves.
			if start < 0 {
				start = 0
			} else if start > len(codeStr) {
				start = len(codeStr)
			}
			if end < 0 {
				end = 0
			} else if end > len(codeStr) {
				end = len(codeStr)
			}

			diagnostic.Snippet.HighlightStartOffset = start
			diagnostic.Snippet.HighlightEndOffset = end

			if fromExpr := diag.FromExpr(); fromExpr != nil {
				// We may also be able to generate information about the dynamic
				// values of relevant variables at the point of evaluation, then.
				// This is particularly useful for expressions that get evaluated
				// multiple times with different values, such as blocks using
				// "count" and "for_each", or within "for" expressions.
				expr := fromExpr.Expression
				ctx := fromExpr.EvalContext
				vars := expr.Variables()
				values := make([]DiagnosticExpressionValue, 0, len(vars))
				seen := make(map[string]struct{}, len(vars))
				includeUnknown := tfdiags.DiagnosticCausedByUnknown(diag)
				includeSensitive := tfdiags.DiagnosticCausedBySensitive(diag)
			Traversals:
				for _, traversal := range vars {
					for len(traversal) > 1 {
						val, diags := traversal.TraverseAbs(ctx)
						if diags.HasErrors() {
							// Skip anything that generates errors, since we probably
							// already have the same error in our diagnostics set
							// already.
							traversal = traversal[:len(traversal)-1]
							continue
						}

						traversalStr := traversalStr(traversal)
						if _, exists := seen[traversalStr]; exists {
							continue Traversals // don't show duplicates when the same variable is referenced multiple times
						}
						value := DiagnosticExpressionValue{
							Traversal: traversalStr,
						}
						switch {
						case val.HasMark(marks.Sensitive):
							// We only mention a sensitive value if the diagnostic
							// we're rendering is explicitly marked as being
							// caused by sensitive values, because otherwise
							// readers tend to be misled into thinking the error
							// is caused by the sensitive value even when it isn't.
							if !includeSensitive {
								continue Traversals
							}
							// Even when we do mention one, we keep it vague
							// in order to minimize the chance of giving away
							// whatever was sensitive about it.
							value.Statement = "has a sensitive value"
						case !val.IsKnown():
							// We'll avoid saying anything about unknown or
							// "known after apply" unless the diagnostic is
							// explicitly marked as being caused by unknown
							// values, because otherwise readers tend to be
							// misled into thinking the error is caused by the
							// unknown value even when it isn't.
							if ty := val.Type(); ty != cty.DynamicPseudoType {
								if includeUnknown {
									switch {
									case ty.IsCollectionType():
										valRng := val.Range()
										minLen := valRng.LengthLowerBound()
										maxLen := valRng.LengthUpperBound()
										const maxLimit = 1024 // (upper limit is just an arbitrary value to avoid showing distracting large numbers in the UI)
										switch {
										case minLen == maxLen:
											value.Statement = fmt.Sprintf("is a %s of length %d, known only after apply", ty.FriendlyName(), minLen)
										case minLen != 0 && maxLen <= maxLimit:
											value.Statement = fmt.Sprintf("is a %s with between %d and %d elements, known only after apply", ty.FriendlyName(), minLen, maxLen)
										case minLen != 0:
											value.Statement = fmt.Sprintf("is a %s with at least %d elements, known only after apply", ty.FriendlyName(), minLen)
										case maxLen <= maxLimit:
											value.Statement = fmt.Sprintf("is a %s with up to %d elements, known only after apply", ty.FriendlyName(), maxLen)
										default:
											value.Statement = fmt.Sprintf("is a %s, known only after apply", ty.FriendlyName())
										}
									default:
										value.Statement = fmt.Sprintf("is a %s, known only after apply", ty.FriendlyName())
									}
								} else {
									value.Statement = fmt.Sprintf("is a %s", ty.FriendlyName())
								}
							} else {
								if !includeUnknown {
									continue Traversals
								}
								value.Statement = "will be known only after apply"
							}
						default:
							value.Statement = fmt.Sprintf("is %s", compactValueStr(val))
						}
						values = append(values, value)
						seen[traversalStr] = struct{}{}
					}
				}
				sort.Slice(values, func(i, j int) bool {
					return values[i].Traversal < values[j].Traversal
				})
				diagnostic.Snippet.Values = values

				if callInfo := tfdiags.ExtraInfo[hclsyntax.FunctionCallDiagExtra](diag); callInfo != nil && callInfo.CalledFunctionName() != "" {
					calledAs := callInfo.CalledFunctionName()
					baseName := calledAs
					if idx := strings.LastIndex(baseName, "::"); idx >= 0 {
						baseName = baseName[idx+2:]
					}
					callInfo := &DiagnosticFunctionCall{
						CalledAs: calledAs,
					}
					if f, ok := ctx.Functions[calledAs]; ok {
						callInfo.Signature = DescribeFunction(baseName, f)
					}
					diagnostic.Snippet.FunctionCall = callInfo
				}

			}

		}
	}

	return diagnostic
}

func parseRange(src []byte, rng hcl.Range) (*hcl.File, int) {
	filename := rng.Filename
	offset := rng.Start.Byte

	// We need to re-parse here to get a *hcl.File we can interrogate. This
	// is not awesome since we presumably already parsed the file earlier too,
	// but this re-parsing is architecturally simpler than retaining all of
	// the hcl.File objects and we only do this in the case of an error anyway
	// so the overhead here is not a big problem.
	parser := hclparse.NewParser()
	var file *hcl.File

	// Ignore diagnostics here as there is nothing we can do with them.
	if strings.HasSuffix(filename, ".json") {
		file, _ = parser.ParseJSON(src, filename)
	} else {
		file, _ = parser.ParseHCL(src, filename)
// prepareDiagnosticRanges takes the raw subject and context source ranges from a
// diagnostic message and returns the more UI-oriented "highlight" and "snippet"
// ranges.
//
// The "highlight" range describes the characters that are considered to be the
// direct cause of the problem, and which are typically presented as underlined
// when producing human-readable diagnostics in a terminal that can support that.
//
// The "snippet" range describes a potentially-larger range of characters that
// should all be included in the source code snippet included in the diagnostic
// message. The highlight range is guaranteed to be contained within the
// snippet range. Some of our diagnostic messages use this, for example, to
// ensure that the whole of an expression gets included in the snippet even if
// the problem is just one operand of the expression and the expression is wrapped
// over multiple lines.
//
//nolint:nonamedreturns // These names are for documentation purposes, to differentiate two results that have the same type
func prepareDiagnosticRanges(subject, context *tfdiags.SourceRange) (highlight, snippet *tfdiags.SourceRange) {
	if subject == nil {
		// If we don't even have a "subject" then we have no ranges to report at all.
		return nil, nil
	}

	return file, offset
	// We'll borrow HCL's range implementation here, because it has some
	// handy features to help us produce a nice source code snippet.
	highlightRange := subject.ToHCL()

	// Some diagnostic sources fail to set the end of the subject range.
	if highlightRange.End == (hcl.Pos{}) {
		highlightRange.End = highlightRange.Start
	}

	snippetRange := highlightRange
	if context != nil {
		snippetRange = context.ToHCL()
	}

	// Make sure the snippet includes the highlight. This should be true
	// for any reasonable diagnostic, but we'll make sure.
	snippetRange = hcl.RangeOver(snippetRange, highlightRange)

	// Empty ranges result in odd diagnostic output, so extend the end to
	// ensure there's at least one byte in the snippet or highlight.
	if highlightRange.Empty() {
		highlightRange.End.Byte++
		highlightRange.End.Column++
	}
	if snippetRange.Empty() {
		snippetRange.End.Byte++
		snippetRange.End.Column++
	}

	retHighlight := tfdiags.SourceRangeFromHCL(highlightRange)
	retSnippet := tfdiags.SourceRangeFromHCL(snippetRange)
	return &retHighlight, &retSnippet
}

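The two fix-ups that `prepareDiagnosticRanges` applies (grow the snippet to contain the highlight, then pad empty ranges to at least one byte) can be modeled without HCL at all. The `pos`/`rng` types below are minimal stand-ins for `hcl.Pos`/`hcl.Range`, and `rangeOver` is a byte-offset-only analogue of `hcl.RangeOver`:

```go
package main

import "fmt"

// Minimal stand-ins for hcl.Pos and hcl.Range, just enough to model the
// range-preparation rules.
type pos struct{ Line, Column, Byte int }
type rng struct{ Start, End pos }

func (r rng) empty() bool { return r.Start.Byte == r.End.Byte }

// rangeOver returns the smallest range covering both a and b (a simplified
// analogue of hcl.RangeOver, comparing byte offsets only).
func rangeOver(a, b rng) rng {
	out := a
	if b.Start.Byte < out.Start.Byte {
		out.Start = b.Start
	}
	if b.End.Byte > out.End.Byte {
		out.End = b.End
	}
	return out
}

// prepareRanges mirrors the fix-ups: grow the snippet to contain the
// highlight, then pad any empty range so it spans at least one byte.
func prepareRanges(highlight, snippet rng) (rng, rng) {
	snippet = rangeOver(snippet, highlight)
	if highlight.empty() {
		highlight.End.Byte++
		highlight.End.Column++
	}
	if snippet.empty() {
		snippet.End.Byte++
		snippet.End.Column++
	}
	return highlight, snippet
}

func main() {
	h := rng{Start: pos{1, 5, 4}, End: pos{1, 5, 4}} // empty highlight
	s := rng{Start: pos{1, 1, 0}, End: pos{1, 3, 2}} // snippet that misses it
	h2, s2 := prepareRanges(h, s)
	fmt.Println(h2.End.Byte, s2.End.Byte) // highlight padded; snippet grown to cover it
}
```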
func newDiagnosticRange(highlightRange *tfdiags.SourceRange) *DiagnosticRange {
	if highlightRange == nil {
		// No particular range to report, then.
		return nil
	}

	return &DiagnosticRange{
		Filename: highlightRange.Filename,
		Start: Pos{
			Line:   highlightRange.Start.Line,
			Column: highlightRange.Start.Column,
			Byte:   highlightRange.Start.Byte,
		},
		End: Pos{
			Line:   highlightRange.End.Line,
			Column: highlightRange.End.Column,
			Byte:   highlightRange.End.Byte,
		},
	}
}

func newDiagnosticSnippet(snippetRange, highlightRange *tfdiags.SourceRange, sources map[string]*hcl.File) *DiagnosticSnippet {
	if snippetRange == nil || highlightRange == nil {
		// There is no code that is relevant to show in a snippet for this diagnostic.
		return nil
	}
	file, ok := sources[snippetRange.Filename]
	if !ok {
		// If we don't have the source code for the file that the snippet is supposed
		// to come from then we can't produce a snippet. (This tends to happen when
		// we're rendering a diagnostic from an unusual location that isn't actually
		// a source file, like an expression entered into the "tofu console" prompt.)
		return nil
	}
	src := file.Bytes
	if src == nil {
		// A file without any source bytes? Weird, but perhaps constructed artificially
		// for testing or for other unusual reasons.
		return nil
	}

	// If we get this far then we're going to do our best to return at least a minimal
	// snippet, though the level of detail depends on what other information we have
	// available.
	ret := &DiagnosticSnippet{
		StartLine: snippetRange.Start.Line,

		// Ensure that the default Values struct is an empty array, as this
		// makes consuming the JSON structure easier in most languages.
		Values: []DiagnosticExpressionValue{},
	}

	// Some callers pass us *hcl.File objects they directly constructed rather than
	// using the HCL parser, in which case they lack the "navigation metadata"
	// that HCL's parsers would generate. We need that metadata to extract the
	// context string below, so we'll make a best effort to obtain that metadata.
	file = tryHCLFileWithNavMetadata(file, snippetRange.Filename)

	// Some diagnostics may have a useful top-level context to add to
	// the code snippet output. This function needs a file with nav metadata
	// to return a useful result, but it will happily return an empty string
	// if given a file without that metadata.
	contextStr := hcled.ContextString(file, highlightRange.Start.Byte-1)
	if contextStr != "" {
		ret.Context = &contextStr
	}

	// Build the string of the code snippet, tracking at which byte of
	// the file the snippet starts.
	var codeStartByte int
	sc := hcl.NewRangeScanner(src, highlightRange.Filename, bufio.ScanLines)
	var code strings.Builder
	for sc.Scan() {
		lineRange := sc.Range()
		if lineRange.Overlaps(snippetRange.ToHCL()) {
			if codeStartByte == 0 && code.Len() == 0 {
				codeStartByte = lineRange.Start.Byte
			}
			code.Write(lineRange.SliceBytes(src))
			code.WriteRune('\n')
		}
	}
	codeStr := strings.TrimSuffix(code.String(), "\n")
	ret.Code = codeStr

	// Calculate the start and end byte of the highlight range relative
	// to the code snippet string.
	start := highlightRange.Start.Byte - codeStartByte
	end := start + (highlightRange.End.Byte - highlightRange.Start.Byte)

	// We can end up with some quirky results here in edge cases like
	// when a source range starts or ends at a newline character,
	// so we'll cap the results at the bounds of the highlight range
	// so that consumers of this data don't need to contend with
	// out-of-bounds errors themselves.
	if start < 0 {
		start = 0
	} else if start > len(codeStr) {
		start = len(codeStr)
	}
	if end < 0 {
		end = 0
	} else if end > len(codeStr) {
		end = len(codeStr)
	}

	ret.HighlightStartOffset = start
	ret.HighlightEndOffset = end

	return ret
}

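The offset-clamping step at the end of the snippet builder is self-contained enough to sketch directly. This is a minimal extraction of that logic, not the OpenTofu function itself:

```go
package main

import "fmt"

// clampOffsets translates the highlight's absolute byte offsets into offsets
// relative to the snippet string, then caps them at the snippet bounds so
// consumers never see out-of-bounds values.
func clampOffsets(codeStr string, codeStartByte, hlStartByte, hlEndByte int) (int, int) {
	start := hlStartByte - codeStartByte
	end := start + (hlEndByte - hlStartByte)
	if start < 0 {
		start = 0
	} else if start > len(codeStr) {
		start = len(codeStr)
	}
	if end < 0 {
		end = 0
	} else if end > len(codeStr) {
		end = len(codeStr)
	}
	return start, end
}

func main() {
	// The snippet starts at byte 100 of the file; the highlight covers
	// bytes 104..112, but the snippet text is only 10 bytes long.
	start, end := clampOffsets("0123456789", 100, 104, 112)
	fmt.Println(start, end) // the end offset is capped at the snippet length
}
```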
func newDiagnosticExpressionValues(diag tfdiags.Diagnostic) []DiagnosticExpressionValue {
	fromExpr := diag.FromExpr()
	if fromExpr == nil {
		// no expression-related information on this diagnostic, but our
		// callers always want a non-nil slice in this case because that's
		// friendlier for JSON serialization.
		return make([]DiagnosticExpressionValue, 0)
	}

	// We may also be able to generate information about the dynamic
	// values of relevant variables at the point of evaluation, then.
	// This is particularly useful for expressions that get evaluated
	// multiple times with different values, such as blocks using
	// "count" and "for_each", or within "for" expressions.
	expr := fromExpr.Expression
	ctx := fromExpr.EvalContext
	vars := expr.Variables()
	values := make([]DiagnosticExpressionValue, 0, len(vars))
	seen := make(map[string]struct{}, len(vars))
	includeUnknown := tfdiags.DiagnosticCausedByUnknown(diag)
	includeSensitive := tfdiags.DiagnosticCausedBySensitive(diag)
Traversals:
	for _, traversal := range vars {
		for len(traversal) > 1 {
			val, diags := traversal.TraverseAbs(ctx)
			if diags.HasErrors() {
				// Skip anything that generates errors, since we probably
				// already have the same error in our diagnostics set
				// already.
				traversal = traversal[:len(traversal)-1]
				continue
			}

			traversalStr := traversalStr(traversal)
			if _, exists := seen[traversalStr]; exists {
				continue Traversals // don't show duplicates when the same variable is referenced multiple times
			}
			statement := newDiagnosticSnippetValueDescription(val, includeUnknown, includeSensitive)
			if statement == "" {
				// If we don't have anything to say about this value then we won't include
				// an entry for it at all.
				continue Traversals
			}
			values = append(values, DiagnosticExpressionValue{
				Traversal: traversalStr,
				Statement: statement,
			})
			seen[traversalStr] = struct{}{}
		}
	}
	sort.Slice(values, func(i, j int) bool {
		return values[i].Traversal < values[j].Traversal
	})
	return values
}

func newDiagnosticSnippetFunctionCall(diag tfdiags.Diagnostic) *DiagnosticFunctionCall {
	fromExpr := diag.FromExpr()
	if fromExpr == nil {
		return nil // no expression-related information on this diagnostic
	}
	callInfo := tfdiags.ExtraInfo[hclsyntax.FunctionCallDiagExtra](diag)
	if callInfo == nil || callInfo.CalledFunctionName() == "" {
		return nil // no function call information
	}

	ctx := fromExpr.EvalContext
	calledAs := callInfo.CalledFunctionName()
	baseName := calledAs
	if idx := strings.LastIndex(baseName, "::"); idx >= 0 {
		baseName = baseName[idx+2:]
	}
	ret := &DiagnosticFunctionCall{
		CalledAs: calledAs,
	}
	if f, ok := ctx.Functions[calledAs]; ok {
		ret.Signature = DescribeFunction(baseName, f)
	}
	return ret
}

func newDiagnosticSnippetValueDescription(val cty.Value, includeUnknown, includeSensitive bool) string {
	switch {
	case val.HasMark(marks.Sensitive):
		// We only mention a sensitive value if the diagnostic
		// we're rendering is explicitly marked as being
		// caused by sensitive values, because otherwise
		// readers tend to be misled into thinking the error
		// is caused by the sensitive value even when it isn't.
		if !includeSensitive {
			return ""
		}
		// Even when we do mention one, we keep it vague
		// in order to minimize the chance of giving away
		// whatever was sensitive about it.
		return "has a sensitive value"
	case !val.IsKnown():
		ty := val.Type()
		// We'll avoid saying anything about unknown or
		// "known after apply" unless the diagnostic is
		// explicitly marked as being caused by unknown
		// values, because otherwise readers tend to be
		// misled into thinking the error is caused by the
		// unknown value even when it isn't.
		if !includeUnknown {
			if ty == cty.DynamicPseudoType {
				return "" // if we can't even name the type then we'll say nothing at all
			}
			// We can at least say what the type is, without mentioning "known after apply" at all
			return fmt.Sprintf("is a %s", ty.FriendlyName())
		}
		switch {
		case ty == cty.DynamicPseudoType:
			return "will be known only after apply" // we don't even know what the type will be
		case ty.IsCollectionType():
			// If the unknown value has collection length refinements then we might at least
			// be able to give some hints about the expected length.
			valRng := val.Range()
			minLen := valRng.LengthLowerBound()
			maxLen := valRng.LengthUpperBound()
			const maxLimit = 1024 // (upper limit is just an arbitrary value to avoid showing distracting large numbers in the UI)
			switch {
			case minLen == maxLen:
				return fmt.Sprintf("is a %s of length %d, known only after apply", ty.FriendlyName(), minLen)
			case minLen != 0 && maxLen <= maxLimit:
				return fmt.Sprintf("is a %s with between %d and %d elements, known only after apply", ty.FriendlyName(), minLen, maxLen)
			case minLen != 0:
				return fmt.Sprintf("is a %s with at least %d elements, known only after apply", ty.FriendlyName(), minLen)
			case maxLen <= maxLimit:
				return fmt.Sprintf("is a %s with up to %d elements, known only after apply", ty.FriendlyName(), maxLen)
			default:
				return fmt.Sprintf("is a %s, known only after apply", ty.FriendlyName())
			}
		default:
			return fmt.Sprintf("is a %s, known only after apply", ty.FriendlyName())
		}
	default:
		return fmt.Sprintf("is %s", compactValueStr(val))
	}
}

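The wording rules for unknown collections with length refinements can be isolated into a plain function. Here `typeName` stands in for cty's `FriendlyName()` result, so the sketch needs no cty dependency:

```go
package main

import "fmt"

// describeUnknownCollection reproduces the wording rules for unknown
// collection values whose length has been refined to a [min, max] interval.
// maxLimit hides distractingly large upper bounds, as in the original.
func describeUnknownCollection(typeName string, minLen, maxLen int) string {
	const maxLimit = 1024
	switch {
	case minLen == maxLen:
		return fmt.Sprintf("is a %s of length %d, known only after apply", typeName, minLen)
	case minLen != 0 && maxLen <= maxLimit:
		return fmt.Sprintf("is a %s with between %d and %d elements, known only after apply", typeName, minLen, maxLen)
	case minLen != 0:
		return fmt.Sprintf("is a %s with at least %d elements, known only after apply", typeName, minLen)
	case maxLen <= maxLimit:
		return fmt.Sprintf("is a %s with up to %d elements, known only after apply", typeName, maxLen)
	default:
		return fmt.Sprintf("is a %s, known only after apply", typeName)
	}
}

func main() {
	fmt.Println(describeUnknownCollection("list of string", 2, 2))
	fmt.Println(describeUnknownCollection("set of string", 1, 5))
	fmt.Println(describeUnknownCollection("map of string", 0, 3))
}
```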
// compactValueStr produces a compact, single-line summary of a given value
@@ -493,7 +565,7 @@ func traversalStr(traversal hcl.Traversal) string {
	// producing helpful contextual messages in diagnostics. It is not
	// comprehensive nor intended to be used for other purposes.

	var buf bytes.Buffer
	var buf strings.Builder
	for _, step := range traversal {
		switch tStep := step.(type) {
		case hcl.TraverseRoot:
@@ -515,3 +587,41 @@ func traversalStr(traversal hcl.Traversal) string {
	}
	return buf.String()
}

// tryHCLFileWithNavMetadata takes an hcl.File that might have been directly
// constructed rather than produced by an HCL parser, and tries to pass it
// through a suitable HCL parser if it lacks the metadata that an HCL parser
// would normally add.
//
// If parsing would be necessary to produce the metadata but parsing fails
// then this returns the given file verbatim, so the caller must still be
// prepared to deal with a file lacking navigation metadata.
func tryHCLFileWithNavMetadata(file *hcl.File, filename string) *hcl.File {
	if file.Nav != nil {
		// If there's _something_ in this field then we'll assume that
		// an HCL parser put it there. The details of this field are
		// HCL-parser-specific so we don't try to dig any deeper.
		return file
	}

	// If we have a nil nav then we'll try to construct a fully-fledged
	// file by parsing what we were given. This is best-effort, because
	// the file might well have been lacking navigation metadata due to
	// having been invalid in the first place.
	// Re-parsing a file that might well have been parsed earlier is a
	// little wasteful, but we only get here when we're returning
	// diagnostics and so we'd rather do a little extra work if it might
	// allow us to return a better diagnostic.
	parser := hclparse.NewParser()
	var newFile *hcl.File
	if strings.HasSuffix(filename, ".json") {
		newFile, _ = parser.ParseJSON(file.Bytes, filename)
	} else {
		newFile, _ = parser.ParseHCL(file.Bytes, filename)
	}
	if newFile == nil {
		// Our best efforts have failed, then. We'll just return what we had.
		return file
	}
	return newFile
}

@@ -12,10 +12,10 @@ import (

	version "github.com/hashicorp/go-version"
	"github.com/hashicorp/hcl/v2"

	"github.com/opentofu/opentofu/internal/addrs"
	"github.com/opentofu/opentofu/internal/depsfile"
	"github.com/opentofu/opentofu/internal/getproviders"
	"github.com/opentofu/opentofu/internal/tfdiags"
)

// A Config is a node in the tree of modules within a configuration.
@@ -230,7 +230,7 @@ func (c *Config) EntersNewPackage() bool {
func (c *Config) VerifyDependencySelections(depLocks *depsfile.Locks) []error {
	var errs []error

	reqs, diags := c.ProviderRequirements()
	reqs, _, diags := c.ProviderRequirements()
	if diags.HasErrors() {
		// It should be very unusual to get here, but unfortunately we can
		// end up here in some edge cases where the config loader doesn't
@ -301,11 +301,12 @@ func (c *Config) VerifyDependencySelections(depLocks *depsfile.Locks) []error {
|
||||
//
|
||||
// If the returned diagnostics includes errors then the resulting Requirements
|
||||
// may be incomplete.
|
||||
func (c *Config) ProviderRequirements() (getproviders.Requirements, hcl.Diagnostics) {
|
||||
func (c *Config) ProviderRequirements() (getproviders.Requirements, *getproviders.ProvidersQualification, hcl.Diagnostics) {
|
||||
reqs := make(getproviders.Requirements)
|
||||
diags := c.addProviderRequirements(reqs, true, true)
|
||||
qualifs := new(getproviders.ProvidersQualification)
|
||||
diags := c.addProviderRequirements(reqs, qualifs, true, true)
|
||||
|
||||
return reqs, diags
|
||||
return reqs, qualifs, diags
|
||||
}
|
||||
|
||||
// ProviderRequirementsShallow searches only the direct receiver for explicit
|
||||
@ -315,7 +316,8 @@ func (c *Config) ProviderRequirements() (getproviders.Requirements, hcl.Diagnost
|
||||
// may be incomplete.
|
||||
func (c *Config) ProviderRequirementsShallow() (getproviders.Requirements, hcl.Diagnostics) {
|
||||
reqs := make(getproviders.Requirements)
|
||||
diags := c.addProviderRequirements(reqs, false, true)
|
||||
qualifs := new(getproviders.ProvidersQualification)
|
||||
diags := c.addProviderRequirements(reqs, qualifs, false, true)
|
||||
|
||||
return reqs, diags
|
||||
}
|
||||
@ -328,7 +330,8 @@ func (c *Config) ProviderRequirementsShallow() (getproviders.Requirements, hcl.D
|
||||
// may be incomplete.
|
||||
func (c *Config) ProviderRequirementsByModule() (*ModuleRequirements, hcl.Diagnostics) {
|
||||
reqs := make(getproviders.Requirements)
|
||||
diags := c.addProviderRequirements(reqs, false, false)
|
||||
qualifs := new(getproviders.ProvidersQualification)
|
||||
diags := c.addProviderRequirements(reqs, qualifs, false, false)
|
||||
|
||||
children := make(map[string]*ModuleRequirements)
|
||||
for name, child := range c.Children {
|
||||
@ -378,7 +381,7 @@ func (c *Config) ProviderRequirementsByModule() (*ModuleRequirements, hcl.Diagno
|
||||
// implementation, gradually mutating a shared requirements object to
|
||||
// eventually return. If the recurse argument is true, the requirements will
|
||||
// include all descendant modules; otherwise, only the specified module.
|
||||
func (c *Config) addProviderRequirements(reqs getproviders.Requirements, recurse, tests bool) hcl.Diagnostics {
|
||||
func (c *Config) addProviderRequirements(reqs getproviders.Requirements, qualifs *getproviders.ProvidersQualification, recurse, tests bool) hcl.Diagnostics {
|
||||
var diags hcl.Diagnostics
|
||||
|
||||
// First we'll deal with the requirements directly in _our_ module...
|
||||
@ -409,6 +412,7 @@ func (c *Config) addProviderRequirements(reqs getproviders.Requirements, recurse
|
||||
})
|
||||
}
|
||||
reqs[fqn] = append(reqs[fqn], constraints...)
|
||||
qualifs.AddExplicitProvider(providerReqs.Type)
|
||||
}
|
||||
}
|
||||
|
||||
@ -418,17 +422,42 @@ func (c *Config) addProviderRequirements(reqs getproviders.Requirements, recurse
|
||||
for _, rc := range c.Module.ManagedResources {
|
||||
fqn := rc.Provider
|
||||
if _, exists := reqs[fqn]; exists {
|
||||
// If this is called for a child module, and the provider was added from another implicit reference and not
|
||||
// from a top level required_provider, we need to collect the reference of this resource as well as implicit provider.
|
||||
qualifs.AddImplicitProvider(fqn, getproviders.ResourceRef{
|
||||
CfgRes: rc.Addr().InModule(c.Path),
|
||||
Ref: tfdiags.SourceRangeFromHCL(rc.DeclRange),
|
||||
ProviderAttribute: rc.ProviderConfigRef != nil,
|
||||
})
|
||||
// Explicit dependency already present
|
||||
continue
|
||||
}
|
||||
qualifs.AddImplicitProvider(fqn, getproviders.ResourceRef{
|
||||
CfgRes: rc.Addr().InModule(c.Path),
|
||||
Ref: tfdiags.SourceRangeFromHCL(rc.DeclRange),
|
||||
ProviderAttribute: rc.ProviderConfigRef != nil,
|
||||
})
|
||||
reqs[fqn] = nil
|
||||
}
|
||||
for _, rc := range c.Module.DataResources {
|
||||
fqn := rc.Provider
|
||||
if _, exists := reqs[fqn]; exists {
|
||||
// If this is called for a child module, and the provider was added from another implicit reference and not
|
||||
// from a top level required_provider, we need to collect the reference of this resource as well as implicit provider.
|
||||
qualifs.AddImplicitProvider(fqn, getproviders.ResourceRef{
|
||||
CfgRes: rc.Addr().InModule(c.Path),
|
||||
Ref: tfdiags.SourceRangeFromHCL(rc.DeclRange),
|
||||
ProviderAttribute: rc.ProviderConfigRef != nil,
|
||||
})
|
||||
|
||||
// Explicit dependency already present
|
||||
continue
|
||||
}
|
||||
qualifs.AddImplicitProvider(fqn, getproviders.ResourceRef{
|
||||
CfgRes: rc.Addr().InModule(c.Path),
|
||||
Ref: tfdiags.SourceRangeFromHCL(rc.DeclRange),
|
||||
ProviderAttribute: rc.ProviderConfigRef != nil,
|
||||
})
|
||||
reqs[fqn] = nil
|
||||
}
|
||||
|
||||
@ -445,6 +474,10 @@ func (c *Config) addProviderRequirements(reqs getproviders.Requirements, recurse
|
||||
fqn := i.Provider
|
||||
if _, exists := reqs[fqn]; !exists {
|
||||
reqs[fqn] = nil
|
||||
qualifs.AddImplicitProvider(i.Provider, getproviders.ResourceRef{
|
||||
CfgRes: i.StaticTo,
|
||||
Ref: tfdiags.SourceRangeFromHCL(i.DeclRange),
|
||||
})
|
||||
}
|
||||
|
||||
// TODO: This should probably be moved to provider_validation.go so that
|
||||
@ -522,7 +555,7 @@ func (c *Config) addProviderRequirements(reqs getproviders.Requirements, recurse
|
||||
// Then we'll also look for requirements in testing modules.
|
||||
for _, run := range file.Runs {
|
||||
if run.ConfigUnderTest != nil {
|
||||
moreDiags := run.ConfigUnderTest.addProviderRequirements(reqs, true, false)
|
||||
moreDiags := run.ConfigUnderTest.addProviderRequirements(reqs, qualifs, true, false)
|
||||
diags = append(diags, moreDiags...)
|
||||
}
|
||||
}
|
||||
@ -532,7 +565,7 @@ func (c *Config) addProviderRequirements(reqs getproviders.Requirements, recurse
|
||||
|
||||
if recurse {
|
||||
for _, childConfig := range c.Children {
|
||||
moreDiags := childConfig.addProviderRequirements(reqs, true, false)
|
||||
moreDiags := childConfig.addProviderRequirements(reqs, qualifs, true, false)
|
||||
diags = append(diags, moreDiags...)
|
||||
}
|
||||
}
|
||||
@ -772,7 +805,7 @@ func (c *Config) resolveProviderTypesForTests(providers map[string]addrs.Provide
|
||||
// versions for each provider.
|
||||
func (c *Config) ProviderTypes() []addrs.Provider {
|
||||
// Ignore diagnostics here because they relate to version constraints
|
||||
reqs, _ := c.ProviderRequirements()
|
||||
reqs, _, _ := c.ProviderRequirements()
|
||||
|
||||
ret := make([]addrs.Provider, 0, len(reqs))
|
||||
for k := range reqs {
|
||||
|
@ -18,6 +18,7 @@ import (
    "github.com/google/go-cmp/cmp/cmpopts"
    "github.com/hashicorp/hcl/v2"
    "github.com/hashicorp/hcl/v2/hclparse"
    "github.com/opentofu/opentofu/internal/tfdiags"
    "github.com/zclconf/go-cty/cty"

    version "github.com/hashicorp/go-version"
@ -155,7 +156,7 @@ func TestConfigProviderRequirements(t *testing.T) {
    configuredProvider := addrs.NewDefaultProvider("configured")
    grandchildProvider := addrs.NewDefaultProvider("grandchild")

    got, diags := cfg.ProviderRequirements()
    got, qualifs, diags := cfg.ProviderRequirements()
    assertNoDiagnostics(t, diags)
    want := getproviders.Requirements{
        // the nullProvider constraints from the two modules are merged
@ -170,9 +171,62 @@ func TestConfigProviderRequirements(t *testing.T) {
        terraformProvider:  nil,
        grandchildProvider: nil,
    }
    wantQualifs := &getproviders.ProvidersQualification{
        Implicit: map[addrs.Provider][]getproviders.ResourceRef{
            grandchildProvider: {
                {
                    CfgRes: addrs.ConfigResource{Module: []string{"kinder", "nested"}, Resource: addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "grandchild_foo", Name: "bar"}},
                    Ref:    tfdiags.SourceRange{Filename: "testdata/provider-reqs/child/grandchild/provider-reqs-grandchild.tf", Start: tfdiags.SourcePos{Line: 3, Column: 1, Byte: 136}, End: tfdiags.SourcePos{Line: 3, Column: 32, Byte: 167}},
                },
            },
            impliedProvider: {
                {
                    CfgRes: addrs.ConfigResource{Resource: addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "implied_foo", Name: "bar"}},
                    Ref:    tfdiags.SourceRange{Filename: "testdata/provider-reqs/provider-reqs-root.tf", Start: tfdiags.SourcePos{Line: 16, Column: 1, Byte: 317}, End: tfdiags.SourcePos{Line: 16, Column: 29, Byte: 345}},
                },
            },
            importexplicitProvider: {
                {
                    CfgRes: addrs.ConfigResource{Resource: addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "importimplied", Name: "targetB"}},
                    Ref:    tfdiags.SourceRange{Filename: "testdata/provider-reqs/provider-reqs-root.tf", Start: tfdiags.SourcePos{Line: 42, Column: 1, Byte: 939}, End: tfdiags.SourcePos{Line: 42, Column: 7, Byte: 945}},
                },
            },
            importimpliedProvider: {
                {
                    CfgRes: addrs.ConfigResource{Resource: addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "importimplied", Name: "targetA"}},
                    Ref:    tfdiags.SourceRange{Filename: "testdata/provider-reqs/provider-reqs-root.tf", Start: tfdiags.SourcePos{Line: 37, Column: 1, Byte: 886}, End: tfdiags.SourcePos{Line: 37, Column: 7, Byte: 892}},
                },
            },
            terraformProvider: {
                {
                    CfgRes: addrs.ConfigResource{Resource: addrs.Resource{Mode: addrs.DataResourceMode, Type: "terraform_remote_state", Name: "bar"}},
                    Ref:    tfdiags.SourceRange{Filename: "testdata/provider-reqs/provider-reqs-root.tf", Start: tfdiags.SourcePos{Line: 27, Column: 1, Byte: 628}, End: tfdiags.SourcePos{Line: 27, Column: 36, Byte: 663}},
                },
            },
        },
        Explicit: map[addrs.Provider]struct{}{
            happycloudProvider: {},
            nullProvider:       {},
            randomProvider:     {},
            tlsProvider:        {},
        },
    }
    // These 2 assertions strictly ensure that the "provider" blocks are not registered into the qualifications.
    // Technically speaking, provider blocks are indeed implicit references, but the current warning message
    // on implicitly referenced providers could be misleading for the "provider" blocks.
    if _, okExpl := qualifs.Explicit[configuredProvider]; okExpl {
        t.Errorf("provider blocks shouldn't be added into the explicit qualifications")
    }
    if _, okImpl := qualifs.Implicit[configuredProvider]; okImpl {
        t.Errorf("provider blocks shouldn't be added into the implicit qualifications")
    }

    if diff := cmp.Diff(want, got); diff != "" {
        t.Errorf("wrong result\n%s", diff)
        t.Errorf("wrong reqs result\n%s", diff)
    }

    if diff := cmp.Diff(wantQualifs, qualifs); diff != "" {
        t.Errorf("wrong qualifs result\n%s", diff)
    }
}

@ -195,7 +249,7 @@ func TestConfigProviderRequirementsInclTests(t *testing.T) {
    terraformProvider := addrs.NewBuiltInProvider("terraform")
    configuredProvider := addrs.NewDefaultProvider("configured")

    got, diags := cfg.ProviderRequirements()
    got, qualifs, diags := cfg.ProviderRequirements()
    assertNoDiagnostics(t, diags)
    want := getproviders.Requirements{
        // the nullProvider constraints from the two modules are merged
@ -207,9 +261,35 @@ func TestConfigProviderRequirementsInclTests(t *testing.T) {
        terraformProvider: nil,
    }

    wantQualifs := &getproviders.ProvidersQualification{
        Implicit: map[addrs.Provider][]getproviders.ResourceRef{
            impliedProvider: {
                {
                    CfgRes: addrs.ConfigResource{Resource: addrs.Resource{Mode: addrs.ManagedResourceMode, Type: "implied_foo", Name: "bar"}},
                    Ref:    tfdiags.SourceRange{Filename: "testdata/provider-reqs-with-tests/provider-reqs-root.tf", Start: tfdiags.SourcePos{Line: 12, Column: 1, Byte: 247}, End: tfdiags.SourcePos{Line: 12, Column: 29, Byte: 275}},
                },
            },
            terraformProvider: {
                {
                    CfgRes: addrs.ConfigResource{Resource: addrs.Resource{Mode: addrs.DataResourceMode, Type: "terraform_remote_state", Name: "bar"}},
                    Ref:    tfdiags.SourceRange{Filename: "testdata/provider-reqs-with-tests/provider-reqs-root.tf", Start: tfdiags.SourcePos{Line: 19, Column: 1, Byte: 516}, End: tfdiags.SourcePos{Line: 19, Column: 36, Byte: 551}},
                },
            },
        },
        Explicit: map[addrs.Provider]struct{}{
            nullProvider:   {},
            randomProvider: {},
            tlsProvider:    {},
        },
    }

    if diff := cmp.Diff(want, got); diff != "" {
        t.Errorf("wrong result\n%s", diff)
    }

    if diff := cmp.Diff(wantQualifs, qualifs); diff != "" {
        t.Errorf("wrong qualifs result\n%s", diff)
    }
}

func TestConfigProviderRequirementsDuplicate(t *testing.T) {
@ -584,7 +664,8 @@ func TestConfigAddProviderRequirements(t *testing.T) {
    reqs := getproviders.Requirements{
        addrs.NewDefaultProvider("null"): nil,
    }
    diags = cfg.addProviderRequirements(reqs, true, false)
    qualifs := new(getproviders.ProvidersQualification)
    diags = cfg.addProviderRequirements(reqs, qualifs, true, false)
    assertNoDiagnostics(t, diags)
}

@ -609,8 +690,9 @@ Use the providers argument within the module block to configure providers for al
func TestConfigImportProviderClashesWithResources(t *testing.T) {
    cfg, diags := testModuleConfigFromFile("testdata/invalid-import-files/import-and-resource-clash.tf")
    assertNoDiagnostics(t, diags)
    qualifs := new(getproviders.ProvidersQualification)

    diags = cfg.addProviderRequirements(getproviders.Requirements{}, true, false)
    diags = cfg.addProviderRequirements(getproviders.Requirements{}, qualifs, true, false)
    assertExactDiagnostics(t, diags, []string{
        `testdata/invalid-import-files/import-and-resource-clash.tf:9,3-19: Invalid import provider argument; The provider argument in the target resource block must match the import block.`,
    })
@ -620,7 +702,8 @@ func TestConfigImportProviderWithNoResourceProvider(t *testing.T) {
    cfg, diags := testModuleConfigFromFile("testdata/invalid-import-files/import-and-no-resource.tf")
    assertNoDiagnostics(t, diags)

    diags = cfg.addProviderRequirements(getproviders.Requirements{}, true, false)
    qualifs := new(getproviders.ProvidersQualification)
    diags = cfg.addProviderRequirements(getproviders.Requirements{}, qualifs, true, false)
    assertExactDiagnostics(t, diags, []string{
        `testdata/invalid-import-files/import-and-no-resource.tf:5,3-19: Invalid import provider argument; The provider argument in the target resource block must be specified and match the import block.`,
    })
@ -21,7 +21,7 @@ import (
// In the case of any errors, t.Fatal (or similar) will be called to halt
// execution of the test, so the calling test does not need to handle errors
// itself.
func NewLoaderForTests(t *testing.T) (*Loader, func()) {
func NewLoaderForTests(t testing.TB) (*Loader, func()) {
    t.Helper()

    modulesDir, err := os.MkdirTemp("", "tf-configs")
@ -92,6 +92,10 @@ func (c Config) asAWSBase() (*awsbase.Config, error) {
    if err != nil {
        return nil, err
    }
    var roles []awsbase.AssumeRole
    if assumeRole != nil {
        roles = append(roles, *assumeRole)
    }

    // Get assume role with web identity
    assumeRoleWithWebIdentity, err := c.AssumeRoleWithWebIdentity.asAWSBase()
@ -168,7 +172,7 @@ func (c Config) asAWSBase() (*awsbase.Config, error) {

        SharedCredentialsFiles:    stringArrayAttrEnvFallback(c.SharedCredentialsFiles, "AWS_SHARED_CREDENTIALS_FILE"),
        SharedConfigFiles:         stringArrayAttrEnvFallback(c.SharedConfigFiles, "AWS_SHARED_CONFIG_FILE"),
        AssumeRole:                assumeRole,
        AssumeRole:                roles,
        AssumeRoleWithWebIdentity: assumeRoleWithWebIdentity,
        AllowedAccountIds:         c.AllowedAccountIds,
        ForbiddenAccountIds:       c.ForbiddenAccountIds,
@ -133,20 +133,22 @@ func TestConfig_asAWSBase(t *testing.T) {
    EC2MetadataServiceEndpointMode: "my-emde-mode",
    SharedCredentialsFiles:         []string{"my-scredf"},
    SharedConfigFiles:              []string{"my-sconff"},
    AssumeRole: &awsbase.AssumeRole{
        RoleARN:    "ar_arn",
        Duration:   time.Hour * 4,
        ExternalID: "ar_extid",
        Policy:     "ar_policy",
        PolicyARNs: []string{
            "arn:aws:iam::123456789012:policy/AR",
        },
        SessionName: "ar_session_name",
        Tags: map[string]string{
            "foo": "bar",
        },
        TransitiveTagKeys: []string{
            "ar_tags",
    AssumeRole: []awsbase.AssumeRole{
        {
            RoleARN:    "ar_arn",
            Duration:   time.Hour * 4,
            ExternalID: "ar_extid",
            Policy:     "ar_policy",
            PolicyARNs: []string{
                "arn:aws:iam::123456789012:policy/AR",
            },
            SessionName: "ar_session_name",
            Tags: map[string]string{
                "foo": "bar",
            },
            TransitiveTagKeys: []string{
                "ar_tags",
            },
        },
    },
    AssumeRoleWithWebIdentity: &awsbase.AssumeRoleWithWebIdentity{
@ -13,8 +13,8 @@ import (

    "github.com/apparentlymart/go-versions/versions"
    "github.com/apparentlymart/go-versions/versions/constraints"

    "github.com/opentofu/opentofu/internal/addrs"
    "github.com/opentofu/opentofu/internal/tfdiags"
)

// Version represents a particular single version of a provider.
@ -50,6 +50,48 @@ type Warnings = []string
// altogether, which means that it is not required at all.
type Requirements map[addrs.Provider]VersionConstraints

// ProvidersQualification stores the implicit/explicit reference qualification of providers.
// This is necessary to be able to warn the user when resources reference a provider that
// is not specifically defined in a required_providers block. When an implicitly referenced
// provider is downloaded without a specific provider requirement, the download is attempted
// from the default namespace (hashicorp), and fails when the provider does not exist in that namespace.
// Therefore, we want to let the user know which resources are generating this situation.
type ProvidersQualification struct {
    Implicit map[addrs.Provider][]ResourceRef
    Explicit map[addrs.Provider]struct{}
}

type ResourceRef struct {
    CfgRes            addrs.ConfigResource
    Ref               tfdiags.SourceRange
    ProviderAttribute bool
}

// AddImplicitProvider saves an addrs.Provider together with the place in the configuration it was generated from.
func (pq *ProvidersQualification) AddImplicitProvider(provider addrs.Provider, ref ResourceRef) {
    if pq.Implicit == nil {
        pq.Implicit = map[addrs.Provider][]ResourceRef{}
    }
    // Avoid adding an implicit reference for a provider that is already explicitly configured.
    // This is done because, when collecting these qualifications, there can be at least 2 resources (A from the root module and B from an imported module):
    // the root module may have no explicit definition while B's module has one. But when none of the modules has
    // an explicit definition, we want to gather all the resources that implicitly reference a provider.
    if _, ok := pq.Explicit[provider]; ok {
        return
    }
    refs := pq.Implicit[provider]
    refs = append(refs, ref)
    pq.Implicit[provider] = refs
}

// AddExplicitProvider saves an addrs.Provider that is specifically configured in a required_providers block.
func (pq *ProvidersQualification) AddExplicitProvider(provider addrs.Provider) {
    if pq.Explicit == nil {
        pq.Explicit = map[addrs.Provider]struct{}{}
    }
    pq.Explicit[provider] = struct{}{}
}

// Merge takes the requirements in the receiver and the requirements in the
// other given value and produces a new set of requirements that combines
// all of the requirements of both.
259	internal/tofu/bench_many_instances_test.go	Normal file
@ -0,0 +1,259 @@
// Copyright (c) The OpenTofu Authors
// SPDX-License-Identifier: MPL-2.0
// Copyright (c) 2023 HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0

package tofu

import (
    "context"
    "strconv"
    "testing"

    "github.com/zclconf/go-cty/cty"

    "github.com/opentofu/opentofu/internal/addrs"
    "github.com/opentofu/opentofu/internal/configs/configschema"
    "github.com/opentofu/opentofu/internal/plans"
    "github.com/opentofu/opentofu/internal/providers"
    "github.com/opentofu/opentofu/internal/states"
)

// This test file contains a small collection of benchmarks, written using the benchmark
// mechanism offered as part of Go's testing library, of situations involving resources
// and modules that have a very large number of instances.
//
// OpenTofu's current design is aimed to support tens of instances as the typical case
// and low-hundreds of instances as an extreme case. These benchmarks intentionally
// ignore those design assumptions by testing with thousands of resource instances,
// since we know that some in our community use OpenTofu in that way and although it
// is not officially supported we do wish to be able to more easily measure performance
// when someone reports a significant regression of performance when using an
// "unreasonable" number of instances (per OpenTofu's current design assumptions),
// or whenever we're intentionally attempting to change something in OpenTofu to
// improve performance.
//
// The existence of these benchmarks does not represent a commitment to support
// using OpenTofu with thousands of resource instances in the same configuration.
// We consider these situations to be "best effort" only.
//
// These benchmarks exercise the core language runtime only. Therefore they do not
// account for any additional overheads caused by behaviors at the CLI layer, such
// as remote state storage and the state snapshot serialization that implies, or
// the UI display hooks.

// This benchmark takes, at the time of writing, over a minute to perform just one
// iteration. Therefore at present it's best to just let it run once:
//
//	go test ./internal/tofu -bench='^BenchmarkManyResourceInstances$' -benchtime=1x
func BenchmarkManyResourceInstances(b *testing.B) {
    // instanceCount is the number of instances we declare _for each resource_.
    // Since there are two resources, there are 2*instanceCount instances total.
    const instanceCount = 2500
    m := testModuleInline(b, map[string]string{
        "main.tf": `
            # This test has two resources that each have a lot of instances
            # that are correlated with one another.

            terraform {
                required_providers {
                    test = {
                        source = "terraform.io/builtin/test"
                    }
                }
            }

            variable "instance_count" {
                type = number
            }

            resource "test" "a" {
                count = var.instance_count

                num = count.index
            }

            resource "test" "b" {
                count = length(test.a)

                num = test.a[count.index].num
            }
        `,
    })
    p := &MockProvider{
        GetProviderSchemaResponse: &providers.GetProviderSchemaResponse{
            ResourceTypes: map[string]providers.Schema{
                "test": {
                    Block: &configschema.Block{
                        Attributes: map[string]*configschema.Attribute{
                            "num": {
                                Type:     cty.Number,
                                Required: true,
                            },
                        },
                    },
                },
            },
        },
        PlanResourceChangeFn: func(prcr providers.PlanResourceChangeRequest) providers.PlanResourceChangeResponse {
            return providers.PlanResourceChangeResponse{
                PlannedState: prcr.ProposedNewState,
            }
        },
        ApplyResourceChangeFn: func(arcr providers.ApplyResourceChangeRequest) providers.ApplyResourceChangeResponse {
            return providers.ApplyResourceChangeResponse{
                NewState: arcr.PlannedState,
            }
        },
    }
    tofuCtx := testContext2(b, &ContextOpts{
        Providers: map[addrs.Provider]providers.Factory{
            addrs.NewBuiltInProvider("test"): testProviderFuncFixed(p),
        },
        // With this many resource instances we need a high concurrency limit
        // for the runtime to be in any way reasonable. In this case we're
        // going to set it so high that there is effectively no limit at all,
        // which measures a best-case scenario where we're limited only by
        // OpenTofu's direct overheads and not by the artificial concurrency
        // limit.
        Parallelism: instanceCount * 3, // instanceCount instances of 2 resources, plus an excessive amount of headroom for other helper nodes
    })
    ctx := context.Background()
    priorStateBase := states.BuildState(func(ss *states.SyncState) {
        // Our prior state already has all of the instances declared in
        // the configuration, so that we can also exercise the "upgrade"
        // and "refresh" steps (which are no-op in the mock provider we're
        // using, so we're only measuring their overhead).
        providerAddr := addrs.AbsProviderConfig{
            Module:   addrs.RootModule,
            Provider: addrs.NewBuiltInProvider("test"),
        }
        resourceAddrA := addrs.Resource{
            Mode: addrs.ManagedResourceMode,
            Type: "test",
            Name: "a",
        }.Absolute(addrs.RootModuleInstance)
        resourceAddrB := addrs.Resource{
            Mode: addrs.ManagedResourceMode,
            Type: "test",
            Name: "b",
        }.Absolute(addrs.RootModuleInstance)
        for i := range instanceCount {
            instAddrA := resourceAddrA.Instance(addrs.IntKey(i))
            instAddrB := resourceAddrB.Instance(addrs.IntKey(i))
            rawStateAttrs := `{"num":` + strconv.Itoa(i) + `}`
            ss.SetResourceInstanceCurrent(
                instAddrA,
                &states.ResourceInstanceObjectSrc{
                    AttrsJSON: []byte(rawStateAttrs),
                },
                providerAddr, addrs.NoKey,
            )
            ss.SetResourceInstanceCurrent(
                instAddrB,
                &states.ResourceInstanceObjectSrc{
                    AttrsJSON: []byte(rawStateAttrs),
                },
                providerAddr, addrs.NoKey,
            )
        }
    })
    planOpts := &PlanOpts{
        Mode: plans.NormalMode,
        SetVariables: InputValues{
            "instance_count": {
                Value: cty.NumberIntVal(instanceCount),
            },
        },
    }
    b.ResetTimer() // the above setup code is not included in the benchmark

    for range b.N {
        // It's unfortunate to include this as part of the benchmark, but
        // our work below is going to modify the state in-place so we do need
        // to copy it. In practice the CLI layer's state manager system will
        // tend to do at least one state DeepCopy as part of setting itself up
        // anyway, so this is not unrealistic.
        priorState := priorStateBase.DeepCopy()

        plan, planDiags := tofuCtx.Plan(ctx, m, priorState, planOpts)
        assertNoDiagnostics(b, planDiags)

        _, applyDiags := tofuCtx.Apply(ctx, plan, m)
        assertNoDiagnostics(b, applyDiags)
    }
}

// This benchmark takes, at the time of writing, several seconds per iteration, and
// so it's probably best to limit the amount of time it can run:
//
//	go test ./internal/tofu -bench='^BenchmarkManyModuleInstances$' -benchtime=1m
func BenchmarkManyModuleInstances(b *testing.B) {
    // instanceCount is the number of instances we declare for each module call.
    // Since there are two module calls, each object declared in the module
    // is instantiated twice per instanceCount.
    const instanceCount = 2500
    m := testModuleInline(b, map[string]string{
        "main.tf": `
            variable "instance_count" {
                type = number
            }

            module "a" {
                source = "./child"
                count = var.instance_count

                num = count.index
            }

            module "b" {
                source = "./child"
                count = length(module.a)

                num = module.a[count.index].num
            }
        `,
        "child/child.tf": `
            variable "num" {
                type = number
            }

            # Intentionally no resources declared here, because this
            # test is measuring just the module call overhead and
            # administrative overhead like the input variable and
            # output value evaluation.

            output "num" {
                value = var.num
            }
        `,
    })
    tofuCtx := testContext2(b, &ContextOpts{
        Providers: nil, // no providers for this test
        // With this many module instances we need a high concurrency limit
        // for the runtime to be in any way reasonable. In this case we're
        // going to set it so high that there is effectively no limit at all,
        // which measures a best-case scenario where we're limited only by
        // OpenTofu's direct overheads and not by the artificial concurrency
        // limit.
        Parallelism: instanceCount * 2 * 8, // instanceCount instances of 2 modules, with enough headroom for 8 graph nodes each (intentionally more than needed)
    })
    ctx := context.Background()
    planOpts := &PlanOpts{
        Mode: plans.NormalMode,
        SetVariables: InputValues{
            "instance_count": {
                Value: cty.NumberIntVal(instanceCount),
            },
        },
    }
    b.ResetTimer() // the above setup code is not included in the benchmark

    for range b.N {
        plan, planDiags := tofuCtx.Plan(ctx, m, states.NewState(), planOpts)
        assertNoDiagnostics(b, planDiags)

        _, applyDiags := tofuCtx.Apply(ctx, plan, m)
        assertNoDiagnostics(b, applyDiags)
    }
}
@ -358,7 +358,7 @@ func (c *Context) checkConfigDependencies(config *configs.Config) tfdiags.Diagno
    // We only check that we have a factory for each required provider, and
    // assume the caller already assured that any separately-installed
    // plugins are of a suitable version, match expected checksums, etc.
    providerReqs, hclDiags := config.ProviderRequirements()
    providerReqs, _, hclDiags := config.ProviderRequirements()
    diags = diags.Append(hclDiags)
    if hclDiags.HasErrors() {
        return diags
@@ -254,7 +254,7 @@ resource "implicit_thing" "b" {
 	}
 }
 
-func testContext2(t *testing.T, opts *ContextOpts) *Context {
+func testContext2(t testing.TB, opts *ContextOpts) *Context {
 	t.Helper()
 
 	ctx, diags := NewContext(opts)
@@ -952,7 +952,7 @@ func legacyDiffComparisonString(changes *plans.Changes) string {
 
 // assertNoDiagnostics fails the test in progress (using t.Fatal) if the given
 // diagnostics is non-empty.
-func assertNoDiagnostics(t *testing.T, diags tfdiags.Diagnostics) {
+func assertNoDiagnostics(t testing.TB, diags tfdiags.Diagnostics) {
 	t.Helper()
 	if len(diags) == 0 {
 		return
@@ -963,7 +963,7 @@ func assertNoDiagnostics(t *testing.T, diags tfdiags.Diagnostics) {
 
 // assertNoErrors fails the test in progress (using t.Fatal) if the given
 // diagnostics has any errors.
-func assertNoErrors(t *testing.T, diags tfdiags.Diagnostics) {
+func assertNoErrors(t testing.TB, diags tfdiags.Diagnostics) {
 	t.Helper()
 	if !diags.HasErrors() {
 		return
@@ -980,7 +980,7 @@ func assertNoErrors(t *testing.T, diags tfdiags.Diagnostics) {
 // assertDiagnosticsMatch sorts the two sets of diagnostics in the usual way
 // before comparing them, though diagnostics only have a partial order so that
 // will not totally normalize the ordering of all diagnostics sets.
-func assertDiagnosticsMatch(t *testing.T, got, want tfdiags.Diagnostics) {
+func assertDiagnosticsMatch(t testing.TB, got, want tfdiags.Diagnostics) {
 	got = got.ForRPC()
 	want = want.ForRPC()
 	got.Sort()
@@ -995,7 +995,7 @@ func assertDiagnosticsMatch(t *testing.T, got, want tfdiags.Diagnostics) {
 // a test. It does not generate any errors or fail the test. See
 // assertNoDiagnostics and assertNoErrors for more specific helpers that can
 // also fail the test.
-func logDiagnostics(t *testing.T, diags tfdiags.Diagnostics) {
+func logDiagnostics(t testing.TB, diags tfdiags.Diagnostics) {
 	t.Helper()
 	for _, diag := range diags {
 		desc := diag.Description()
@@ -47,13 +47,13 @@ func TestMain(m *testing.M) {
 	os.Exit(m.Run())
 }
 
-func testModule(t *testing.T, name string) *configs.Config {
+func testModule(t testing.TB, name string) *configs.Config {
 	t.Helper()
 	c, _ := testModuleWithSnapshot(t, name)
 	return c
 }
 
-func testModuleWithSnapshot(t *testing.T, name string) (*configs.Config, *configload.Snapshot) {
+func testModuleWithSnapshot(t testing.TB, name string) (*configs.Config, *configload.Snapshot) {
 	t.Helper()
 
 	dir := filepath.Join(fixtureDir, name)
@@ -90,7 +90,7 @@ func testModuleWithSnapshot(t *testing.T, name string) (*configs.Config, *config
 
 // testModuleInline takes a map of path -> config strings and yields a config
 // structure with those files loaded from disk
-func testModuleInline(t *testing.T, sources map[string]string) *configs.Config {
+func testModuleInline(t testing.TB, sources map[string]string) *configs.Config {
 	t.Helper()
 
 	cfgPath := t.TempDir()
@@ -56,7 +56,7 @@ func MigrateStateProviderAddresses(config *configs.Config, state *states.State)
 	// config could be nil when we're e.g. showing a statefile without the configuration present
 	if config != nil {
 		var hclDiags hcl.Diagnostics
-		providers, hclDiags = config.ProviderRequirements()
+		providers, _, hclDiags = config.ProviderRequirements()
 		diags = diags.Append(hclDiags)
 		if hclDiags.HasErrors() {
 			return nil, diags
rfc/20250211-s3-locking-with-conditional-writes.md (new file, 137 lines)
@@ -0,0 +1,137 @@
# S3 Backend: Locking by using the recently released feature of conditional writes

Issue: https://github.com/opentofu/opentofu/issues/599

Considering the request from the ticket above and the newly released AWS S3 conditional-writes feature, we can now have state locking without relying on DynamoDB.

The main reasons for such a change can be summarized as follows:
* Fewer resources to maintain.
* Potentially reduced costs by eliminating the use of DynamoDB.
* One less point of failure (removing DynamoDB from state management).
* Easy for other S3-compatible services to implement and enable locking.

The most important things that need to be handled during this implementation:
* A simple way to enable/disable locking by using S3 conditional writes.
* A path forward to migrate the locking from DynamoDB to S3.
* The default behavior should remain untouched as long as the `backend` block configuration contains no attributes related to the new locking option.

## Proposed Solution
Until recently, most approaches that could have been taken for this implementation would have been prone to data races.
However, AWS has released new functionality for S3, supporting conditional writes on objects in any S3 bucket.

For more details on the AWS S3 feature and the way it works, see the [official docs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/conditional-writes.html):

> By using conditional writes, you can add an additional header to your WRITE requests to specify preconditions for your Amazon S3 operation. To conditionally write objects, add the HTTP `If-None-Match` or `If-Match` header.
>
> The `If-None-Match` header prevents overwrites of existing data by validating that there's not an object with the same key name already in your bucket.
>
> Alternatively, you can add the `If-Match` header to check an object's entity tag (`ETag`) before writing an object. With this header, Amazon S3 compares the provided `ETag` value with the `ETag` value of the object in S3. If the `ETag` values don't match, the operation fails.

To allow this in OpenTofu, the `backend "s3"` block will receive one new attribute:
* `use_lockfile` `bool` (Default: `false`) - Flag to indicate whether locking should be performed strictly via the S3 bucket.

> [!NOTE]
>
> The name `use_lockfile` was selected to keep [feature parity with Terraform](https://developer.hashicorp.com/terraform/language/backend/s3#state-locking).

### User Documentation

To make use of this new feature, users will have to add the attribute in the `backend` block:
```terraform
terraform {
  backend "s3" {
    bucket         = "tofu-state-backend"
    key            = "statefile"
    region         = "us-east-1"
    dynamodb_table = "tofu_locking"
    use_lockfile   = true
  }
}
```
* When the new attribute `use_lockfile` exists and `dynamodb_table` is missing, OpenTofu will try to acquire the lock inside the configured S3 bucket.
* When the new attribute `use_lockfile` exists **alongside** `dynamodb_table`, OpenTofu will:
  * Acquire the lock in the S3 bucket;
  * Acquire the lock in the DynamoDB table;
  * Get the digest of the state object from DynamoDB and ensure the state object content integrity;
  * Perform the requested sub-command;
  * Release the lock from the S3 bucket;
  * Release the lock from the DynamoDB table.

The usage of [workspaces](https://opentofu.org/docs/language/state/workspaces/) will not impact this new way of locking. The locking object will always be stored right next to its related state object.

> [!NOTE]
>
> OpenTofu [recommends](https://opentofu.org/docs/language/settings/backends/s3/) having versioning enabled for the S3 buckets used to store state objects.
>
> Acquiring and releasing locks will add a good amount of writes and reads to the bucket. Therefore, for a versioning-enabled S3 bucket, the number of versions of the locking object could grow significantly.
> Even though the cost should be negligible for the locking objects, users of this feature may want to configure the lifecycle of the S3 bucket to limit the number of versions of an object.
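
As a rough sketch of such a lifecycle rule (the lock object key `statefile.tflock` and the resource naming here are assumptions for illustration, not something this RFC specifies):

```terraform
resource "aws_s3_bucket_lifecycle_configuration" "state" {
  bucket = "tofu-state-backend" # the state bucket from the example above

  rule {
    id     = "expire-old-lock-versions"
    status = "Enabled"

    # Assumes the lock object is stored next to the state object;
    # adjust the prefix to match your `key` configuration.
    filter {
      prefix = "statefile.tflock"
    }

    # Remove noncurrent lock-object versions quickly, since they carry
    # no long-term value.
    noncurrent_version_expiration {
      noncurrent_days = 1
    }
  }
}
```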

> [!WARNING]
>
> When the OpenTofu S3 backend is used with an S3-compatible provider, it needs to be checked that the provider supports conditional writes in the same way AWS S3 does.

#### Migration paths

##### I have no locking enabled
In this case, the user can just add the new `use_lockfile=true` and run `tofu init -reconfigure`.

##### I have DynamoDB locking enabled
In case the user has DynamoDB locking enabled, there are two paths forward:
1. Add the new attribute `use_lockfile=true` and run `tofu init -reconfigure`.
   * Later, after a bake-in period with both locking mechanisms enabled, if no issues were encountered, remove the `dynamodb_table` attribute and run `tofu init -reconfigure` again.
   * Leaving both locking mechanisms enabled ensures that nobody can acquire the lock, regardless of whether they have the latest configuration.
2. Add the new attribute `use_lockfile=true`, remove the `dynamodb_table` one, and run `tofu init -reconfigure`.
   * **Caution:** when the updated configuration is executed from multiple places (multiple machines, pipelines on PRs, etc.), you might run into issues where an outdated copy of the configuration is using DynamoDB locking while the updated one is using S3 locking. This could create concurrent access to the same state.
   * Once the state is updated by using this approach, the state digest that OpenTofu was storing in DynamoDB (for data consistency checks) will become stale. If you want to go back to DynamoDB locking, the old digest needs to be cleaned up manually.

OpenTofu recommends keeping both locking mechanisms enabled for a limited amount of time and removing the DynamoDB locking afterwards. This ensures that the changes pushed upstream are propagated to all the places the configuration could be executed from.

### Technical Approach

To achieve proper state locking via the S3 bucket, we want to attempt to create the locking object only when it is missing.
To do so, we need to call `s3client.PutObject` with the property `IfNoneMatch: "*"`.
For more information, please check the [official documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/conditional-writes.html#conditional-write-key-names).

A simplified implementation would look like this:
```go
input := &s3.PutObjectInput{
	Bucket:      aws.String(bucket),
	Key:         aws.String(key),
	Body:        bytes.NewReader([]byte(lockInfo)),
	IfNoneMatch: aws.String("*"),
}
_, err := actor.S3Client.PutObject(ctx, input)
```

The `err` returned above should be handled according to the [behaviour defined](https://docs.aws.amazon.com/AmazonS3/latest/userguide/conditional-writes.html) in the official docs:
* HTTP 200 (OK) - The locking object did not exist, so the lock can be considered acquired.
  * For buckets with versioning enabled, if there's no current object version with the same name, or if the current object version is a delete marker, the write operation succeeds.
* HTTP 412 (Precondition Failed) - The locking object already exists, so the whole process should exit because the lock couldn't be acquired.
* HTTP 409 (Conflict) - There was a conflict of concurrent requests. AWS recommends retrying the request in such cases, but we could also handle this similarly to the 412 case.

#### Digest updates
> [!NOTE]
> Right now, when locking is enabled via DynamoDB, at the moment of updating the state object content, OpenTofu also writes an entry in DynamoDB with the MD5 sum of the state object.
> The reason is to be able to check the integrity of the state object from the S3 bucket in a future run. This is done by reading the digest from DynamoDB and comparing it with the `ETag` attribute of the state object in S3.

By moving to the S3-based locking, OpenTofu will store no other file for the digest of the state object. That digest was a mechanism to validate the state object integrity when the lock was stored in DynamoDB.
More info about this topic can be found in the [official documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html).

But if both locks are enabled (`use_lockfile=true` and `dynamodb_table=<actual_table_name>`), the state digest will still be stored in DynamoDB.

> [!WARNING]
>
> By enabling the S3 locking and disabling the DynamoDB one, the digest in DynamoDB will become stale. This means that if it is desired to go back to DynamoDB locking, the digest needs to be cleaned up or updated in order to allow the content integrity check to work.

### Open Questions

* Do we want to provide the option to store the lock objects in another bucket? This would break feature parity.

### Future Considerations
Later, DynamoDB-based locking might be considered for deprecation once (at least) the following are true:
* Conditional writes are implemented in the majority of the S3-compatible services.
* The adoption rate for the new S3-based locking is high enough to not affect existing users.

## Potential Alternatives
Since this new feature relies on S3 conditional writes, there is hardly any other reliable alternative for implementing it.