Commit Graph

1258 Commits

James Bardin
a6098b67fa fix test state 2022-12-21 10:47:07 -05:00
James Bardin
0c1aaba635 fix invalid null blocks during refresh
Legacy providers may return null values for nested blocks during
refresh. Because the ReadResource call needs to accept any value to
allow the provider to report external changes, we allowed all changes to
the value as long as the underlying cty.Type was correct, allowing
null block values to be inserted into the state.

While technically invalid, we needed to accept these null values for
compatibility, and they were mostly seen as a nuisance, causing noise in
external changes and plan output. These null block values however can be
inserted into the effective configuration with the use of
`ignore_changes`, which can cause problems where the configuration is
assumed to be completely valid.

Rather than accept the null values, we can insert empty container values
for these blocks when refreshing the instance, which will prevent any
invalid values from entering state at all. Because these must still be
accepted for compatibility, we can only log the difference as a warning.
Currently the NormalizeObjectFromLegacySDK does not report which
specific blocks it fixed, so we just log a generic message.
2022-12-21 10:18:26 -05:00
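The normalization described above can be sketched as follows. This is an illustrative, stdlib-only simplification (the type and function names here are hypothetical, not Terraform's real `NormalizeObjectFromLegacySDK`): nil values for list-backed nested blocks are replaced with empty containers, and the fixed block names are returned so the caller can log a warning.

```go
package main

import "fmt"

// instanceObject is a hypothetical stand-in for a decoded resource object.
type instanceObject map[string]interface{}

// normalizeNestedBlocks replaces nil values for the named list-backed blocks
// with empty slices so an invalid null never reaches state, and returns the
// names of the blocks it fixed so the caller can log a warning.
func normalizeNestedBlocks(obj instanceObject, listBlocks []string) []string {
	var fixed []string
	for _, name := range listBlocks {
		if v, ok := obj[name]; ok && v == nil {
			obj[name] = []instanceObject{} // empty container instead of null
			fixed = append(fixed, name)
		}
	}
	return fixed
}

func main() {
	obj := instanceObject{"id": "i-123", "network_interface": nil}
	fixed := normalizeNestedBlocks(obj, []string{"network_interface"})
	fmt.Println(fixed) // names of blocks that were null and got fixed
	fmt.Println(obj["network_interface"])
}
```

Because the null values must still be accepted for compatibility, the real fix only logs the difference rather than rejecting the provider response.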
James Bardin
25ac4d33e4
Merge pull request #31633 from brittandeyoung/f-workspace-selectornew
New Terraform Workspace select flag: `-or-create`
2022-12-16 15:29:25 -05:00
Brittan DeYoung
8881418c99
Update internal/command/workspace_select.go
Co-authored-by: James Bardin <j.bardin@gmail.com>
2022-12-16 15:03:46 -05:00
James Bardin
d60d247e40
Merge pull request #31318 from twittyc/twittyc/terraformWorkspaceInvalidArgsReturnsNon0
Bug fix: Terraform workspace command returns zero exit code when given an invalid argument.
2022-12-16 13:16:56 -05:00
James Bardin
3cda7a0269
Merge pull request #29520 from ComBin/main
Don't show symbols during input if the variable is marked as sensitive
2022-12-16 13:13:33 -05:00
Liam Cervante
6af6540233
Use the apparentlymart/go-versions library to parse module constraints (#32377)
* Use the apparentlymart/go-versions library to parse module constraints

* goimports

* Update comments, and parse versions carefully

* add acceptance tests to verify behaviour of partial matches

* goimports
2022-12-14 17:02:11 +01:00
James Bardin
404b284911
Merge pull request #31757 from hashicorp/jbardin/terraform-data
New `terraform_data` managed resource to replace `null_resource`
2022-12-12 15:17:02 -05:00
Conor Evans
e206d4e83e
fix(unlock): amend force-unlock description (#32363)
Signed-off-by: Conor Evans <coevans@tcd.ie>
2022-12-09 16:15:27 +00:00
Bryan Stenson
b2f6813341
typo (#32327) 2022-12-09 16:14:01 +00:00
James Bardin
d0d6501c1f s/trigger/triggers_replace/
Rename `triggers` to be more descriptive, making it similar to
`replace_triggered_by`.
2022-12-05 15:23:57 -05:00
James Bardin
58e15c7f0e add terraform_data e2e test 2022-12-05 15:23:57 -05:00
James Bardin
3b73ed3348 new terraform_data managed resource
Replace and enhance the `null_resource` functionality with a new
`terraform_data` managed resource.
2022-12-05 15:23:57 -05:00
xiaozhu36
ec62ca1b70 backend/oss: Ignore errors when getting the OSS endpoint and use string concatenation instead; improve the error message level 2022-12-04 11:51:29 +08:00
James Bardin
cbcae8478f
Merge pull request #32209 from hashicorp/jbardin/data-source-destroy-edges
ensure destroy edges from data sources
2022-12-01 10:25:42 -05:00
James Bardin
23aaa39747
Merge pull request #32308 from hashicorp/jbardin/output-eval-fix
always evaluate outputs from state during apply
2022-12-01 09:33:31 -05:00
James Bardin
c66a797f2a
Merge pull request #32307 from hashicorp/jbardin/output-perf
don't re-set changes for refreshed outputs
2022-11-30 14:06:50 -05:00
Martin Atkins
8253821e56 go get github.com/zclconf/go-cty-yaml@v1.0.3
This includes the fix for a bug in what Terraform calls the "yamldecode"
function, where it was not correctly handling any situation where the
decode result is a null value. It was previously returning an unknown
value in that case, whereas now it returns a null value as expected.
2022-11-29 17:45:45 -08:00
James Bardin
dcd762e81d evaluate outputs from state
Outputs were being evaluated from changes, even during apply. Make sure
we update the state correctly, and remove the existing change. This
requires adding more Planning fields to the output nodes to
differentiate whether the output is being planned or applied because the
same type handles both cases. We can evaluate separately whether new
types should be introduced to deal with both cases.

The module node cleanup was also prematurely removing module outputs
from the state before evaluation. This was not noticed before because
the evaluation was always falling back to changes. Have the root module
node do the final cleanup for all its children.

It turns out sensitivity was also being handled incorrectly: only the
sensitive flag from configuration was being considered. Make sure to mark the
output as sensitive when storing sensitive values into state, and OR the
sensitive marks from state with those from configuration when evaluating the
output values.
2022-11-28 16:39:55 -05:00
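The core of the change above is which source an output's value is read from. A minimal stdlib sketch of the idea, with illustrative (not Terraform's real) types: the node's Planning flag decides whether the pending change or the stored state is consulted.

```go
package main

import "fmt"

// outputNode is a hypothetical stand-in for Terraform's output graph node.
type outputNode struct {
	Planning bool    // true during plan, false during apply
	change   *string // planned value, if any
	state    *string // value stored in state
}

// evaluate returns the output's value from the planned change during plan,
// and from state during apply.
func (n *outputNode) evaluate() (string, error) {
	if n.Planning && n.change != nil {
		return *n.change, nil
	}
	if n.state != nil {
		return *n.state, nil
	}
	return "", fmt.Errorf("output has no value")
}

func main() {
	planned, applied := "planned", "applied"
	n := &outputNode{Planning: false, change: &planned, state: &applied}
	got, _ := n.evaluate()
	fmt.Println(got) // during apply, the state value wins
}
```

The bug described in the message was the opposite of this: evaluation always fell back to changes, which masked the premature removal of module outputs from state.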
James Bardin
c9d6f82ac5 don't re-set changes for refreshed outputs
When output values are updated in the refreshed state, we don't need to
re-set the changes which were already set in conjunction with the
current state.
2022-11-28 16:37:23 -05:00
alisdair
ec6451a82a
Merge pull request #31999 from JarrettSpiker/jspiker/workspace-delete-rum-docs
Update workspace delete command docs to reference RUM vs empty state
2022-11-25 12:07:02 -05:00
Jarrett Spiker
21d98697cb Add manual line breaks to workspace delete command help text 2022-11-25 11:42:22 -05:00
James Bardin
2b14670dfd
Merge pull request #32260 from hashicorp/jbardin/resolved-provided-by
ProvidedBy should return the resolved provider
2022-11-22 09:53:41 -05:00
James Bardin
60f82eea40
Merge pull request #32236 from hashicorp/jbardin/1.3-destroy-perf
check walkDestroy to help DestroyEdgeTransformer
2022-11-22 09:45:32 -05:00
James Bardin
c96da72319
Merge pull request #32246 from hashicorp/jbardin/plan-orphan-deleted
A deleted orphan should have no planned change
2022-11-22 09:43:34 -05:00
James Bardin
8e18922170 ProvidedBy should return the resolved provider
Once the ProviderTransformer has resolved and set the exact provider,
the ProvidedBy method should return that exact provider again.

We can hoist the stored provider addr into the AbstractInstance and
avoid the method duplication and slight differences between the
implementations.
2022-11-22 09:41:53 -05:00
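The contract described above can be sketched in a few lines of stdlib Go (field and type names here are illustrative stand-ins for Terraform's abstract resource node, not its real API): once the transformer stores an exact provider, ProvidedBy returns it rather than re-deriving an address from configuration.

```go
package main

import "fmt"

// abstractResource is a hypothetical stand-in for Terraform's abstract
// resource instance node.
type abstractResource struct {
	configProvider   string // provider implied by configuration
	resolvedProvider string // set by the ProviderTransformer
}

// SetProvider records the exact provider chosen by the ProviderTransformer.
func (r *abstractResource) SetProvider(addr string) { r.resolvedProvider = addr }

// ProvidedBy returns the resolved provider once set, falling back to the
// configuration-derived address before resolution.
func (r *abstractResource) ProvidedBy() string {
	if r.resolvedProvider != "" {
		return r.resolvedProvider
	}
	return r.configProvider
}

func main() {
	r := &abstractResource{configProvider: `provider["hashicorp/aws"]`}
	fmt.Println(r.ProvidedBy()) // config-derived before resolution
	r.SetProvider(`provider["hashicorp/aws"].west`)
	fmt.Println(r.ProvidedBy()) // exact provider after resolution
}
```

Hoisting the stored address into one shared struct is what lets the duplicated method implementations collapse into this single one.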
James Bardin
79175b29f3
Merge pull request #32261 from sivchari/fix-prealloc
fix: pre allocate for composite literal
2022-11-22 09:18:47 -05:00
Jarrett Spiker
cebd5e3fce Upgrade go-tfe to 1.12.0 2022-11-21 14:54:07 -05:00
Jarrett Spiker
c16d726f2c Succeed cloud workspace deletion if the workspace does not exist 2022-11-21 14:35:33 -05:00
Jarrett Spiker
1dafd7c0b1 Fix test compilation errors caused by interface change 2022-11-21 14:35:33 -05:00
Jarrett Spiker
060255a9d5 Use safe or force workspace delete for cloud backend 2022-11-21 14:35:33 -05:00
sivchari
ef4798de8e fix: pre allocate for composite literal 2022-11-22 02:20:54 +09:00
Sarah French
6fd3a8cdf4
go get cloud.google.com/go/storage@v1.28.0 (#32203)
* go get cloud.google.com/go/storage@v1.28.0

* go mod tidy

* Run `make generate` & `make protobuf` using go1.19.3
2022-11-21 13:14:55 +00:00
James Bardin
7946e4a88a a deleted orphan should have no plan
If when refreshing an orphaned instance the provider indicates it has
already been deleted, there is no reason to create a change for that
instance. A NoOp change should only represent an object that exists and
is not changing.

This was likely left in before to provide a record of the change for external
consumers of the plan, but newer plans also contain all changes made outside
of Terraform, which better accounts for the difference. The NoOp change can
now cause problems, because it may
represent an instance with conditions to check even though that instance
does not exist.
2022-11-18 08:48:15 -05:00
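The rule the commit establishes is small enough to sketch directly (names are illustrative, loosely mirroring Terraform's plans package): an orphan whose refreshed object no longer exists gets no change at all, because NoOp is reserved for objects that exist and are unchanged.

```go
package main

import "fmt"

type action string

const (
	noOp action = "no-op" // only for objects that exist and are not changing
	del  action = "delete"
)

// planOrphan returns the change to record for an orphaned instance, or nil
// when the provider reported the object as already deleted during refresh.
func planOrphan(refreshedExists bool) *action {
	if !refreshedExists {
		return nil // deleted externally: nothing to plan, not even a NoOp
	}
	a := del
	return &a
}

func main() {
	fmt.Println(planOrphan(false) == nil) // deleted orphan: no change
	fmt.Println(*planOrphan(true))        // existing orphan: planned delete
}
```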
James Bardin
62a8b9ef1d
Merge pull request #32207 from hashicorp/jbardin/destroy-plan-state
Ensure destroy plan contains valid state values
2022-11-17 14:18:17 -05:00
James Bardin
b5168eb6f4
Merge pull request #32208 from hashicorp/jbardin/pre-desstroy-refresh
Make the pre-destroy refresh a full plan
2022-11-17 14:18:06 -05:00
James Bardin
b6a67f622f check walkDestroy to help DestroyEdgeTransformer
In a heavily-connected graph with lots of inter-dependent providers, the
cycle checks for destroy edges across providers can seriously impact
performance. Since the specific cases we need to avoid will involve
create/update nodes, skip the extra checks during a full destroy
operation. Once we find a way to better track these dependencies, the
transformer will not need to do the cycle checks in the first place.
2022-11-17 13:29:09 -05:00
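A minimal sketch of the short-circuit described above, with hypothetical walk constants standing in for Terraform's internal ones: since a full destroy walk contains no create/update nodes, the cycle conditions the transformer guards against cannot arise, so the expensive checks can be skipped.

```go
package main

import "fmt"

type walkOperation int

const (
	walkPlan walkOperation = iota
	walkApply
	walkDestroy
)

// needsCycleCheck reports whether the DestroyEdgeTransformer must run its
// cross-provider cycle checks for this walk.
func needsCycleCheck(op walkOperation) bool {
	// A full destroy has no create/update nodes, so the cycles the check
	// guards against cannot occur; skip the expensive work.
	return op != walkDestroy
}

func main() {
	fmt.Println(needsCycleCheck(walkDestroy)) // skipped during full destroy
	fmt.Println(needsCycleCheck(walkApply))   // still required otherwise
}
```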
James Bardin
242b8a726c
Merge pull request #32206 from hashicorp/jbardin/communicator-size
fix typo in scp upload size check
2022-11-14 11:05:11 -05:00
James Bardin
ebd5a17b17 ensure destroy edges from data sources
Data resource dependencies are not stored in the state, so we need to
take the latest dependency set to use for any direct connections to
destroy nodes.
2022-11-11 14:56:09 -05:00
James Bardin
3db3ed03fb ensure destroy plan contains valid state values
Some prior refactors left the destroyPlan method a bit confusing, and we ran
into a case where the previous run state could be returned as nil.

Get rid of the no longer used pendingPlan value, and track the prior and
prev states directly, making sure we always have a value for both.
2022-11-11 14:34:21 -05:00
James Bardin
3ea704ef81 Make the pre-destroy refresh a full plan
In order to complete the terraform destroy command, a refresh must first
be done to update state and remove any instances which have already been
deleted externally. This was being done with a refresh plan, which will
avoid any condition evaluations and avoid planning new instances. That
however can fail due to invalid references from resources that are
already missing from the state.

A new plan type to handle the concept of the pre-destroy-refresh is
needed here, which should probably be incorporated directly into the
destroy plan, just like the original refresh walk was incorporated into
the normal planning process. That however is major refactoring that is
not appropriate for a patch release.

Instead we make two discrete changes here to prevent blocking a destroy
plan. The first is to use a normal plan to refresh, which will enable
evaluation because missing and inconsistent instances will be planned
for creation and updates, allowing them to be evaluated. That is not
optimal of course, but does revert to the method used by previous
Terraform releases until a better method can be implemented.

The second change is adding a preDestroyRefresh flag to the planning
process. This is checked in any location which evalCheckRules is called,
and lets us change the diagnosticSeverity of the output to only be
warnings, matching the behavior of a normal refresh plan.
2022-11-11 14:33:50 -05:00
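The second change above reduces to a severity decision at each condition check. A stdlib sketch under stated assumptions (the severity values and function are illustrative, standing in for the evalCheckRules call sites): when the preDestroyRefresh flag is set, failed checks surface as warnings instead of errors so they cannot block the destroy plan.

```go
package main

import "fmt"

type severity string

const (
	sevError   severity = "error"
	sevWarning severity = "warning"
)

// checkSeverity returns the diagnostic severity to use for a failed
// condition check during planning.
func checkSeverity(preDestroyRefresh bool) severity {
	if preDestroyRefresh {
		// Match refresh-only plan behavior: warn, don't block the destroy.
		return sevWarning
	}
	return sevError
}

func main() {
	fmt.Println(checkSeverity(true))  // pre-destroy refresh: warning
	fmt.Println(checkSeverity(false)) // normal plan: error
}
```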
James Bardin
8ba8d5aec4 fix typo in upload size check
The scp upload size check had a typo preventing files from reporting
their size, causing an extra temp file to be created.
2022-11-11 14:25:34 -05:00
Liam Cervante
0c7fda1906
Update HCL and go-cty to fix optional and default attributes (#32178)
* Add test cases to verify all the default and optional issues are fixed

* actually commit all the tests

* update go-cty

* Update hcl
2022-11-10 14:00:16 +00:00
Martin Atkins
d0a35c60a7 providercache: Ignore lock-mismatching global cache entries
When we originally introduced the trust-on-first-use checksum locking
mechanism in v0.14, we had to make some tricky decisions about how it
should interact with the pre-existing optional read-through global cache
of provider packages:

The global cache essentially conflicts with the checksum locking because
if the needed provider is already in the cache then Terraform skips
installing the provider from upstream and therefore misses the opportunity
to capture the signed checksums published by the provider developer. We
can't use the signed checksums to verify a cache entry because the origin
registry protocol is still using the legacy ziphash scheme and that is
only usable for the original zipped provider packages and not for the
unpacked-layout cache directory. Therefore we decided to prioritize the
existing cache directory behavior at the expense of the lock file behavior,
making Terraform produce an incomplete lock file in that case.

Now that we've had some real-world experience with the lock file mechanism,
we can see that the chosen compromise was not ideal because it causes
"terraform init" to behave significantly differently in its lock file
update behavior depending on whether or not a particular provider is
already cached. By robbing Terraform of its opportunity to fetch the
official checksums, Terraform must generate a lock file that is inherently
non-portable, which is problematic for any team which works with the same
Terraform configuration on multiple different platforms.

This change addresses that problem by essentially flipping the decision so
that we'll prioritize the lock file behavior over the provider cache
behavior. Now a global cache entry is eligible for use if and only if the
lock file already contains a checksum that matches the cache entry. This
means that the first time a particular configuration sees a new provider
it will always be fetched from the configured installation source
(typically the origin registry) and record the checksums from that source.

On subsequent installs of the same provider version already locked,
Terraform will then consider the cache entry to be eligible and skip
re-downloading the same package.

This intentionally makes the global cache mechanism subordinate to the
lock file mechanism: the lock file must be populated in order for the
global cache to be effective. For those who have many separate
configurations which all refer to the same provider version, they will
need to re-download the provider once for each configuration in order to
gather the information needed to populate the lock file, whereas before
they would have only downloaded it for the _first_ configuration using
that provider.

This should therefore remove the most significant cause of folks ending
up with incomplete lock files that don't work for colleagues using other
platforms, at the expense of bypassing the cache for the first use of
each new package with each new configuration. This tradeoff seems
reasonable because otherwise such users would inevitably need to run
"terraform providers lock" separately anyway, and that command _always_
bypasses the cache. Although this change does decrease the hit rate of the
cache, if we subtract the never-cached downloads caused by
"terraform providers lock" then this is a net benefit overall, and does
the right thing by default without the need to run a separate command.
2022-11-04 16:18:15 -07:00
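The flipped rule at the heart of this change fits in one predicate. This is a hedged, stdlib-only sketch (the types are illustrative, not Terraform's providercache API): a global-cache entry is usable if and only if the dependency lock file already records a checksum matching it; otherwise the provider is fetched from its configured source so the official checksums can be captured.

```go
package main

import "fmt"

// cacheEntryEligible reports whether a global provider cache entry may be
// used: true only when the lock file already contains a checksum matching
// the cache entry's hash.
func cacheEntryEligible(lockedHashes []string, cacheHash string) bool {
	for _, h := range lockedHashes {
		if h == cacheHash {
			return true
		}
	}
	// Not locked yet: install from upstream and record its checksums first.
	return false
}

func main() {
	locked := []string{"h1:abc123"}
	fmt.Println(cacheEntryEligible(locked, "h1:abc123")) // locked: use cache
	fmt.Println(cacheEntryEligible(nil, "h1:abc123"))    // unlocked: download
}
```

This is what makes the cache subordinate to the lock file: the first install of a provider per configuration always goes upstream, and only subsequent installs of the already-locked version hit the cache.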
Brandon Croft
be5984d664
Merge pull request #32004 from hashicorp/brandonc/nested_attr_sensitive
fix: don't reveal nested attributes with sensitive schema
2022-11-02 16:18:04 -06:00
James Bardin
1100eae89f use UIMode instead of 0 changes to detect refresh 2022-11-02 10:56:08 -04:00
James Bardin
cccfa5e4af
Merge pull request #32111 from hashicorp/jbardin/refresh-only-data-read
don't plan data source reads during refresh-only
2022-11-02 08:32:50 -04:00
Liam Cervante
6521355ba5
Convert variable types before applying defaults (#32027)
* Convert variable types before applying defaults

* revert change to unrelated test

* Add another test case to verify behaviour

* update go-cty

* Update internal/terraform/eval_variable.go

Co-authored-by: alisdair <alisdair@users.noreply.github.com>
2022-11-02 09:38:23 +01:00
Graham Davison
6663cde619
Merge pull request #23965 from tpaschalis/disallow-s3-backend-key-trailing-slash
S3 Backend: Bucket key should not contain trailing slash
2022-11-01 13:56:43 -07:00
James Bardin
efd77159dd use key data from plan method for apply 2022-11-01 16:18:38 -04:00