* Add failing test case for the given issue
* pause
* don't use local when sending PR for review
* go get github.com/hashicorp/hcl/v2@v2.16.0
* Update go.mod
---------
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
As explained by the deleted comments, this package was used to identify situations where the `terraform 0.12upgrade` command can help migrate 0.11 syntax. Current versions of Terraform don't include this command, and it's not likely that users are attempting upgrades from 0.11 to 1.4+.
The replacement init swaps the order of the module and backend initialization in order to prepare for the next commit.
Config initialization now takes the following approach:
1. Load the root module, but withhold diagnostic errors until after version check
2. Initialize the backend, but withhold diagnostic errors until after version check
3. Get modules
4. Load all config (root and modules)
5. Check terraform version requirements (this can be defined by nested modules) and display any errors. It's important to show these first because prior errors could be the result of a newer terraform version syntax
6. Finally, show any errors related to backend init or config loading (a sketch of this ordering follows)
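A minimal Go sketch of this ordering, assuming each step returns diagnostics; every helper name here (loadRootModule, initBackend, checkCoreVersionRequirements, and so on) is an illustrative stub, not Terraform's real internal API:

```go
package main

import "fmt"

// diag stands in for tfdiags.Diagnostics; all helpers below are stubs.
type diag string

func loadRootModule() []diag               { return nil } // step 1
func initBackend() []diag                  { return nil } // step 2
func fetchModules() []diag                 { return nil } // step 3
func loadFullConfig() []diag               { return nil } // step 4
func checkCoreVersionRequirements() []diag { return nil } // step 5

func initConfig() []diag {
	// Steps 1-4: run everything, withholding diagnostics for later.
	var held []diag
	held = append(held, loadRootModule()...)
	held = append(held, initBackend()...)
	held = append(held, fetchModules()...)
	held = append(held, loadFullConfig()...)

	// Step 5: show version-requirement errors first, because the held
	// errors may just be newer-version syntax this binary cannot parse.
	if verDiags := checkCoreVersionRequirements(); len(verDiags) > 0 {
		return verDiags
	}

	// Step 6: only now surface backend-init and config-loading errors.
	return held
}

func main() { fmt.Println(initConfig()) }
```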
Although they are not serialized to the final stored state, all module
outputs must be saved in the state for evaluation. There is no defined
schema which is used to identify the overall type of module outputs, so
all outputs must exist in the state to build the correct type for proper
evaluation.
* Add mTLS support for http backend by way of client cert & key, as well as enterprise cacert.
* Fix style.
* Skip cert validation to be sure error is related to missing client cert; not untrusted server cert.
* Remove misplaced err check.
* Fix the size of the test using the http backend.
* Just for correctness, include all certs in the pem encoded cert - sometimes certs come with a chain of their signers.
* Adjusted names as recommended in PR comments.
* Adjusted names to be full-length and more descriptive.
* Added full-fledged testing with mTLS http server
* Fix goimports.
* Fix the names of the backend config.
* Exclusive lock for write and delete.
* Revert "Fix goimports."
This reverts commit 7d40f6099fbbb675fb2e25e35ee40aeafe3d0a22.
* goimports just for server test.
* Added the go:generate directive for the mock.
* Move the TLS configuration out to make it more readable - don't replace the HTTPClient, as retryablehttp already creates one - just configure its TLS (see the sketch after this commit list).
* Just switch the client/data params - felt more natural this way.
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/testdata/gencerts.sh
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* the location of the file name is not sensitive.
* Added error if only one of client_certificate_pem and client_private_key_pem is set.
* Remove testify from test cases; use t.Error* for assert and t.Fatal* for require.
* Fixed import consistency
* Just use default openssl.
* Since file(...) is so trivial to use, changed the client cert, key, and ca cert to be the data.
See also https://github.com/hashicorp/terraform-provider-http/pull/211
Co-authored-by: Sheridan C Rawlins <scr@ouryahoo.com>
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
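A hedged Go sketch of the TLS wiring those commits describe: configure the transport that retryablehttp.NewClient already created rather than replacing its HTTPClient. The configureMTLS name is illustrative, and the PEM byte slices are assumed to have been read from the backend config already:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"

	"github.com/hashicorp/go-retryablehttp"
)

// configureMTLS mutates the TLS config of the client's existing transport.
func configureMTLS(rc *retryablehttp.Client, certPEM, keyPEM, caPEM []byte) error {
	// X509KeyPair accepts a full chain in certPEM, so certs that come
	// with their signers' chain work as-is.
	cert, err := tls.X509KeyPair(certPEM, keyPEM)
	if err != nil {
		return fmt.Errorf("parsing client certificate: %w", err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return fmt.Errorf("no CA certificates found in cacert PEM")
	}
	transport, ok := rc.HTTPClient.Transport.(*http.Transport)
	if !ok {
		return fmt.Errorf("unexpected transport type %T", rc.HTTPClient.Transport)
	}
	transport.TLSClientConfig = &tls.Config{
		Certificates: []tls.Certificate{cert},
		RootCAs:      pool,
	}
	return nil
}

func main() {}
```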
Currently Terraform will use an entry from the global plugin cache only if
it matches a checksum already recorded in the dependency lock file. This
allows Terraform to produce a complete lock file entry on the first
encounter with a new provider, whereas using the cache in that case would
cause the lock file to only cover the single package in the cache and
therefore be unusable on any other operating system or CPU architecture.
This temporary CLI config option is a pragmatic exception to support those
who cannot currently correctly use the dependency lock file but who still
want to benefit from the plugin cache. With this setting enabled,
Terraform has permission to produce a dependency lock file that is only
suitable for the current system if that would allow use of an existing
entry in the plugin cache.
We are introducing this option to resolve a conflict between the needs of
folks who are using the dependency lock file as expected and the needs of
folks who cannot use the dependency lock file for some reason. The hope
then is to give respite to those who need this exception in the meantime
while we understand better why they cannot use the dependency lock file
and improve its design so that everyone will be able to use it
successfully in a future version of Terraform. This option will become a
silent no-op in a future version of Terraform, once the dependency lock
file behavior is sufficient for all supported Terraform development
workflows.
The existing set comparison method uses the prior elements with the computed
portions nulled out to find candidates to match the configuration. This
has the shortcoming of always removing optional+computed attributes,
because we have not yet found the configuration to know whether the attribute was
set or not.
Rather than having to take the most pessimistic value before comparison
to precompute the nulled values, we can compare each candidate directly,
walking the values in tandem. Each prior value is compared against the
config and checked to see if it could have been derived from that
configuration value, which allows us to treat optional+computed as
optional if there is config and computed if there is not.
This removes the ambiguity from having optional+computed attributes
within sets, giving us consistent plans when all values are known.
Unknown values of course are still undecidable, as are edge cases where
providers refresh with altered values, or where the plan retained
changed prior values that were deemed not functionally significant.
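A toy Go sketch of the tandem walk, comparing a prior value directly against configuration; it ignores unknowns and the schema lookups that the real objchange code performs, so the name and structure are illustrative only:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// couldBeDerivedFrom reports whether prior could have come from config:
// attributes set in config must match, while attributes left null in
// config are treated as computed and may hold any prior value.
func couldBeDerivedFrom(prior, config cty.Value) bool {
	if config.IsNull() {
		return true // wholly computed: any prior value is acceptable
	}
	if !config.Type().IsObjectType() {
		return prior.RawEquals(config)
	}
	if prior.IsNull() {
		return false // config sets something, but prior has nothing
	}
	for name := range config.Type().AttributeTypes() {
		if config.GetAttr(name).IsNull() {
			continue // optional+computed with no config: treat as computed
		}
		if !couldBeDerivedFrom(prior.GetAttr(name), config.GetAttr(name)) {
			return false
		}
	}
	return true
}

func main() {
	prior := cty.ObjectVal(map[string]cty.Value{
		"name": cty.StringVal("a"),
		"id":   cty.StringVal("set-by-provider"),
	})
	config := cty.ObjectVal(map[string]cty.Value{
		"name": cty.StringVal("a"),
		"id":   cty.NullVal(cty.String), // computed, not set in config
	})
	fmt.Println(couldBeDerivedFrom(prior, config)) // true
}
```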
Unify the ProposedNew paths for Blocks and Objects. Break out the
individual case blocks into functions, then use a common interface to
dispatch the object creation to the correct function based on schema
type. This cuts the code in half, and prevents the block and object
behavior from diverging.
NestingMap structures are not well tested, and we panic in many
situations when null crops up. Fix the first test cases and start
refactoring as best we can. This probably won't go so far as making all the
objchange functions generic over Block and Object, but we can simplify a
lot and verify parity in implementations for now.
We can check if an object in state must have at least partially come
from configuration, by seeing if the prior value has any non-null
attributes which are not computed in the schema.
This is used when the configuration contains a null optional+computed
value, and we want to know if we should plan to send the null value or
the prior state.
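A minimal sketch of that check; the computed map is an illustrative stand-in for the real schema lookup:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// partiallyFromConfig reports whether a prior object must have come at
// least partially from configuration: any non-null attribute that the
// schema does not mark as computed can only have been set in config.
func partiallyFromConfig(prior cty.Value, computed map[string]bool) bool {
	if prior.IsNull() {
		return false
	}
	for name := range prior.Type().AttributeTypes() {
		if !computed[name] && !prior.GetAttr(name).IsNull() {
			return true
		}
	}
	return false
}

func main() {
	prior := cty.ObjectVal(map[string]cty.Value{
		"id":   cty.StringVal("x"),           // computed in the schema
		"name": cty.StringVal("from-config"), // optional, so from config
	})
	fmt.Println(partiallyFromConfig(prior, map[string]bool{"id": true})) // true
}
```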
This was clearly wrong, but it was also harmless -- in the event of a failing
test due to missing tags, they would get double-reported as both missing and
unexpected. This commit separates out the reporting as intended.
Go's `append()` reserves the right to mutate its primary argument in-place, and
expects the caller to assign its return value to the same variable that was
passed as the primary argument. Due to what was almost definitely a typo
(followed by copy-paste mishap), the configschema `Block.ValueMarks` and
`Object.ValueMarks` functions were treating it like an immutable function that
returns a new slice.
In rare and hard-to-reproduce cases, this was causing bizarre malfunctions when
marking sensitive schema attributes in deeply-nested block structures --
omitting the marks for some sensitive values (🚨), and marking other entire
blocks as sensitive (which is supposed to be impossible). The chaotic and
unreliable nature of the bugs is likely related to `append()`'s automatic slice
reallocation behavior (if the append operation overflows the original array
allocation, the resulting behavior can _look_ immutable), but there might be
other contributing factors too.
This commit fixes existing instances of the problem, and wraps the desired
copy-and-append behavior in a helper function to simplify handling shared parent
paths in an immutable way.
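A short Go demonstration of the aliasing hazard and of the copy-and-append fix; copyAppend is an illustrative name for the helper, and cty.Path is the attribute-path type these functions build:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// copyAppend never mutates parent's backing array, so sibling paths
// built from the same parent cannot alias each other.
func copyAppend(parent cty.Path, step cty.PathStep) cty.Path {
	next := make(cty.Path, len(parent), len(parent)+1)
	copy(next, parent)
	return append(next, step)
}

func main() {
	parent := make(cty.Path, 0, 4) // spare capacity, as append often leaves
	parent = append(parent, cty.GetAttrStep{Name: "block"})

	// Buggy pattern: both appends write into parent's shared backing
	// array, so the second silently overwrites the first.
	a := append(parent, cty.GetAttrStep{Name: "secret"})
	b := append(parent, cty.GetAttrStep{Name: "public"})
	fmt.Println(a, b) // a's last step has become "public" too

	// Fixed pattern: each sibling path gets its own backing array.
	c := copyAppend(parent, cty.GetAttrStep{Name: "secret"})
	d := copyAppend(parent, cty.GetAttrStep{Name: "public"})
	fmt.Println(c, d)
}
```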
Combine and simplify the set comparison functions for NestingSet blocks
and attribute types.
The set handling for structural attributes was not recursing into nested
values. Once a simplified method for comparing set elements was devised
for nested types, it turned out the same method could be applied to
nested set blocks as well.
* Use the new structured renderer in place of the old diffs package
* remove old plan tests
* refresh-only plans should show moved resources in the refresh section
When structural attributes were added, optional+computed attributes were
not correctly handled when containing nested values which could themselves
be computed. This would cause Terraform to ignore previously computed
values from state when generating the proposed plan.
The special case for optional+computed was incorrect, but isn't needed
in the context of planning new values anyway. Attributes are either
computed, or not computed. When optional+computed is set and there is
no configuration, the attribute is treated as computed. It is up to the
provider to determine how and when to deal with any changes to that
computed value.
* remove attributes that do not match the relevant attributes filter
* fix formatting
* fix renderer function, don't drop irrelevant attributes just mark them as no-ops
* fix imports
* fix bugs in the renderer exposed by the equivalence tests
* imports
* gofmt
* Add consolidated function description list
* Add function parameter descriptions
* Add descriptions to all functions
* Add sanity test for function descriptions
* Apply suggestions from code review
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
The configuration may be supplying a typed null value to the
terraform_data.input attribute, which must be reflected in the output to
have a valid plan.
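A tiny cty illustration of that requirement, assuming the input were a typed null list of strings:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// If config supplies a typed null for terraform_data.input, the
	// planned output must be null of the *same* type to be consistent.
	input := cty.NullVal(cty.List(cty.String))
	output := cty.NullVal(input.Type())
	fmt.Println(output.Type().Equals(input.Type())) // true
}
```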
* raw unmodified broken tests
* tests execute, no panics
* fix whitespace differences
* fix all the tests
* fix tests
* actually fix tests
* add missing plan metadata into the renderer
* address comments
* complete merge
* remove TODO raising questions about outputs, they are fixed
* missing bold on plan
* pause implementation
* change -> diff, value -> change
* add support for json and multiline strings to the primitive renderer
* goimports
* remove unused function
* go fmt
* address comments
* change -> diff, value -> change
* also update readme
* pause
* Update internal/command/jsonformat/computed/diff.go
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
* add interface assertions for diff renderers
* Add support for different kinds of blocks, and for sensitive blocks
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
* prep for processing the structured run output
* undo unwanted change to a json key
* Add skeleton functions and API for refactored renderer
* goimports
* Fix documentation of the RenderOpts struct
* Add rendering functionality for primitives to the structured renderer
* add test case for override
* Add support for parsing and rendering sensitive values in the renderer
* Add support for unknown/computed values in the structured renderer
* delete missing unit tests
* Add support for object attributes in the structured renderer
* goimports
* Add support for the replace paths data in the structured renderer
* Add support for maps in the structured renderer
* Add support for lists in the structured renderer
* goimports
* Add support for sets in the structured renderer
* goimports
* Add support for blocks in the structured renderer
* goimports
* Add support for outputs in the structured renderer
* fix ordering of blocks
* remove unused test stub
* fix typo
* add additional comments explaining
* Add README explaining implementation details for renderer and plans for future expansion
* Update internal/command/jsonformat/README.md
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
* address comments
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
Output references must also include the error_message expression.
Fix the early return in referencesForOutput, which could skip
preconditions. The small slice allocation optimization is not really
needed here, since this is not a hot path at all.
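A toy sketch of the fixed shape of referencesForOutput, using stand-in types since the real version walks hcl expressions and returns address references:

```go
package main

import "fmt"

// expr and refs are stand-ins for hcl.Expression and reference analysis.
type expr string

func refs(e expr) []string { return []string{string(e)} }

type check struct{ condition, errorMessage expr }

type output struct {
	value         expr
	preconditions []check
}

// referencesForOutput collects references from the value expression and
// from every precondition, including its error_message, with no early
// return that could skip the checks.
func referencesForOutput(o output) []string {
	all := refs(o.value)
	for _, c := range o.preconditions {
		all = append(all, refs(c.condition)...)
		all = append(all, refs(c.errorMessage)...) // previously missed
	}
	return all
}

func main() {
	o := output{
		value:         "local.result",
		preconditions: []check{{"var.ok", "var.reason"}},
	}
	fmt.Println(referencesForOutput(o)) // [local.result var.ok var.reason]
}
```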
NestingSingle blocks removed from the config were causing a plan to
error out with "... planned for existence but config wants absence".
Terraform core was proposing an incorrect value in this case, taking the
prior instead as a fallback because a null value was not expected.
Unlike other collection nesting modes, a NestingSingle block not present
in the configuration is a null value, and should be allowed when
planning a new value rather than building an empty object or falling
back to the prior value.
Using ignore_changes with a list block, where the provider returned an
invalid null value for that block, can result in a panic when validating
the plan.
Future releases may prevent providers from storing a null block in
state, however we can avoid the panic for now. Only the NestingList case
needs to be handled, because legacy providers only have list and set
blocks, and the set case does not use the config value.
Legacy providers may return null values for nested blocks during
refresh. Because the ReadResource call needs to accept any value to
allow the provider to report external changes, we allowed all changes to
the value as long as the underlying cty.Type was correct, allowing
null block values to be inserted into the state.
While technically invalid, we needed to accept these null values for
compatibility, and they were mostly seen as a nuisance, causing noise in
external changes and plan output. These null block values however can be
inserted into the effective configuration with the use of
`ignore_changes`, which can cause problems where the configuration is
assumed to be completely valid.
Rather than accept the null values, we can insert empty container values
for these blocks when refreshing the instance, which will prevent any
invalid values from entering state at all. Because these must still be
accepted for compatibility, we can only log the difference as a warning.
Currently the NormalizeObjectFromLegacySDK does not report which
specific blocks it fixed, so we just log a generic message.
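A minimal cty sketch of the normalization, assuming a list-mode nested block; the block's object type here is illustrative:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// The object type a nested list block's elements would have.
	blockTy := cty.Object(map[string]cty.Type{"port": cty.Number})

	// What a legacy provider might return for the block during refresh.
	refreshed := cty.NullVal(cty.List(blockTy))

	// Normalize: store an empty container instead of letting a null
	// block value enter state (and log the difference as a warning).
	if refreshed.IsNull() {
		refreshed = cty.ListValEmpty(blockTy)
	}
	fmt.Println(refreshed.LengthInt()) // 0
}
```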
Make writing a plan file the default. We already create plans which have
no changes, and the plan result already needs to be checked in
automation, so having plans with errors should not pose a problem.
If we find workflows which cannot handle a plan that can't be applied,
we can reevaluate the need for a specialized flag. In the meantime, it
feels more logical that the plan output would always describe the result
of the plan, even if that included errors.
* Use the apparentlymart/go-versions library to parse module constraints
* goimports
* Update comments, and parse versions carefully
* add acceptance tests to verify behaviour of partial matches
* goimports
This is a prototype of how the CLI layer might make use of Terraform
Core's ability to produce a partial plan if it encounters an error during
planning, with two new situations:
- When using local CLI workflow, Terraform will show the partial plan
before showing any errors.
- "terraform plan" has a new option -always-out=..., which is similar to
the existing -out=... but additionally instructs Terraform to produce
a plan file even if the plan is incomplete due to errors. This means
that the plan can still be inspected by external UI implementations.
This is just a prototype to explore how these parts might fit together.
It's not a complete implementation and so should not be shipped. In
particular, it doesn't include any mention of a plan being incomplete in
the "terraform show -json" output or in the "terraform plan -json" output,
both of which would be required for a complete solution.
In any situation where we return a plan object along with some errors
we'll also explicitly annotate the plan object as being errored so that
we can catch if someone accidentally tries to apply that incomplete plan.
At the moment this situation is impossible to reach but in a later commit
we'll make it possible to save errored plans to disk for further
inspection, at which point it'll become important to not allow applying
them.
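A toy sketch of the guard this annotation enables; the plan struct and its Errored field merely mirror the idea and are not the exact plans.Plan API:

```go
package main

import (
	"errors"
	"fmt"
)

// plan stands in for plans.Plan, annotated when created alongside errors.
type plan struct {
	Errored bool
}

func apply(p *plan) error {
	if p.Errored {
		// Refuse to apply an incomplete plan saved from an errored run.
		return errors.New("cannot apply incomplete plan: plan was created with errors")
	}
	return nil
}

func main() {
	fmt.Println(apply(&plan{Errored: true}))
}
```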
For some kinds of plan failure we will already have successfully completed
planning for at least one upstream object before encountering a downstream
error.
Since a downstream failure can be caused by an already-recorded action
from upstream, it might be helpful to inspect the actions planned so far
in order to understand better why the error occurred.
This doesn't yet make this result visible anywhere, and is backward
compatible with existing callers because they currently entirely ignore
the returned plan pointer if the diagnostics contains at least one error.
This includes the fix for a bug in what Terraform calls the "yamldecode"
function, where it was not correctly handling any situation where the
decode result is a null value. It was previously returning an unknown
value in that case, whereas now it returns a null value as expected.
Outputs were being evaluated from changes, even during apply. Make sure
we update the state correctly, and remove the existing change. This
requires adding more Planning fields to the output nodes to
differentiate whether the output is being planned or applied because the
same type handles both cases. We can evaluate separately whether new
types should be introduced to deal with both cases.
The module node cleanup was also prematurely removing module outputs
from the state before evaluation. This was not noticed before because
the evaluation was always falling back to changes. Have the root module
node do the final cleanup for all its children.
It turns out sensitivity was also being handled incorrectly, and only
the sensitive flag from configuration was being considered. Make sure to
mark the output as sensitive when storing sensitive values into state,
and OR the sensitivity marks from state with the configuration flag when
evaluating the output values.
When output values are updated in the refreshed state, we don't need to
re-set the changes which were already set in conjunction with the
current state.
Once the ProviderTransformer has resolved and set the exact provider,
the ProvidedBy method should return that exact provider again.
We can hoist the stored provider addr into the AbstractInstance and
avoid the method duplication and slight differences between the
implementations.
If when refreshing an orphaned instance the provider indicates it has
already been deleted, there is no reason to create a change for that
instance. A NoOp change should only represent an object that exists and
is not changing.
This was likely left in place in order to provide a record of
the change for external consumers of the plan, but newer plans also
contain all changes made outside of Terraform which better accounts for
the difference. The NoOp change now can cause problems, because it may
represent an instance with conditions to check even though that instance
does not exist.
In a heavily-connected graph with lots of inter-dependent providers, the
cycle checks for destroy edges across providers can seriously impact
performance. Since the specific cases we need to avoid will involve
create/update nodes, skip the extra checks during a full destroy
operation. Once we find a way to better track these dependencies, the
transformer will not need to do the cycle checks in the first place.
Data resource dependencies are not stored in the state, so we need to
take the latest dependency set to use for any direct connections to
destroy nodes.
Some prior refactors left the destroyPlan method a bit confusing, and ran
into a case where the previous run state could be returned as nil.
Get rid of the no longer used pendingPlan value, and track the prior and
prev states directly, making sure we always have a value for both.
In order to complete the terraform destroy command, a refresh must first
be done to update state and remove any instances which have already been
deleted externally. This was being done with a refresh plan, which will
avoid any condition evaluations and avoid planning new instances. That
however can fail due to invalid references from resources that are
already missing from the state.
A new plan type to handle the concept of the pre-destroy-refresh is
needed here, which should probably be incorporated directly into the
destroy plan, just like the original refresh walk was incorporated into
the normal planning process. That however is major refactoring that is
not appropriate for a patch release.
Instead we make two discrete changes here to prevent blocking a destroy
plan. The first is to use a normal plan to refresh, which will enable
evaluation because missing and inconsistent instances will be planned
for creation and updates, allowing them to be evaluated. That is not
optimal of course, but does revert to the method used by previous
Terraform releases until a better method can be implemented.
The second change is adding a preDestroyRefresh flag to the planning
process. This is checked in any location which evalCheckRules is called,
and lets us change the diagnosticSeverity of the output to only be
warnings, matching the behavior of a normal refresh plan.
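A minimal sketch of the flag's effect at each call site, with toy severity values standing in for tfdiags severities:

```go
package main

import "fmt"

type severity string

const (
	diagError   severity = "error"
	diagWarning severity = "warning"
)

// checkRuleSeverity mirrors the preDestroyRefresh behavior described
// above: during the pre-destroy refresh, failed condition checks are
// reported as warnings, matching a normal refresh-only plan.
func checkRuleSeverity(preDestroyRefresh bool) severity {
	if preDestroyRefresh {
		return diagWarning
	}
	return diagError
}

func main() {
	fmt.Println(checkRuleSeverity(true))  // warning
	fmt.Println(checkRuleSeverity(false)) // error
}
```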
When we originally introduced the trust-on-first-use checksum locking
mechanism in v0.14, we had to make some tricky decisions about how it
should interact with the pre-existing optional read-through global cache
of provider packages:
The global cache essentially conflicts with the checksum locking because
if the needed provider is already in the cache then Terraform skips
installing the provider from upstream and therefore misses the opportunity
to capture the signed checksums published by the provider developer. We
can't use the signed checksums to verify a cache entry because the origin
registry protocol is still using the legacy ziphash scheme and that is
only usable for the original zipped provider packages and not for the
unpacked-layout cache directory. Therefore we decided to prioritize the
existing cache directory behavior at the expense of the lock file behavior,
making Terraform produce an incomplete lock file in that case.
Now that we've had some real-world experience with the lock file mechanism,
we can see that the chosen compromise was not ideal because it causes
"terraform init" to behave significantly differently in its lock file
update behavior depending on whether or not a particular provider is
already cached. By robbing Terraform of its opportunity to fetch the
official checksums, Terraform must generate a lock file that is inherently
non-portable, which is problematic for any team which works with the same
Terraform configuration on multiple different platforms.
This change addresses that problem by essentially flipping the decision so
that we'll prioritize the lock file behavior over the provider cache
behavior. Now a global cache entry is eligible for use if and only if the
lock file already contains a checksum that matches the cache entry. This
means that the first time a particular configuration sees a new provider
it will always be fetched from the configured installation source
(typically the origin registry) and record the checksums from that source.
On subsequent installs of a provider version that is already locked,
Terraform will then consider the cache entry to be eligible and skip
re-downloading the same package.
This intentionally makes the global cache mechanism subordinate to the
lock file mechanism: the lock file must be populated in order for the
global cache to be effective. For those who have many separate
configurations which all refer to the same provider version, they will
need to re-download the provider once for each configuration in order to
gather the information needed to populate the lock file, whereas before
they would have only downloaded it for the _first_ configuration using
that provider.
This should therefore remove the most significant cause of folks ending
up with incomplete lock files that don't work for colleagues using other
platforms, at the expense of bypassing the cache for the first use of
each new package with each new configuration. This tradeoff seems
reasonable because otherwise such users would inevitably need to run
"terraform providers lock" separately anyway, and that command _always_
bypasses the cache. Although this change does decrease the hit rate of the
cache, if we subtract the never-cached downloads caused by
"terraform providers lock" then this is a net benefit overall, and does
the right thing by default without the need to run a separate command.
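A toy sketch of the eligibility rule; the map of locked hashes stands in for the dependency lock file entry for one provider version:

```go
package main

import "fmt"

// cacheEntryEligible: a global cache entry may satisfy installation if
// and only if the lock file already records a matching checksum.
func cacheEntryEligible(lockedHashes map[string]bool, cacheEntryHash string) bool {
	return lockedHashes[cacheEntryHash]
}

func main() {
	locked := map[string]bool{"h1:abc123": true}
	fmt.Println(cacheEntryEligible(locked, "h1:abc123")) // true: reuse cache
	fmt.Println(cacheEntryEligible(locked, "h1:def456")) // false: fetch upstream
}
```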
* Convert variable types before applying defaults (see the sketch after this commit list)
* revert change to unrelated test
* Add another test case to verify behaviour
* update go-cty
* Update internal/terraform/eval_variable.go
Co-authored-by: alisdair <alisdair@users.noreply.github.com>
Co-authored-by: alisdair <alisdair@users.noreply.github.com>
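A toy sketch of the ordering fix: convert the given value to the declared type first, then apply defaults. convert.Convert is the real go-cty helper, while applyDefaults is a simplified stand-in for Terraform's optional-attribute defaults machinery:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

// applyDefaults fills null attributes from a map of default values.
func applyDefaults(v cty.Value, defaults map[string]cty.Value) cty.Value {
	attrs := v.AsValueMap()
	for name, dv := range defaults {
		if av, ok := attrs[name]; ok && av.IsNull() {
			attrs[name] = dv
		}
	}
	return cty.ObjectVal(attrs)
}

func main() {
	ty := cty.Object(map[string]cty.Type{
		"name": cty.String,
		"port": cty.Number,
	})

	// A map(string) value, as a variable expression might produce.
	given := cty.MapVal(map[string]cty.Value{
		"name": cty.StringVal("a"),
		"port": cty.NullVal(cty.String),
	})

	// Convert to the declared type *before* applying defaults, per the fix.
	converted, err := convert.Convert(given, ty)
	if err != nil {
		panic(err)
	}
	fmt.Println(applyDefaults(converted, map[string]cty.Value{
		"port": cty.NumberIntVal(8080),
	}))
}
```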
We need to avoid re-writing the state for every NoOp apply. We may
still be evaluating the instance to account for any side-effects in the
condition checks, however the state of the instance has not changed.
Re-writing the state is a non-trivial operation, which may require
encoding a fairly large instance state and re-serializing the entire
state blob, so it is best avoided if possible.
Ensure that empty check results are normalized in state serialization to
prevent unexpected state changes from being written.
Because there is no consistent empty, null and omit_empty usage for
state structs, there's no good way to create a test which will fail
for future additions.
If there are no changes, then there is no reason to create an apply
graph since all objects are known. We however do need the walk to match
the expected state structure. This is probably only cleanup of empty
nested modules and outputs, but some investigation is needed before
making the full change.
For now we can store the checks from the plan directly into the new
state, since the apply walk overwrote the results we had already.