When we originally introduced the trust-on-first-use checksum locking
mechanism in v0.14, we had to make some tricky decisions about how it
should interact with the pre-existing optional read-through global cache
of provider packages:
The global cache essentially conflicts with the checksum locking because
if the needed provider is already in the cache then Terraform skips
installing the provider from upstream and therefore misses the opportunity
to capture the signed checksums published by the provider developer. We
can't use the signed checksums to verify a cache entry because the origin
registry protocol is still using the legacy ziphash scheme and that is
only usable for the original zipped provider packages and not for the
unpacked-layout cache directory. Therefore we decided to prioritize the
existing cache directory behavior at the expense of the lock file behavior,
making Terraform produce an incomplete lock file in that case.
Now that we've had some real-world experience with the lock file mechanism,
we can see that the chosen compromise was not ideal because it causes
"terraform init" to behave significantly differently in its lock file
update behavior depending on whether or not a particular provider is
already cached. Because the cache robs Terraform of its opportunity to
fetch the official checksums, Terraform must generate a lock file that is
inherently non-portable, which is problematic for any team that works with
the same Terraform configuration on multiple different platforms.
This change addresses that problem by essentially flipping the decision so
that we'll prioritize the lock file behavior over the provider cache
behavior. Now a global cache entry is eligible for use if and only if the
lock file already contains a checksum that matches the cache entry. This
means that the first time a particular configuration sees a new provider,
Terraform will always fetch it from the configured installation source
(typically the origin registry) and record the checksums from that source.
On subsequent installs of that now-locked provider version, Terraform will
consider the cache entry eligible and skip re-downloading the package.
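As a minimal sketch of that eligibility rule (the types and function names
here are hypothetical stand-ins, not Terraform's real provider installer
internals):

```go
package main

import "fmt"

// LockedHashes maps a provider source address to the checksums recorded
// for it in the dependency lock file. (Illustrative shape only.)
type LockedHashes map[string][]string

// cacheEntryEligible reports whether a global cache entry may be used:
// only when the lock file already records a matching checksum.
func cacheEntryEligible(lock LockedHashes, provider, cacheChecksum string) bool {
	for _, h := range lock[provider] {
		if h == cacheChecksum {
			return true // already locked: safe to reuse the cache entry
		}
	}
	// Not locked yet: install from the configured source so the signed
	// checksums can be captured into the lock file first.
	return false
}

func main() {
	lock := LockedHashes{
		"registry.terraform.io/hashicorp/aws": {"h1:example-checksum"},
	}
	fmt.Println(cacheEntryEligible(lock, "registry.terraform.io/hashicorp/aws", "h1:example-checksum")) // true
	fmt.Println(cacheEntryEligible(lock, "registry.terraform.io/hashicorp/null", "h1:other"))           // false
}
```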
This intentionally makes the global cache mechanism subordinate to the
lock file mechanism: the lock file must be populated in order for the
global cache to be effective. Those who have many separate
configurations that all refer to the same provider version will need to
re-download the provider once for each configuration in order to gather
the information needed to populate the lock file, whereas before they
would have downloaded it only for the _first_ configuration using that
provider.
This should therefore remove the most significant cause of folks ending
up with incomplete lock files that don't work for colleagues using other
platforms, at the expense of bypassing the cache for the first use of
each new package with each new configuration. This tradeoff seems
reasonable because otherwise such users would inevitably need to run
"terraform providers lock" separately anyway, and that command _always_
bypasses the cache. Although this change does decrease the hit rate of the
cache, if we subtract the never-cached downloads caused by
"terraform providers lock" then this is a net benefit overall, and does
the right thing by default without the need to run a separate command.
* Convert variable types before applying defaults (see the sketch below)
* revert change to unrelated test
* Add another test case to verify behaviour
* update go-cty
* Update internal/terraform/eval_variable.go
Co-authored-by: alisdair <alisdair@users.noreply.github.com>
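As a rough sketch of the ordering named in the first item above, using
go-cty's convert package; applyDefaults here is a hypothetical stand-in for
the real defaults machinery in internal/terraform/eval_variable.go:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

// applyDefaults is a hypothetical stand-in; the point here is only the
// ordering (convert first), not the defaults logic itself.
func applyDefaults(v cty.Value) cty.Value { return v }

func main() {
	// The variable's declared type constraint.
	ty := cty.Object(map[string]cty.Type{"name": cty.String})

	// The caller supplied a map, not an object.
	given := cty.MapVal(map[string]cty.Value{"name": cty.StringVal("example")})

	// Convert to the declared type *before* applying defaults, so the
	// defaults are applied to a value of the expected shape.
	converted, err := convert.Convert(given, ty)
	if err != nil {
		panic(err)
	}
	fmt.Println(applyDefaults(converted).GoString())
}
```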
We need to avoid re-writing the state for every NoOp apply. We may
still be evaluating the instance to account for any side-effects in the
condition checks; however, the state of the instance has not changed.
Re-writing the state is a non-concurrent operation, which may require
encoding a fairly large instance state and re-serializing the entire
state blob, so it is best avoided if possible.
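A minimal sketch of that guard, with hypothetical names rather than
Terraform's actual apply code:

```go
package main

import "fmt"

// Action and the types below are illustrative only.
type Action int

const (
	NoOp Action = iota
	Update
)

type instanceState struct{ attrsJSON string }

// writeState stands in for the expensive path: encoding the instance and
// re-serializing the entire state blob.
func writeState(s *instanceState) { fmt.Println("state written:", s.attrsJSON) }

func maybeWriteState(action Action, prior, result *instanceState) {
	if action == NoOp && prior.attrsJSON == result.attrsJSON {
		// The instance may still have been evaluated for condition-check
		// side effects, but its state is unchanged: skip the write.
		return
	}
	writeState(result)
}

func main() {
	s := &instanceState{attrsJSON: `{"id":"a"}`}
	maybeWriteState(NoOp, s, s)                                         // skipped
	maybeWriteState(Update, s, &instanceState{attrsJSON: `{"id":"b"}`}) // written
}
```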
Ensure that empty check results are normalized in state serialization to
prevent unexpected state changes from being written.
Because there is no consistent empty, null, and omitempty usage across
the state structs, there's no good way to create a test that will fail
for future additions.
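To illustrate the normalization with a toy struct (not the real state
types; the field deliberately lacks omitempty, mirroring the inconsistency
described above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type stateFile struct {
	CheckResults []string `json:"check_results"` // nil -> null, [] -> []
}

// normalize makes the empty and nil forms serialize identically, so a
// round-trip never reports a spurious state change.
func normalize(s *stateFile) {
	if len(s.CheckResults) == 0 {
		s.CheckResults = nil
	}
}

func main() {
	a := stateFile{CheckResults: []string{}} // empty
	b := stateFile{}                         // nil

	normalize(&a)
	normalize(&b)

	aj, _ := json.Marshal(a)
	bj, _ := json.Marshal(b)
	fmt.Println(string(aj), string(bj), string(aj) == string(bj)) // identical
}
```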
If there are no changes, then there is no reason to create an apply
graph, since all objects are known. However, we do still need the walk to
match the expected state structure. This is probably only cleanup of empty
nested modules and outputs, but some investigation is needed before
making the full change.
For now we can store the checks from the plan directly into the new
state, since the apply walk overwrote the results we already had.
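A hypothetical sketch of that interim approach, with stand-in types:

```go
package main

import "fmt"

// Illustrative shapes; the real plan and state types are far richer.
type plan struct{ checkResults []string }
type state struct{ checkResults []string }

// walk stands in for the apply walk, which still runs here only so the
// state ends up with the expected structure (e.g. pruning empty nested
// modules and outputs). As a side effect it clobbers the check results.
func walk(prior *state) *state { return &state{} }

func applyNoChanges(p *plan, prior *state) *state {
	next := walk(prior)
	// The walk overwrote the results we already had, so copy the checks
	// from the plan directly into the new state.
	next.checkResults = p.checkResults
	return next
}

func main() {
	next := applyNoChanges(&plan{checkResults: []string{"check.example: pass"}}, &state{})
	fmt.Println(next.checkResults)
}
```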
* Update gcs.mdx
Updated GCS remote backend docs to include the >=0.12 remote backend data source formatting (incl. .outputs before the output name)
* Update gcs.mdx
removed old data source example
* Update gcs.mdx
updated to local_file and output->outputs
* Update gcs.mdx
removed "outputs" from the pre 0.12 ref
When executing a refresh-only plan, it is not valid to plan a data
source read. If the data source config is not known during planning, the
only valid update would be the prior state, if there is any.
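Sketched with hypothetical names, the rule looks roughly like this:

```go
package main

import "fmt"

// refreshOnlyDataResult illustrates the rule: in a refresh-only plan a
// data source read is never planned; when the config is not fully known,
// the prior state (if any) is the only valid result.
func refreshOnlyDataResult(configKnown bool, prior *string) *string {
	if !configKnown {
		return prior // may be nil if the data source was never read
	}
	fresh := "read during refresh"
	return &fresh
}

func main() {
	old := "prior value"
	if r := refreshOnlyDataResult(false, &old); r != nil {
		fmt.Println(*r) // keeps the prior state
	}
	if r := refreshOnlyDataResult(false, nil); r == nil {
		fmt.Println("no prior state: nothing to update")
	}
}
```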
Previously, there was mixed usage of "(sensitive)" and "(sensitive value)". Even though "(sensitive)" was more common, the thinking is that it's a value we are hiding rather than describing something already shown.
NoOp changes should not participate in a destroy sequence, but because
they are included as normal update nodes, the usual connections were
still being made.
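One way to picture the intended behavior, with hypothetical names (the
real change is in Terraform's graph wiring, not shown here):

```go
package main

import "fmt"

type node struct {
	name string
	noOp bool // planned change is a NoOp
}

// connectDestroyEdges wires destroy-ordering edges, skipping NoOp nodes
// instead of connecting them like normal update nodes.
func connectDestroyEdges(nodes []node) {
	for _, n := range nodes {
		if n.noOp {
			continue // NoOp changes take no part in the destroy sequence
		}
		fmt.Println("destroy edge for", n.name)
	}
}

func main() {
	connectDestroyEdges([]node{
		{name: "aws_instance.a", noOp: false},
		{name: "aws_instance.b", noOp: true}, // skipped
	})
}
```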
Refreshing for a destroy should use the refresh-only plan to avoid
planning new objects or evaluating conditions. This should also be
skipped if there is no state, since there would be nothing to refresh.
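A hypothetical sketch of that sequencing:

```go
package main

import "fmt"

type state struct{ resources []string }

func (s *state) empty() bool { return s == nil || len(s.resources) == 0 }

// planDestroy refreshes via a refresh-only plan (so no new objects are
// planned and no conditions are evaluated), skipping the refresh entirely
// when there is no state to refresh.
func planDestroy(prior *state) {
	if !prior.empty() {
		fmt.Println("refresh-only plan over:", prior.resources)
	}
	fmt.Println("plan destroy actions")
}

func main() {
	planDestroy(&state{resources: []string{"aws_instance.a"}})
	planDestroy(nil) // no state: nothing to refresh
}
```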