HashiCorp legal now requires a copyright claim in a comment at the top of
every substantial file in this repository. If we don't add this ourselves
then a bot will open a PR to add missing entries, but that process adds
git history, pull request, and GitHub notification noise, so instead we'll
deal with it proactively as part of our usual code generation steps.
This means that pull requests will fail their checks if there are any
files that lack copyright headers, so we can deal with those before we
merge rather than in a subsequent PR.
Providers that existed prior to refinements (all of them, at the time of
writing) cannot preserve refinements sent in unknown values in the
configuration, and even if one day providers _are_ aware of refinements,
we might later add new ones that existing providers don't know how to
handle.
For that reason we'll absolve providers of the responsibility of
preserving refinements from config into plan by fixing some cases where
we were incorrectly using RawEquals to compare values; that function isn't
appropriate for comparing values that might be unknown.
However, to avoid a disruptive change right now this initial fix just
strips off the refinements before comparing. Ideally this should be using
Value.Equals and handling unknown values more explicitly, but we'll save
that for a possible later improvement.
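As an illustration of the interim approach, here's a minimal sketch of
stripping refinements before a RawEquals comparison, using cty's Transform
helper; the name unrefinedValue is illustrative, not necessarily the helper
Terraform uses internally:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// unrefinedValue returns a value equivalent to v except that any unknown
// values within it have their refinements discarded, so that RawEquals
// compares only types and known values.
func unrefinedValue(v cty.Value) cty.Value {
	ret, _ := cty.Transform(v, func(_ cty.Path, v cty.Value) (cty.Value, error) {
		if !v.IsKnown() {
			// A freshly-constructed unknown value carries no refinements.
			return cty.UnknownVal(v.Type()), nil
		}
		return v, nil
	})
	return ret
}

func main() {
	configVal := cty.UnknownVal(cty.String).RefineNotNull() // refined by Terraform
	plannedVal := cty.UnknownVal(cty.String)                // provider dropped the refinement

	fmt.Println(plannedVal.RawEquals(configVal))                                 // false
	fmt.Println(unrefinedValue(plannedVal).RawEquals(unrefinedValue(configVal))) // true
}
```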
This does not include a similar exception for validating whether a final
value conforms to a plan because the plan value and the final value are
both produced by the same provider and so providers ought to be able to
be consistent with their _own_ treatment of refinements, if any.
Configuration is special because Terraform itself generates that, and so
it can potentially contain refinements that a particular provider has no
awareness of.
If the original value was unknown but its range was refined then the
provider must return a value that is within the refined range, because
otherwise downstream planning decisions could be invalidated.
This relies on cty's definition of whether a value is in a refined range,
which has pretty good coverage for the "false" case and so should give a
pretty good signal, but it'll probably improve over time and so providers
must not rely on any loopholes in the current implementation and must
keep their promises even if Terraform can't currently check them.
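A minimal sketch of what such a check can look like, assuming cty's
ValueRange.Includes API; the surrounding error reporting is simplified and
the names are illustrative:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// checkRefinedRange returns an error only if the final value is definitely
// outside the refined range of the original unknown value.
func checkRefinedRange(original, final cty.Value) error {
	in := original.Range().Includes(final)
	if in.IsKnown() && in.False() {
		return fmt.Errorf("final value %#v does not conform to the refined range", final)
	}
	return nil // definitely in range, or cty cannot tell yet
}

func main() {
	// Terraform refined this unknown number to be at least 1.
	original := cty.UnknownVal(cty.Number).Refine().
		NumberRangeLowerBound(cty.NumberIntVal(1), true).
		NewValue()

	fmt.Println(checkRefinedRange(original, cty.NumberIntVal(0))) // error
	fmt.Println(checkRefinedRange(original, cty.NumberIntVal(5))) // <nil>
}
```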
If the string to be tested is an unknown value that's been refined with
a prefix and the prefix we're being asked to test is in turn a prefix of
that known prefix then we can return a known answer despite the inputs
not being fully known.
There are also some other similar deductions we can make about other
combinations of inputs.
This extra analysis could be useful in a custom condition check that
requires a string with a particular prefix, since it can allow the
condition to fail even on partially-unknown input, thereby giving earlier
feedback about a problem.
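A minimal sketch of the prefix deduction, assuming cty's
ValueRange.StringPrefix API and a non-null string input; the real
startswith implementation handles more cases than shown here:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/zclconf/go-cty/cty"
)

// startsWith returns a known answer where possible, even for a refined
// unknown string. str is assumed to be a non-null cty.String value.
func startsWith(str cty.Value, prefix string) cty.Value {
	if !str.IsKnown() {
		knownPrefix := str.Range().StringPrefix()
		if strings.HasPrefix(knownPrefix, prefix) {
			return cty.True // the refinement already guarantees a match
		}
		if !strings.HasPrefix(prefix, knownPrefix) {
			return cty.False // the strings diverge within the known region
		}
		return cty.UnknownVal(cty.Bool) // not enough information yet
	}
	return cty.BoolVal(strings.HasPrefix(str.AsString(), prefix))
}

func main() {
	url := cty.UnknownVal(cty.String).Refine().
		StringPrefix("https://").
		NewValue()

	fmt.Printf("%#v\n", startsWith(url, "https:")) // cty.True despite the unknown input
	fmt.Printf("%#v\n", startsWith(url, "ftp:"))   // cty.False, so a condition can fail early
}
```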
The "id" attribute of this resource type is generated by the provider
itself and can never be null, so we'll refine the range of its unknown
result in case that helps downstream expressions to produce known results
even when the exact value hasn't yet been planned.
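For example, a minimal sketch of the refinement in isolation (the actual
change lives in the provider's plan logic):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// The provider guarantees "id" will never be null, even though its
	// exact value won't be decided until apply.
	plannedID := cty.UnknownVal(cty.String).RefineNotNull()

	// Downstream, an expression like `id != null` now has a known result.
	fmt.Println(plannedID.NotEqual(cty.NullVal(cty.String)).True()) // true (known), not unknown
}
```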
cty's new "refinements" concept allows us to reduce the range of unknown
values from our functions. This initial changeset focuses only on
declaring which functions are guaranteed to return a non-null result,
which is a helpful baseline refinement because it allows "== null" and
"!= null" tests to produce known results even when the given value is
otherwise unknown.
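A minimal sketch of what such a declaration looks like, assuming the
function.Spec RefineResult hook from the upgraded cty; the function itself
is illustrative:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
)

// upperFunc never returns a null string, and declares that via RefineResult
// so callers get a not-null-refined unknown when the argument is unknown.
var upperFunc = function.New(&function.Spec{
	Params: []function.Parameter{
		{Name: "str", Type: cty.String},
	},
	Type: function.StaticReturnType(cty.String),
	RefineResult: func(b *cty.RefinementBuilder) *cty.RefinementBuilder {
		return b.NotNull()
	},
	Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
		return cty.StringVal(strings.ToUpper(args[0].AsString())), nil
	},
})

func main() {
	// With an unknown argument the function machinery returns an unknown
	// result, refined not-null, without calling Impl.
	result, _ := upperFunc.Call([]cty.Value{cty.UnknownVal(cty.String)})
	fmt.Printf("%#v\n", result.Equals(cty.NullVal(cty.String))) // cty.False, a known result
}
```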
This commit also includes some updates to test results that are now
refined based on cty's own built-in refinement behaviors, just as a
result of us having updated cty in the previous commit.
* genconfig: fix nil nested block panic
* genconfig: null NestingSingle blocks should be absent
A NestingSingle block that is null in state should be completely absent from config.
* configschema: make FilterOr variadic
* configschema: apply filters to nested types
* configschema: filter helper/schema id attribute
The legacy SDK adds an Optional+Computed "id" attribute to the
resource schema even if not defined in provider code.
During validation, however, the presence of an extraneous "id"
attribute in config will cause an error.
Remove this attribute so we do not generate an "id" attribute
where there is a risk that it is not in the real resource schema.
* configschema: filter test
* terraform: do not pre-validate generated config
Config generated from a resource's import state may fail validation in
the case of schema behaviours such as ExactlyOneOf and ConflictsWith.
We don't want to fail the plan now, because that would give the user no
way to proceed and fix the config to make it valid. We allow the plan to
complete and output the generated config.
* generate config alongside import process
Rather than waiting until we call `plan()`, generate the configuration
at the point of the import call, so we have the necessary data to return
in case planning fails later.
The `plan` and `state` predeclared variables in the plan() method were
obfuscating the actual return of nil throughout, so those identifiers
were removed for clarity.
* move generateHCLStringAttributes closer to caller
* store generated config in plan on error
* test for config gen with error
* add simple warning when generating config
---------
Co-authored-by: James Bardin <j.bardin@gmail.com>
Previously we just made a hard rule that the state storage for Terraform
Cloud would never save any intermediate snapshots at all, as a coarse way
to mitigate concerns over increased Terraform Enterprise storage usage
caused by saving intermediate snapshots.
As a better compromise, we'll now create intermediate snapshots at the
default interval unless the Terraform Cloud API responds with a special
extra header field X-Terraform-Snapshot-Interval, which specifies a
different number of seconds (up to 1 hour) to wait before saving the next
snapshot.
This will then allow Terraform Cloud and Enterprise to provide some dynamic
backpressure when needed, either to reduce the disk usage in Terraform
Enterprise or in situations where Terraform Cloud is under unusual load
and needs to calm the periodic intermediate snapshot writes from clients.
This respects the "force persist" mode so that if Terraform CLI is
interrupted with SIGINT then it'll still be able to urgently persist
a snapshot of whatever state it currently has, in anticipation of probably
being terminated with a more aggressive signal very soon.
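A minimal sketch of how a client might interpret the header, with the
20-second default and one-hour cap described here; the constant names are
illustrative:

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

const (
	defaultSnapshotInterval = 20 * time.Second
	maxSnapshotInterval     = time.Hour
)

// snapshotInterval returns how long to wait before saving the next
// intermediate snapshot, honoring any backpressure from the server.
func snapshotInterval(h http.Header) time.Duration {
	raw := h.Get("X-Terraform-Snapshot-Interval")
	if raw == "" {
		return defaultSnapshotInterval
	}
	secs, err := strconv.ParseUint(raw, 10, 64)
	if err != nil {
		return defaultSnapshotInterval // ignore a malformed header
	}
	if secs > 3600 {
		return maxSnapshotInterval // the server may slow us down, up to 1 hour
	}
	return time.Duration(secs) * time.Second
}

func main() {
	h := http.Header{}
	h.Set("X-Terraform-Snapshot-Interval", "300")
	fmt.Println(snapshotInterval(h)) // 5m0s
}
```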
We've seen some concern about the additional storage usage implied by
creating intermediate state snapshots for particularly long apply phases
that can arise when managing a large number of resource instances together
in a single workspace.
This is an initial coarse approach to solving that concern, just restoring
the original behavior when running inside Terraform Cloud or Enterprise
for now and not creating snapshots at all.
This is here as a solution of last resort in case we cannot find a better
compromise before the v1.5.0 final release. Hopefully a future commit
will implement a more subtle take on this which still gets some of the
benefits when running in a Terraform Enterprise environment but in a way
that will hopefully be less concerning for Terraform Enterprise
administrators.
This does not affect any other state storage implementation except the
Terraform Cloud integration and the "remote" backend's state storage when
running inside a TFC/TFE-driven remote execution environment.
Previously we just always used the same intermediate state persistence
behavior for all state storages. However, some storages might have access
to additional information that allows them to tailor when they persist,
such as reacting to API rate limit status headers in responses, or just
knowing that a particular storage isn't suited to intermediate snapshots
at all for some reason.
This commit doesn't actually change any observable behavior yet, but it
introduces an optional means for a state storage to customize the behavior
which we may make use of in certain storage implementations in future
commits.
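A minimal sketch of the optional-interface pattern this introduces; these
names illustrate the shape of the idea rather than quoting Terraform's
exact API:

```go
package main

import (
	"fmt"
	"time"
)

// PersistInfo carries the context a storage needs to decide whether to
// persist an intermediate snapshot now.
type PersistInfo struct {
	RequestedPersistInterval time.Duration
	LastPersist              time.Time
	ForcePersist             bool // e.g. set when interrupted by SIGINT
}

// ConditionalPersister is the optional extension a state storage can
// implement to customize intermediate persistence.
type ConditionalPersister interface {
	ShouldPersistIntermediateState(info *PersistInfo) bool
}

// shouldPersist applies the storage's custom rule when available, and the
// default interval-based rule otherwise.
func shouldPersist(storage any, info *PersistInfo) bool {
	if cp, ok := storage.(ConditionalPersister); ok {
		return cp.ShouldPersistIntermediateState(info)
	}
	return info.ForcePersist || time.Since(info.LastPersist) >= info.RequestedPersistInterval
}

func main() {
	info := &PersistInfo{
		RequestedPersistInterval: 20 * time.Second,
		LastPersist:              time.Now().Add(-30 * time.Second),
	}
	fmt.Println(shouldPersist(struct{}{}, info)) // true: default rule, interval elapsed
}
```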
When planning a destroy operation, locals referenced only by root
outputs do not need to be kept in the graph, because the root output
does not get evaluated. Rather than trying to prune the local based on
this condition, we can prevent the connection from being created by
ensuring that a root output destroy node has no references.
The separate plan+apply destroy fields used for outputs can be
simplified by combining, since they are only ever referenced together.
* always InternalValidate test schemas
If a resource is already in state, do not attempt to import it again. Resources already in state are filtered out of the plan's import targets.
A change is only considered "importing" if it is adding a new resource instance to the state.
* command: keep our promises
* remove some nil config checks
Remove some of the safety checks that ensure plan nodes have config attached at the appropriate time.
* add GeneratedConfig to plan changes objects
Add a new GeneratedConfig field alongside Importing in plan changes.
* add config generation package
The genconfig package implements HCL config generation from provider state values.
Thanks to @mildwonkey whose implementation of terraform add is the basis for this package.
* generate config during plan
If a resource is being imported and does not already have config, attempt to generate that config during planning. The config is generated from the state as an HCL string, and then parsed back into an hcl.Body to attach to the plan graph node.
The generated config string is attached to the change emitted by the plan.
* complete config generation prototype, and add tests
* plannable import: add a provider argument to the import block
* Update internal/configs/config.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/configs/config.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/configs/config.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* fix formatting and tests
---------
Co-authored-by: Katy Moe <katy@katy.moe>
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Plannable import: Add generated config to json and human-readable plan output
---------
Co-authored-by: Katy Moe <katy@katy.moe>
The imported resource was being stored in the wrong state, and only
ended up in the refresh state because ReadResource was being called a
second time in the normal refresh path.
Make sure to only refresh the imported resource once. This is still done
separately within importState so that we can handle the error slightly
differently to let the user know if an imported instance does not exist.
* [plannable import] embed the resource id within the changes
* [Plannable Import] Implement streamed logs for -json plan
* use latest structs
* remove implementation plans from TODO
The logic used to prune unused providers was only taking into account
the common case of providers in the root module. The quick check of
looking for up edges doesn't work within a module, because the module
structures will create non-resource nodes connected to the providers.
Use a deeper check of looking for any dependent resources which may
require that provider to be configured.
During a plan, Terraform now checks for the presence of import blocks.
For each resource in config, if an import block is present with a matching address, planning that node will now trigger an ImportResourceState and ReadResource. The resulting state is treated as the node's "refresh state", and planning proceeds as normal from there.
The walkImport operation is now only used for the legacy "terraform import" CLI command. This is the only case under which the plan should produce graphNodeImportStates.
Ran into this error while running Terraform in a container and saving state to Consul. I suspect my policy needs tweaking, but it's impossible to tell with an error like this:
```
╷
│ Error: Failed to save state
│
│ Error saving state: consul CAS failed with transaction errors:
│ [0xc0006e93c8]
╵
```
This PR includes the error message in the details so I can continue debugging.
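A minimal sketch of the shape of the fix, assuming the Consul API client's
api.TxnErrors type; the surrounding CAS transaction call is elided and the
function name is illustrative:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/hashicorp/consul/api"
)

// describeTxnErrors renders each transaction error's index and message,
// instead of printing a slice of pointers.
func describeTxnErrors(errs api.TxnErrors) error {
	parts := make([]string, 0, len(errs))
	for _, e := range errs {
		parts = append(parts, fmt.Sprintf("op %d: %s", e.OpIndex, e.What))
	}
	return fmt.Errorf("consul CAS failed with transaction errors: %s",
		strings.Join(parts, "; "))
}

func main() {
	errs := api.TxnErrors{{OpIndex: 0, What: "Permission denied"}}
	fmt.Println(describeTxnErrors(errs))
}
```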
Just like in the destroy apply, we can skip the inter-provider cycle
check when creating the destroy plan, which can be expensive when there
are a lot of resource instances with dependencies from another provider.
* Improve environment variable support for the pg backend
This patch does two things:
- it adds environment variable support to the parameters that did
not have it (using `PG_CONN_STR` instead of `PGDATABASE`, which better
matches the behavior of other PostgreSQL utilities)
- better documents how to give the connection parameters as environment
variables for the ones that were already supported based on the
recommendation of @bsouth00
I will prepare a backport of the documentation part of this once it is
merged.
Closes https://github.com/hashicorp/terraform/issues/33024
* Remove global variable in test of the PG backend
The cloud backend, which communicates with TFC like APIs, can create
runs which may have one or more configuration parameters altered. These
alterations are emitted as run-events on the run so that API clients
can consume and display them to users. This commit adds a step in
plan operation to query the run-events once a run is created and then
emit specific run-event descriptions to the console as warnings for
the user.
* checks: filter out check diagnostics during certain plans
* wrap diagnostics produced by check blocks in a dedicated check block diagnostic
* address comments
When we plan to destroy an instance, the change recorded should use the
correct type for the resource rather than `DynamicPseudoType`. Most of
the time this is hidden when the change is encoded in the plan, because
any `null` is always encoded to the same value, and when decoded it will
be converted to the schema type. However, when apply requires creating a
second plan for an instance's replacement, that value is not going to be
encoded, and remains a dynamic value which is sent to the provider.
Most providers won't see that either, as the grpc request also encodes
and decodes the value to conform with the correct schema. The builtin
terraform provider does get the raw cty value though, and when that
dynamic value is returned validation fails when the type does not match.
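A minimal sketch of the difference, assuming a trivial object type standing
in for the resource schema:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	schemaType := cty.Object(map[string]cty.Type{"id": cty.String})

	// What the destroy change previously recorded, and what it should record.
	wrong := cty.NullVal(cty.DynamicPseudoType)
	right := cty.NullVal(schemaType)

	// Both encode to the same "null" in a serialized plan, hiding the bug;
	// but as raw cty values they have different types, which can fail a
	// provider's type validation.
	fmt.Println(wrong.Type().Equals(right.Type())) // false
}
```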
It is not valid for a provider to return an unknown value for a
configured nested collection, but we need to check for unknowns before
comparing the number of values in the collection.
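A minimal sketch of the ordering that matters here: cty's LengthInt panics
on unknown collections, so the unknown check has to come first. The
function name is illustrative.

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// sameLength compares collection lengths only when both values are known;
// an unknown planned collection is reported as invalid elsewhere.
func sameLength(config, planned cty.Value) bool {
	if !config.IsKnown() || !planned.IsKnown() {
		return true // cannot compare yet; don't panic in LengthInt below
	}
	return config.LengthInt() == planned.LengthInt()
}

func main() {
	config := cty.ListVal([]cty.Value{cty.StringVal("a")})
	planned := cty.UnknownVal(cty.List(cty.String))
	fmt.Println(sameLength(config, planned)) // true: deferred, not a panic
}
```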
If a resource has a change in marks from the prior state, we need to
notify the user that an update is going to be necessary to at least
store that new value in the state. If the provider however returns the
prior state value in lieu of a new config value, we need to be sure to
filter any new marks for comparison as well. The comparison of the prior
marks and new marks must take into account whether those new marks could
even be applied, because if the value is unchanged the new marks may be
completely irrelevant.
* Add support for scoped resources
* refactor existing checks addrs and add check block addr
* Add configuration for check blocks
* introduce check blocks into the terraform node and transform graph
* address comments
* address comments
* don't execute checks during destroy operations
* don't even include check nodes for destroy operations
When a provider has been upgraded, there are external changes to
resources outside of Terraform, -target is in use, and untargeted
resources require a schema migration, those untargeted resources will
not have been migrated and cannot be decoded for the external changes
report.
Since there is no way to decode the resources which have been excluded
via -target, we can only skip over them when inspecting
driftedResources. Return warnings for now to indicate that these
resources could not be decoded to help indicate that users will need to
eventually apply these changes.
Module outputs are evaluated from state, so in order to have detailed
information about sensitivity from non-root module outputs, we need to
store the value along with all sensitive marks. This aligns with the
usage of state being the in-memory store for other temporary values like
locals and variables.
When planning encounters an error we were returning early without
cleaning out any planned data sources which cannot be serialized. Move
the cleanup to the common walkPlan method where the PriorState is
assigned so that it cannot be missed.
We inadvertently incorporated the new minor release of cty into the 1.4
branch, and that's introduced some more refined handling of unknown values
that is too much of a change to introduce in a patch release.
Therefore this reverts back to the previous minor release for the v1.4
series, and then we'll separately get the main branch ready to work
correctly with the new cty before Terraform v1.5.
This reverts just the upgrade and the corresponding test changes from
#32775, while retaining the HCL upgrade and the new test case it
introduced for that bug it was trying to fix. That new test is still
passing so it seems that the cty upgrade is not crucial to that fix.
This test was previously not taking into account the fact that the
"Stopping" hook gets sent in the goroutine that calls ctx.Stop, whereas
all of the others get called from inside ctx.Apply, and so there are no
ordering guarantees for that event in relation to the others.
We now handle the stopping event as a special case that is allowed to
appear anywhere in the sequence as long as it appears. The other events
are still strongly ordered because their ordering is important for
correctness of Terraform Core's own behavior.
As some extra insurance we also now check whether the provider's
ApplyResourceChange and Stop functions both ran and reached a suitable
point of execution related to the stop request, which helps to ensure not
only that something called Stop but that Terraform Core correctly
interacted with the provider to handle the stop.
While the returned plan is checked for nil in most cases, there was
a single point where the plan was dereferenced which could panic. Rather
than always guarding the dereferences, return early when the plan is
nil.
This test case was making a real DNS call in a non-acceptance test, and
since it was intended to fail it would introduce a several second delay.
This commit replaces the test with a similar one which uses the mocked
disco services for a non-TFE host.
Also restructure the test to use t.Run for clarity.
This is a mostly mechanical refactor with a handful of changes which
are necessary due to the semantic difference between earlyconfig and
configs.
When parsing root and descendant modules in the module installer, we now
check the core version requirements inline. If the Terraform version is
incompatible, we drop any other module loader diagnostics. This ensures
that future language additions don't clutter the output and confuse the
user.
We also add two new checks during the module load process:
* Don't try to load a module with a `nil` source address. This is a
necessary change due to the move away from earlyconfig.
* Don't try to load a module with a blank name (i.e. `module ""`).
Because our module loading manifest uses the stringified module path
as its map key, this causes a collision with the root module, and a
later panic. This is the bug which triggered this refactor in the
first place.
Since it's already possible to activate the dependency lock file using an
environment variable, we should allow opting in to the behavior that may
break it using the environment too.
It's kinda odd in retrospect that TF_PLUGIN_CACHE_DIR is the only setting
we allow to be configured both in the environment and the CLI
configuration. That means that the infrastructure for dealing with that
situation was relatively immature here and so I did some light refactoring
to make it unit-testable without actually modifying the test program's
environment.
With the demise of the early config loader, we want to show core
version errors first, followed by backend errors, and only then
show other errors with the configuration.
Terraform Core emits a hook event every time it writes a change into the
in-memory state. Previously the local backend would just copy that into
the transient storage of the state manager, but for most state storage
implementations that doesn't really do anything useful because it just
makes another copy of the state in memory.
We originally added this hook mechanism with the intent of making
Terraform _persist_ the state each time, but we backed that out after
finding that it was a bit too aggressive and was making the state snapshot
history much harder to use in storage systems that can preserve historical
snapshots.
However, sometimes Terraform gets killed mid-apply for whatever reason and
in our previous implementation that meant always losing that transient
state, forcing the user to edit the state manually (or use "import") to
recover a useful state.
In an attempt at finding a sweet spot between these extremes, here we
change the rule so that if an apply runs for longer than 20 seconds then
we'll try to persist the state to the backend in an update that arrives
at least 20 seconds after the first update, and then again for each
additional 20 second period as long as Terraform keeps announcing new
state snapshots.
This also introduces a special interruption mode where if the apply phase
gets interrupted by SIGINT (or equivalent) then the local backend will
try to persist the state immediately in anticipation of a
possibly-imminent SIGKILL, and will then immediately persist any
subsequent state update that arrives until the apply phase is complete.
After interruption Terraform will not start any new operations and will
instead just let any already-running operations run to completion, and so
this will persist the state once per resource instance that is able to
complete before being killed.
This does mean that now long-running applies will generate intermediate
state snapshots where they wouldn't before, but there should still be
considerably fewer snapshots than were created when we were persisting
for each individual state change. We can adjust the 20 second interval
in future commits if we find that this spot isn't as sweet as first
assumed.
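A minimal sketch of the resulting decision rule inside the state hook; the
20-second constant matches the behavior described above, while the other
names are illustrative:

```go
package main

import (
	"fmt"
	"time"
)

const persistInterval = 20 * time.Second

type snapshotPersister struct {
	lastPersist time.Time
	interrupted bool // set once SIGINT (or equivalent) is received
}

// shouldPersist reports whether the state update arriving at time now
// should be written through to the backend's persistent storage.
func (p *snapshotPersister) shouldPersist(now time.Time) bool {
	if p.interrupted {
		// Anticipating a possible SIGKILL: persist every snapshot we get.
		return true
	}
	return now.Sub(p.lastPersist) >= persistInterval
}

func main() {
	p := &snapshotPersister{lastPersist: time.Now()}
	fmt.Println(p.shouldPersist(time.Now().Add(5 * time.Second)))  // false
	fmt.Println(p.shouldPersist(time.Now().Add(25 * time.Second))) // true
}
```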
The terraform provider was panicking on import, because it didn't
previously have a resource type which could be imported at all. Add a
stub import function for terraform_data as a placeholder to allow the
call to complete successfully. While there's no need to actually import
a terraform_data resource, users will inevitably use this to construct
examples of import actions for learning purposes or bug reports.
This still isn't very useful even for examples however, because the
state-only nature of the terraform_data resource type means that we
can't fill in the state from only the import ID. This means that any
value in `trigger_replace` or `input` will cause a change in the next
plan. Once configuration data is available during import we can extend
this to create a logical final state based on config.
* Add metadata functions command skeleton
* Export functions as JSON via cli command
* Add metadata command
* Add tests to jsonfunction package
* WIP: Add metadata functions test
* Change return_type & type in JSON to json.RawMessage
This enables easier deserialisation of types when parsing the JSON.
* Skip is_nullable when false
* Update cli docs with metadata command
* Use tfdiags to report function marshal errors
* Ignore map, list and type functions
* Test Marshal function with diags
* Test metadata functions command output
* Simplify type marshaling by using cty.Type
* Add static function signatures for can and try
* Update internal/command/jsonfunction/function_test.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
---------
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* go get github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/sts/v20180813@v1.0.588
* feat:support assume_role for COS backend
* update go.mod and go.sum
* change secret_id and secret_key from required to optional
* update cos doc
* update logic by comments
* rm sensitive info in log
Resource instances with no current object in state should not have
orphan nodes added to the graph, as deposed objects are handled
separately. This was previously handled correctly for the non-expanded
case, but expanded resources were missing the appropriate check for a
current object.
Also update the comment in the non-expanded case to hopefully clarify
that we're checking for the presence of a current object, not the
absence of any deposed objects. An instance may have both a current
object and zero or more deposed objects in some circumstances, and if
so, we still want an orphan node to be added if the instance is not in
configuration.
* Implementation of structured logging.
These are the changes that enable the cloud backend to consume
structured logs and make use of the new plan renderer. This will enable
CLI-driven runs to view the structured output in the Terraform Cloud UI.
* Cloud structured logging unit tests
* Remove deferred logs logic, fix minor issues
Color formatting fixes, log type stop lists, default behavior for logs
that are unknown
* Use service disco path in redacted plan url
* Add viewType to Meta object and use it at the call sites
* Assign viewType passed from flags to state-locking cli commands
* Remove temp files
* Set correct mode for statelocker depending on json flag passed to commands
* Add StateLocker interface conformation check for StateLockerJSON
* Remove empty line at end of comment
* Pass correct ViewType to StateLocker from Backend call chain
* Pass viewType to backend migration and initialization functions
* Remove json processing info in process comment
* Restore documentation style of backendMigrateOpts
This makes the behavior of remote state consistent with local state with regard to the initial serial number of the generated / pushed state. Previously remote state's initial push would have a serial number of 0, whereas local state had a serial greater than 0. This caused issues with logic that, for example, ensures a plan file cannot be applied if the state is stale (see https://github.com/hashicorp/terraform/issues/30670).
* Add failing test case for the given issue
* pause
* don't use local when sending PR for review
* go get github.com/hashicorp/hcl/v2@v2.16.0
* Update go.mod
---------
Co-authored-by: Alisdair McDiarmid <alisdair@users.noreply.github.com>
As explained by the deleted comments, this package was used to identify situations where the `terraform 0.12upgrade` command can help migrate 0.11 syntax. Current versions of terraform don't include this command, and it's not likely that users are attempting upgrades from 0.11 to 1.4+
The replacement init swaps the order of the module and backend initialization in order to prepare for the next commit.
Config initialization now takes the following approach:
1. Load the root module, but withhold diagnostic errors until after version check
2. Initialize the backend, but withhold diagnostic errors until after version check
3. Get modules
4. Load all config (root and modules)
5. Check terraform version requirements (this can be defined by nested modules) and display any errors. It's important to show these first because prior errors could be the result of a newer terraform version syntax
6. Finally, show any errors related to backend init or config loading
Although they are not serialized to the final stored state, all module
outputs must be saved in the state for evaluation. There is no defined
schema which is used to identify the overall type of module outputs, so
all outputs must exist in the state to build the correct type for proper
evaluation.
* Add mTLS support for http backend by way of client cert & key, as well as enterprise cacert.
* Fix style.
* Skip cert validation to be sure the error is related to a missing client cert, not an untrusted server cert.
* Remove misplaced err check.
* Fix the size of test using http backend.
* Just for correctness, include all certs in the pem encoded cert - sometimes certs come with a chain of their signers.
* Adjusted names as recommended in PR comments.
* Adjusted names to be full-length and more descriptive.
* Added full-fledged testing with mTLS http server
* Fix goimports.
* Fix the names of the backend config.
* Exclusive lock for write and delete.
* Revert "Fix goimports."
This reverts commit 7d40f6099fbbb675fb2e25e35ee40aeafe3d0a22.
* goimports just for server test.
* Added the go:generation for the mock.
* Move the TLS configuration out to make it more readable - don't replace the HTTPClient as the retryablehttp already creates one - just configure its TLS.
* Just switch the client/data params - felt more natural this way.
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/testdata/gencerts.sh
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* Update internal/backend/remote-state/http/backend.go
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
* the location of the file name is not sensitive.
* Added error if only one of client_certificate_pem and client_private_key_pem is set.
* Remove testify from test cases; use t.Error* for assert and t.Fatal* for require.
* Fixed import consistency
* Just use default openssl.
* Since file(...) is so trivial to use, changed the client cert, key, and ca cert to be the data.
See also https://github.com/hashicorp/terraform-provider-http/pull/211
Co-authored-by: Sheridan C Rawlins <scr@ouryahoo.com>
Co-authored-by: kmoe <5575356+kmoe@users.noreply.github.com>
Currently Terraform will use an entry from the global plugin cache only if
it matches a checksum already recorded in the dependency lock file. This
allows Terraform to produce a complete lock file entry on the first
encounter with a new provider, whereas using the cache in that case would
cause the lock file to only cover the single package in the cache and
therefore be unusable on any other operating system or CPU architecture.
This temporary CLI config option is a pragmatic exception to support those
who cannot currently correctly use the dependency lock file but who still
want to benefit from the plugin cache. With this setting enabled,
Terraform has permission to produce a dependency lock file that is only
suitable for the current system if that would allow use of an existing
entry in the plugin cache.
We are introducing this option to resolve a conflict between the needs of
folks who are using the dependency lock file as expected and the needs of
folks who cannot use the dependency lock file for some reason. The hope
then is to give respite to those who need this exception in the meantime
while we understand better why they cannot use the dependency lock file
and improve its design so that everyone will be able to use it
successfully in a future version of Terraform. This option will become a
silent no-op in a future version of Terraform, once the dependency lock
file behavior is sufficient for all supported Terraform development
workflows.
The existing set comparison method uses the prior elements with the computed
portions nulled out to find candidates to match the configuration. This
has the shortcoming of always removing optional+computed attributes,
because we have not yet found the matching configuration to know whether
the attribute was set or not.
Rather than having to take the most pessimistic value before comparison
to precompute the nulled values, we can compare each candidate directly,
walking the values in tandem. Each prior value is compared against the
config and checked to see if it could have been derived from that
configuration value, which allows us to treat optional+computed as
optional if there is config and computed if there is not.
This removes the ambiguity from having optional+computed attributes
within sets, giving us consistent plans when all values are known.
Unknown values of course are still undecidable, as are edge cases where
providers refresh with altered values or retain changed prior values in
the plan that were deemed not functionally significant.
Unify the ProposedNew paths for Blocks and Objects. Break out the
individual case blocks into functions, then use a common interface to
dispatch the object creation to the correct function based on schema
type. This cuts the code in half, and prevents the block and object
behavior from diverging.
NestingMap structures are not well tested, and we panic in many
situations when null crops up. Fix the first test cases and start
refactoring best we can. This probably won't go so far as making all the
objchange functions generic over Block and Object, but we can simplify a
lot and verify parity in implementations for now.
We can check if an object in state must have at least partially come
from configuration, by seeing if the prior value has any non-null
attributes which are not computed in the schema.
This is used when the configuration contains a null optional+computed
value, and we want to know if we should plan to send the null value or
the prior state.
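A minimal sketch of that test for a flat attribute-only schema; the real
check also needs to descend into nested blocks and object types, and the
simplified attribute type here stands in for the configschema definitions:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// attribute is a simplified stand-in for a schema attribute definition.
type attribute struct {
	Computed bool
}

// mustComeFromConfig reports whether the prior object can only be explained
// by configuration: some non-computed attribute holds a non-null value.
func mustComeFromConfig(prior cty.Value, attrs map[string]attribute) bool {
	if prior.IsNull() || !prior.IsKnown() {
		return false
	}
	for name, attr := range attrs {
		if attr.Computed {
			continue // the provider could have filled this in itself
		}
		if !prior.GetAttr(name).IsNull() {
			return true
		}
	}
	return false
}

func main() {
	attrs := map[string]attribute{"name": {Computed: false}, "id": {Computed: true}}
	prior := cty.ObjectVal(map[string]cty.Value{
		"name": cty.StringVal("example"),
		"id":   cty.StringVal("x123"),
	})
	fmt.Println(mustComeFromConfig(prior, attrs)) // true: "name" must be from config
}
```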
This was clearly wrong, but it was also harmless -- in the event of a failing
test due to missing tags, they would get double-reported as both missing and
unexpected. This commit separates out the reporting as intended.
Go's `append()` reserves the right to mutate its primary argument in-place, and
expects the caller to assign its return value to the same variable that was
passed as the primary argument. Due to what was almost definitely a typo
(followed by copy-paste mishap), the configschema `Block.ValueMarks` and
`Object.ValueMarks` functions were treating it like an immutable function that
returns a new slice.
In rare and hard-to-reproduce cases, this was causing bizarre malfunctions when
marking sensitive schema attributes in deeply-nested block structures --
omitting the marks for some sensitive values (🚨), and marking other entire
blocks as sensitive (which is supposed to be impossible). The chaotic and
unreliable nature of the bugs is likely related to `append()`'s automatic slice
reallocation behavior (if the append operation overflows the original array
allocation, the resulting behavior can _look_ immutable), but there might be
other contributing factors too.
This commit fixes existing instances of the problem, and wraps the desired
copy-and-append behavior in a helper function to simplify handling shared parent
paths in an immutable way.
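A minimal sketch of the hazard and the copy-and-append fix; the helper name
copyAndExtendPath is illustrative:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// copyAndExtendPath returns a new copy of path with nextSteps appended,
// never mutating the backing array of the input.
func copyAndExtendPath(path cty.Path, nextSteps ...cty.PathStep) cty.Path {
	newPath := make(cty.Path, len(path), len(path)+len(nextSteps))
	copy(newPath, path)
	newPath = append(newPath, nextSteps...)
	return newPath
}

func main() {
	base := make(cty.Path, 1, 3) // spare capacity, as append often leaves behind
	base[0] = cty.GetAttrStep{Name: "block"}

	// Buggy pattern: both appends share base's backing array, so the second
	// append overwrites the step the first one added.
	p1 := append(base, cty.GetAttrStep{Name: "secret"})
	p2 := append(base, cty.GetAttrStep{Name: "public"})
	fmt.Println(p1[1], p2[1]) // both print the "public" step

	// Safe pattern: each path gets its own backing array.
	p3 := copyAndExtendPath(base, cty.GetAttrStep{Name: "secret"})
	p4 := copyAndExtendPath(base, cty.GetAttrStep{Name: "public"})
	fmt.Println(p3[1], p4[1]) // distinct steps, as intended
}
```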
Combine and simplify the set comparison functions for NestingSet blocks
and attribute types.
The set handling for structural attributes was not recursing into nested
values. Once a simplified method for comparing set elements was devised
for nested types, it turns out the same method could be applied to
nested set blocks as well.
* Use the new structured renderer in place of the old diffs package
* remove old plan tests
* refresh only plans should show moved resources in the refresh section
When structural attributes were added, optional+computed were not
correctly handled when containing nested values which could themselves
be computed. This would cause terraform to ignore previously computed
values from state when generating the proposed plan.
The special case for optional+computed was incorrect, but isn't needed
in the context of planning new values anyway. Attributes are either
computed, or not computed. When optional+computed is set and there is
no configuration, the attribute is treated as computed. It is up to the
provider to determine how and when to deal with any changes to that
computed value.
* remove attributes that do not match the relevant attributes filter
* fix formatting
* fix renderer function, don't drop irrelevant attributes just mark them as no-ops
* fix imports
* fix bugs in the renderer exposed by the equivalence tests
* imports
* gofmt