Just as in the destroy apply, we can skip the inter-provider cycle
check when creating the destroy plan; that check can be expensive when
there are many resource instances with dependencies from another
provider.
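A schematic of the gating (the operation names and helper here are stand-ins, not the real graph code; the point is only that the expensive check is conditioned on the operation):

```go
package main

import "fmt"

// walkOperation and skipInterProviderCycleCheck are illustrative stand-ins,
// not Terraform's actual graph code: destroy plans skip the inter-provider
// cycle check the same way destroy applies already do.
type walkOperation string

const (
	walkPlan        walkOperation = "plan"
	walkPlanDestroy walkOperation = "plan (destroy)"
	walkApply       walkOperation = "apply"
	walkDestroy     walkOperation = "destroy"
)

func skipInterProviderCycleCheck(op walkOperation) bool {
	return op == walkDestroy || op == walkPlanDestroy
}

func main() {
	for _, op := range []walkOperation{walkPlan, walkPlanDestroy} {
		if skipInterProviderCycleCheck(op) {
			fmt.Printf("%s: skipping inter-provider cycle check\n", op)
			continue
		}
		fmt.Printf("%s: running inter-provider cycle check\n", op)
	}
}
```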
* Improve environment variable support for the pg backend
This patch does two things:
- it adds environment variable support to the parameters that did
not have it (using `PG_CONN_STR` instead of `PGDATABASE`, which better
matches the behavior of other PostgreSQL utilities; see the sketch
below)
- it better documents how to provide the connection parameters as
environment variables for the ones that were already supported, based
on the recommendation of @bsouth00
I will prepare a backport of the documentation part of this once it is
merged.
Closes https://github.com/hashicorp/terraform/issues/33024
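As a sketch of the intended lookup order (the helper name is hypothetical, not the backend's actual code), the connection string comes from the explicit `conn_str` setting first and falls back to the `PG_CONN_STR` environment variable:

```go
package main

import (
	"fmt"
	"os"
)

// readConnStr is a hypothetical helper, not the backend's actual code; it only
// illustrates the lookup order: the explicit conn_str setting wins, otherwise
// the PG_CONN_STR environment variable is consulted.
func readConnStr(configured string) (string, error) {
	if configured != "" {
		return configured, nil
	}
	if v := os.Getenv("PG_CONN_STR"); v != "" {
		return v, nil
	}
	return "", fmt.Errorf("conn_str is not set and PG_CONN_STR is empty")
}

func main() {
	// e.g. export PG_CONN_STR="postgres://user:pass@db.example.com/terraform_backend"
	connStr, err := readConnStr("")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("using connection string:", connStr)
}
```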
* Remove a global variable in the PG backend tests
The cloud backend, which communicates with TFC-like APIs, can create
runs which may have one or more configuration parameters altered. These
alterations are emitted as run-events on the run so that API clients
can consume and display them to users. This commit adds a step to the
plan operation that queries the run-events once a run is created and
then emits specific run-event descriptions to the console as warnings
for the user.
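A minimal sketch of that flow; the `RunEvent` type and `fetchRunEvents` helper are illustrative stand-ins for the cloud backend's actual client calls:

```go
package main

import "fmt"

// RunEvent and fetchRunEvents are illustrative stand-ins for the cloud
// backend's real client calls; they only mirror the flow described above.
type RunEvent struct {
	Action      string
	Description string
}

func fetchRunEvents(runID string) []RunEvent {
	// In the real backend this query happens right after the run is created
	// during the plan operation.
	return []RunEvent{
		{Action: "changed", Description: "A configuration parameter was altered by the platform."},
	}
}

func main() {
	for _, ev := range fetchRunEvents("run-example") {
		if ev.Action == "changed" {
			// Surface the event's description to the user as a console warning.
			fmt.Printf("Warning: %s\n", ev.Description)
		}
	}
}
```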
* add new actions for executing the equivalence tests after a CRT release
* ready for review
* Update .github/actions/equivalence-test/action.yml
Co-authored-by: CJ Horton <17039873+radditude@users.noreply.github.com>
* address comments
---------
Co-authored-by: CJ Horton <17039873+radditude@users.noreply.github.com>
* checks: filter out check diagnostics during certain plans
* wrap diagnostics produced by check blocks in a dedicated check block diagnostic
* address comments
Although maps and objects are similar, maps require that all values be of the same type, while objects allow each value to have its own type.
This function does not restrict itself to maps: the examples themselves include cases where both strings and lists are passed through, making the result an object rather than a map.
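The distinction is visible directly in the cty type system Terraform uses; for example:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// Every element of a map value must share a single type...
	m := cty.MapVal(map[string]cty.Value{
		"a": cty.StringVal("one"),
		"b": cty.StringVal("two"),
	})

	// ...while an object value lets each attribute carry its own type, which
	// is what mixing strings and lists in the examples actually produces.
	o := cty.ObjectVal(map[string]cty.Value{
		"a": cty.StringVal("one"),
		"b": cty.ListVal([]cty.Value{cty.StringVal("two"), cty.StringVal("three")}),
	})

	fmt.Println(m.Type().FriendlyName(), "/", o.Type().FriendlyName())
}
```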
When we plan to destroy an instance, the change recorded should use the
correct type for the resource rather than `DynamicPseudoType`. Most of
the time this is hidden when the change is encoded in the plan, because
any `null` is always encoded to the same value, and when decoded it will
be converted to the schema type. However, when apply requires creating a
second plan for an instance's replacement, that value is not re-encoded
and remains a dynamic value, which is sent to the provider.
Most providers won't see that either, as the gRPC request also encodes
and decodes the value to conform to the correct schema. The builtin
terraform provider does get the raw cty value, though, and when that
dynamic value is returned, validation fails because the type does not
match.
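A small cty illustration of the difference, assuming nothing beyond what is described above: both planned "after" values are null, but their types are not equal, which is the mismatch the builtin terraform provider's validation rejects:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// A stand-in resource schema type for illustration.
	schemaTy := cty.Object(map[string]cty.Type{
		"id": cty.String,
	})

	// Both destroy "after" values are null, but one carries the resource's
	// schema type and the other stays dynamic until it is re-decoded.
	typedNull := cty.NullVal(schemaTy)
	dynamicNull := cty.NullVal(cty.DynamicPseudoType)

	fmt.Println(typedNull.IsNull(), dynamicNull.IsNull())    // true true
	fmt.Println(typedNull.Type().Equals(dynamicNull.Type())) // false
}
```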
It is not valid for a provider to return an unknown value for a
configured nested collection, but we need to check for unknowns before
comparing the number of values in the collection.
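A sketch of that ordering, using a hypothetical guard rather than the actual validation code:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

// checkCount is a hypothetical guard, not the actual validation code; it only
// shows the ordering: test for unknowns first, compare element counts second.
func checkCount(planned, actual cty.Value) error {
	if !planned.IsKnown() || !actual.IsKnown() {
		// An unknown collection has no usable length yet; the unknown itself
		// is reported as an error by the surrounding validation.
		return nil
	}
	if planned.LengthInt() != actual.LengthInt() {
		return fmt.Errorf("planned %d elements, got %d", planned.LengthInt(), actual.LengthInt())
	}
	return nil
}

func main() {
	planned := cty.UnknownVal(cty.List(cty.String))
	actual := cty.ListVal([]cty.Value{cty.StringVal("a")})
	fmt.Println(checkCount(planned, actual)) // <nil>: skipped because planned is unknown
}
```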