providercache: Ignore lock-mismatching global cache entries

When we originally introduced the trust-on-first-use checksum locking
mechanism in v0.14, we had to make some tricky decisions about how it
should interact with the pre-existing optional read-through global cache
of provider packages:

The global cache essentially conflicts with the checksum locking because
if the needed provider is already in the cache then Terraform skips
installing the provider from upstream and therefore misses the opportunity
to capture the signed checksums published by the provider developer. We
can't use the signed checksums to verify a cache entry because the origin
registry protocol still uses the legacy ziphash scheme, which applies only
to the original zipped provider packages and not to the unpacked layout of
the cache directory. We therefore decided to prioritize the existing cache
directory behavior at the expense of the lock file behavior, accepting that
Terraform would produce an incomplete lock file in that case.
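
To make the scheme mismatch concrete, here is a minimal standalone Go
sketch of the two checksum styles (not Terraform's actual implementation;
the file and directory paths are invented). The "h1:" scheme is based on
the same directory-tree hashing as golang.org/x/mod's dirhash package,
which is why it is the only one of the two that can still be recomputed
from an unpacked cache entry:

    package main

    import (
        "crypto/sha256"
        "fmt"
        "log"
        "os"

        "golang.org/x/mod/sumdb/dirhash"
    )

    func main() {
        // "zh:"-style legacy checksum: a SHA-256 of the original zipped
        // package exactly as published by the origin registry. Once the
        // archive has been unpacked into the cache, those bytes are gone
        // and this hash can no longer be recomputed.
        zipBytes, err := os.ReadFile("terraform-provider-example_2.1.0_linux_amd64.zip")
        if err != nil {
            log.Fatal(err)
        }
        sum := sha256.Sum256(zipBytes)
        fmt.Printf("zh:%x\n", sum)

        // "h1:"-style checksum: a hash over the unpacked directory tree,
        // which *can* be recomputed from a cache entry at any time.
        h1, err := dirhash.HashDir(
            "plugin-cache/registry.terraform.io/hashicorp/example/2.1.0/linux_amd64",
            "", dirhash.Hash1,
        )
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(h1) // prints "h1:..."
    }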

Now that we've had some real-world experience with the lock file mechanism,
we can see that the chosen compromise was not ideal: it causes
"terraform init" to update the lock file significantly differently
depending on whether or not a particular provider is already cached.
Because the cache robs Terraform of the opportunity to fetch the official
checksums, the generated lock file is inherently non-portable, which is a
problem for any team that works with the same Terraform configuration on
multiple different platforms.

This change addresses that problem by essentially flipping the decision so
that we'll prioritize the lock file behavior over the provider cache
behavior. Now a global cache entry is eligible for use if and only if the
lock file already contains a checksum that matches the cache entry. This
means that the first time a particular configuration sees a new provider,
Terraform will always fetch it from the configured installation source
(typically the origin registry) and record the checksums from that source.

On subsequent installs of an already-locked provider version, Terraform
will consider the cache entry eligible and skip re-downloading the package.
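
Condensed into a self-contained sketch, the new rule looks roughly like
this (the function and variable names are illustrative; the real check is
the cached.MatchesAnyHash(preferredHashes) call in the installer diff
below):

    package main

    import "fmt"

    // cacheEntryEligible stands in for the installer's real check: a
    // cache entry is usable only when the lock file already records a
    // checksum that matches it.
    func cacheEntryEligible(cachedHash string, lockedHashes []string) bool {
        // No lock entry yet: ignore the cache and install from upstream
        // so that a complete set of checksums can be captured first.
        if len(lockedHashes) == 0 {
            return false
        }
        for _, h := range lockedHashes {
            if h == cachedHash {
                return true
            }
        }
        // Locked, but nothing matches: fetch from upstream in the hope
        // that the origin can provide a package that does match.
        return false
    }

    func main() {
        locked := []string{"h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84="}
        fmt.Println(cacheEntryEligible("h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84=", locked)) // true
        fmt.Println(cacheEntryEligible("h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84=", nil))    // false
    }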

This intentionally makes the global cache mechanism subordinate to the
lock file mechanism: the lock file must be populated in order for the
global cache to be effective. Anyone who has many separate configurations
that all refer to the same provider version will need to re-download the
provider once per configuration in order to gather the information needed
to populate the lock file, whereas before it would have been downloaded
only for the _first_ configuration using that provider.
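
In practice the per-configuration workflow looks something like the
following illustrative shell session (TF_PLUGIN_CACHE_DIR is the real
setting that enables the global cache; the directory names are invented):

    # First init of each configuration fetches from the origin registry
    # and records the provider's checksums in .terraform.lock.hcl.
    export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"
    cd app-a && terraform init

    # A sibling configuration using the same provider must also fetch it
    # once, because its own lock file starts out empty.
    cd ../app-b && terraform init

    # Any later init against a populated lock file links from the cache
    # instead of downloading again.
    cd ../app-a && terraform init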

This should therefore remove the most significant cause of folks ending
up with incomplete lock files that don't work for colleagues using other
platforms, at the expense of bypassing the cache for the first use of
each new package with each new configuration. This tradeoff seems
reasonable because otherwise such users would inevitably need to run
"terraform providers lock" separately anyway, and that command _always_
bypasses the cache. Although this change does decrease the cache's hit
rate, if we subtract the never-cached downloads caused by
"terraform providers lock" then it is a net benefit overall, and it does
the right thing by default without the need to run a separate command.
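
For teams that span platforms, the usual companion step is to pre-seed
the lock file with checksums for every platform in use; the command and
its -platform option are real, while the particular platform list is just
an example:

    terraform providers lock \
      -platform=linux_amd64 \
      -platform=darwin_arm64 \
      -platform=windows_amd64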
Martin Atkins 2022-10-31 12:31:12 -07:00
parent be5984d664
commit d0a35c60a7
5 changed files with 517 additions and 93 deletions


@@ -0,0 +1,14 @@
# The global cache is only an eligible installation source if there's already
# a lock entry for the given provider and it contains at least one checksum
# that matches the cache entry.
#
# This lock file therefore matches the "not a real provider" fake executable
# under the "cache" directory, rather than the real provider from upstream,
# so that Terraform CLI will consider the cache entry as valid.
provider "registry.terraform.io/hashicorp/template" {
version = "2.1.0"
hashes = [
"h1:e7YvVlRZlaZJ8ED5KnH0dAg0kPL0nAU7eEoCAZ/sOos=",
]
}


@@ -347,98 +347,134 @@ NeedProvider:
// Step 3a: If our global cache already has this version available then
// we'll just link it in.
if cached := i.globalCacheDir.ProviderVersion(provider, version); cached != nil {
    // An existing cache entry is only an acceptable choice
    // if there is already a lock file entry for this provider
    // and the cache entry matches its checksums.
    //
    // If there was no lock file entry at all then we need to
    // install the package for real so that we can lock as complete
    // as possible a set of checksums for all of this provider's
    // packages.
    //
    // If there was a lock file entry but the cache doesn't match
    // it then we assume that the lock file checksums were only
    // partially populated (e.g. from a local mirror where we can
    // only see one package to checksum it) and so we'll fetch
    // from upstream to see if the origin can give us a package
    // that _does_ match. This might still not work out, but if
    // it does then it allows us to avoid returning a checksum
    // mismatch error.
    acceptablePackage := false
    if len(preferredHashes) != 0 {
        var err error
        acceptablePackage, err = cached.MatchesAnyHash(preferredHashes)
        if err != nil {
            // If we can't calculate the checksum for the cached
            // package then we'll just treat it as a checksum failure.
            acceptablePackage = false
        }
    }

    // TODO: Should we emit an event through the events object
    // for "there was an entry in the cache but we ignored it
    // because the checksum didn't match"? We can't use
    // LinkFromCacheFailure in that case because this isn't a
    // failure. For now we'll just be quiet about it.

    if acceptablePackage {
        if cb := evts.LinkFromCacheBegin; cb != nil {
            cb(provider, version, i.globalCacheDir.baseDir)
        }
        if _, err := cached.ExecutableFile(); err != nil {
            err := fmt.Errorf("provider binary not found: %s", err)
            errs[provider] = err
            if cb := evts.LinkFromCacheFailure; cb != nil {
                cb(provider, version, err)
            }
            continue
        }

        err := i.targetDir.LinkFromOtherCache(cached, preferredHashes)
        if err != nil {
            errs[provider] = err
            if cb := evts.LinkFromCacheFailure; cb != nil {
                cb(provider, version, err)
            }
            continue
        }
        // We'll fetch what we just linked to make sure it actually
        // did show up there.
        new := i.targetDir.ProviderVersion(provider, version)
        if new == nil {
            err := fmt.Errorf("after linking %s from provider cache at %s it is still not detected in the target directory; this is a bug in Terraform", provider, i.globalCacheDir.baseDir)
            errs[provider] = err
            if cb := evts.LinkFromCacheFailure; cb != nil {
                cb(provider, version, err)
            }
            continue
        }

        // The LinkFromOtherCache call above should've verified that
        // the package matches one of the hashes previously recorded,
        // if any. We'll now augment those hashes with one freshly
        // calculated from the package we just linked, which allows
        // the lock file to gradually transition to recording newer hash
        // schemes when they become available.
        var priorHashes []getproviders.Hash
        if lock != nil && lock.Version() == version {
            // If the version we're installing is identical to the
            // one we previously locked then we'll keep all of the
            // hashes we saved previously and add to it. Otherwise
            // we'll be starting fresh, because each version has its
            // own set of packages and thus its own hashes.
            priorHashes = append(priorHashes, preferredHashes...)

            // NOTE: The behavior here is unfortunate when a particular
            // provider version was already cached on the first time
            // the current configuration requested it, because that
            // means we don't currently get the opportunity to fetch
            // and verify the checksums for the new package from
            // upstream. That's currently unavoidable because upstream
            // checksums are in the "ziphash" format and so we can't
            // verify them against our cache directory's unpacked
            // packages: we'd need to go fetch the package from the
            // origin and compare against it, which would defeat the
            // purpose of the global cache.
            //
            // If we fetch from upstream on the first encounter with
            // a particular provider then we'll end up in the other
            // codepath below where we're able to also include the
            // checksums from the origin registry.
        }
        newHash, err := cached.Hash()
        if err != nil {
            err := fmt.Errorf("after linking %s from provider cache at %s, failed to compute a checksum for it: %s", provider, i.globalCacheDir.baseDir, err)
            errs[provider] = err
            if cb := evts.LinkFromCacheFailure; cb != nil {
                cb(provider, version, err)
            }
            continue
        }
        // The hashes slice gets deduplicated in the lock file
        // implementation, so we don't worry about potentially
        // creating a duplicate here.
        var newHashes []getproviders.Hash
        newHashes = append(newHashes, priorHashes...)
        newHashes = append(newHashes, newHash)
        locks.SetProvider(provider, version, reqs[provider], newHashes)
        if cb := evts.ProvidersLockUpdated; cb != nil {
            // We want to ensure that newHash and priorHashes are
            // sorted. newHash is a single value, so it's definitely
            // sorted. priorHashes are pulled from the lock file, so
            // are also already sorted.
            cb(provider, version, []getproviders.Hash{newHash}, nil, priorHashes)
        }

        if cb := evts.LinkFromCacheSuccess; cb != nil {
            cb(provider, version, new.PackageDir)
        }
        continue // Don't need to do full install, then.
    }
}
@@ -491,7 +527,7 @@ NeedProvider:
}
new := installTo.ProviderVersion(provider, version)
if new == nil {
err := fmt.Errorf("after installing %s it is still not detected in %s; this is a bug in Terraform", provider, installTo.BasePath())
errs[provider] = err
if cb := evts.FetchPackageFailure; cb != nil {
cb(provider, version, err)
@@ -521,6 +557,28 @@ NeedProvider:
}
continue
}
// We should now also find the package in the linkTo dir, which
// gives us the final value of "new" where the path points in to
// the true target directory, rather than possibly the global
// cache directory.
new = linkTo.ProviderVersion(provider, version)
if new == nil {
err := fmt.Errorf("after installing %s it is still not detected in %s; this is a bug in Terraform", provider, linkTo.BasePath())
errs[provider] = err
if cb := evts.FetchPackageFailure; cb != nil {
cb(provider, version, err)
}
continue
}
if _, err := new.ExecutableFile(); err != nil {
err := fmt.Errorf("provider binary not found: %s", err)
errs[provider] = err
if cb := evts.FetchPackageFailure; cb != nil {
cb(provider, version, err)
}
continue
}
}
authResults[provider] = authResult


@@ -327,10 +327,7 @@ func TestEnsureProviderVersions(t *testing.T) {
AuthResult string
}{
"2.1.0",
filepath.Join(dir.BasePath(), "example.com/foo/beep/2.1.0/bleep_bloop"),
"unauthenticated",
},
},
@@ -338,7 +335,7 @@ func TestEnsureProviderVersions(t *testing.T) {
}
},
},
"successful initial install of one provider through a warm global cache but without a lock file entry": {
Source: getproviders.NewMockSource(
[]getproviders.PackageMeta{
{
@@ -416,6 +413,12 @@ func TestEnsureProviderVersions(t *testing.T) {
beepProvider: getproviders.MustParseVersionConstraints(">= 2.0.0"),
},
},
{
Event: "ProvidersFetched",
Args: map[addrs.Provider]*getproviders.PackageAuthenticationResult{
beepProvider: nil,
},
},
},
beepProvider: {
{
@@ -431,6 +434,162 @@ func TestEnsureProviderVersions(t *testing.T) {
Provider: beepProvider,
Args: "2.1.0",
},
// Existing cache entry is ineligible for linking because
// we have no lock file checksums to compare it to.
// Instead, we install from upstream and lock with
// whatever checksums we learn in that process.
{
Event: "FetchPackageMeta",
Provider: beepProvider,
Args: "2.1.0",
},
{
Event: "FetchPackageBegin",
Provider: beepProvider,
Args: struct {
Version string
Location getproviders.PackageLocation
}{
"2.1.0",
beepProviderDir,
},
},
{
Event: "ProvidersLockUpdated",
Provider: beepProvider,
Args: struct {
Version string
Local []getproviders.Hash
Signed []getproviders.Hash
Prior []getproviders.Hash
}{
"2.1.0",
[]getproviders.Hash{"h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84="},
nil,
nil,
},
},
{
Event: "FetchPackageSuccess",
Provider: beepProvider,
Args: struct {
Version string
LocalDir string
AuthResult string
}{
"2.1.0",
filepath.Join(dir.BasePath(), "/example.com/foo/beep/2.1.0/bleep_bloop"),
"unauthenticated",
},
},
},
}
},
},
"successful initial install of one provider through a warm global cache and correct locked checksum": {
Source: getproviders.NewMockSource(
[]getproviders.PackageMeta{
{
Provider: beepProvider,
Version: getproviders.MustParseVersion("2.0.0"),
TargetPlatform: fakePlatform,
Location: beepProviderDir,
},
{
Provider: beepProvider,
Version: getproviders.MustParseVersion("2.1.0"),
TargetPlatform: fakePlatform,
Location: beepProviderDir,
},
},
nil,
),
LockFile: `
# The existing cache entry is valid only if it matches a
# checksum already recorded in the lock file.
provider "example.com/foo/beep" {
version = "2.1.0"
constraints = ">= 1.0.0"
hashes = [
"h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84=",
]
}
`,
Prepare: func(t *testing.T, inst *Installer, dir *Dir) {
globalCacheDirPath := tmpDir(t)
globalCacheDir := NewDirWithPlatform(globalCacheDirPath, fakePlatform)
_, err := globalCacheDir.InstallPackage(
context.Background(),
getproviders.PackageMeta{
Provider: beepProvider,
Version: getproviders.MustParseVersion("2.1.0"),
TargetPlatform: fakePlatform,
Location: beepProviderDir,
},
nil,
)
if err != nil {
t.Fatalf("failed to populate global cache: %s", err)
}
inst.SetGlobalCacheDir(globalCacheDir)
},
Mode: InstallNewProvidersOnly,
Reqs: getproviders.Requirements{
beepProvider: getproviders.MustParseVersionConstraints(">= 2.0.0"),
},
Check: func(t *testing.T, dir *Dir, locks *depsfile.Locks) {
if allCached := dir.AllAvailablePackages(); len(allCached) != 1 {
t.Errorf("wrong number of cache directory entries; want only one\n%s", spew.Sdump(allCached))
}
if allLocked := locks.AllProviders(); len(allLocked) != 1 {
t.Errorf("wrong number of provider lock entries; want only one\n%s", spew.Sdump(allLocked))
}
gotLock := locks.Provider(beepProvider)
wantLock := depsfile.NewProviderLock(
beepProvider,
getproviders.MustParseVersion("2.1.0"),
getproviders.MustParseVersionConstraints(">= 2.0.0"),
[]getproviders.Hash{beepProviderHash},
)
if diff := cmp.Diff(wantLock, gotLock, depsfile.ProviderLockComparer); diff != "" {
t.Errorf("wrong lock entry\n%s", diff)
}
gotEntry := dir.ProviderLatestVersion(beepProvider)
wantEntry := &CachedProvider{
Provider: beepProvider,
Version: getproviders.MustParseVersion("2.1.0"),
PackageDir: filepath.Join(dir.BasePath(), "example.com/foo/beep/2.1.0/bleep_bloop"),
}
if diff := cmp.Diff(wantEntry, gotEntry); diff != "" {
t.Errorf("wrong cache entry\n%s", diff)
}
},
WantEvents: func(inst *Installer, dir *Dir) map[addrs.Provider][]*testInstallerEventLogItem {
return map[addrs.Provider][]*testInstallerEventLogItem{
noProvider: {
{
Event: "PendingProviders",
Args: map[addrs.Provider]getproviders.VersionConstraints{
beepProvider: getproviders.MustParseVersionConstraints(">= 2.0.0"),
},
},
},
beepProvider: {
{
Event: "QueryPackagesBegin",
Provider: beepProvider,
Args: struct {
Constraints string
Locked bool
}{">= 2.0.0", true},
},
{
Event: "QueryPackagesSuccess",
Provider: beepProvider,
Args: "2.1.0",
},
{
Event: "LinkFromCacheBegin",
Provider: beepProvider,
@@ -454,7 +613,7 @@ func TestEnsureProviderVersions(t *testing.T) {
"2.1.0",
[]getproviders.Hash{"h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84="},
nil,
[]getproviders.Hash{"h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84="},
},
},
{
@@ -472,6 +631,184 @@ func TestEnsureProviderVersions(t *testing.T) {
}
},
},
"successful initial install of one provider through a warm global cache with an incompatible checksum": {
Source: getproviders.NewMockSource(
[]getproviders.PackageMeta{
{
Provider: beepProvider,
Version: getproviders.MustParseVersion("2.0.0"),
TargetPlatform: fakePlatform,
Location: beepProviderDir,
},
{
Provider: beepProvider,
Version: getproviders.MustParseVersion("2.1.0"),
TargetPlatform: fakePlatform,
Location: beepProviderDir,
},
},
nil,
),
LockFile: `
# This is approximating the awkward situation where the lock
# file was populated by someone who installed from a location
# other than the origin registry and so the set of checksums
# is incomplete. In this case we can't prove that our cache
# entry is valid and so we silently ignore the cache entry
# and try to install from upstream anyway, in the hope that
# this will give us an opportunity to access the origin
# registry and get a checksum that works for the current
# platform.
provider "example.com/foo/beep" {
version = "2.1.0"
constraints = ">= 1.0.0"
hashes = [
# NOTE: This is the correct checksum for the
# beepProviderDir package, but we're going to
# intentionally install from a different directory
# below so that the entry in the cache will not
# match this checksum.
"h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84=",
]
}
`,
Prepare: func(t *testing.T, inst *Installer, dir *Dir) {
// This is another "beep provider" package directory that
// has a different checksum than the one in beepProviderDir.
// We're mimicking the situation where the lock file was
// originally built from beepProviderDir but the local system
// is running on a different platform and so its existing
// cache entry doesn't match the checksum.
beepProviderOtherPlatformDir := getproviders.PackageLocalDir("testdata/beep-provider-other-platform")
globalCacheDirPath := tmpDir(t)
globalCacheDir := NewDirWithPlatform(globalCacheDirPath, fakePlatform)
_, err := globalCacheDir.InstallPackage(
context.Background(),
getproviders.PackageMeta{
Provider: beepProvider,
Version: getproviders.MustParseVersion("2.1.0"),
TargetPlatform: fakePlatform,
Location: beepProviderOtherPlatformDir,
},
nil,
)
if err != nil {
t.Fatalf("failed to populate global cache: %s", err)
}
inst.SetGlobalCacheDir(globalCacheDir)
},
Mode: InstallNewProvidersOnly,
Reqs: getproviders.Requirements{
beepProvider: getproviders.MustParseVersionConstraints(">= 2.0.0"),
},
Check: func(t *testing.T, dir *Dir, locks *depsfile.Locks) {
if allCached := dir.AllAvailablePackages(); len(allCached) != 1 {
t.Errorf("wrong number of cache directory entries; want only one\n%s", spew.Sdump(allCached))
}
if allLocked := locks.AllProviders(); len(allLocked) != 1 {
t.Errorf("wrong number of provider lock entries; want only one\n%s", spew.Sdump(allLocked))
}
gotLock := locks.Provider(beepProvider)
wantLock := depsfile.NewProviderLock(
beepProvider,
getproviders.MustParseVersion("2.1.0"),
getproviders.MustParseVersionConstraints(">= 2.0.0"),
[]getproviders.Hash{beepProviderHash},
)
if diff := cmp.Diff(wantLock, gotLock, depsfile.ProviderLockComparer); diff != "" {
t.Errorf("wrong lock entry\n%s", diff)
}
gotEntry := dir.ProviderLatestVersion(beepProvider)
wantEntry := &CachedProvider{
Provider: beepProvider,
Version: getproviders.MustParseVersion("2.1.0"),
PackageDir: filepath.Join(dir.BasePath(), "example.com/foo/beep/2.1.0/bleep_bloop"),
}
if diff := cmp.Diff(wantEntry, gotEntry); diff != "" {
t.Errorf("wrong cache entry\n%s", diff)
}
},
WantEvents: func(inst *Installer, dir *Dir) map[addrs.Provider][]*testInstallerEventLogItem {
return map[addrs.Provider][]*testInstallerEventLogItem{
noProvider: {
{
Event: "PendingProviders",
Args: map[addrs.Provider]getproviders.VersionConstraints{
beepProvider: getproviders.MustParseVersionConstraints(">= 2.0.0"),
},
},
{
Event: "ProvidersFetched",
Args: map[addrs.Provider]*getproviders.PackageAuthenticationResult{
beepProvider: nil,
},
},
},
beepProvider: {
{
Event: "QueryPackagesBegin",
Provider: beepProvider,
Args: struct {
Constraints string
Locked bool
}{">= 2.0.0", true},
},
{
Event: "QueryPackagesSuccess",
Provider: beepProvider,
Args: "2.1.0",
},
{
Event: "FetchPackageMeta",
Provider: beepProvider,
Args: "2.1.0",
},
{
Event: "FetchPackageBegin",
Provider: beepProvider,
Args: struct {
Version string
Location getproviders.PackageLocation
}{
"2.1.0",
beepProviderDir,
},
},
{
Event: "ProvidersLockUpdated",
Provider: beepProvider,
Args: struct {
Version string
Local []getproviders.Hash
Signed []getproviders.Hash
Prior []getproviders.Hash
}{
"2.1.0",
[]getproviders.Hash{"h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84="},
nil,
[]getproviders.Hash{"h1:2y06Ykj0FRneZfGCTxI9wRTori8iB7ZL5kQ6YyEnh84="},
},
},
{
Event: "FetchPackageSuccess",
Provider: beepProvider,
Args: struct {
Version string
LocalDir string
AuthResult string
}{
"2.1.0",
filepath.Join(dir.BasePath(), "/example.com/foo/beep/2.1.0/bleep_bloop"),
"unauthenticated",
},
},
},
}
},
},
"successful reinstall of one previously-locked provider": {
Source: getproviders.NewMockSource(
[]getproviders.PackageMeta{
@@ -1602,7 +1939,7 @@ func TestEnsureProviderVersions(t *testing.T) {
inst := NewInstaller(outputDir, source)
if test.Prepare != nil {
test.Prepare(t, inst, outputDir)
}
locks, lockDiags := depsfile.LoadLocksFromBytes([]byte(test.LockFile), "test.lock.hcl")
if lockDiags.HasErrors() {


@@ -125,6 +125,14 @@ func installFromLocalArchive(ctx context.Context, meta getproviders.PackageMeta,
filename := meta.Location.String()
// NOTE: We're not checking whether there's already a directory at
// targetDir with some files in it. Packages are supposed to be immutable
// and therefore we'll just be overwriting all of the existing files with
// their same contents unless something unusual is happening. If something
// unusual _is_ happening then this will produce something that doesn't
// match the allowed hashes and so our caller should catch that after
// we return if so.
err := unzip.Decompress(targetDir, filename, true, 0000)
if err != nil {
return authResult, err


@@ -0,0 +1,7 @@
This is not a real provider executable. It's just here to give the installer
something to copy in some of our installer test cases.
This must be different than the file of the same name in the sibling directory
"beep-provider", because we're using this to stand in for a valid package
that was built for a different platform than the one whose checksum is recorded
in the lock file.