Nested files aren't supported on MinIO, and as our storage layout is
filesystem-based, we don't actually use nested files in the code.
Remove the test so that we can support MinIO.
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
This fixes some of the tests for MinIO.
The walk tests need a version of MinIO that contains https://github.com/minio/minio/pull/18099
The storage classes MinIO supports are a subset of the S3 classes.
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
This commit removes some `conn` parameters of private functions, which can
be obtained from the struct itself. The `conn` is from the old `redigo` library,
which was replaced by `go-redis` in #4019.
Signed-off-by: bin liu <liubin0329@gmail.com>
In terms of results, a `manifestTagsPathSpec{ name: "repo" }` equals a
`manifestTagPathSpec{ name: "repo", tag: "" }`, but going by intent,
the `manifestTagsPathSpec` should be used.
Signed-off-by: bin liu <liubin0329@gmail.com>
This commit cleans up and attempts to optimise the performance of image push in the S3 driver.
There are 2 main changes:
* we refactor the S3 driver Writer: instead of using separate byte slices
for ready and pending parts, which had data constantly appended to them
and caused unnecessary allocations, we use optimised byte buffers and
make sure they are used efficiently when written to.
* we introduce a memory pool that is used for allocating the byte
buffers introduced above (sketched below)
These changes should alleviate high memory pressure on the push path to S3.
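A minimal sketch of the pool pattern, with illustrative names rather than the driver's actual code:
```go
// Sketch of a memory pool handing out reusable byte buffers; illustrative
// names, not the driver's actual implementation.
package sketch

import (
	"bytes"
	"sync"
)

var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

// getBuffer borrows a buffer; putBuffer resets and returns it, so the
// backing array is reused instead of reallocated on every part upload.
func getBuffer() *bytes.Buffer { return bufPool.Get().(*bytes.Buffer) }

func putBuffer(b *bytes.Buffer) {
	b.Reset()
	bufPool.Put(b)
}
```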
Co-authored-by: Cory Snider <corhere@gmail.com>
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
Add two new checks to the testsuite verifying that the driver handles
zero-byte files, and appends to zero-byte files, correctly.
Signed-off-by: Neil Wilson <neil@aldur.co.uk>
In case drvr.PutContent fails and returns an error we'd have
some extra memory allocated, though in this case
(a test with a known size of the slice being iterated), that's fine.
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
Only some of the S3 storage driver calls were propagating context to the
S3 API calls. This commit updates the S3 storage driver so the context
is propagated to all the S3 API calls (see the sketch below).
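A minimal sketch of the pattern, assuming aws-sdk-go v1 (names are illustrative):
```go
// Sketch: prefer the *WithContext call variants so cancellation and
// deadlines propagate to the S3 API call.
package sketch

import (
	"context"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

func getObject(ctx context.Context, svc *s3.S3, bucket, key string) (*s3.GetObjectOutput, error) {
	// Before: svc.GetObject(...) ignored the caller's context.
	return svc.GetObjectWithContext(ctx, &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
}
```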
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
This integrates the new module, which was extracted from this repository
at commit b9b19409cf458dcb9e1253ff44ba75bd0620faa6:
```bash
# install filter-repo (https://github.com/newren/git-filter-repo/blob/main/INSTALL.md)
brew install git-filter-repo

# create a temporary clone of docker
cd ~/Projects
git clone https://github.com/distribution/distribution.git reference
cd reference

# the commit the code was taken from
git rev-parse --verify HEAD
# b9b19409cf

# remove all code, except for general files, 'reference/', and rename to /
git filter-repo \
    --path .github/workflows/codeql-analysis.yml \
    --path .github/workflows/fossa.yml \
    --path .golangci.yml \
    --path distribution-logo.svg \
    --path CODE-OF-CONDUCT.md \
    --path CONTRIBUTING.md \
    --path GOVERNANCE.md \
    --path README.md \
    --path LICENSE \
    --path MAINTAINERS \
    --path-glob 'reference/*.*' \
    --path-rename reference/:

# initialize go.mod
go mod init github.com/distribution/reference
go mod tidy -go=1.20
```
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Storage drivers may be able to take advantage of the hint to start
their walk more efficiently.
For S3: the API takes a start-after parameter. Registries with many
repositories can drastically reduce calls to S3 by telling S3 to only
list results lexicographically after the last parameter.
For the fallback: we can start deeper in the tree and avoid statting
the files and directories before the hint in a walk. For a filesystem
this improves performance a little, but many of the API-based drivers
are currently treated like a filesystem, so this drastically improves
the performance of the GCS and Azure blob drivers.
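As a rough illustration (not the driver's actual code), S3's ListObjectsV2 API exposes this hint directly via its StartAfter parameter (aws-sdk-go v1):
```go
// Sketch: ask S3 to list only keys lexicographically after the hint.
package sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

func listAfter(svc *s3.S3, bucket, prefix, hint string) ([]*s3.Object, error) {
	resp, err := svc.ListObjectsV2(&s3.ListObjectsV2Input{
		Bucket:     aws.String(bucket),
		Prefix:     aws.String(prefix),
		StartAfter: aws.String(hint), // S3 skips keys <= hint
	})
	if err != nil {
		return nil, err
	}
	return resp.Contents, nil
}
```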
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
We are replacing the very outdated redigo Go module with the official
redis Go module, go-redis.
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
This commit removes the `oss` storage driver from distribution, as well as
the `alicdn` storage middleware, which only works with the `oss` driver.
There are several reasons for it:
* no real-life expertise among the maintainers
* OSS is compatible with the S3 API operations required by the S3 storage driver
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
The Azure tests fail if there is no Azure configuration available;
instead they should be skipped.
Also, one of the Azure tests is wrong and doesn't match the code.
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
Other storage drivers will only return children and below; S3 should do
the same. The only reason it was doing otherwise was the addition
of a / to ensure we treat the from as a directory.
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
This test will only work on an S3 bucket on an S3 Outpost. Most
developers won't have access to one of these.
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
If we haven't set a storage class, there's no point in checking the
storage class applied to the object - S3 will choose one.
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
This commit removes the swift storage driver from distribution.
There are several reasons for it:
* no real-life expertise among the maintainers
* Swift is compatible with the S3 API operations required by the S3 storage driver
This also removes dependencies that are hard to keep up with.
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
Split request and hit metrics into separate metrics, rather than using
labels. This avoids duplication of data and makes metric math easier
(see the sketch below).
* Count cache errors separately to avoid weird math.
* Hit ratio: `registry_storage_cache_hits_total / registry_storage_cache_requests_total`
* Miss ratio: `1 - (registry_storage_cache_hits_total / registry_storage_cache_requests_total)`
* Misses: `registry_storage_cache_requests_total - registry_storage_cache_hits_total`
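A rough sketch of the layout, assuming the Prometheus Go client (the errors metric name below is hypothetical):
```go
// Sketch: separate counters instead of one counter with a hits/misses label.
package sketch

import "github.com/prometheus/client_golang/prometheus"

var (
	cacheRequests = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "registry_storage_cache_requests_total",
		Help: "Total number of cache requests.",
	})
	cacheHits = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "registry_storage_cache_hits_total",
		Help: "Total number of cache hits.",
	})
	// Hypothetical name: errors counted separately to keep the math clean.
	cacheErrors = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "registry_storage_cache_errors_total",
		Help: "Total number of cache errors.",
	})
)

func init() {
	prometheus.MustRegister(cacheRequests, cacheHits, cacheErrors)
}
```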
Signed-off-by: Ben Kochie <superq@gmail.com>
This enables Go build tags so the GCS and OSS driver support is
available in the binary distributed via the image built by the Dockerfile.
This led to quite a few fixes in the GCS and OSS packages raised as
warnings by the golangci-lint linter.
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
Something seems broken on the Azure/Azure SDK side - it is currently not
possible to copy a blob of type AppendBlob using `CopyFromURL`.
Using the AppendBlob client via NewAppendBlobClient does not work
either.
According to Azure, the correct way to do this is by using
StartCopyFromURL. Because this is an async operation, we need to do the
polling ourselves. A simple backoff mechanism is used, where during each
iteration the configured delay is multiplied by the retry number
(sketched below).
Also introduces two new config options for the Azure driver:
copy_status_poll_max_retry, and copy_status_poll_delay.
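A minimal sketch of that polling loop, with hypothetical names rather than the driver's actual code:
```go
// Sketch: poll an async copy, sleeping delay * retry between attempts.
package sketch

import (
	"context"
	"fmt"
	"time"
)

func waitForCopy(ctx context.Context, poll func() (done bool, err error), delay time.Duration, maxRetry int) error {
	for retry := 1; retry <= maxRetry; retry++ {
		done, err := poll()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		// Backoff: the configured delay multiplied by the retry number.
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay * time.Duration(retry)):
		}
	}
	return fmt.Errorf("copy not complete after %d retries", maxRetry)
}
```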
Signed-off-by: Flavian Missi <fmissi@redhat.com>
It is desirable to remove Platform from distribution.Descriptor because it doesn't really belong there. However, this would be a further breaking change because the References() call would no longer return platform information when it returns descriptors of manifests, which it started to do for OCI Indices after c94f288; this feature was added to Docker Manifest Lists in 1a059fe. I don't want to take away something people clearly want.
Signed-off-by: Bracken Dawson <abdawson@gmail.com>
Both the oss and gcs drivers were missing the context parameter that is
required to satisfy the storagedriver.FileWriter interface.
Signed-off-by: Flavian Missi <fmissi@redhat.com>
Stat(ctx, "/") is called by the registry healthcheck.
Also fixes blob name building in the Azure driver so it no longer
returns empty blob names. This was causing errors in the healthcheck
call to Stat for Azure.
Signed-off-by: Flavian Missi <fmissi@redhat.com>
Dot-imports were only used in a couple of places, and replacing them
makes it more explicit what's imported.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Microsoft has updated the golang Azure SDK significantly. Update the
azure storage driver to use the new SDK. Add support for client
secret and MSI authentication schemes in addition to shared key
authentication.
Implement rootDirectory support for the azure storage driver to mirror
the S3 driver.
Signed-off-by: Kirat Singh <kirat.singh@beacon.io>
Co-authored-by: Cory Snider <corhere@gmail.com>
- DRY out SchemaVersion literals
- Better name the predefined Versioned struct for the Image Index
- Var names, declarations, else cases.
Co-authored-by: Milos Gajdos <milosthegajdos@gmail.com>
Signed-off-by: Bracken Dawson <abdawson@gmail.com>
Empty platform structs were already supported after splitting OCI Image
Index out from Docker Manifest List.
Signed-off-by: Bracken Dawson <abdawson@gmail.com>
Move the implementation of the index from the manifestlist package to the ocischema package so that other modules relying on blank (empty) imports get support for the manifest types their authors would expect. This is a breaking change to distribution as a library, but not to the registry.
As OCI 1.0 released the manifest and index together, that is a good package from which to initialise both manifests. The Docker manifest and manifest list remain in separate packages because one was released later.
The image index and manifest list still share common code in many functions not intended for import by other modules.
Signed-off-by: Bracken Dawson <abdawson@gmail.com>
This is an edge case when we are trying to upload an empty chunk of data using
a MultiPart upload. As a result we are trying to complete the MultipartUpload
with an empty slice of `completedUploadedParts`, which will always lead to a 400
being returned from S3. See: https://docs.aws.amazon.com/sdk-for-go/api/service/s3/#CompletedMultipartUpload
Solution: we upload an empty, i.e. 0-byte, part as a single part and then append it
to the completedUploadedParts slice used to complete the Multipart upload
(sketched below).
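A rough sketch of the fix, assuming aws-sdk-go v1 (the helper name is hypothetical):
```go
// Sketch: upload a single 0-byte part so CompleteMultipartUpload is never
// called with an empty parts slice.
package sketch

import (
	"bytes"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

func uploadZeroBytePart(svc *s3.S3, bucket, key, uploadID string) ([]*s3.CompletedPart, error) {
	resp, err := svc.UploadPart(&s3.UploadPartInput{
		Bucket:     aws.String(bucket),
		Key:        aws.String(key),
		UploadId:   aws.String(uploadID),
		PartNumber: aws.Int64(1),
		Body:       bytes.NewReader(nil), // the empty, 0-byte part
	})
	if err != nil {
		return nil, err
	}
	return []*s3.CompletedPart{{
		ETag:       resp.ETag,
		PartNumber: aws.Int64(1),
	}}, nil
}
```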
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
The loop that iterates over paginated lists of S3 multipart upload parts
appears to be using the wrong variable in its loop condition. Nothing
inside the loop affects the value of `resp.IsTruncated`, so this loop
will either be wrongly skipped or loop forever.
It looks like this is a regression caused by commit
7736319f2e. The return value of
`ListMultipartUploads` used to be assigned to a variable named `resp`,
but it was renamed to `partsList` without updating the for loop
condition.
I believe this is causing an error we're seeing with large layer uploads
at commit time:
upload resumed at wrong offset: 5242880000 != 5815706782
Missing parts of the multipart S3 upload would cause an incorrect size
calculation in `newWriter`.
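For contrast, a correct pagination loop might look like this sketch (hypothetical helper, aws-sdk-go v1), where the condition tests the same value the body reassigns:
```go
// Sketch: accumulate all parts, advancing the marker until no page is
// truncated; the loop tests the value that the body actually updates.
package sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

func listAllParts(svc *s3.S3, bucket, key, uploadID string) ([]*s3.Part, error) {
	input := &s3.ListPartsInput{
		Bucket:   aws.String(bucket),
		Key:      aws.String(key),
		UploadId: aws.String(uploadID),
	}
	var all []*s3.Part
	for {
		partsList, err := svc.ListParts(input)
		if err != nil {
			return nil, err
		}
		all = append(all, partsList.Parts...)
		if !aws.BoolValue(partsList.IsTruncated) {
			return all, nil
		}
		input.PartNumberMarker = partsList.NextPartNumberMarker
	}
}
```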
Signed-off-by: Aaron Lehmann <alehmann@netflix.com>
Previously we used a custom Transport in order to modify the user agent header.
This prevented the AWS SDK from customizing SSL and other client TLS
parameters, since it could not understand the Transport type.
Instead we can simply use the SDK function MakeAddToUserAgentFreeFormHandler to
customize the UserAgent if necessary and leave all the TLS configuration to the
AWS SDK (see the sketch below).
The only exception is SkipVerify, which we have to handle ourselves, but we can
set it on the standard http.Transport, which does not interfere with the SDK's
ability to set other options.
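A minimal sketch of the handler approach, assuming aws-sdk-go v1 (the agent string is illustrative):
```go
// Sketch: append a free-form user agent via a request handler instead of
// wrapping the Transport, leaving TLS configuration to the SDK.
package sketch

import (
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
)

func newSession() *session.Session {
	sess := session.Must(session.NewSession())
	sess.Handlers.Build.PushBack(
		request.MakeAddToUserAgentFreeFormHandler("distribution/example"))
	return sess
}
```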
Signed-off-by: Kirat Singh <kirat.singh@gmail.com>
`registry/storage/driver/inmemory/driver_test.go` times out after ~10min. The slow test is `testsuites.go:TestWriteReadLargeStreams()`, which writes a 5GB file.
The root cause is an inefficient slice reallocation algorithm: the slice holding the file bytes grows by only 32K on each allocation. To fix it, this PR grows the slice exponentially (sketched below).
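A minimal sketch of the growth strategy, not the actual patch:
```go
// Sketch: double the capacity until the write fits, rather than growing
// by a fixed 32K on every reallocation.
package sketch

func grow(buf []byte, need int) []byte {
	if cap(buf) >= need {
		return buf[:need]
	}
	newCap := cap(buf)
	if newCap == 0 {
		newCap = 32 * 1024
	}
	for newCap < need {
		newCap *= 2 // exponential growth keeps reallocations O(log n)
	}
	out := make([]byte, need, newCap)
	copy(out, buf)
	return out
}
```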
Signed-off-by: Wei Meng <wemeng@microsoft.com>
Go 1.18 and up now provide strings.Cut(), which is better suited for
splitting key/value pairs (and similar constructs), and performs better:
```go
package main

import (
	"strings"
	"testing"
)

func BenchmarkSplit(b *testing.B) {
	b.ReportAllocs()
	data := []string{"12hello=world", "12hello=", "12=hello", "12hello"}
	for i := 0; i < b.N; i++ {
		for _, s := range data {
			_ = strings.SplitN(s, "=", 2)[0]
		}
	}
}

func BenchmarkCut(b *testing.B) {
	b.ReportAllocs()
	data := []string{"12hello=world", "12hello=", "12=hello", "12hello"}
	for i := 0; i < b.N; i++ {
		for _, s := range data {
			_, _, _ = strings.Cut(s, "=")
		}
	}
}
```
```
BenchmarkSplit
BenchmarkSplit-10    8244206    128.0 ns/op    128 B/op    4 allocs/op
BenchmarkCut
BenchmarkCut-10     54411998    21.80 ns/op      0 B/op    0 allocs/op
```
While looking at occurrences of `strings.Split()`, I also updated some to use alternatives,
or added some constraints:
- for cases where a specific number of items is expected, I used `strings.SplitN()`
with a suitable limit. This prevents (theoretically) unlimited splits.
- in some cases we were using `strings.Split()`, but were _actually_ trying to match
a prefix; for those I replaced the code to just match (and/or strip) the prefix.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>