Stat(ctx, "/") is called by the registry healthcheck.
Also fixes blob name building in the Azure driver so it no longer
returns empty blob names. This was causing errors in the healthcheck
call to Stat for Azure.
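A minimal sketch of the shape of the fix; `blobName`, its parameters, and the "." fallback are illustrative assumptions, not the driver's actual code:
```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// blobName is a hypothetical helper showing the idea: join the configured
// root directory with the driver path and never return an empty name, so
// Stat(ctx, "/") maps to a valid blob name.
func blobName(rootDirectory, driverPath string) string {
	name := strings.TrimPrefix(path.Join("/", rootDirectory, driverPath), "/")
	if name == "" {
		// Assumed fallback: Stat(ctx, "/") with no root directory used to
		// end up here with an empty name, which Azure rejects.
		name = "."
	}
	return name
}

func main() {
	fmt.Println(blobName("", "/"))       // "." instead of ""
	fmt.Println(blobName("docker", "/")) // "docker"
}
```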
Signed-off-by: Flavian Missi <fmissi@redhat.com>
When configuring bugsnag, bugsnag forks the process, resulting in port 5001 being listened on twice. This PR fixes the error by moving the initialization of the Prometheus server to after the bugsnag configuration.
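A minimal sketch of the reordering, with hypothetical stand-ins for the registry's actual initialization code:
```go
package main

import "fmt"

// Config is a stand-in for the registry configuration.
type Config struct{ MetricsAddr string }

// configureBugsnag is a hypothetical stand-in for the real bugsnag setup,
// which may fork the process; it must run before any port is bound.
func configureBugsnag(cfg *Config) { fmt.Println("bugsnag configured") }

// startPrometheusServer is a hypothetical stand-in for binding the metrics
// listener.
func startPrometheusServer(cfg *Config) { fmt.Println("metrics listening on", cfg.MetricsAddr) }

func main() {
	cfg := &Config{MetricsAddr: ":5001"}
	// Order matters: configure bugsnag first, then bind :5001, so the fork
	// cannot leave the port listened on twice.
	configureBugsnag(cfg)
	startPrometheusServer(cfg)
}
```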
Signed-off-by: Honglin Feng <tifayuki@gmail.com>
(cherry picked from commit 5a6a2d6ae06453136f5e1cfb5e9efa20c27085d9)
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Signed-off-by: David van der Spek <vanderspek.david@gmail.com>
Dot-imports were only used in a couple of places, and replacing them
makes it more explicit what's imported.
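For illustration (a generic example, not one of the actual call sites):
```go
package main

// Before, with a dot-import the origin of identifiers is not obvious at the
// call site:
//
//	import . "strings"
//	_ = ToUpper("x")
//
// After, the named import makes it explicit:
import (
	"fmt"
	"strings"
)

func main() {
	fmt.Println(strings.ToUpper("explicit imports"))
}
```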
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Microsoft has updated the golang Azure SDK significantly. Update the
azure storage driver to use the new SDK. Add support for client
secret and MSI authentication schemes in addition to shared key
authentication.
Implement rootDirectory support for the azure storage driver to mirror
the S3 driver.
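A rough sketch of how the three schemes might be wired up with the new SDK (github.com/Azure/azure-sdk-for-go/sdk/azidentity and sdk/storage/azblob); the parameter plumbing and authType values are assumptions, not the driver's actual configuration:
```go
package main

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

// newAzureClient is a hypothetical sketch of selecting an auth scheme.
func newAzureClient(authType, account, key, tenantID, clientID, secret string) (*azblob.Client, error) {
	serviceURL := fmt.Sprintf("https://%s.blob.core.windows.net/", account)

	switch authType {
	case "sharedkey":
		cred, err := azblob.NewSharedKeyCredential(account, key)
		if err != nil {
			return nil, err
		}
		return azblob.NewClientWithSharedKeyCredential(serviceURL, cred, nil)
	case "clientsecret":
		cred, err := azidentity.NewClientSecretCredential(tenantID, clientID, secret, nil)
		if err != nil {
			return nil, err
		}
		return azblob.NewClient(serviceURL, cred, nil)
	default: // MSI / managed identity
		cred, err := azidentity.NewManagedIdentityCredential(nil)
		if err != nil {
			return nil, err
		}
		return azblob.NewClient(serviceURL, cred, nil)
	}
}

func main() {
	client, err := newAzureClient("sharedkey", "myaccount", "bXlrZXk=", "", "", "")
	fmt.Println(client != nil, err)
}
```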
Signed-off-by: Kirat Singh <kirat.singh@beacon.io>
Co-authored-by: Cory Snider <corhere@gmail.com>
Introduced a Catalog entry in the configuration struct. With it,
it's possible to control the maximum number of entries returned
by /v2/_catalog (`GetCatalog` in registry/handlers/catalog.go).
It's set to a default value of 1000.
`GetCatalog` returns 100 entries by default if no `n` is
provided. When provided, `n` is validated to be between `0`
and the `MaxEntries` defined in Configuration. When `n` falls
outside that range, an error response is returned. `GetCatalog`
now also handles `n=0` gracefully, returning an empty response.
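A condensed sketch of that validation (the constants and error shape are assumptions; the real handler wires this through the configuration struct):
```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
)

const (
	defaultEntries = 100  // returned when no ?n= is given
	maxEntries     = 1000 // default Catalog.MaxEntries from configuration
)

// parseCatalogN is a hypothetical sketch of validating the ?n= parameter.
func parseCatalogN(r *http.Request) (int, error) {
	q := r.URL.Query().Get("n")
	if q == "" {
		return defaultEntries, nil
	}
	n, err := strconv.Atoi(q)
	if err != nil || n < 0 || n > maxEntries {
		return 0, fmt.Errorf("n must be an integer between 0 and %d", maxEntries)
	}
	// n == 0 is allowed and yields an empty (but valid) response.
	return n, nil
}

func main() {
	r, _ := http.NewRequest(http.MethodGet, "/v2/_catalog?n=50", nil)
	n, err := parseCatalogN(r)
	fmt.Println(n, err) // 50 <nil>
}
```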
Signed-off-by: José D. Gómez R. <1josegomezr@gmail.com>
This is an edge case that occurs when uploading an empty chunk of data using
a multipart upload. We end up trying to complete the MultipartUpload
with an empty slice of `completedUploadedParts`, which always leads to S3
returning a 400. See: https://docs.aws.amazon.com/sdk-for-go/api/service/s3/#CompletedMultipartUpload
Solution: we upload an empty (i.e. 0-byte) part as a single part and then append it
to the `completedUploadedParts` slice used to complete the multipart upload.
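In sketch form, using aws-sdk-go v1 (the function name and plumbing are assumed):
```go
package main

import (
	"bytes"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// uploadZeroBytePart sketches the workaround: upload an explicit 0-byte part
// so CompleteMultipartUpload is never called with an empty parts slice.
func uploadZeroBytePart(svc *s3.S3, bucket, key, uploadID string) ([]*s3.CompletedPart, error) {
	resp, err := svc.UploadPart(&s3.UploadPartInput{
		Bucket:     aws.String(bucket),
		Key:        aws.String(key),
		UploadId:   aws.String(uploadID),
		PartNumber: aws.Int64(1),
		Body:       bytes.NewReader(nil), // the empty chunk
	})
	if err != nil {
		return nil, err
	}
	// Append the zero-byte part so the completion request is valid.
	return []*s3.CompletedPart{{
		ETag:       resp.ETag,
		PartNumber: aws.Int64(1),
	}}, nil
}

func main() {}
```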
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
The loop that iterates over paginated lists of S3 multipart upload parts
appears to be using the wrong variable in its loop condition. Nothing
inside the loop affects the value of `resp.IsTruncated`, so this loop
will either be wrongly skipped or loop forever.
It looks like this is a regression caused by commit
7736319f2e. The return value of
`ListMultipartUploads` used to be assigned to a variable named `resp`,
but it was renamed to `partsList` without updating the for loop
condition.
I believe this is causing an error we're seeing with large layer uploads
at commit time:
upload resumed at wrong offset: 5242880000 != 5815706782
Missing parts of the multipart S3 upload would cause an incorrect size
calculation in `newWriter`.
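A self-contained illustration of the bug class and the fix (the pager below is a stand-in for the S3 API):
```go
package main

import "fmt"

// page is a stand-in for a paginated S3 list response.
type page struct {
	IsTruncated bool
	NextMarker  int
}

// listPage is a hypothetical pager: it returns truncated pages until page 3.
func listPage(marker int) page {
	return page{IsTruncated: marker < 3, NextMarker: marker + 1}
}

func main() {
	resp := listPage(0)
	partsList := resp

	// Buggy shape: the condition reads resp, but the body only updates
	// partsList, so the loop is either wrongly skipped or never terminates.
	//
	//	for resp.IsTruncated {
	//		partsList = listPage(partsList.NextMarker)
	//	}

	// Fixed shape: the condition and the body use the same variable.
	for partsList.IsTruncated {
		partsList = listPage(partsList.NextMarker)
	}
	fmt.Println("fetched all pages up to marker", partsList.NextMarker)
}
```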
Signed-off-by: Aaron Lehmann <alehmann@netflix.com>
Previously we used a custom Transport in order to modify the user agent header.
This prevented the AWS SDK from being able to customize SSL and other client TLS
parameters since it could not understand the Transport type.
Instead we can simply use the SDK function MakeAddToUserAgentFreeFormHandler to
customize the UserAgent if necessary and leave all the TLS configuration to the
AWS SDK.
The only exception is SkipVerify, which we still have to handle ourselves, but we
can set it on a standard http.Transport, which does not interfere with the SDK's
ability to set other options.
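A sketch of the resulting setup with aws-sdk-go v1 (the user-agent string is illustrative):
```go
package main

import (
	"crypto/tls"
	"net/http"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	// SkipVerify is the one option we still handle ourselves, set on a plain
	// http.Transport that the SDK understands and can extend.
	httpClient := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // the skipverify case
	}}

	sess := session.Must(session.NewSession(aws.NewConfig().WithHTTPClient(httpClient)))

	// Add the custom user agent through the SDK's handler list instead of a
	// custom Transport; TLS configuration stays entirely with the SDK.
	sess.Handlers.Build.PushBack(request.MakeAddToUserAgentFreeFormHandler("docker-distribution/example"))
}
```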
Signed-off-by: Kirat Singh <kirat.singh@gmail.com>
Currently, "response completed with error" log lines include an
`auth.user.name` key, but successful "response completed" lines do not
include this, because they are logged a few stack frames up where
`auth.user.name` is not present on the `Context`. Move the successful
request logging inside the `dispatcher` closure, where the logger on the
context automatically includes this key.
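As a toy model of the change, with stand-ins for the registry's context-scoped logging:
```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"net/http/httptest"
)

type ctxKey string

// loggerFields is a toy stand-in for the registry's context-scoped logger.
func loggerFields(ctx context.Context) string {
	if u, ok := ctx.Value(ctxKey("auth.user.name")).(string); ok {
		return "auth.user.name=" + u
	}
	return ""
}

// dispatcher sketches the fix: log "response completed" inside the closure,
// where auth has already decorated the request context.
func dispatcher(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx := context.WithValue(r.Context(), ctxKey("auth.user.name"), "alice")
		next.ServeHTTP(w, r.WithContext(ctx))
		// Logging here sees the auth keys; logging a few frames up would not.
		fmt.Println("response completed", loggerFields(ctx))
	})
}

func main() {
	h := dispatcher(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}))
	req, _ := http.NewRequest(http.MethodGet, "/v2/", nil)
	h.ServeHTTP(httptest.NewRecorder(), req)
}
```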
Signed-off-by: Aaron Lehmann <alehmann@netflix.com>
`registry/storage/driver/inmemory/driver_test.go` times out after ~10min. The slow test is `testsuites.go:TestWriteReadLargeStreams()`, which writes a 5GB file.
The root cause is an inefficient slice reallocation algorithm: the slice holding the file's bytes grows by only 32K on each reallocation. To fix it, this PR grows the slice exponentially.
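A sketch of the exponential-growth idea (the helper is illustrative, not the test suite's actual code):
```go
package main

import "fmt"

// growSlice grows capacity geometrically (2x) instead of by a fixed 32K, so
// writing a 5GB file needs roughly 18 doublings from 32K rather than
// ~160,000 fixed-size reallocations.
func growSlice(buf []byte, need int) []byte {
	if need <= cap(buf) {
		return buf[:need]
	}
	newCap := cap(buf)
	if newCap == 0 {
		newCap = 32 * 1024
	}
	for newCap < need {
		newCap *= 2 // exponential growth
	}
	grown := make([]byte, need, newCap)
	copy(grown, buf)
	return grown
}

func main() {
	b := growSlice(nil, 100)
	b = growSlice(b, 1<<20)
	fmt.Println(len(b), cap(b)) // 1048576 1048576
}
```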
Signed-off-by: Wei Meng <wemeng@microsoft.com>
Go 1.18 and up now provide strings.Cut(), which is better suited for
splitting key/value pairs (and similar constructs), and performs better:
```go
func BenchmarkSplit(b *testing.B) {
	b.ReportAllocs()
	data := []string{"12hello=world", "12hello=", "12=hello", "12hello"}
	for i := 0; i < b.N; i++ {
		for _, s := range data {
			_ = strings.SplitN(s, "=", 2)[0]
		}
	}
}

func BenchmarkCut(b *testing.B) {
	b.ReportAllocs()
	data := []string{"12hello=world", "12hello=", "12=hello", "12hello"}
	for i := 0; i < b.N; i++ {
		for _, s := range data {
			_, _, _ = strings.Cut(s, "=")
		}
	}
}
```
```
BenchmarkSplit
BenchmarkSplit-10    	 8244206	     128.0 ns/op	     128 B/op	       4 allocs/op
BenchmarkCut
BenchmarkCut-10      	54411998	      21.80 ns/op	       0 B/op	       0 allocs/op
```
While looking at occurrences of `strings.Split()`, I also updated some to use alternatives,
or added some constraints:
- for cases where a specific number of items is expected, I used `strings.SplitN()`
  with a suitable limit. This prevents (theoretically) unlimited splits.
- in some cases we were using `strings.Split()` but were _actually_ trying to match
  a prefix; for those I replaced the code to just match (and/or strip) the prefix,
  as sketched below.
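For the prefix cases, the change looks roughly like this (the "bearer" example is illustrative, not one of the actual call sites):
```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	s := "bearer some-token"

	// Before: splitting the whole string just to inspect the first element:
	//
	//	parts := strings.Split(s, " ")
	//	if parts[0] == "bearer" { ... }

	// After: match and strip the prefix directly.
	if strings.HasPrefix(s, "bearer ") {
		token := strings.TrimPrefix(s, "bearer ")
		fmt.Println("token:", token)
	}
}
```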
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Embed the interface that we're mocking; calling any of its methods
that are not implemented will panic, which should give the same result
as before.
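A generic illustration of the pattern (Store is a made-up interface standing in for the mocked one):
```go
package main

import "fmt"

// Store is a stand-in for the interface being mocked.
type Store interface {
	Get(key string) (string, error)
	Put(key, value string) error
	Delete(key string) error
}

// mockStore embeds the interface: only Get is implemented; calling Put or
// Delete hits the nil embedded Store and panics, just as an unimplemented
// stub method would have.
type mockStore struct {
	Store
}

func (mockStore) Get(key string) (string, error) { return "value-for-" + key, nil }

func main() {
	var s Store = mockStore{}
	v, _ := s.Get("k")
	fmt.Println(v)
	// s.Delete("k") // would panic: nil pointer dereference on the embedded Store
}
```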
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Both of these were deprecated in 55f675811a,
but the format of the GoDoc comments didn't follow the correct convention, which
caused them not to be picked up by tools as "deprecated".
This patch updates uses in the codebase to use the alternatives.
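For reference, the comment format that tools do recognize (the names here are made up):
```go
package manifest

// ParseLegacy parses the legacy manifest format.
//
// Deprecated: use Parse instead. Linters and IDEs only recognize the marker
// when "Deprecated:" begins its own paragraph in the doc comment.
func ParseLegacy(data []byte) error { return Parse(data) }

// Parse parses the manifest format.
func Parse(data []byte) error { return nil }
```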
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>