Something seems broken on the Azure/Azure SDK side: it is currently not
possible to copy a blob of type AppendBlob using `CopyFromURL`.
Using the AppendBlob client via NewAppendBlobClient does not work
either.
According to Azure, the correct way to do this is via StartCopyFromURL.
Because this is an async operation, we need to poll for completion
ourselves. A simple linear backoff is used: on each iteration, the
configured delay is multiplied by the retry number.
Also introduces two new config options for the Azure driver:
copy_status_poll_max_retry and copy_status_poll_delay.
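A minimal sketch of that polling loop; copyStatus is a hypothetical
helper standing in for reading the copy status off the blob's
properties, and maxRetry/delay correspond to the two new options:

```go
// waitForCopy polls until the async copy completes, sleeping
// delay * retry between attempts (linear backoff).
func waitForCopy(ctx context.Context, maxRetry int, delay time.Duration) error {
	for retry := 1; retry <= maxRetry; retry++ {
		status, err := copyStatus(ctx) // hypothetical helper
		if err != nil {
			return err
		}
		if status == "success" {
			return nil
		}
		// simple backoff: multiply the configured delay by the retry number
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(delay * time.Duration(retry)):
		}
	}
	return fmt.Errorf("copy not finished after %d retries", maxRetry)
}
```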
Signed-off-by: Flavian Missi <fmissi@redhat.com>
Co-authored-by: Nicolas De Loof <nicolas.deloof@gmail.com>
Co-authored-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Laura Brehm <laurabrehm@hey.com>
Both the OSS and GCS drivers were missing the context parameter that is
required to satisfy the storagedriver.FileWriter interface.
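For reference, the interface in question looks roughly like this (a
sketch of the storagedriver package; the context arguments shown here
are the ones the drivers were missing):

```go
// Sketch of storagedriver.FileWriter; writers returned by each driver
// must implement these methods, including the context parameters.
type FileWriter interface {
	io.WriteCloser

	// Size returns the number of bytes written to this FileWriter.
	Size() int64

	// Cancel removes any written content from this FileWriter.
	Cancel(context.Context) error

	// Commit flushes all content written to this FileWriter and makes it
	// available for future reads.
	Commit(context.Context) error
}
```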
Signed-off-by: Flavian Missi <fmissi@redhat.com>
Stat(ctx, "/") is called by the registry healthcheck.
Also fixes blob name building in the Azure driver so it no longer
returns empty blob names. This was causing errors in the healthcheck
call to Stat for Azure.
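The failing path, roughly (a sketch; only the Stat call itself is taken
from the registry code):

```go
// The healthcheck probes the storage backend by statting the root path.
// With the old blob-name building, "/" mapped to an empty blob name on
// Azure and this call errored out.
func storageHealthCheck(ctx context.Context, driver storagedriver.StorageDriver) error {
	_, err := driver.Stat(ctx, "/")
	return err
}
```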
Signed-off-by: Flavian Missi <fmissi@redhat.com>
When bugsnag is configured, it forks the process, resulting in port 5001
being listened on twice. This PR fixes the error by moving the
initialization of the prometheus server to after the bugsnag
configuration.
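The fix is purely an ordering change; a sketch with illustrative
stand-ins (configureBugsnag, debugServer) for the registry's init code:

```go
func initReporting(config *configuration.Configuration) {
	// Configure bugsnag first: it may fork the process on startup, and any
	// listener opened before the fork ends up bound twice.
	configureBugsnag(config)

	// Only now start the prometheus/debug server on :5001, so the port is
	// bound exactly once, by the surviving process.
	go debugServer(config.HTTP.Debug.Addr)
}
```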
Signed-off-by: Honglin Feng <tifayuki@gmail.com>
(cherry picked from commit 5a6a2d6ae06453136f5e1cfb5e9efa20c27085d9)
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Signed-off-by: David van der Spek <vanderspek.david@gmail.com>
Dot-imports were only used in a couple of places, and replacing them
makes it more explicit what's imported.
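For example (import path illustrative of the current module layout):

```go
// Before (dot-import): at the call site it is not obvious that
// PathNotFoundError comes from the storage driver package.
//
//	import . "github.com/distribution/distribution/v3/registry/storage/driver"
//	err := PathNotFoundError{Path: "/missing"}
//
// After (named import): the origin is explicit at every use.
import storagedriver "github.com/distribution/distribution/v3/registry/storage/driver"

var err error = storagedriver.PathNotFoundError{Path: "/missing"}
```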
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Microsoft has updated the golang Azure SDK significantly. Update the
azure storage driver to use the new SDK. Add support for client
secret and MSI authentication schemes in addition to shared key
authentication.
Implement rootDirectory support for the azure storage driver to mirror
the S3 driver.
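A minimal sketch of the three auth schemes with the new SDK
(github.com/Azure/azure-sdk-for-go/sdk/storage/azblob and
sdk/azidentity); the scheme names and parameters here are illustrative,
not the driver's exact config keys:

```go
import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

// newServiceClient picks one of the three supported auth schemes.
func newServiceClient(authType, serviceURL, accountName, accountKey, tenantID, clientID, secret string) (*azblob.Client, error) {
	switch authType {
	case "sharedkey":
		cred, err := azblob.NewSharedKeyCredential(accountName, accountKey)
		if err != nil {
			return nil, err
		}
		return azblob.NewClientWithSharedKeyCredential(serviceURL, cred, nil)
	case "clientsecret":
		cred, err := azidentity.NewClientSecretCredential(tenantID, clientID, secret, nil)
		if err != nil {
			return nil, err
		}
		return azblob.NewClient(serviceURL, cred, nil)
	case "msi":
		cred, err := azidentity.NewManagedIdentityCredential(nil)
		if err != nil {
			return nil, err
		}
		return azblob.NewClient(serviceURL, cred, nil)
	default:
		return nil, fmt.Errorf("unsupported auth type %q", authType)
	}
}
```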
Signed-off-by: Kirat Singh <kirat.singh@beacon.io>
Co-authored-by: Cory Snider <corhere@gmail.com>
Introduced a Catalog entry in the configuration struct. With it, it's
possible to control the maximum number of entries returned by
/v2/_catalog (`GetCatalog` in registry/handlers/catalog.go). It defaults
to 1000.
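A sketch of what the new entry looks like in the configuration struct
(field and tag names as best understood; treat them as illustrative):

```go
// Catalog holds the settings for the catalog endpoint.
type Catalog struct {
	// MaxEntries caps the number of entries returned per request;
	// it defaults to 1000.
	MaxEntries int `yaml:"maxentries,omitempty"`
}
```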
`GetCatalog` returns 100 entries by default if no `n` is provided. When
provided, `n` is validated to be between `0` and the `MaxEntries` defined
in the Configuration. When `n` falls outside that boundary, an error
response is returned. `GetCatalog` now also handles `n=0` gracefully,
returning an empty response.
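The resulting boundary check, in sketch form (illustrative of the
handler logic, not the exact code):

```go
// n defaults to 100 when absent; when present it must lie in
// [0, maxEntries], and n == 0 simply yields an empty repository list.
func validateCatalogN(n, maxEntries int) error {
	if n < 0 || n > maxEntries {
		return fmt.Errorf("n must be between 0 and %d", maxEntries)
	}
	return nil
}
```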
Signed-off-by: José D. Gómez R. <1josegomezr@gmail.com>
This is an edge case that occurs when we try to upload an empty chunk of
data using a MultiPart upload. As a result, we try to complete the
MultipartUpload with an empty slice of `completedUploadedParts`, which
always leads to a 400 being returned from S3. See:
https://docs.aws.amazon.com/sdk-for-go/api/service/s3/#CompletedMultipartUpload
Solution: we upload an empty (i.e. 0-byte) part as a single part and then
append it to the `completedUploadedParts` slice used to complete the
MultiPart upload.
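In sketch form, with aws-sdk-go v1 types (`s3Client`, `bucket`, `key`,
`uploadID`, and the parts slice are assumed from the surrounding driver
code):

```go
// Upload a single zero-byte part so CompleteMultipartUpload has at least
// one entry instead of an empty parts list (which S3 rejects with a 400).
resp, err := s3Client.UploadPart(&s3.UploadPartInput{
	Bucket:     aws.String(bucket),
	Key:        aws.String(key),
	UploadId:   aws.String(uploadID),
	PartNumber: aws.Int64(1),
	Body:       bytes.NewReader(nil),
})
if err != nil {
	return err
}
completedUploadedParts = append(completedUploadedParts, &s3.CompletedPart{
	ETag:       resp.ETag,
	PartNumber: aws.Int64(1),
})
```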
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
The loop that iterates over paginated lists of S3 multipart upload parts
appears to be using the wrong variable in its loop condition. Nothing
inside the loop affects the value of `resp.IsTruncated`, so this loop
will either be wrongly skipped or loop forever.
It looks like this is a regression caused by commit
7736319f2e. The return value of
`ListMultipartUploads` used to be assigned to a variable named `resp`,
but it was renamed to `partsList` without updating the for loop
condition.
I believe this is causing an error we're seeing with large layer uploads
at commit time:
upload resumed at wrong offset: 5242880000 != 5815706782
Missing parts of the multipart S3 upload would cause an incorrect size
calculation in `newWriter`.
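The essence of the fix, sketched against aws-sdk-go v1 (`d.S3`,
`d.Bucket`, `key`, and `multi` are assumed from the surrounding driver
code):

```go
partsList, err := d.S3.ListParts(&s3.ListPartsInput{
	Bucket:   aws.String(d.Bucket),
	Key:      aws.String(key),
	UploadId: multi.UploadId,
})
if err != nil {
	return nil, err
}
allParts := partsList.Parts
// The loop must test the variable that pagination actually updates:
// partsList.IsTruncated, not the outer resp.IsTruncated.
for *partsList.IsTruncated {
	partsList, err = d.S3.ListParts(&s3.ListPartsInput{
		Bucket:           aws.String(d.Bucket),
		Key:              aws.String(key),
		UploadId:         multi.UploadId,
		PartNumberMarker: partsList.NextPartNumberMarker,
	})
	if err != nil {
		return nil, err
	}
	allParts = append(allParts, partsList.Parts...)
}
```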
Signed-off-by: Aaron Lehmann <alehmann@netflix.com>
Previously we used a custom Transport in order to modify the user agent header.
This prevented the AWS SDK from being able to customize SSL and other client TLS
parameters since it could not understand the Transport type.
Instead we can simply use the SDK function MakeAddToUserAgentFreeFormHandler to
customize the UserAgent if necessary and leave all the TLS configuration to the
AWS SDK.
The only exception is SkipVerify, which we still have to handle
ourselves, but we can set it on a standard http.Transport, which does not
interfere with the SDK's ability to set other options.
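Sketched with aws-sdk-go v1 (`awsConfig`, `userAgent`, and `skipVerify`
are assumed from the driver's parameters):

```go
// Let the SDK manage TLS; only SkipVerify needs a hand-rolled transport.
if skipVerify {
	awsConfig.WithHTTPClient(&http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	})
}

sess, err := session.NewSession(awsConfig)
if err != nil {
	return nil, err
}

// Append to the User-Agent through the SDK's own handler chain instead
// of wrapping the transport.
if userAgent != "" {
	sess.Handlers.Build.PushBack(request.MakeAddToUserAgentFreeFormHandler(userAgent))
}
```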
Signed-off-by: Kirat Singh <kirat.singh@gmail.com>