Unfortunately, one of the changes we merged broke support for
http.ProxyFromEnvironment https://pkg.go.dev/net/http#ProxyFromEnvironment
This commit attempts to fix that by cloning the http.DefaultTransport
and updating it accordingly.
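
For reference, the fix amounts to something like this (a minimal sketch, not the exact code in this commit):

```go
package main

import "net/http"

// newTransport clones http.DefaultTransport so that ProxyFromEnvironment and
// the default dialer/TLS settings carry over, then applies our own overrides.
func newTransport() *http.Transport {
	t := http.DefaultTransport.(*http.Transport).Clone()
	t.MaxIdleConnsPerHost = 10 // illustrative override
	return t
}
```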
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
This commit updates the (writer).Writer() method in the S3 storage driver to
handle the case where an append is attempted to zero-size content.
S3 does not allow appending to already committed content, so we are
opting to provide the following narrowed-down behaviour:
The writer can only append to zero-byte content. In that case, a new S3
MultipartUpload is created and used to override the already committed
zero-size content.
Appending to non-zero-size content fails with an error.
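
A standalone sketch of the narrowed-down behaviour (the helper funcs here are hypothetical stand-ins for the real driver plumbing):

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// errAppendNotSupported mirrors the narrowed-down rule described above.
var errAppendNotSupported = errors.New("s3: cannot append to committed non-zero-size content")

// openAppendWriter: appending is only allowed when the committed object is
// zero bytes; in that case a fresh multipart upload is started and will
// overwrite the zero-size object on commit.
func openAppendWriter(
	ctx context.Context,
	path string,
	sizeOf func(context.Context, string) (int64, error),
	newMultipartUpload func(context.Context, string) (string, error),
) (string, error) {
	size, err := sizeOf(ctx, path)
	if err != nil {
		return "", err
	}
	if size > 0 {
		return "", errAppendNotSupported
	}
	return newMultipartUpload(ctx, path)
}

func main() {
	id, err := openAppendWriter(
		context.Background(),
		"some/blob/path",
		func(context.Context, string) (int64, error) { return 0, nil },
		func(context.Context, string) (string, error) { return "hypothetical-upload-id", nil },
	)
	fmt.Println(id, err)
}
```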
Co-authored-by: Cory Snider <corhere@gmail.com>
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
The GCS storage driver used to be conditionally built because it was
outdated and essentially unmaintained. Recently the driver has gone
through a rework and updates. Let's remove the build tag so we have fewer
headaches dealing with it and try to keep it up to date.
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
This linter both prevents parallel test races and
suggests parallel tests where appropriate.
See: https://github.com/moricho/tparallel
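
For context, the pattern tparallel enforces looks roughly like this (illustrative test, not from this repo):

```go
package example_test

import "testing"

func TestDriver(t *testing.T) {
	t.Parallel() // tparallel wants the parent marked parallel when subtests are...

	for _, tc := range []string{"inmemory", "s3", "gcs"} {
		tc := tc // capture the range variable for the parallel subtest
		t.Run(tc, func(t *testing.T) {
			t.Parallel() // ...and each subtest marked parallel as well
			_ = tc
		})
	}
}
```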
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
We make sure they're not hiding at the bottom or in the middle,
which makes debugging an utter nightmare!
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
This commit refactors the GCS storage driver from the ground up and makes
it more consistent with the rest of the storage drivers.
We are also fixing GCS authentication using default application credentials:
when default application credentials are used, we don't initialize the
GCS storage client, which then panics.
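
The gist of the credentials fix, roughly (the function and parameter names here are illustrative, not the driver's actual code):

```go
package gcssketch

import (
	"context"

	"cloud.google.com/go/storage"
	"google.golang.org/api/option"
)

// newGCSClient sketches the fix: the storage client must be created in both
// branches. Previously the Application Default Credentials branch skipped
// client initialization, so later calls dereferenced a nil client and panicked.
func newGCSClient(ctx context.Context, jsonKey []byte) (*storage.Client, error) {
	if len(jsonKey) > 0 {
		return storage.NewClient(ctx, option.WithCredentialsJSON(jsonKey))
	}
	// No key supplied: storage.NewClient falls back to Application Default
	// Credentials, but it still has to be called.
	return storage.NewClient(ctx)
}
```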
Co-authored-by: Cory Snider <corhere@gmail.com>
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
For some reason a PR we merged passed the build even though it was
missing various func parameters. This commit fixes it.
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
Several storage drivers and storage middlewares need to introspect the
client HTTP request in order to construct content-redirect URLs. The
request is indirectly passed into the driver interface method URLFor()
through the context argument, which is bad practice. The request should
be passed in as an explicit argument as the method is only called from
request handlers.
Replace the URLFor() method with a RedirectURL() method which takes an
HTTP request as a parameter instead of a context. Drop the options
argument from URLFor() as in practice it only ever encoded the request
method, which can now be fetched directly from the request. No URLFor()
callers ever passed in an "expiry" option, either.
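
In interface terms the change looks roughly like this (signatures paraphrased, not copied verbatim):

```go
package sketch

import (
	"context"
	"net/http"
)

// Before: the request (and method/expiry options) travelled through the context.
type urlForDriver interface {
	URLFor(ctx context.Context, path string, options map[string]interface{}) (string, error)
}

// After: the request is an explicit argument; the method comes from the request itself.
type redirectURLDriver interface {
	RedirectURL(r *http.Request, path string) (string, error)
}
```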
Signed-off-by: Cory Snider <csnider@mirantis.com>
The RemoteAddr and RemoteIP functions operate on *http.Request values,
not contexts. They have very low cohesion with the rest of the package.
Signed-off-by: Cory Snider <csnider@mirantis.com>
Our context package predates the establishment of current best practices
regarding context usage and it shows. It encourages bad practices such
as using contexts to propagate non-request-scoped values like the
application version and using string-typed keys for context values. Move
the package internal to remove it from the API surface of
distribution/v3@v3.0.0 so we are free to iterate on it without being
constrained by compatibility.
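
For reference, the kind of pattern the internalized package can now move toward (a generic example, not code from this change):

```go
package ctxsketch

import "context"

// requestIDKey is an unexported, typed key: it cannot collide with the
// string-typed keys the old package encouraged, and it keeps the value
// request-scoped by construction.
type requestIDKey struct{}

func WithRequestID(ctx context.Context, id string) context.Context {
	return context.WithValue(ctx, requestIDKey{}, id)
}

func RequestID(ctx context.Context) (string, bool) {
	id, ok := ctx.Value(requestIDKey{}).(string)
	return id, ok
}
```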
Signed-off-by: Cory Snider <csnider@mirantis.com>
This commit makes the S3 driver chunk size constants more straightforward
to understand: instead of remembering the bit shifts, we make the sizes
explicit.
We are also updating the append parameter to `(writer).Write` to follow
the new convention we are trying to establish.
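
The spirit of the constants change, illustratively:

```go
package s3sketch

// Before, the sizes hid behind bit shifts, e.g.:
//   minChunkSize = 5 << 20
// After, the intent is spelled out (values are illustrative of the style,
// not necessarily the exact constants used by the driver):
const (
	minChunkSize     = 5 * 1024 * 1024        // 5 MiB: the S3 multipart minimum part size
	defaultChunkSize = 2 * minChunkSize       // 10 MiB
	maxChunkSize     = 5 * 1024 * 1024 * 1024 // 5 GiB: the S3 maximum part size
)
```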
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
This commit changes the storagedriver.FileWriter interface
by adding context.Context as an argument to its Commit
func.
We pass the context where needed throughout
the distribution codebase to all the writers and tests.
The S3 driver writer unfortunately must retain the context
passed down to it from upstream so that it continues to
implement the io.Writer and io.Closer interfaces, which do
not allow accepting a context in any of their funcs.
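
Sketched out, the interface and the S3 workaround look roughly like this (simplified; the interface's other methods are elided):

```go
package fwsketch

import (
	"context"
	"io"
)

// FileWriter now takes a context on Commit.
type FileWriter interface {
	io.WriteCloser
	Commit(ctx context.Context) error
}

// s3Writer has to retain the context it was created with, because Write and
// Close come from io.Writer/io.Closer and cannot accept one.
type s3Writer struct {
	ctx context.Context // retained from the driver's Writer(ctx, ...) call
}

func (w *s3Writer) Write(p []byte) (int, error)      { return len(p), nil } // would use w.ctx for S3 calls
func (w *s3Writer) Close() error                     { return nil }         // would use w.ctx for S3 calls
func (w *s3Writer) Commit(ctx context.Context) error { return nil }         // uses the caller's ctx directly
```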
Co-authored-by: Cory Snider <corhere@gmail.com>
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
Small refactoring of storagedriver errors.
We change the Enclosed field to Detail and make sure
Errors get properly serialized to JSON.
We also add tests.
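
A condensed sketch of the resulting shape (the field names follow this message; the rest is illustrative):

```go
package errsketch

import (
	"encoding/json"
	"fmt"
)

// Error is a sketch of the storagedriver error shape: Detail (formerly
// Enclosed) carries the underlying cause.
type Error struct {
	DriverName string
	Detail     error
}

func (e Error) Error() string {
	return fmt.Sprintf("%s: %s", e.DriverName, e.Detail)
}

// MarshalJSON serializes the wrapped error as a string; a plain error value
// would otherwise marshal to an empty JSON object.
func (e Error) MarshalJSON() ([]byte, error) {
	return json.Marshal(struct {
		DriverName string `json:"driver"`
		Detail     string `json:"detail"`
	}{DriverName: e.DriverName, Detail: e.Detail.Error()})
}
```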
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
Nested files aren't supported on MinIO, and as our storage layout is
filesystem based, we don't actually use nested files in the code.
Remove the test so that we can support MinIO.
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
This fixes some of the tests for MinIO.
The walk tests need a version of MinIO that contains https://github.com/minio/minio/pull/18099
The storage classes MinIO supports are a subset of the S3 classes.
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
This commit cleans up and attempts to optimise the performance of image push in the S3 driver.
There are two main changes:
* we refactor the S3 driver Writer: instead of using separate byte
slices for the ready and pending parts, which constantly had data
appended to them causing unnecessary allocations, we use optimised
byte buffers and make sure they are used efficiently when written to.
* we introduce a memory pool that is used for allocating the byte
buffers introduced above (see the sketch below)
These changes should alleviate high memory pressure on the push path to S3.
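
The memory pool amounts to something like the following minimal sketch (the real driver's pool is more involved):

```go
package poolsketch

import (
	"bytes"
	"sync"
)

// bufferPool hands out reusable buffers so each part upload doesn't allocate
// a fresh slice that the GC then has to reclaim.
var bufferPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func getBuffer() *bytes.Buffer {
	return bufferPool.Get().(*bytes.Buffer)
}

func putBuffer(b *bytes.Buffer) {
	b.Reset() // drop contents but keep the capacity for the next caller
	bufferPool.Put(b)
}
```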
Co-authored-by: Cory Snider <corhere@gmail.com>
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
Add two new checks to the testsuite that verify
the driver handles zero-byte files and appends to zero-byte
files correctly.
Signed-off-by: Neil Wilson <neil@aldur.co.uk>
In case drvr.PutContent fails and returns an error, we'd have
some extra memory allocated, though in this case
(a test with a known size of the slice being iterated) that's fine.
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
Only some of the S3 storage driver calls were propagating context to the
S3 API calls. This commit updates the S3 storage driver so the context
is propagated to all the S3 API calls.
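
Illustratively, the pattern is to use the SDK's context-aware calls everywhere; the exact calls depend on the SDK version in use:

```go
package s3ctxsketch

import (
	"context"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// getObject shows the pattern: the caller's ctx reaches the S3 API call, so
// cancellation and deadlines propagate instead of being silently dropped.
func getObject(ctx context.Context, client *s3.S3, bucket, key string) (*s3.GetObjectOutput, error) {
	return client.GetObjectWithContext(ctx, &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
}
```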
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
Storage drivers may be able to take advantage of the hint to start
their walk more efficiently.
For S3: the API takes a start-after parameter. Registries with many
repositories can drastically reduce calls to S3 by telling S3 to only
list results lexicographically after the hint.
For the fallback: we can start deeper in the tree and avoid statting
the files and directories before the hint in a walk. For a filesystem
this improves performance a little, but many of the API-based drivers
are currently treated like a filesystem, so this drastically improves
the performance of GCP and Azure blob.
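
On the S3 side the hint maps directly onto the ListObjectsV2 StartAfter parameter, roughly (field names from aws-sdk-go; the walk-options plumbing is elided):

```go
package walksketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// listInput shows how the walk hint maps onto S3: StartAfter makes the listing
// begin lexicographically after the given key, skipping everything before it.
func listInput(bucket, prefix, startAfterHint string) *s3.ListObjectsV2Input {
	return &s3.ListObjectsV2Input{
		Bucket:     aws.String(bucket),
		Prefix:     aws.String(prefix),
		StartAfter: aws.String(startAfterHint),
	}
}
```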
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
This commit removes the `oss` storage driver from distribution as well as
the `alicdn` storage middleware, which only works with the `oss` driver.
There are several reasons for this:
* no real-life expertise among the maintainers
* OSS is compatible with the S3 API operations required by the S3 storage driver
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
The Azure tests fail if there is no Azure configuration available;
instead they should be skipped.
Also, one of the Azure tests is wrong and doesn't match the code.
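
The skip pattern is the usual one (illustrative; the env var name is a placeholder):

```go
package azuresketch_test

import (
	"os"
	"testing"
)

func TestAzureDriver(t *testing.T) {
	// AZURE_STORAGE_ACCOUNT_NAME is a placeholder for whatever configuration
	// the real suite reads; the point is to skip, not fail, when it is absent.
	if os.Getenv("AZURE_STORAGE_ACCOUNT_NAME") == "" {
		t.Skip("skipping: no Azure configuration available")
	}
}
```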
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
Other storage drivers only return children and below; S3 should do
the same. The only reason it was being returned was the addition
of a / to ensure we treat the from as a directory.
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>