Running with the race detector may cause some parts of the code to run
more slowly, introducing a race in the scheduler ordering.
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
This change to the S3 Move method uses S3's multipart upload API to copy
objects whose size exceeds a threshold. Parts are copied concurrently.
The level of concurrency, part size, and threshold are all configurable
with reasonable defaults.
Using the multipart upload API has two benefits.
* The S3 Move method can now handle objects over 5 GB, fixing #886.
* Moving most objects, and especially large ones, is faster. For
example, moving a 1 GB object averaged 30 seconds but now averages 10.
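For illustration (the function name, defaults, and error handling below are
simplified assumptions, not the driver's actual code), a concurrent multipart
copy with aws-sdk-go looks roughly like this:

    package s3driver // hypothetical package name

    import (
        "fmt"
        "sync"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    // multipartCopy copies src to dst within bucket using S3's multipart
    // upload API, copying parts server-side and concurrently. partSize and
    // concurrency would come from the driver configuration.
    func multipartCopy(svc *s3.S3, bucket, src, dst string, size, partSize int64, concurrency int) error {
        create, err := svc.CreateMultipartUpload(&s3.CreateMultipartUploadInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(dst),
        })
        if err != nil {
            return err
        }

        numParts := (size + partSize - 1) / partSize
        completed := make([]*s3.CompletedPart, numParts)
        errCh := make(chan error, numParts)
        sem := make(chan struct{}, concurrency) // bounds concurrent part copies
        var wg sync.WaitGroup

        for i := int64(0); i < numParts; i++ {
            wg.Add(1)
            go func(i int64) {
                defer wg.Done()
                sem <- struct{}{}
                defer func() { <-sem }()

                first, last := i*partSize, (i+1)*partSize-1
                if last > size-1 {
                    last = size - 1
                }
                resp, err := svc.UploadPartCopy(&s3.UploadPartCopyInput{
                    Bucket:          aws.String(bucket),
                    Key:             aws.String(dst),
                    CopySource:      aws.String(bucket + "/" + src),
                    CopySourceRange: aws.String(fmt.Sprintf("bytes=%d-%d", first, last)),
                    PartNumber:      aws.Int64(i + 1),
                    UploadId:        create.UploadId,
                })
                if err != nil {
                    errCh <- err
                    return
                }
                completed[i] = &s3.CompletedPart{ETag: resp.CopyPartResult.ETag, PartNumber: aws.Int64(i + 1)}
            }(i)
        }
        wg.Wait()
        close(errCh)
        if err := <-errCh; err != nil {
            return err
        }

        _, err = svc.CompleteMultipartUpload(&s3.CompleteMultipartUploadInput{
            Bucket:          aws.String(bucket),
            Key:             aws.String(dst),
            UploadId:        create.UploadId,
            MultipartUpload: &s3.CompletedMultipartUpload{Parts: completed},
        })
        return err
    }

Objects below the size threshold would continue to use a single server-side
copy call as before.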
Signed-off-by: Noah Treuhaft <noah.treuhaft@docker.com>
This is already supported by ncw/swift, so we just need to pass the
parameters from the storage driver.
Signed-off-by: Stefan Majewsky <stefan.majewsky@sap.com>
Use the much faster math/rand.Read function where cryptographic
guarantees are not required. The unit test suite should speed up a
little bit but we've already optimized around this, so it may not
matter.
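For illustration only (the helper name below is made up), the substitution
amounts to choosing between the two Read functions:

    package testutil // hypothetical package name

    import (
        "crypto/rand"
        mrand "math/rand"
    )

    // fillRandom fills b with random bytes. math/rand.Read is much faster and
    // is fine when no cryptographic guarantees are needed (e.g. test data);
    // crypto/rand.Read remains the choice for anything security-sensitive.
    func fillRandom(b []byte, secure bool) error {
        if secure {
            _, err := rand.Read(b) // cryptographically secure, slower
            return err
        }
        _, err := mrand.Read(b) // fast, but not suitable for secrets
        return err
    }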
Signed-off-by: Stephen J Day <stephen.day@docker.com>
The previous component-wise path comparison is recursive and generates a
large amount of garbage. This more efficient version uses a byte-wise
comparison that swaps in the zero value for the separator character
inline, so that separators sort before everything else. The resulting
implementation provides component-wise path comparison with no cost
incurred for allocation or extra stack frames.
The direction of the comparison is also reversed to match Go style.
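A minimal sketch of the idea (function name hypothetical): compare byte by
byte, ranking the separator below every other byte so that components order
correctly without splitting the paths:

    package storage // hypothetical package name

    // pathLess reports whether path a sorts before path b, component-wise:
    // the separator is ranked below every other byte, so "a/b" sorts before
    // "a-c/d" even though '-' < '/' in plain byte order. No splitting, no
    // allocation, no recursion.
    func pathLess(a, b string) bool {
        rank := func(c byte) int {
            if c == '/' {
                return -1 // separator sorts before everything else
            }
            return int(c)
        }
        for i := 0; i < len(a) && i < len(b); i++ {
            if ra, rb := rank(a[i]), rank(b[i]); ra != rb {
                return ra < rb
            }
        }
        return len(a) < len(b)
    }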
Signed-off-by: Stephen J Day <stephen.day@docker.com>
* Allow precomputed stats on cross-mounted blobs
Signed-off-by: Michal Minář <miminar@redhat.com>
* Extended cross-repo mount tests
Signed-off-by: Michal Minář <miminar@redhat.com>
* Add Object ACL Support to the S3 Storage Backend
Signed-off-by: Frank Chen <frankchn@gmail.com>
* Made changes per @RichardScothern's comments
Signed-off-by: Frank Chen <frankchn@gmail.com>
* Fix Typos
Signed-off-by: Frank Chen <frankchn@gmail.com>
Pass the manifestURL directly into the schema2 manifest handler instead of
accessing it through the repository as before, since the reference is now an
interface.
Signed-off-by: Richard Scothern <richard.scothern@docker.com>
Until we have some experience hosting foreign layer manifests, the Hub
operators wish to limit foreign layers on Hub. To that end, this change
adds registry configuration options to restrict the URLs that may appear
in pushed manifests.
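A rough sketch of how such a restriction might be enforced (the type and
field names are assumptions, not the actual configuration keys):

    package validation // hypothetical package name

    import "regexp"

    // urlPolicy holds compiled allow and deny patterns for layer URLs in
    // pushed manifests; the type and its use here are assumptions.
    type urlPolicy struct {
        allow []*regexp.Regexp
        deny  []*regexp.Regexp
    }

    // permitted reports whether a foreign-layer URL may appear in a pushed
    // manifest: it must match no deny pattern and, when an allow list is
    // configured, at least one allow pattern.
    func (p urlPolicy) permitted(u string) bool {
        for _, re := range p.deny {
            if re.MatchString(u) {
                return false
            }
        }
        if len(p.allow) == 0 {
            return true
        }
        for _, re := range p.allow {
            if re.MatchString(u) {
                return true
            }
        }
        return false
    }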
Signed-off-by: Noah Treuhaft <noah.treuhaft@docker.com>
Adds a constant leeway (60 seconds) to the nbf and exp claim check to
account for clock skew between the registry servers and the
authentication server that generated the JWT.
The leeway of 60 seconds is a bit arbitrary but based on the RFC
recommendation and hub.docker.com logs/metrics where we don't see
drifts of more than a second on our servers running ntpd.
I didn't attempt to make the leeway configurable as it would add extra
complexity to the PR and I am not sure how Distribution prefers to
handle runtime flags like that.
Also, I am simplifying the exp and nbf check for readability as the
previous `NOT (A AND B)` with comparison operators was not very friendly.
Ref:
https://tools.ietf.org/html/rfc7519#section-4.1.5
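For illustration, the simplified check with leeway reads roughly like this
(the claim plumbing and names are assumptions):

    package token // hypothetical package name

    import (
        "errors"
        "time"
    )

    // leeway compensates for clock skew between the registry and the token
    // server; 60 seconds is well above the sub-second drift observed with ntpd.
    const leeway = 60 * time.Second

    // checkTimes validates the exp and nbf claims (Unix seconds), allowing the
    // token to be used slightly before nbf and slightly after exp.
    func checkTimes(exp, nbf int64, now time.Time) error {
        if now.After(time.Unix(exp, 0).Add(leeway)) {
            return errors.New("token has expired")
        }
        if now.Before(time.Unix(nbf, 0).Add(-leeway)) {
            return errors.New("token is not valid yet")
        }
        return nil
    }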
Signed-off-by: Marcus Martins <marcus@docker.com>
This prevents errors other than io.EOF from being dropped when a storage
driver lists repositories. For example, the filesystem driver may point to a
missing directory and return an error, which was previously dropped.
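A minimal sketch of the fix (interface and function names are illustrative):
treat io.EOF as the end-of-catalog signal and propagate every other error:

    package handlers // hypothetical package name

    import (
        "context"
        "io"
    )

    // catalogLister abstracts the storage driver's repository enumeration;
    // the interface shape is an assumption for the sketch.
    type catalogLister interface {
        Repositories(ctx context.Context, repos []string, last string) (int, error)
    }

    // listRepositories fills repos and reports whether the end of the catalog
    // was reached. io.EOF means the listing is complete; every other error
    // (e.g. the filesystem driver hitting a missing directory) is returned to
    // the caller instead of being dropped.
    func listRepositories(ctx context.Context, l catalogLister, repos []string, last string) (int, bool, error) {
        n, err := l.Repositories(ctx, repos, last)
        if err == io.EOF {
            return n, true, nil
        }
        if err != nil {
            return n, false, err
        }
        return n, false, nil
    }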
Signed-off-by: Edgar Lee <edgar.lee@docker.com>
Allows using v2 for v1 endpoints.
The primary use case is search, which does not have a v2 specification.
Adds a user scope for allowing v2 search.
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
This is similar to waitForSegmentsToShowUp which is called during
Close/Commit. Intuitively, you wouldn't expect missing segments to be a
problem during read operations, since the previous Close/Commit
confirmed that all segments are there.
But due to the distributed nature of Swift, the read request could be
hitting a different storage node of the Swift cluster, where the
segments are still missing.
Load tests on my team's staging Swift cluster have shown this to occur
about once every 100-200 layer uploads when the Swift proxies are under
high load. The retry logic, borrowed from waitForSegmentsToShowUp, fixes
this temporary inconsistency.
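A rough sketch of the retry, modeled on waitForSegmentsToShowUp (names,
attempt count, and sleep interval are illustrative assumptions):

    package swift // hypothetical package name

    import (
        "fmt"
        "time"
    )

    // waitForSegments polls the container listing until the expected number
    // of segments is visible, covering the window where another storage node
    // has not yet seen all segments of an uploaded layer.
    func waitForSegments(list func() (int, error), want int) error {
        const attempts = 20
        for i := 0; i < attempts; i++ {
            got, err := list()
            if err != nil {
                return err
            }
            if got >= want {
                return nil // all segments visible on this node
            }
            time.Sleep(100 * time.Millisecond) // give replication a moment
        }
        return fmt.Errorf("timed out waiting for %d segments", want)
    }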
Signed-off-by: Stefan Majewsky <stefan.majewsky@sap.com>