Commit Graph

230 Commits (e85ef3c019a2809b3397771d385581ee09fc7649)

Author SHA1 Message Date
Alvin Feng 45bb7c9cc9 Remove expires tag from s3 upload
Signed-off-by: Alvin Feng <alvin4feng@yahoo.com>
2017-03-17 23:41:15 +00:00
Derek McGowan 4f87c80073 Merge pull request #2192 from uhayate/refactor-code-style
refactor the code style in distribution/registry/storage/driver/s3-goamz/s3.go
2017-02-15 17:12:16 -08:00
sakeven 72bdf0e320 check whether must use v4 auth in specific aws region
Signed-off-by: sakeven <jc5930@sina.cn>
2017-02-14 10:42:20 +08:00
uhayate 75c2e524a1 refactor the code style in distribution/registry/storage/driver/s3-goamz/s3.go
Signed-off-by: uhayate <uhayate.gong@daocloud.io>
2017-02-13 17:29:08 +08:00
Michal Fojtik 9e510d67f5 Add more regions to registry S3 storage driver
Namely adding ca-central-1, ap-south-1 and eu-west-1.

Signed-off-by: Michal Fojtik <mfojtik@redhat.com>
2017-01-11 22:38:24 +01:00
Ahmet Alp Balkan 0a1ce58e2c
azure: revendor + remove hacky solution in is404
Removing the temporary workaround in is404() method by re-vendoring
the azure-sdk-for-go.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
2017-01-09 17:22:28 -08:00
yixi zhang 8e915d69f4 Use app.driver.Stat for registry health check
`app.driver.List` on `"/"` is very expensive if the registry contains a significant number of images, and the result isn't used anyway.
In most (if not all) storage drivers, `Stat` has a cheaper implementation, so use it instead to achieve the same goal.

Signed-off-by: yixi zhang <yixi@memsql.com>
2016-12-21 17:12:43 -08:00
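A minimal sketch of the idea behind this change, assuming simplified signatures modeled on distribution's `storagedriver.StorageDriver` interface (the `healthCheck` wrapper and local types here are illustrative, not the registry's actual code):

```go
package storagehealth

import (
	"context"
	"fmt"
)

// FileInfo and driver are reduced stand-ins for the storage driver API.
type FileInfo interface{ Path() string }

type driver interface {
	List(ctx context.Context, path string) ([]string, error)
	Stat(ctx context.Context, path string) (FileInfo, error)
}

// healthCheck probes the backend with a cheap Stat on "/" instead of
// listing every top-level entry, which walks the whole namespace.
func healthCheck(ctx context.Context, d driver) error {
	// Before: _, err := d.List(ctx, "/")  // expensive on large registries
	if _, err := d.Stat(ctx, "/"); err != nil {
		return fmt.Errorf("storage driver unhealthy: %v", err)
	}
	return nil
}
```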
Derek McGowan 15dc1296af Merge pull request #2088 from ahmetalpbalkan/pr-upstream-azure-race-fix
azure: fix race condition in PutContent()
2016-12-06 14:07:53 -08:00
Ahmet Alp Balkan 78d0660319
azure: fix race condition in PutContent()
See #2077 for background.

PR #1438, which was not reviewed by the Azure folks, introduced
a race condition around concurrent uploads to the same blob by multiple
clients, because it switched PutContent() to the "writer" type. That path
does chunked uploads using the "AppendBlob" blob type, which is not atomic.

Usage of "writer" type and thus AppendBlobs on metadata files is currently not
concurrency-safe and generally, they are not the right type of blob for the job.

This patch fixes PutContent() to use the atomic upload operation that works
for uploads smaller than 64 MB and creates blobs with "BlockBlob" type. To be
backwards compatible, we query the type of the blob first and if it is not
a "BlockBlob" we delete the blob first before doing an atomic PUT. This
creates a small inconsistency/race window "only once". Once the blob is made
"BlockBlob", it is overwritten with a single PUT atomicallly next time.

Therefore, going forward, PutContent() will be producing BlockBlobs and it
will silently migrate the AppendBlobs introduced in #1438 to BlockBlobs with
this patch.

Tested with existing code side by side: registries with and without this
patch work fine without breaking each other, so this should be good from a
backwards/forwards compatibility perspective, at the cost of an extra
HEAD request to check the blob type.

Fixes #2077.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
2016-11-30 12:40:43 -08:00
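A rough sketch of the flow described above. The `blobType`, `deleteBlob`, and `putBlockBlob` helpers are hypothetical stand-ins for the azure-sdk-for-go calls, not the SDK's real method names:

```go
package azuresketch

import "context"

// blobClient is a hypothetical abstraction over the Azure storage SDK.
type blobClient interface {
	blobType(ctx context.Context, container, name string) (string, error)
	deleteBlob(ctx context.Context, container, name string) error
	putBlockBlob(ctx context.Context, container, name string, data []byte) error
}

// putContent sketches the fixed PutContent(): a single atomic block-blob PUT,
// deleting any legacy AppendBlob first so the blob type migrates to BlockBlob.
func putContent(ctx context.Context, c blobClient, container, name string, data []byte) error {
	if t, err := c.blobType(ctx, container, name); err == nil && t != "BlockBlob" {
		// Legacy AppendBlob from the old writer-based path: delete it so the
		// next PUT recreates it as a BlockBlob. This is the small
		// "only once" inconsistency window mentioned above.
		if err := c.deleteBlob(ctx, container, name); err != nil {
			return err
		}
	}
	// Atomic single-shot upload (suitable for payloads under the 64 MB limit).
	return c.putBlockBlob(ctx, container, name, data)
}
```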
Kira 4accc8f2be filter listResponse.Contents in driver.List()
Signed-off-by: Kira <me@imkira.com>
2016-11-17 10:38:56 +08:00
Richard Scothern 6e62b39842 Merge pull request #2036 from pyr/fix/sort-v2-headers
v2 signer: correctly sort headers
2016-11-10 15:31:24 -08:00
Richard Scothern 4d65dd513e Merge pull request #2038 from spacexnice/master
fix: oss driver would get connection reset by peer when upload large image layer.
2016-11-10 14:44:32 -08:00
yaoyao.xyy a4a227e351 OSS native large-file copy consumes too much time and transmits no data while the copy runs, which eventually leads to client timeouts. Change maxCopySize to 128MB so that only small and medium-size files use OSS native copy, avoiding "connection reset by peer". Also fix the Move function with CopyLargeFileInParallel to optimize OSS upload copy.
Signed-off-by: yaoyao.xyy <yaoyao.xyy@alibaba-inc.com>
2016-11-08 12:14:13 +08:00
Derek McGowan a2611c7520 Merge pull request #2027 from ahmetalpbalkan/pr-azure-memleak2
Update vendored azure-sdk-for-go
2016-11-04 10:08:40 -07:00
Ahmet Alp Balkan 2ab25288a2
Update vendored azure-sdk-for-go
Updating to a recent version of the Azure Storage SDK to be
able to patch some memory leaks through configurable HTTP client
changes, which recent patches to the SDK made possible.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
2016-11-03 13:24:57 -07:00
Pierre-Yves Ritschard f1cf7de788 fixup! v2 signer: correctly sort headers
Signed-off-by: Pierre-Yves Ritschard <pyr@spootnik.org>
2016-11-02 17:07:02 +01:00
Pierre-Yves Ritschard 775cc6d632 v2 signer: correctly sort headers
The current code determines the header order for the
"string-to-sign" payload by sorting on the concatenation
of header names and values, whereas it should sort on the
key alone.

During multipart uploads, since `x-amz-copy-source-range` and
`x-amz-copy-source` headers are present, V2 signatures fail to
validate since header order is swapped.

This patch reverts to the expected behavior.

Signed-off-by: Pierre-Yves Ritschard <pyr@spootnik.org>
2016-11-02 17:01:34 +01:00
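A small illustration of the fix, assuming a simplified view of the canonicalized `x-amz-*` headers (the function name is hypothetical):

```go
package signv2

import (
	"sort"
	"strings"
)

// canonicalizeAmzHeaders builds the canonical header block for the
// string-to-sign, assuming keys are already lowercased x-amz-* names.
// The fix: sort on the header key alone, not on the "key:value" concatenation.
func canonicalizeAmzHeaders(headers map[string]string) string {
	keys := make([]string, 0, len(headers))
	for k := range headers {
		keys = append(keys, k)
	}
	// Correct ordering: x-amz-copy-source sorts before x-amz-copy-source-range
	// regardless of the values; sorting "key:value" strings can swap the two.
	sort.Strings(keys)

	lines := make([]string, 0, len(keys))
	for _, k := range keys {
		lines = append(lines, k+":"+headers[k])
	}
	return strings.Join(lines, "\n") + "\n"
}
```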
Ahmet Alp Balkan a994f35657
driver/swift: Fix go vet warning
The driver was passing connections by value. Store
`swift.Connection` as a pointer to fix the warnings.

Ref: #2030.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
2016-10-31 11:41:53 -07:00
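The gist of the vet complaint, in a reduced form (the struct names are illustrative, not the actual driver types):

```go
package swiftvet

import "sync"

// conn stands in for swift.Connection, which carries state that must not be
// copied (e.g. a mutex guarding re-authentication).
type conn struct {
	mu    sync.Mutex
	token string
}

type driverByValue struct {
	Conn conn // passing driverByValue by value copies the mutex: vet's copylocks check flags this
}

type driverByPointer struct {
	Conn *conn // store a pointer so all copies share the one connection
}
```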
Ahmet Alp Balkan 6d2a0bafcd
storagedriver/azure: close leaking response body
In GetContent() we read the bytes from a blob but do not close
the underlying response body.

Signed-off-by: Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
2016-10-28 15:13:22 -07:00
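The pattern in question, sketched generically rather than as the driver's actual code:

```go
package bodyclose

import (
	"io/ioutil"
	"net/http"
)

// getContent fetches a blob and closes the response body so the underlying
// connection is released instead of leaking.
func getContent(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close() // the missing piece: always close the body
	return ioutil.ReadAll(resp.Body)
}
```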
Matt Bentley 3857f50825
Added new us-east-2 region for S3
Signed-off-by: Matt Bentley <mbentley@mbentley.net>
2016-10-18 12:30:34 -04:00
Richard Scothern d0cdc4802b Merge pull request #2002 from lnr0626/1996-instance-roles-with-regionendpoint
Allow using ec2 roles when specifying region endpoint
2016-10-17 13:50:02 -07:00
Richard Scothern a621a86cb4 Fix aliyun OSS Delete method's notion of subpaths
Deleting "/a" was deleting "/a/b" but also "/ab".

Signed-off-by: Richard Scothern <richard.scothern@docker.com>
2016-10-17 09:43:15 -07:00
Noah Treuhaft 12e73f01d2 Fix s3-goamz Delete method's notion of subpaths
Deleting "/a" was deleting "/a/b" but also "/ab".

Signed-off-by: Noah Treuhaft <noah.treuhaft@docker.com>
2016-10-17 09:43:15 -07:00
Lloyd Ramey c8ea7840d3 Allow using ec2 roles when specifying region endpoint
Signed-off-by: Lloyd Ramey <lnr0626@gmail.com>
2016-10-13 18:07:37 -04:00
Noah Treuhaft 76226c61a9 Fix S3 Delete method's notion of subpaths
Deleting "/a" was deleting "/a/b" but also "/ab".

Signed-off-by: Noah Treuhaft <noah.treuhaft@docker.com>
2016-10-06 11:21:55 -07:00
Derek McGowan d35d94dcec
Update to fix lint errors
Context should use typed values instead of strings.
Updated direct calls to WithValue, but other uses of string keys remain.
Updated Acl to ACL in the s3 driver.

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2016-10-05 17:47:12 -07:00
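An example of the lint point being fixed, assuming nothing about the registry's actual key names:

```go
package contextkeys

import "context"

// A private named type for context keys avoids collisions and satisfies the
// linter; plain string keys do not.
type key string

const vendorKey key = "storage.vendor" // hypothetical key for illustration

func withVendor(ctx context.Context, vendor string) context.Context {
	// Before: context.WithValue(ctx, "storage.vendor", vendor) // basic string key
	return context.WithValue(ctx, vendorKey, vendor)
}

func vendorFrom(ctx context.Context) (string, bool) {
	v, ok := ctx.Value(vendorKey).(string)
	return v, ok
}
```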
Fabio Berchtold 7dcac52f18 Add v2 signature signing to S3 storage driver (#1800)
* Add v2 signature signing to S3 storage driver

Closes #1796
Closes #1606

Signed-off-by: Fabio Berchtold <fabio.berchtold@swisscom.com>

* use Logrus for debug logging

Signed-off-by: Fabio Berchtold <fabio.berchtold@swisscom.com>

* use 'date' instead of 'x-amz-date' in request header

Signed-off-by: Fabio Berchtold <fabio.berchtold@swisscom.com>

* only allow v4 signature signing against AWS S3

Signed-off-by: Fabio Berchtold <fabio.berchtold@swisscom.com>
2016-09-01 13:52:40 -07:00
Matthew Green dea554fc7c Swift driver now bulk deletes in chunks specified by the server (#1915)
Swift driver now bulk deletes in chunks specified by the server

Signed-off-by: Matthew Green <matthew.green@uk.ibm.com>
2016-08-24 10:09:25 -07:00
Noah Treuhaft 63468ef4a8 Use multipart upload API in S3 Move method
This change to the S3 Move method uses S3's multipart upload API to copy
objects whose size exceeds a threshold.  Parts are copied concurrently.
The level of concurrency, part size, and threshold are all configurable
with reasonable defaults.

Using the multipart upload API has two benefits.

* The S3 Move method can now handle objects over 5 GB, fixing #886.

* Moving most objects, and especially large ones, is faster.  For
  example, moving a 1 GB object previously averaged 30 seconds and now averages 10.

Signed-off-by: Noah Treuhaft <noah.treuhaft@docker.com>
2016-08-16 10:53:24 -07:00
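A condensed, sequential sketch of the multipart-copy idea using aws-sdk-go; the real driver copies parts concurrently and makes the threshold, part size, and concurrency configurable, so treat this only as an outline of the API calls involved:

```go
package s3copy

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// copyLarge copies bucket/src to bucket/dst in partSize chunks via the
// multipart upload API, so objects over the 5 GB single-copy limit work.
func copyLarge(svc *s3.S3, bucket, src, dst string, size, partSize int64) error {
	create, err := svc.CreateMultipartUpload(&s3.CreateMultipartUploadInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(dst),
	})
	if err != nil {
		return err
	}

	var parts []*s3.CompletedPart
	for num, off := int64(1), int64(0); off < size; num, off = num+1, off+partSize {
		end := off + partSize - 1
		if end > size-1 {
			end = size - 1
		}
		// Each part is copied server-side from a byte range of the source object.
		resp, err := svc.UploadPartCopy(&s3.UploadPartCopyInput{
			Bucket:          aws.String(bucket),
			Key:             aws.String(dst),
			UploadId:        create.UploadId,
			PartNumber:      aws.Int64(num),
			CopySource:      aws.String(bucket + "/" + src),
			CopySourceRange: aws.String(fmt.Sprintf("bytes=%d-%d", off, end)),
		})
		if err != nil {
			return err
		}
		parts = append(parts, &s3.CompletedPart{
			ETag:       resp.CopyPartResult.ETag,
			PartNumber: aws.Int64(num),
		})
	}

	_, err = svc.CompleteMultipartUpload(&s3.CompleteMultipartUploadInput{
		Bucket:          aws.String(bucket),
		Key:             aws.String(dst),
		UploadId:        create.UploadId,
		MultipartUpload: &s3.CompletedMultipartUpload{Parts: parts},
	})
	return err
}
```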
Stefan Majewsky a7c6bfd59f [swift] support different user-domain and tenant-domain
This is already supported by ncw/swift, so we just need to pass the
parameters from the storage driver.

Signed-off-by: Stefan Majewsky <stefan.majewsky@sap.com>
2016-08-15 11:21:42 +02:00
Stephen J Day 040db51795
testutil, storage: use math/rand.Read where possible
Use the much faster math/rand.Read function where cryptographic
guarantees are not required. The unit test suite should speed up a
little bit but we've already optimized around this, so it may not
matter.

Signed-off-by: Stephen J Day <stephen.day@docker.com>
2016-08-10 14:26:12 -07:00
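The substitution in a nutshell; the test helpers themselves are not shown:

```go
package randfill

import (
	crand "crypto/rand"
	mrand "math/rand"
)

// fillRandom fills buf with random bytes. For test fixtures where
// cryptographic quality is irrelevant, math/rand.Read is much faster
// than crypto/rand.Read.
func fillRandom(buf []byte, secure bool) error {
	if secure {
		_, err := crand.Read(buf)
		return err
	}
	_, err := mrand.Read(buf) // fast pseudo-random fill
	return err
}
```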
Frank Chen 87917f3052 Add 'objectAcl' Option to the S3 Storage Backend (#1867)
* Add Object ACL Support to the S3 Storage Backend

Signed-off-by: Frank Chen <frankchn@gmail.com>

* Made changes per @RichardScothern's comments

Signed-off-by: Frank Chen <frankchn@gmail.com>

* Fix Typos

Signed-off-by: Frank Chen <frankchn@gmail.com>
2016-07-27 12:26:57 -07:00
Richard Scothern f27ceb7ab5 Merge pull request #1710 from majewsky/swift/wait-for-dlo-segments-during-read
[Swift] add simple heuristic to detect incomplete DLOs during read ops
2016-07-19 09:07:44 -07:00
Richard Scothern 3da5f9088d Allow EC2 IAM roles to be used when authorizing region endpoints
Signed-off-by: Richard Scothern <richard.scothern@docker.com>
2016-07-11 10:54:57 -07:00
Stefan Majewsky 1f03d4e77d [Swift] add simple heuristic to detect incomplete DLOs during read ops
This is similar to waitForSegmentsToShowUp which is called during
Close/Commit. Intuitively, you wouldn't expect missing segments to be a
problem during read operations, since the previous Close/Commit
confirmed that all segments are there.

But due to the distributed nature of Swift, the read request could be
hitting a different storage node of the Swift cluster, where the
segments are still missing.

Load tests on my team's staging Swift cluster have shown this to occur
about once every 100-200 layer uploads when the Swift proxies are under
high load. The retry logic, borrowed from waitForSegmentsToShowUp, fixes
this temporary inconsistency.

Signed-off-by: Stefan Majewsky <stefan.majewsky@sap.com>
2016-07-08 13:47:41 +02:00
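Roughly, the heuristic amounts to a bounded retry around the size check; `segmentsVisible` below is a hypothetical stand-in for the driver's actual probe, and the attempt count and backoff are illustrative:

```go
package swiftread

import (
	"fmt"
	"time"
)

// waitForSegments retries until the DLO's segments are visible to the storage
// node serving this read, or gives up after a few attempts. It mirrors the
// waitForSegmentsToShowUp idea, applied to read operations.
func waitForSegments(expectedSize int64, segmentsVisible func() (int64, error)) error {
	const attempts = 5
	for i := 0; i < attempts; i++ {
		size, err := segmentsVisible()
		if err != nil {
			return err
		}
		if size == expectedSize {
			return nil // all segments have shown up
		}
		time.Sleep(time.Duration(i+1) * 100 * time.Millisecond) // brief backoff
	}
	return fmt.Errorf("DLO still incomplete after %d attempts", attempts)
}
```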
Josh Chorlton 2d0a5ecc0e fixed s3 Delete bug due to read-after-delete inconsistency
Signed-off-by: Josh Chorlton <josh.chorlton@docker.com>
2016-06-28 14:22:15 -07:00
Richard Scothern edd7cb5249 Merge pull request #1739 from cezarsa/master
[Swift] Expose EndpointType parameter in driver
2016-06-15 10:33:48 -07:00
Richard Scothern 1fc752c718 Merge pull request #1706 from aibaars/registry-size-close
Blobwriter: call BlobWriter.Size after BlobWriter.Close
2016-06-13 16:29:35 -07:00
Cezar Sa Espinola 7f72092940
Expose EndpointType parameter in swift storage driver
Signed-off-by: Cezar Sa Espinola <cezarsa@gmail.com>
2016-06-13 19:28:45 -03:00
allencloud db90724ab0 fix typos
Signed-off-by: allencloud <allen.sun@daocloud.io>
2016-06-02 23:03:27 +08:00
Richard Scothern df2184c810 Merge pull request #1627 from luckyraul/swift_auth_url
Swift auth version param
2016-06-01 11:23:23 -07:00
Richard Scothern 4f2ee029a2 Add 'us-gov-west-1' to the valid region list.
Signed-off-by: Richard Scothern <richard.scothern@docker.com>
2016-05-09 16:38:16 +01:00
Arthur Baars eca581cf36 StorageDriver: GCS: allow Cancel on a closed FileWriter
Signed-off-by: Arthur Baars <arthur@semmle.com>
2016-05-06 13:04:30 +01:00
Arthur Baars 1d782c38f2 StorageDriver: Test case for #1698
Signed-off-by: Arthur Baars <arthur@semmle.com>
2016-05-06 13:04:30 +01:00
Richard Scothern c047d34b22 Merge pull request #1695 from tonyhb/add-regulator-to-filesystem
Add regulator to filesystem
2016-05-04 10:05:51 -07:00
Tony Holdstock-Brown c9c62380ff Don't wrap thread limits when using a negative int
Signed-off-by: Tony Holdstock-Brown <tony@docker.com>
2016-05-03 16:03:44 -07:00
Tony Holdstock-Brown 33c448f147 Implement regulator in filesystem driver
This commit refactors base.regulator into the 2.4 interfaces and adds a
filesystem configuration option `maxthreads` to configure the regulator.

By default `maxthreads` is set to 100. This means the FS driver is
limited to 100 concurrent blocking file operations. Any subsequent
operations will block in Go until previous filesystem operations
complete.

This ensures that the registry can never open thousands of simultaneous
threads through OS filesystem operations.

Note that `maxthreads` can never be less than 25.

Add test case covering parsable string maxthreads

Signed-off-by: Tony Holdstock-Brown <tony@docker.com>
2016-05-03 09:33:22 -07:00
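The regulator boils down to a counting semaphore around every blocking filesystem call; a minimal sketch, not the driver's exact types:

```go
package fsregulator

import "os"

// regulator caps the number of filesystem operations in flight, so a burst
// of requests cannot leave thousands of OS threads blocked in syscalls.
type regulator struct {
	sem chan struct{} // buffered to maxthreads
}

func newRegulator(maxthreads int) *regulator {
	return &regulator{sem: make(chan struct{}, maxthreads)}
}

func (r *regulator) enter() { r.sem <- struct{}{} } // blocks once the limit is reached
func (r *regulator) exit()  { <-r.sem }

// open shows how a blocking call is wrapped by the regulator.
func (r *regulator) open(path string) (*os.File, error) {
	r.enter()
	defer r.exit()
	return os.Open(path)
}
```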
Richard Scothern 5d08dfa70c Merge pull request #1650 from majewsky/swift/wait-for-dlo-segments
[Swift] wait for DLO segments to show up when Close()ing the writer
2016-05-02 13:41:26 -07:00
Richard Scothern a7dda2ce93 Merge pull request #1665 from andrewhsu/middleware-redirect
add middleware storage driver for redirect
2016-04-27 15:05:52 -07:00
Josh Hawn e4dd3359cc Regulate filesystem driver to max of 100 calls
It's easily possible for a flood of requests to trigger thousands of
concurrent file accesses on the storage driver. Each blocking file I/O call can
force the Go runtime to spawn a new OS thread, and those threads are not reaped.
By limiting the driver to 100 concurrent calls we can effectively bound the number of OS threads
in use by the storage driver.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)

Signed-off-by: Tony Holdstock-Brown <tony@docker.com>
2016-04-26 14:44:13 -07:00