Compare commits


110 commits

Author SHA1 Message Date
Milos Gajdos
27206bcd3b
Merge pull request #4009 from thaJeztah/2.8_backport_enable_build_tags
[release/2.8 backport] Enable Go build tags
2023-08-22 15:10:59 +01:00
Milos Gajdos
110cb7538d
Enable build tags in 2.8
It would appear we were missing the Go build tags on the 2.8.x branch, so the
images did not have the necessary support for some storage drivers, causing
breakage for end users trying to use them.

This commit fixes both the build and linting issues.

Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-08-21 13:58:10 +02:00
Sebastiaan van Stijn
2d62a4027a
s3: add interface assertion
This was added for the other drivers in 6b388b1ba6,
but it missed the s3 storage driver.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5b3be39870)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-08-21 13:57:02 +02:00
Milos Gajdos
2548973b1d
Enable Go build tags
This enables Go build tags so that the GCS and OSS driver support is
available in the binary distributed via the image built by the Dockerfile.

This led to quite a few fixes in the GCS and OSS packages, raised as
warnings by the golang-ci linter.

Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
(cherry picked from commit 6b388b1ba6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-08-21 13:50:24 +02:00
Milos Gajdos
8728c52ef2
Merge pull request #3926 from marcusirgens/use-build-tags
Pass `BUILDTAGS` argument to `go build`
2023-06-07 09:53:15 +01:00
Marcus Pettersen Irgens
ab7178cc0a
Pass BUILDTAGS argument to go build
Signed-off-by: Marcus Pettersen Irgens <m@mrcus.dev>
2023-05-19 18:38:27 +02:00
Milos Gajdos
7c354a4b40
Merge pull request #3915 from distribution/2.8.2-release-notes
Add v2.8.2 release notes
2023-05-11 11:11:57 +01:00
Milos Gajdos
a173a9c625
Add v2.8.2 release notes
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-05-11 10:47:17 +01:00
Milos Gajdos
4894d35ecc
Merge pull request #3914 from vvoland/handle-forbidden-28
[release/2.8 backport] registry/errors: Parse http forbidden as denied
2023-05-11 10:00:25 +01:00
Milos Gajdos
f067f66d3d
Merge pull request #3783 from ndeloof/accept-encoding-28
[release/2.8 backport] revert "registry/client: set Accept: identity header when getting layers"
2023-05-11 09:54:18 +01:00
Paweł Gronowski
483ad69da3
registry/errors: Parse http forbidden as denied
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 5f1df02149)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2023-05-11 10:45:46 +02:00
Nicolas De Loof
2b0f84df21
Revert "registry/client: set Accept: identity header when getting layers"
This reverts commit 16f086a0ec.

Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2023-05-10 23:00:15 +02:00
Milos Gajdos
320d6a141f
Merge pull request #3912 from distribution/2.8.2-beta.2-release-notes
Add 2.8.2 beta.2 release notes
2023-05-10 00:16:38 +01:00
Milos Gajdos
5f3ca1b2fb
Add release notes for 2.8.2-beta.2 release
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-05-10 00:12:20 +01:00
Milos Gajdos
cb840f63b3
Merge pull request #3911 from thaJeztah/2.8_backport_fix_releaser_filenames
[release/2.8 backport] Dockerfile: fix filenames of artifacts
2023-05-09 23:43:34 +01:00
Sebastiaan van Stijn
e884644fff
Dockerfile: fix filenames of artifacts
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 435c7b9a7b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-10 00:27:45 +02:00
Milos Gajdos
963c19952a
Merge pull request #3909 from distribution/2.8.2-beta-release-notes
Add 2.8.2-beta.1 release notes
2023-05-09 22:39:59 +01:00
Milos Gajdos
ac6c72b25f
Add 2.8.2-beta.1 release notes
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-05-09 22:22:05 +01:00
Milos Gajdos
dcb637d6ea
Merge pull request from GHSA-hqxw-f8mx-cpmw
[release/2.8] Fix runaway allocation on /v2/_catalog
2023-05-09 21:21:54 +01:00
Milos Gajdos
08f5645587
Merge pull request #3893 from pluralsh/part-pagination
[release/2.8] Add code to handle pagination of parts. Fixes max layer size of 10GB bug
2023-05-09 20:58:24 +01:00
Milos Gajdos
4a35c451a0
Merge pull request #3908 from thaJeztah/2.8_backport_bump_go1.19.9
[release/2.8 backport] update to go1.19.9
2023-05-09 19:16:47 +01:00
Milos Gajdos
ae58bde985
Fix gofmt warnings
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-05-09 18:58:38 +01:00
Sebastiaan van Stijn
3f2a4e24a7
update to go1.19.9
Added back minor versions in these, so that we have a somewhat more
reproducible state in the repository when tagging releases.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 322eb4eecf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 17:57:57 +02:00
Sebastiaan van Stijn
9c04409fdb
[release/2.8] ignore deprecation of io/ioutil
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 17:57:28 +02:00
Milos Gajdos
b791fdc2c6
Merge pull request #3907 from thaJeztah/2.8_backport_update_xx
[release/2.8 backport] Dockerfile: update xx to v1.2.1
2023-05-09 15:58:05 +01:00
Sebastiaan van Stijn
3d8f3cc4a5
Dockerfile: update xx to v1.2.1
full diff: https://github.com/tonistiigi/xx/compare/v1.1.1...v1.2.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8c4d2b9d65)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 15:32:28 +02:00
Milos Gajdos
d3fac541b1
Merge pull request #3903 from thaJeztah/2.8_bump_go_118
[release/2.8] bump up golang version (alternative)
2023-05-09 13:59:02 +01:00
Wang Yan
70db3a46d9
bump up golang version
upgrade go version to v1.18.8

Signed-off-by: Wang Yan <wangyan@vmware.com>
2023-05-09 10:59:43 +02:00
CrazyMax
db1389e043
dockerfiles: formatting
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 0e17e54091)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 10:59:43 +02:00
CrazyMax
018472de2d
dockerfiles: set ALPINE_VERSION
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit b066451b40)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 10:59:42 +02:00
CrazyMax
19b3feb5df
Update to xx 1.1.1
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 52a88c596b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 10:59:42 +02:00
CrazyMax
14bd72bcf8
Dockerfile: switch to xx
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 87f93ede9e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 10:59:42 +02:00
Wang Yan
2392893bcf
bump up golang v1.17
Signed-off-by: Wang Yan <wangyan@vmware.com>
(cherry picked from commit 3f4c558dac)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 10:59:38 +02:00
Sebastiaan van Stijn
092a2197ff
[release/2.8] fix package name in Dockerfile
The 2.8 release is still named github.com/docker/distribution.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 10:53:15 +02:00
David van der Spek
22a805033a fix(ci): use go install instead of go get
Signed-off-by: David van der Spek <vanderspek.david@gmail.com>
2023-05-08 23:21:18 -05:00
Derek McGowan
1d52366d2c Merge pull request #2815 from bainsy88/issue_2814
Add code to handle pagination of parts. Fixes max layer size of 10GB bug

Signed-off-by: David van der Spek <vanderspek.david@gmail.com>
2023-05-08 23:21:18 -05:00
Jose D. Gomez R
521ea3d973
Fix runaway allocation on /v2/_catalog
Introduced a Catalog entry in the configuration struct. With it,
it's possible to control the maximum number of entries returned
by /v2/_catalog (`GetCatalog` in registry/handlers/catalog.go).

It's set to a default value of 1000.

`GetCatalog` returns 100 entries by default if no `n` is
provided. When provided it will be validated to be between `0`
and `MaxEntries` defined in Configuration. When `n` is outside
the aforementioned boundary, ErrorCodePaginationNumberInvalid is
returned.

`GetCatalog` now handles `n=0` gracefully with an empty response
as well.

Signed-off-by: José D. Gómez R. <1josegomezr@gmail.com>
Co-authored-by: Cory Snider <corhere@gmail.com>
2023-04-24 18:53:43 +02:00
Milos Gajdos
82d6c3d007
Merge pull request #3815 from wy65701436/release/2.8-cp-3615
[release/2.8] Fix panic in inmemory driver
2023-04-17 15:58:21 +01:00
Shengjing Zhu
ad5991de09 Fix panic in inmemory driver
Signed-off-by: Shengjing Zhu <zhsj@debian.org>
2022-12-04 22:47:15 +08:00
Hayley Swimelar
dc5b207fdd
Merge pull request #3650 from thaJeztah/2.8_bump_alpine
[release/2.8 backport] Fix CVE-2022-28391 by bumping alpine from 3.14 to 3.16
2022-05-26 09:32:25 -07:00
Silvin Lubecki
38018aeb5d
Fix CVE-2022-28391 by bumping alpine from 3.15 to 3.16
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9f2bc25b7a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2022-05-26 13:25:35 +02:00
Milos Gajdos
b5ca020cfb
Merge pull request #3605 from milosgajdos/update-release-notes
Update 2.8.1. release notes
2022-03-08 17:52:36 +00:00
Milos Gajdos
1b5f094086
Merge pull request #3604 from crazy-max/2.8-go-1.16.15
go 1.16.15
2022-03-08 17:15:10 +00:00
Milos Gajdos
96cc1fdb3c
Fix typo
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2022-03-08 17:14:24 +00:00
Milos Gajdos
e744906f09
Update 2.8.1. release notes
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2022-03-08 17:11:29 +00:00
CrazyMax
3df9fce2be
go 1.16.15
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
2022-03-08 17:54:16 +01:00
Milos Gajdos
9a0196b801
Merge pull request #3596 from milosgajdos/fix-go-mod-v2.8.1
Prepare for v2.8.1 release
2022-03-01 11:37:47 +00:00
Milos Gajdos
6736d1881a
Prepare for v2.8.1 release
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2022-02-24 13:44:40 +00:00
Milos Gajdos
e4a447d0d7
Merge pull request #3595 from crazy-max/2.8-ci-gitref
[2.8 backport] ci: use proper git ref for versioning
2022-02-23 08:59:59 +00:00
CrazyMax
80acbdf0a2
ci: use proper git ref for versioning
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit fabf9cd4e9)
2022-02-22 22:05:10 +01:00
Milos Gajdos
dcf66392d6
Update README so the release pipeline works properly.
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2022-02-07 15:40:21 +00:00
Milos Gajdos
212b38ed22
Merge pull request #3552 from milosgajdos/v2.8.0-release
Prepare for v2.8.0 release
2022-01-21 12:46:32 +00:00
Milos Gajdos
359b97a75a
Merge pull request #3568 from crazy-max/2.8-artifacts
[2.8] Release artifacts
2022-01-21 12:11:22 +00:00
Milos Gajdos
d5d89a46a3
Make this release a beta release first.
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2022-01-21 11:36:41 +00:00
CrazyMax
6241e099e1
[2.8] Release artifacts
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
2022-01-19 16:54:30 +01:00
Milos Gajdos
1840415ca8
Merge pull request #3565 from crazy-max/2.8-gha
[2.8] Release workflow
2022-01-13 16:56:37 +00:00
CrazyMax
65ca39e605
release workflow
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
2022-01-12 16:34:14 +01:00
Milos Gajdos
1ddad0bad8
Apply suggestions from code review
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2021-12-22 09:13:32 +00:00
Milos Gajdos
3960a560bb
Prepare for v2.8.0 release
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2021-12-21 13:24:39 +00:00
Milos Gajdos
3b7b534569
Merge pull request from GHSA-qq97-vm5h-rrhg
[release/2.7] manifest: validate document type before unmarshal
2021-11-23 19:16:40 +00:00
Milos Gajdos
afe85428bb
Merge pull request #3466 from thaJeztah/2.7_update_jwt
[release/2.7] github.com/golang-jwt/jwt v3.2.2
2021-11-23 09:10:53 +00:00
Milos Gajdos
f7365390ef
Merge pull request #3535 from thaJeztah/2.7_bump_oci_specs
2021-11-18 08:34:49 +00:00
Sebastiaan van Stijn
97f6daced4
[release/2.7] vendor: github.com/opencontainers/image-spec v1.0.2
(previous version vendored was v1.0.0)

full diff: ab7389ef9f...v1.0.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-11-17 22:31:14 +01:00
Milos Gajdos
4313c14723
Merge pull request #3531 from wy65701436/fix-rand
[release/2.7]fix go check issues
2021-11-17 20:14:46 +00:00
Wang Yan
9a3ff11330 fix go check issues
G404: Replace math rand with crypto rand

Signed-off-by: Wang Yan <wangyan@vmware.com>
2021-11-16 17:46:08 +08:00
Samuel Karp
10ade61de9
manifest: validate document type before unmarshal
Signed-off-by: Samuel Karp <skarp@amazon.com>
2021-11-05 10:16:09 -07:00
Milos Gajdos
691e62e7ef
Merge pull request #3495 from thaJeztah/2.7_backport_must
[release/2.7 backport] Change should to must in v2 spec
2021-09-08 14:44:47 +01:00
Justin Cormack
19b573a6f7
Change should to must in v2 spec
We found some examples of manifests with URLs specififed that did
not provide a digest or size. This breaks the security model by allowing
the content to change, as it no longer provides a Merkle tree. This
was not intended, so explicitly disallow by tightening wording.

Signed-off-by: Justin Cormack <justin.cormack@docker.com>
(cherry picked from commit 1660df4b60)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-09-08 15:24:07 +02:00
Sebastiaan van Stijn
c5679da3a1
[release/2.7] vendor: github.com/golang-jwt/jwt v3.2.1
to address CVE-2020-26160

full diff: a601269ab7...v3.2.2

3.2.1 release notes
---------------------------------------

- Import Path Change: See MIGRATION_GUIDE.md for tips on updating your code
  Changed the import path from github.com/dgrijalva/jwt-go to github.com/golang-jwt/jwt
- Fixed type confusion issue between string and []string in VerifyAudience.
  This fixes CVE-2020-26160

3.2.2 release notes
---------------------------------------

- Starting from this release, we are adopting the policy of supporting the two
  most recent versions of Go currently available. At the time of this release,
  that is Go 1.15 and 1.16.
- Fixed a potential issue that could occur when the verification of exp, iat
  or nbf was not required and contained invalid contents, i.e. non-numeric/date.
  Thanks to @thaJeztah for making us aware of that and @giorgos-f3 for originally
  reporting it to the form3tech fork.
- Added support for EdDSA / ED25519.
- Optimized allocations.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-08-10 13:05:39 +02:00
Wang Yan
61e7e20823
Merge pull request #3472 from thaJeztah/2.7_update_go116
[release/2.7] update to go1.16
2021-08-10 18:59:49 +08:00
Sebastiaan van Stijn
d836b23fc2
[release/2.7] update to go1.16
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-08-10 11:32:03 +02:00
Milos Gajdos
18230b7b34
Merge pull request #3384 from wy65701436/release/2.7-cp-3169
[backport release/2.7]Added flag for user configurable cipher suites
2021-03-23 15:23:04 +00:00
Milos Gajdos
51636a6711
Merge pull request #3385 from wy65701436/release/2.7-ci
enable ci for release/2.7
2021-03-23 15:22:46 +00:00
Derek McGowan
09109ab50a Fix gosimple checks
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Signed-off-by: Wang Yan <wangyan@vmware.com>
2021-03-23 21:03:20 +08:00
Manish Tomar
89e6568e34 Remove err nil check
since type checking nil will not panic and return appropriately

Signed-off-by: Manish Tomar <manish.tomar@docker.com>
Signed-off-by: wang yan <wangyan@vmware.com>
2021-03-23 21:03:16 +08:00
Manish Tomar
3c64ff10bb Fix gometalint errors
Signed-off-by: Manish Tomar <manish.tomar@docker.com>
Signed-off-by: wang yan <wangyan@vmware.com>
2021-03-23 21:03:10 +08:00
sayboras
f807afbf85 Migrate to golangci-lint
Signed-off-by: Tam Mach <sayboras@yahoo.com>
Signed-off-by: wang yan <wangyan@vmware.com>
2021-03-23 21:02:54 +08:00
Wang Yan
9142de99fa enable ci for release/2.7
Signed-off-by: Wang Yan <wangyan@vmware.com>
2021-03-23 18:46:17 +08:00
David Luu
cc341b0110 Added flag for user configurable cipher suites
Configuring the list of cipher suites allows a user to disable the use
of weak ciphers, or to continue supporting them for legacy usage if they
so choose.

List of available cipher suites at:
https://golang.org/pkg/crypto/tls/#pkg-constants

Default cipher suites have been updated to:
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_AES_128_GCM_SHA256
- TLS_CHACHA20_POLY1305_SHA256
- TLS_AES_256_GCM_SHA384

MinimumTLS has also been updated to include TLS 1.3 as an option
and now defaults to TLS 1.2 since 1.0 and 1.1 have been deprecated.

Signed-off-by: David Luu <david@davidluu.info>
2021-03-23 18:42:12 +08:00
Milos Gajdos
cc866a5bf3
Merge pull request #3370 from wy65701436/release/2.7-cp-3309
[cherry pick]close the io.ReadCloser from storage driver
2021-02-26 09:00:00 +00:00
Wang Yan
3fe1d67ace close the io.ReadCloser from storage driver
Backport PR #3309 to release/2.7

Signed-off-by: Wang Yan <wangyan@vmware.com>
2021-02-23 18:48:00 +08:00
Wang Yan
6300300270
Merge pull request #3347 from wy65701436/release/2.7-cp-ci
[backport release/2.7] First draft of actions based ci
2021-02-16 23:19:12 +08:00
Chris Patterson
f1bd655119 First draft of actions based ci
Signed-off-by: Chris Patterson <chrispat@github.com>
2021-02-01 11:04:54 +08:00
João Pereira
d7362d7e3a
Merge pull request #3297 from thaJeztah/2.7_backport_fix_header
Remove empty Content-Type header
2021-01-30 10:28:10 +00:00
Smasherr
cf8615dedf
Remove empty Content-Type header
Fixes #3288

Signed-off-by: Smasherr <soundcracker@gmail.com>
(cherry picked from commit c8d90f904f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-11-16 11:15:10 +01:00
Derek McGowan
70e0022e42
Merge pull request #3197 from thaJeztah/2.7_backport_add_redirect
[release/2.7 backport] docs: add redirect for old URL
2020-07-08 16:08:40 -07:00
Sebastiaan van Stijn
48eeac88e9
docs: add redirect for old URL
Looks like there are some projects referring to this old URL:
https://grep.app/search?q=https%3A//docs.docker.com/reference/api/registry_api/

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7728c5e445)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-08 12:22:22 +02:00
Derek McGowan
a45a401e97
Merge pull request #3119 from wy65701436/release/2.7-cp-2879
[release/2.7] Fix s3 driver for supporting ceph radosgw
2020-03-10 20:48:21 -07:00
Thomas Berger
e2f006ac2b S3 Driver: added comment for missing KeyCount workaround
Signed-off-by: Thomas Berger <loki@lokis-chaos.de>
Signed-off-by: wang yan <wangyan@vmware.com>
2020-03-10 22:41:10 +08:00
Eohyung Lee
0a1e4a57e2 Fix s3 driver for supporting ceph radosgw
Radosgw does not support the S3 `GET Bucket` v2 API, only v1. The v1 API
is backward compatible, so most of this driver works correctly, but
`KeyCount` cannot be retrieved, because that field exists only in the
v2 API.

Signed-off-by: Eohyung Lee <liquidnuker@gmail.com>
2020-03-10 22:35:31 +08:00
Derek McGowan
bdf503a444
Merge pull request #3088 from thaJeztah/2.7_backport_fix_cloudfront_middleware
[release/2.7 backport] Bugfix: Make ipfilteredby not required
2020-02-23 00:07:58 -08:00
Derek McGowan
be75da0ef2
Merge pull request #3002 from thaJeztah/2.7_backport_add_normalize_util
[release/2.7 backport] Add reference.ParseDockerRef utility function
2020-02-21 10:13:42 -08:00
Vishesh Jindal
afa91463d6
Bugfix: Make ipfilteredby not required
Signed-off-by: Vishesh Jindal <vishesh92@gmail.com>
(cherry picked from commit f9a0506191)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-28 19:41:02 +01:00
Sebastiaan van Stijn
fad36ed1a1
Add reference.ParseDockerRef utility function
ParseDockerRef normalizes an image reference following the docker
convention. This is added mainly for backward compatibility. The reference
returned can only be either tagged or digested. For a reference that
contains both a tag and a digest, the function returns the digested
reference, e.g.

    docker.io/library/busybox:latest@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa

will be returned as

    docker.io/library/busybox@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0ac367fd6b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-12-20 13:50:06 +01:00
Derek McGowan
cfd1309845
Merge pull request #3073 from thaJeztah/2.7_backport_table_fix
[release/2.7 backport] fix markdown issues on configuration page
2019-12-16 22:19:04 -08:00
Derek McGowan
a85caead04
Merge pull request #3001 from dmcgowan/2.7-fix-vndr-checks
[release/2.7] Fix vndr and check
2019-12-16 21:51:28 -08:00
Adrian Plata
f999f540d3
Fixing broken table
Signed-off-by: Adrian Plata <adrian.plata@docker.com>
(cherry picked from commit b4694b0d2d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-12-16 13:22:39 +01:00
Vishesh Jindal
c636ed788a
Fix cloudfront documentation formatting
Signed-off-by: Vishesh Jindal <vishesh92@gmail.com>
(cherry picked from commit e1e72e9563)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-12-16 13:22:13 +01:00
Derek McGowan
5883e2d935
Fix vndr and check
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
2019-09-03 13:19:34 -07:00
Derek McGowan
269d18d9a8
Merge pull request #2987 from adrian-plata/release/2.7
[release/2.7] Adding deprecated schema v1 page
2019-09-03 12:08:26 -07:00
Adrian Plata
a3c027e626
Adding deprecated schema instructions
Signed-off-by: Adrian Plata <adrian.plata@docker.com>
(cherry picked from commit 07a50201c9)
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
2019-09-03 11:56:53 -07:00
Derek McGowan
2461543d98
Merge pull request #2824 from dmcgowan/update-version-file-2.7.1
Update version file for 2.7.1
2019-01-17 15:19:26 -08:00
Derek McGowan
5b98226afe
Update version file for 2.7.1
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
2019-01-17 15:16:54 -08:00
Derek McGowan
2eab12df9b
Merge pull request #2805 from dmcgowan/release-2.7.1
Release notes for 2.7.1
2019-01-17 15:10:29 -08:00
Derek McGowan
445ef068dd
Release notes for 2.7.1
Release notes for single fix release

Signed-off-by: Derek McGowan <derek@mcgstyle.net>
2019-01-17 15:07:35 -08:00
Ryan Abrams
cbc30be414
Merge pull request #2821 from caervs/ISS-2819
Use same env var in Dockerfile and Makefile
2019-01-17 09:53:49 -08:00
Ryan Abrams
bf74e4f91d Use same env var in Dockerfile and Makefile
Ensures that build tags get set in the Dockerfile so that OSS and GCS drivers
are built into the official registry binary.

Closes #2819

Signed-off-by: Ryan Abrams <rdabrams@gmail.com>
2019-01-16 11:16:11 -08:00
Ryan Abrams
62994fdd12
Merge pull request #2804 from caervs/ISS-2793-2.7
[2.7] Add docs for autoredirect config parameter
2019-01-07 14:35:16 -08:00
Derek McGowan
e702d95cfd
Merge pull request #2802 from davidswu/2.7-autoredirect
[2.7] default autoredirect to false
2019-01-07 10:32:14 -08:00
David Wu
caf43bbcc2 default autoredirect to false
Signed-off-by: David Wu <david.wu@docker.com>
2019-01-04 13:47:17 -08:00
147 changed files with 2407 additions and 823 deletions

1
.dockerignore Normal file

@ -0,0 +1 @@
bin/

92
.github/workflows/build.yml vendored Normal file

@ -0,0 +1,92 @@
name: build

on:
  push:
    branches:
      - 'release/*'
    tags:
      - 'v*'
  pull_request:

env:
  DOCKERHUB_SLUG: distribution/distribution

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      -
        name: Docker meta
        id: meta
        uses: docker/metadata-action@v3
        with:
          images: |
            ${{ env.DOCKERHUB_SLUG }}
          ### versioning strategy
          ### push semver tag v2.9.0 on main (default branch)
          # distribution/distribution:2.9.0
          # distribution/distribution:latest
          ### push semver tag v2.8.0 on release/2.8 branch
          # distribution/distribution:2.8.0
          ### push on main
          # distribution/distribution:edge
          tags: |
            type=semver,pattern={{version}}
            type=ref,event=pr
          # don't create latest tag on release/2.x
          flavor: |
            latest=false
          labels: |
            org.opencontainers.image.title=Distribution
            org.opencontainers.image.description=The toolkit to pack, ship, store, and deliver container content
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      -
        name: Build artifacts
        uses: docker/bake-action@v1
        with:
          targets: artifact-all
      -
        name: Move artifacts
        run: |
          mv ./bin/**/* ./bin/
      -
        name: Upload artifacts
        uses: actions/upload-artifact@v2
        with:
          name: registry
          path: ./bin/*
          if-no-files-found: error
      -
        name: Login to DockerHub
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      -
        name: Build image
        uses: docker/bake-action@v1
        with:
          files: |
            ./docker-bake.hcl
            ${{ steps.meta.outputs.bake-file }}
          targets: image-all
          push: ${{ startsWith(github.ref, 'refs/tags/') }}
      -
        name: GitHub Release
        uses: softprops/action-gh-release@v1
        if: startsWith(github.ref, 'refs/tags/')
        with:
          draft: true
          files: |
            bin/*.tar.gz
            bin/*.sha256
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

50
.github/workflows/ci.yml vendored Normal file

@ -0,0 +1,50 @@
name: CI

on:
  push:
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    env:
      BUILDTAGS: "include_oss,include_gcs"
      CGO_ENABLED: 1
      GO111MODULE: "auto"
      GOPATH: ${{ github.workspace }}
      GOOS: linux
      COMMIT_RANGE: ${{ github.event_name == 'pull_request' && format('{0}..{1}',github.event.pull_request.base.sha, github.event.pull_request.head.sha) || github.sha }}
    steps:
      - uses: actions/checkout@v2
        with:
          path: src/github.com/docker/distribution
          fetch-depth: 50
      - name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.19.9
      - name: Dependencies
        run: |
          sudo apt-get -q update
          sudo -E apt-get -yq --no-install-suggests --no-install-recommends install python2-minimal
          cd /tmp && go install github.com/vbatts/git-validation@latest
      - name: Build
        working-directory: ./src/github.com/docker/distribution
        run: |
          DCO_VERBOSITY=-q script/validate/dco
          GO111MODULE=on script/setup/install-dev-tools
          script/validate/vendor
          go build .
          make check
          make build
          make binaries
          if [ "$GOOS" = "linux" ]; then make coverage ; fi
      - uses: codecov/codecov-action@v1
        with:
          directory: ./src/github.com/docker/distribution

27
.golangci.yml Normal file

@ -0,0 +1,27 @@
linters:
  enable:
    - structcheck
    - varcheck
    - staticcheck
    - unconvert
    - gofmt
    - goimports
    - golint
    - ineffassign
    - vet
    - unused
    - misspell
  disable:
    - errcheck

run:
  deadline: 2m
  skip-dirs:
    - vendor

issues:
  exclude-rules:
    # io/ioutil is deprecated, but won't be removed until Go v2. It's safe to ignore for the release/2.8 branch.
    - text: "SA1019: \"io/ioutil\" has been deprecated since Go 1.16"
      linters:
        - staticcheck


@ -1,16 +0,0 @@
{
  "Vendor": true,
  "Deadline": "2m",
  "Sort": ["linter", "severity", "path", "line"],
  "EnableGC": true,
  "Enable": [
    "structcheck",
    "staticcheck",
    "unconvert",
    "gofmt",
    "goimports",
    "golint",
    "vet"
  ]
}


@ -30,3 +30,22 @@ Helen Xie <xieyulin821@harmonycloud.cn> Helen-xie <xieyulin821@harmonycloud.cn>
Mike Brown <brownwm@us.ibm.com> Mike Brown <mikebrow@users.noreply.github.com>
Manish Tomar <manish.tomar@docker.com> Manish Tomar <manishtomar@users.noreply.github.com>
Sakeven Jiang <jc5930@sina.cn> sakeven <jc5930@sina.cn>
Milos Gajdos <milosgajdos83@gmail.com> Milos Gajdos <milosgajdos@users.noreply.github.com>
Derek McGowan <derek@mcgstyle.net> Derek McGowa <dmcgowan@users.noreply.github.com>
Adrian Plata <adrian.plata@docker.com> Adrian Plata <@users.noreply.github.com>
Sebastiaan van Stijn <github@gone.nl> Sebastiaan van Stijn <thaJeztah@users.noreply.github.com>
Vishesh Jindal <vishesh92@gmail.com> Vishesh Jindal <vishesh92@users.noreply.github.com>
Wang Yan <wangyan@vmware.com> Wang Yan <wy65701436@users.noreply.github.com>
Chris Patterson <chrispat@github.com> Chris Patterson <chrispat@users.noreply.github.com>
Eohyung Lee <liquidnuker@gmail.com> Eohyung Lee <leoh0@users.noreply.github.com>
João Pereira <484633+joaodrp@users.noreply.github.com>
Smasherr <soundcracker@gmail.com> Smasherr <Smasherr@users.noreply.github.com>
Thomas Berger <loki@lokis-chaos.de> Thomas Berger <tbe@users.noreply.github.com>
Samuel Karp <skarp@amazon.com> Samuel Karp <samuelkarp@users.noreply.github.com>
Justin Cormack <justin.cormack@docker.com>
sayboras <sayboras@yahoo.com>
CrazyMax <github@crazymax.dev> <1951866+crazy-max@users.noreply.github.com>
Hayley Swimelar <hswimelar@gmail.com>
Jose D. Gomez R <jose.gomez@suse.com>
Shengjing Zhu <zhsj@debian.org>
Silvin Lubecki <31478878+silvin-lubecki@users.noreply.github.com>


@ -1,51 +0,0 @@
dist: trusty
sudo: required
# setup travis so that we can run containers for integration tests
services:
  - docker
language: go
go:
  - "1.11.x"
go_import_path: github.com/docker/distribution
addons:
  apt:
    packages:
      - python-minimal
env:
  - TRAVIS_GOOS=linux DOCKER_BUILDTAGS="include_oss include_gcs" TRAVIS_CGO_ENABLED=1
before_install:
  - uname -r
  - sudo apt-get -q update
install:
  - go get -u github.com/vbatts/git-validation
  # TODO: Add enforcement of license
  # - go get -u github.com/kunalkushwaha/ltag
  - cd $TRAVIS_BUILD_DIR
script:
  - export GOOS=$TRAVIS_GOOS
  - export CGO_ENABLED=$TRAVIS_CGO_ENABLED
  - DCO_VERBOSITY=-q script/validate/dco
  - GOOS=linux script/setup/install-dev-tools
  - script/validate/vendor
  - go build -i .
  - make check
  - make build
  - make binaries
  # Currently takes too long
  #- if [ "$GOOS" = "linux" ]; then make test-race ; fi
  - if [ "$GOOS" = "linux" ]; then make coverage ; fi
after_success:
  - bash <(curl -s https://codecov.io/bash) -F linux
before_deploy:
  # Run tests with storage driver configurations


@ -114,4 +114,4 @@ the registry binary generated in the "./bin" directory:
 ### Optional build tags
 Optional [build tags](http://golang.org/pkg/go/build/) can be provided using
-the environment variable `DOCKER_BUILDTAGS`.
+the environment variable `BUILDTAGS`.


@@ -1,22 +1,59 @@
-FROM golang:1.11-alpine AS build
-ENV DISTRIBUTION_DIR /go/src/github.com/docker/distribution
-ENV DOCKER_BUILDTAGS include_oss include_gcs
-ARG GOOS=linux
-ARG GOARCH=amd64
-ARG GOARM=6
-RUN set -ex \
-    && apk add --no-cache make git file
-WORKDIR $DISTRIBUTION_DIR
-COPY . $DISTRIBUTION_DIR
-RUN CGO_ENABLED=0 make PREFIX=/go clean binaries && file ./bin/registry | grep "statically linked"
-FROM alpine
+# syntax=docker/dockerfile:1
+
+ARG GO_VERSION=1.19.9
+ARG ALPINE_VERSION=3.16
+ARG XX_VERSION=1.2.1
+
+FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
+
+FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine${ALPINE_VERSION} AS base
+COPY --from=xx / /
+RUN apk add --no-cache bash coreutils file git
+ENV GO111MODULE=auto
+ENV CGO_ENABLED=0
+WORKDIR /go/src/github.com/docker/distribution
+
+FROM base AS version
+ARG PKG="github.com/docker/distribution"
+RUN --mount=target=. \
+    VERSION=$(git describe --match 'v[0-9]*' --dirty='.m' --always --tags) REVISION=$(git rev-parse HEAD)$(if ! git diff --no-ext-diff --quiet --exit-code; then echo .m; fi); \
+    echo "-X ${PKG}/version.Version=${VERSION#v} -X ${PKG}/version.Revision=${REVISION} -X ${PKG}/version.Package=${PKG}" | tee /tmp/.ldflags; \
+    echo -n "${VERSION}" | tee /tmp/.version;
+
+FROM base AS build
+ARG TARGETPLATFORM
+ARG LDFLAGS="-s -w"
+ARG BUILDTAGS="include_oss,include_gcs"
+RUN --mount=type=bind,target=/go/src/github.com/docker/distribution,rw \
+    --mount=type=cache,target=/root/.cache/go-build \
+    --mount=target=/go/pkg/mod,type=cache \
+    --mount=type=bind,source=/tmp/.ldflags,target=/tmp/.ldflags,from=version \
+    set -x ; xx-go build -tags "${BUILDTAGS}" -trimpath -ldflags "$(cat /tmp/.ldflags) ${LDFLAGS}" -o /usr/bin/registry ./cmd/registry \
+    && xx-verify --static /usr/bin/registry
+
+FROM scratch AS binary
+COPY --from=build /usr/bin/registry /
+
+FROM base AS releaser
+ARG TARGETOS
+ARG TARGETARCH
+ARG TARGETVARIANT
+WORKDIR /work
+RUN --mount=from=binary,target=/build \
+    --mount=type=bind,target=/src \
+    --mount=type=bind,source=/tmp/.version,target=/tmp/.version,from=version \
+    VERSION=$(cat /tmp/.version) \
+    && mkdir -p /out \
+    && cp /build/registry /src/README.md /src/LICENSE . \
+    && tar -czvf "/out/registry_${VERSION#v}_${TARGETOS}_${TARGETARCH}${TARGETVARIANT}.tar.gz" * \
+    && sha256sum -z "/out/registry_${VERSION#v}_${TARGETOS}_${TARGETARCH}${TARGETVARIANT}.tar.gz" | awk '{ print $1 }' > "/out/registry_${VERSION#v}_${TARGETOS}_${TARGETARCH}${TARGETVARIANT}.tar.gz.sha256"
+
+FROM scratch AS artifact
+COPY --from=releaser /out /
+
+FROM alpine:${ALPINE_VERSION}
+RUN apk add --no-cache ca-certificates
 COPY cmd/registry/config-dev.yml /etc/docker/registry/config.yml
-COPY --from=build /go/src/github.com/docker/distribution/bin/registry /bin/registry
+COPY --from=binary /registry /bin/registry
 VOLUME ["/var/lib/registry"]
 EXPOSE 5000
 ENTRYPOINT ["registry"]
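Assuming BuildKit is enabled, the stages in the new Dockerfile can be targeted individually; these invocations are illustrative, not taken from the compare itself (note the comma-separated `BUILDTAGS` default in the new Dockerfile, versus the space-separated `DOCKER_BUILDTAGS` it replaces):

```shell
# Cross-compiled static binary only, exported to ./bin:
docker build --target binary --build-arg BUILDTAGS="include_oss,include_gcs" --output ./bin .

# Release tarball plus .sha256 checksum:
docker build --target artifact --output ./bin .

# Final runnable image:
docker build -t registry:local .
```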


@@ -50,7 +50,7 @@ version/version.go:
 check: ## run all linters (TODO: enable "unused", "varcheck", "ineffassign", "unconvert", "staticheck", "goimports", "structcheck")
 	@echo "$(WHALE) $@"
-	gometalinter --config .gometalinter.json ./...
+	@GO111MODULE=off golangci-lint --build-tags "${BUILDTAGS}" run

 test: ## run tests, except integration test with test.short
 	@echo "$(WHALE) $@"


@@ -2,7 +2,7 @@
 The Docker toolset to pack, ship, store, and deliver content.

-This repository's main product is the Docker Registry 2.0 implementation
+This repository provides the Docker Registry 2.0 implementation
 for storing and distributing Docker images. It supersedes the
 [docker/docker-registry](https://github.com/docker/docker-registry)
 project with a new API design, focused around security and performance.


@@ -10,7 +10,7 @@ import (
 	"github.com/docker/distribution/reference"
 	"github.com/opencontainers/go-digest"
-	"github.com/opencontainers/image-spec/specs-go/v1"
+	v1 "github.com/opencontainers/image-spec/specs-go/v1"
 )

 var (


@@ -21,7 +21,7 @@ import (
 	"text/template"

 	"github.com/docker/distribution/registry/api/errcode"
-	"github.com/docker/distribution/registry/api/v2"
+	v2 "github.com/docker/distribution/registry/api/v2"
 )

 var spaceRegex = regexp.MustCompile(`\n\s*`)


@@ -108,6 +108,12 @@ type Configuration struct {
 			// A file may contain multiple CA certificates encoded as PEM
 			ClientCAs []string `yaml:"clientcas,omitempty"`

+			// Specifies the lowest TLS version allowed
+			MinimumTLS string `yaml:"minimumtls,omitempty"`
+
+			// Specifies a list of cipher suites allowed
+			CipherSuites []string `yaml:"ciphersuites,omitempty"`
+
 			// LetsEncrypt is used to configuration setting up TLS through
 			// Let's Encrypt instead of manually specifying certificate and
 			// key. If a TLS certificate is specified, the Let's Encrypt
@@ -188,6 +194,7 @@ type Configuration struct {
 	} `yaml:"redis,omitempty"`

 	Health Health `yaml:"health,omitempty"`
+	Catalog Catalog `yaml:"catalog,omitempty"`

 	Proxy Proxy `yaml:"proxy,omitempty"`
@@ -238,6 +245,16 @@ type Configuration struct {
 	} `yaml:"policy,omitempty"`
 }

+// Catalog is composed of MaxEntries.
+// Catalog endpoint (/v2/_catalog) configuration, it provides the configuration
+// options to control the maximum number of entries returned by the catalog endpoint.
+type Catalog struct {
+	// Max number of entries returned by the catalog endpoint. Requesting n entries
+	// to the catalog endpoint will return at most MaxEntries entries.
+	// An empty or a negative value will set a default of 1000 maximum entries by default.
+	MaxEntries int `yaml:"maxentries,omitempty"`
+}
+
 // LogHook is composed of hook Level and Type.
 // After hooks configuration, it can execute the next handling automatically,
 // when defined levels of log message emitted.
@@ -388,7 +405,7 @@ func (loglevel *Loglevel) UnmarshalYAML(unmarshal func(interface{}) error) error
 	switch loglevelString {
 	case "error", "warn", "info", "debug":
 	default:
-		return fmt.Errorf("Invalid loglevel %s Must be one of [error, warn, info, debug]", loglevelString)
+		return fmt.Errorf("invalid loglevel %s Must be one of [error, warn, info, debug]", loglevelString)
 	}

 	*loglevel = Loglevel(loglevelString)
@@ -463,7 +480,7 @@ func (storage *Storage) UnmarshalYAML(unmarshal func(interface{}) error) error {
 		}

 		if len(types) > 1 {
-			return fmt.Errorf("Must provide exactly one storage type. Provided: %v", types)
+			return fmt.Errorf("must provide exactly one storage type. Provided: %v", types)
 		}
 	}
 	*storage = storageMap
@@ -578,7 +595,7 @@ type Events struct {
 	IncludeReferences bool `yaml:"includereferences"` // include reference data in manifest events
 }

-//Ignore configures mediaTypes and actions of the event, that it won't be propagated
+// Ignore configures mediaTypes and actions of the event, that it won't be propagated
 type Ignore struct {
 	MediaTypes []string `yaml:"mediatypes"` // target media types to ignore
 	Actions []string `yaml:"actions"` // ignore action types
@@ -664,12 +681,17 @@ func Parse(rd io.Reader) (*Configuration, error) {
 				if v0_1.Loglevel != Loglevel("") {
 					v0_1.Loglevel = Loglevel("")
 				}
+
+				if v0_1.Catalog.MaxEntries <= 0 {
+					v0_1.Catalog.MaxEntries = 1000
+				}
+
 				if v0_1.Storage.Type() == "" {
-					return nil, errors.New("No storage configuration provided")
+					return nil, errors.New("no storage configuration provided")
 				}
 				return (*Configuration)(v0_1), nil
 			}
-			return nil, fmt.Errorf("Expected *v0_1Configuration, received %#v", c)
+			return nil, fmt.Errorf("expected *v0_1Configuration, received %#v", c)
 		},
 	},
 })


@@ -71,6 +71,9 @@ var configStruct = Configuration{
 			},
 		},
 	},
+	Catalog: Catalog{
+		MaxEntries: 1000,
+	},
 	HTTP: struct {
 		Addr string `yaml:"addr,omitempty"`
 		Net string `yaml:"net,omitempty"`
@@ -83,6 +86,8 @@ var configStruct = Configuration{
 			Certificate string `yaml:"certificate,omitempty"`
 			Key string `yaml:"key,omitempty"`
 			ClientCAs []string `yaml:"clientcas,omitempty"`
+			MinimumTLS string `yaml:"minimumtls,omitempty"`
+			CipherSuites []string `yaml:"ciphersuites,omitempty"`
 			LetsEncrypt struct {
 				CacheFile string `yaml:"cachefile,omitempty"`
 				Email string `yaml:"email,omitempty"`
@@ -105,6 +110,8 @@ var configStruct = Configuration{
 			Certificate string `yaml:"certificate,omitempty"`
 			Key string `yaml:"key,omitempty"`
 			ClientCAs []string `yaml:"clientcas,omitempty"`
+			MinimumTLS string `yaml:"minimumtls,omitempty"`
+			CipherSuites []string `yaml:"ciphersuites,omitempty"`
 			LetsEncrypt struct {
 				CacheFile string `yaml:"cachefile,omitempty"`
 				Email string `yaml:"email,omitempty"`
@@ -520,6 +527,7 @@ func copyConfig(config Configuration) *Configuration {
 	configCopy.Version = MajorMinorVersion(config.Version.Major(), config.Version.Minor())
 	configCopy.Loglevel = config.Loglevel
 	configCopy.Log = config.Log
+	configCopy.Catalog = config.Catalog
 	configCopy.Log.Fields = make(map[string]interface{}, len(config.Log.Fields))
 	for k, v := range config.Log.Fields {
 		configCopy.Log.Fields[k] = v
@@ -540,9 +548,7 @@ func copyConfig(config Configuration) *Configuration {
 	}

 	configCopy.Notifications = Notifications{Endpoints: []Endpoint{}}
-	for _, v := range config.Notifications.Endpoints {
-		configCopy.Notifications.Endpoints = append(configCopy.Notifications.Endpoints, v)
-	}
+	configCopy.Notifications.Endpoints = append(configCopy.Notifications.Endpoints, config.Notifications.Endpoints...)

 	configCopy.HTTP.Headers = make(http.Header)
 	for k, v := range config.HTTP.Headers {

@@ -122,7 +122,7 @@ func (p *Parser) Parse(in []byte, v interface{}) error {
 	parseInfo, ok := p.mapping[versionedStruct.Version]
 	if !ok {
-		return fmt.Errorf("Unsupported version: %q", versionedStruct.Version)
+		return fmt.Errorf("unsupported version: %q", versionedStruct.Version)
 	}

 	parseAs := reflect.New(parseInfo.ParseAs)


@@ -15,7 +15,7 @@
 // The above will store the version in the context and will be available to
 // the logger.
 //
-// Logging
+// # Logging
 //
 // The most useful aspect of this package is GetLogger. This function takes
 // any context.Context interface and returns the current logger from the
@@ -65,7 +65,7 @@
 // added to the request context, is unique to that context and can have
 // request scoped variables.
 //
-// HTTP Requests
+// # HTTP Requests
 //
 // This package also contains several methods for working with http requests.
 // The concepts are very similar to those described above. We simply place the


@@ -246,11 +246,7 @@ func (ctx *muxVarsContext) Value(key interface{}) interface{} {
 			return ctx.vars
 		}

-		if strings.HasPrefix(keyStr, "vars.") {
-			keyStr = strings.TrimPrefix(keyStr, "vars.")
-		}
-
-		if v, ok := ctx.vars[keyStr]; ok {
+		if v, ok := ctx.vars[strings.TrimPrefix(keyStr, "vars.")]; ok {
 			return v
 		}
 	}


@@ -2,9 +2,10 @@ package main

 import (
 	"context"
+	"crypto/rand"
 	"encoding/json"
 	"flag"
-	"math/rand"
+	"math/big"
 	"net/http"
 	"strconv"
 	"strings"
@@ -141,8 +142,15 @@ const refreshTokenLength = 15

 func newRefreshToken() string {
 	s := make([]rune, refreshTokenLength)
+	max := int64(len(refreshCharacters))
 	for i := range s {
-		s[i] = refreshCharacters[rand.Intn(len(refreshCharacters))]
+		randInt, err := rand.Int(rand.Reader, big.NewInt(max))
+		// let '0' serves the failure case
+		if err != nil {
+			logrus.Infof("Error on making refersh token: %v", err)
+			randInt = big.NewInt(0)
+		}
+		s[i] = refreshCharacters[randInt.Int64()]
 	}
 	return string(s)
 }
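The `math/rand` → `crypto/rand` swap above can be exercised standalone; this is a minimal sketch of the same pattern, where the alphabet and helper name are stand-ins rather than the token server's actual identifiers:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// Stand-in alphabet; the real refreshCharacters constant lives in the token server.
const refreshCharacters = "abcdefghijklmnopqrstuvwxyz0123456789"

// newRefreshToken draws each character index from crypto/rand via rand.Int,
// falling back to index 0 if the random source fails, mirroring the diff above.
func newRefreshToken(length int) string {
	s := make([]rune, length)
	max := big.NewInt(int64(len(refreshCharacters)))
	for i := range s {
		randInt, err := rand.Int(rand.Reader, max)
		if err != nil {
			randInt = big.NewInt(0) // '0' serves the failure case
		}
		s[i] = rune(refreshCharacters[randInt.Int64()])
	}
	return string(s)
}

func main() {
	fmt.Println(len(newRefreshToken(15))) // 15, matching refreshTokenLength
}
```

Unlike `rand.Intn`, `rand.Int(rand.Reader, max)` returns an unbiased value in [0, max) backed by the OS entropy source, which is why the patch also has to handle an error path.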

docker-bake.hcl (new file, 56 lines)

@@ -0,0 +1,56 @@
+group "default" {
+  targets = ["image-local"]
+}
+
+// Special target: https://github.com/docker/metadata-action#bake-definition
+target "docker-metadata-action" {
+  tags = ["registry:local"]
+}
+
+target "binary" {
+  target = "binary"
+  output = ["./bin"]
+}
+
+target "artifact" {
+  target = "artifact"
+  output = ["./bin"]
+}
+
+target "artifact-all" {
+  inherits = ["artifact"]
+  platforms = [
+    "linux/amd64",
+    "linux/arm/v6",
+    "linux/arm/v7",
+    "linux/arm64",
+    "linux/ppc64le",
+    "linux/s390x"
+  ]
+}
+
+target "image" {
+  inherits = ["docker-metadata-action"]
+}
+
+target "image-local" {
+  inherits = ["image"]
+  output = ["type=docker"]
+}
+
+target "image-all" {
+  inherits = ["image"]
+  platforms = [
+    "linux/amd64",
+    "linux/arm/v6",
+    "linux/arm/v7",
+    "linux/arm64",
+    "linux/ppc64le",
+    "linux/s390x"
+  ]
+}
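With this bake definition in place, the targets map onto `buildx bake` invocations roughly as follows (illustrative usage, not part of the compare):

```shell
docker buildx bake                # default group: builds image-local as registry:local
docker buildx bake binary         # static binary exported to ./bin
docker buildx bake artifact-all   # release artifacts for all six listed platforms
```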


@@ -703,15 +703,20 @@ interpretation of the options.
 | `baseurl` | yes | The `SCHEME://HOST[/PATH]` at which Cloudfront is served. |
 | `privatekey` | yes | The private key for Cloudfront, provided by AWS. |
 | `keypairid` | yes | The key pair ID provided by AWS. |
-| `duration` | no | An integer and unit for the duration of the Cloudfront session. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, or `h`. For example, `3000s` is valid, but `3000 s` is not. If you do not specify a `duration` or you specify an integer without a time unit, the duration defaults to `20m` (20 minutes).|
-|`ipfilteredby`|no | A string with the following value `none|aws|awsregion`. |
-|`awsregion`|no | A comma separated string of AWS regions, only available when `ipfilteredby` is `awsregion`. For example, `us-east-1, us-west-2`|
-|`updatefrenquency`|no | The frequency to update AWS IP regions, default: `12h`|
-|`iprangesurl`|no | The URL contains the AWS IP ranges information, default: `https://ip-ranges.amazonaws.com/ip-ranges.json`|
-
-Then value of ipfilteredby:
-`none`: default, do not filter by IP
-`aws`: IP from AWS goes to S3 directly
-`awsregion`: IP from certain AWS regions goes to S3 directly, use together with `awsregion`
+| `duration` | no | An integer and unit for the duration of the Cloudfront session. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, or `h`. For example, `3000s` is valid, but `3000 s` is not. If you do not specify a `duration` or you specify an integer without a time unit, the duration defaults to `20m` (20 minutes). |
+| `ipfilteredby` | no | A string with the following value `none`, `aws` or `awsregion`. |
+| `awsregion` | no | A comma separated string of AWS regions, only available when `ipfilteredby` is `awsregion`. For example, `us-east-1, us-west-2` |
+| `updatefrenquency` | no | The frequency to update AWS IP regions, default: `12h` |
+| `iprangesurl` | no | The URL contains the AWS IP ranges information, default: `https://ip-ranges.amazonaws.com/ip-ranges.json` |
+
+Value of `ipfilteredby` can be:
+
+| Value | Description |
+|-------------|------------------------------------|
+| `none` | default, do not filter by IP |
+| `aws` | IP from AWS goes to S3 directly |
+| `awsregion` | IP from certain AWS regions goes to S3 directly, use together with `awsregion`. |

 ### `redirect`
@@ -777,6 +782,10 @@ http:
     clientcas:
       - /path/to/ca.pem
       - /path/to/another/ca.pem
+    minimumtls: tls1.2
+    ciphersuites:
+      - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+      - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
   letsencrypt:
     cachefile: /path/to/cache-file
     email: emailused@letsencrypt.com
@@ -815,6 +824,46 @@ and proxy connections to the registry server.
 | `certificate` | yes | Absolute path to the x509 certificate file. |
 | `key` | yes | Absolute path to the x509 private key file. |
 | `clientcas` | no | An array of absolute paths to x509 CA files. |
+| `minimumtls` | no | Minimum TLS version allowed (tls1.0, tls1.1, tls1.2, tls1.3). Defaults to tls1.2 |
+| `ciphersuites` | no | Cipher suites allowed. Please see below for allowed values and default. |
+
+Available cipher suites:
+
+- TLS_RSA_WITH_RC4_128_SHA
+- TLS_RSA_WITH_3DES_EDE_CBC_SHA
+- TLS_RSA_WITH_AES_128_CBC_SHA
+- TLS_RSA_WITH_AES_256_CBC_SHA
+- TLS_RSA_WITH_AES_128_CBC_SHA256
+- TLS_RSA_WITH_AES_128_GCM_SHA256
+- TLS_RSA_WITH_AES_256_GCM_SHA384
+- TLS_ECDHE_ECDSA_WITH_RC4_128_SHA
+- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
+- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
+- TLS_ECDHE_RSA_WITH_RC4_128_SHA
+- TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
+- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
+- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
+- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
+- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
+- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
+- TLS_AES_128_GCM_SHA256
+- TLS_AES_256_GCM_SHA384
+- TLS_CHACHA20_POLY1305_SHA256
+
+Default cipher suites:
+
+- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
+- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
+- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+- TLS_AES_128_GCM_SHA256
+- TLS_CHACHA20_POLY1305_SHA256
+- TLS_AES_256_GCM_SHA384

 ### `letsencrypt`


@@ -2,6 +2,8 @@
 title: "HTTP API V2"
 description: "Specification for the Registry API."
 keywords: registry, on-prem, images, tags, repository, distribution, api, advanced
+redirect_from:
+  - /reference/api/registry_api/
 ---

 # Docker Registry HTTP API V2


@@ -2,6 +2,8 @@
 title: "HTTP API V2"
 description: "Specification for the Registry API."
 keywords: registry, on-prem, images, tags, repository, distribution, api, advanced
+redirect_from:
+  - /reference/api/registry_api/
 ---

 # Docker Registry HTTP API V2


@@ -0,0 +1,41 @@
+---
+title: Update deprecated schema image manifest version 2, v1 images
+description: Update deprecated schema v1 iamges
+keywords: registry, on-prem, images, tags, repository, distribution, api, advanced, manifest
+---
+
+## Image manifest version 2, schema 1
+
+With the release of image manifest version 2, schema 2, image manifest version
+2, schema 1 has been deprecated. This could lead to compatibility and
+vulnerability issues in images that haven't been updated to image manifest
+version 2, schema 2.
+
+This page contains information on how to update from image manifest version 2,
+schema 1. However, these instructions will not ensure your new image will run
+successfully. There may be several other issues to troubleshoot that are
+associated with the deprecated image manifest that will block your image from
+running succesfully. A list of possible methods to help update your image is
+also included below.
+
+### Update to image manifest version 2, schema 2
+
+One way to upgrade an image from image manifest version 2, schema 1 to
+schema 2 is to `docker pull` the image and then `docker push` the image with a
+current version of Docker. Doing so will automatically convert the image to use
+the latest image manifest specification.
+
+Converting an image to image manifest version 2, schema 2 converts the
+manifest format, but does not update the contents within the image. Images
+using manifest version 2, schema 1 may contain unpatched vulnerabilities. We
+recommend looking for an alternative image or rebuilding it.
+
+### Update FROM statement
+
+You can rebuild the image by updating the `FROM` statement in your
+`Dockerfile`. If your image manifest is out-of-date, there is a chance the
+image pulled from your `FROM` statement in your `Dockerfile` is also
+out-of-date. See the [Dockerfile reference](https://docs.docker.com/engine/reference/builder/#from)
+and the [Dockerfile best practices guide](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/)
+for more information on how to update the `FROM` statement in your
+`Dockerfile`.
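The pull-and-push upgrade path that page describes reduces to two commands; the image name below is a placeholder:

```shell
# Re-pushing with a current Docker client converts the manifest to schema 2.
docker pull registry.example.com/myorg/myimage:1.0
docker push registry.example.com/myorg/myimage:1.0
```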


@@ -220,7 +220,7 @@ image. It's the direct replacement for the schema-1 manifest.
 - **`urls`** *array*

   Provides a list of URLs from which the content may be fetched. Content
-  should be verified against the `digest` and `size`. This field is
+  must be verified against the `digest` and `size`. This field is
   optional and uncommon.

 ## Example Image Manifest


@@ -14,7 +14,7 @@ var (
 // DownHandler registers a manual_http_status that always returns an Error
 func DownHandler(w http.ResponseWriter, r *http.Request) {
 	if r.Method == "POST" {
-		updater.Update(errors.New("Manual Check"))
+		updater.Update(errors.New("manual Check"))
 	} else {
 		w.WriteHeader(http.StatusNotFound)
 	}


@@ -13,7 +13,7 @@
 // particularly useful for checks that verify upstream connectivity or
 // database status, since they might take a long time to return/timeout.
 //
-// Installing
+// # Installing
 //
 // To install health, just import it in your application:
 //
@@ -35,7 +35,7 @@
 // After importing these packages to your main application, you can start
 // registering checks.
 //
-// Registering Checks
+// # Registering Checks
 //
 // The recommended way of registering checks is using a periodic Check.
 // PeriodicChecks run on a certain schedule and asynchronously update the
@@ -84,7 +84,7 @@
 // return Errors.new("This is an error!")
 // }))
 //
-// Examples
+// # Examples
 //
 // You could also use the health checker mechanism to ensure your application
 // only comes up if certain conditions are met, or to allow the developer to


@@ -8,7 +8,7 @@ import (
 	"github.com/docker/distribution"
 	"github.com/docker/distribution/manifest"
 	"github.com/opencontainers/go-digest"
-	"github.com/opencontainers/image-spec/specs-go/v1"
+	v1 "github.com/opencontainers/image-spec/specs-go/v1"
 )

 const (
@@ -54,6 +54,9 @@ func init() {
 	}

 	imageIndexFunc := func(b []byte) (distribution.Manifest, distribution.Descriptor, error) {
+		if err := validateIndex(b); err != nil {
+			return nil, distribution.Descriptor{}, err
+		}
 		m := new(DeserializedManifestList)
 		err := m.UnmarshalJSON(b)
 		if err != nil {
@@ -163,7 +166,7 @@ func FromDescriptorsWithMediaType(descriptors []ManifestDescriptor, mediaType st
 		},
 	}

-	m.Manifests = make([]ManifestDescriptor, len(descriptors), len(descriptors))
+	m.Manifests = make([]ManifestDescriptor, len(descriptors))
 	copy(m.Manifests, descriptors)

 	deserialized := DeserializedManifestList{
@@ -177,7 +180,7 @@ func FromDescriptorsWithMediaType(descriptors []ManifestDescriptor, mediaType st
 // UnmarshalJSON populates a new ManifestList struct from JSON data.
 func (m *DeserializedManifestList) UnmarshalJSON(b []byte) error {
-	m.canonical = make([]byte, len(b), len(b))
+	m.canonical = make([]byte, len(b))
 	// store manifest list in canonical
 	copy(m.canonical, b)
@@ -214,3 +217,23 @@ func (m DeserializedManifestList) Payload() (string, []byte, error) {
 	return mediaType, m.canonical, nil
 }
+
+// unknownDocument represents a manifest, manifest list, or index that has not
+// yet been validated
+type unknownDocument struct {
+	Config interface{} `json:"config,omitempty"`
+	Layers interface{} `json:"layers,omitempty"`
+}
+
+// validateIndex returns an error if the byte slice is invalid JSON or if it
+// contains fields that belong to a manifest
+func validateIndex(b []byte) error {
+	var doc unknownDocument
+	if err := json.Unmarshal(b, &doc); err != nil {
+		return err
+	}
+	if doc.Config != nil || doc.Layers != nil {
+		return errors.New("index: expected index but found manifest")
+	}
+	return nil
+}


@@ -7,7 +7,9 @@ import (
 	"testing"

 	"github.com/docker/distribution"
-	"github.com/opencontainers/image-spec/specs-go/v1"
+	"github.com/docker/distribution/manifest/ocischema"
+	v1 "github.com/opencontainers/image-spec/specs-go/v1"
 )

 var expectedManifestListSerialization = []byte(`{
@@ -303,3 +305,33 @@ func TestMediaTypes(t *testing.T) {
 	mediaTypeTest(t, v1.MediaTypeImageIndex, v1.MediaTypeImageIndex, false)
 	mediaTypeTest(t, v1.MediaTypeImageIndex, v1.MediaTypeImageIndex+"XXX", true)
 }
+
+func TestValidateManifest(t *testing.T) {
+	manifest := ocischema.Manifest{
+		Config: distribution.Descriptor{Size: 1},
+		Layers: []distribution.Descriptor{{Size: 2}},
+	}
+	index := ManifestList{
+		Manifests: []ManifestDescriptor{
+			{Descriptor: distribution.Descriptor{Size: 3}},
+		},
+	}
+	t.Run("valid", func(t *testing.T) {
+		b, err := json.Marshal(index)
+		if err != nil {
+			t.Fatal("unexpected error marshaling index", err)
+		}
+		if err := validateIndex(b); err != nil {
+			t.Error("index should be valid", err)
+		}
+	})
+	t.Run("invalid", func(t *testing.T) {
+		b, err := json.Marshal(manifest)
+		if err != nil {
+			t.Fatal("unexpected error marshaling manifest", err)
+		}
+		if err := validateIndex(b); err == nil {
+			t.Error("manifest should not be valid")
+		}
+	})
+}


@@ -7,7 +7,7 @@ import (
 	"github.com/docker/distribution"
 	"github.com/docker/distribution/manifest"
 	"github.com/opencontainers/go-digest"
-	"github.com/opencontainers/image-spec/specs-go/v1"
+	v1 "github.com/opencontainers/image-spec/specs-go/v1"
 )

 // Builder is a type for constructing manifests.
@@ -48,7 +48,7 @@ func NewManifestBuilder(bs distribution.BlobService, configJSON []byte, annotati
 // valid media type for oci image manifests currently: "" or "application/vnd.oci.image.manifest.v1+json"
 func (mb *Builder) SetMediaType(mediaType string) error {
 	if mediaType != "" && mediaType != v1.MediaTypeImageManifest {
-		return errors.New("Invalid media type for OCI image manifest")
+		return errors.New("invalid media type for OCI image manifest")
 	}

 	mb.mediaType = mediaType

@@ -7,7 +7,7 @@ import (
 	"github.com/docker/distribution"
 	"github.com/opencontainers/go-digest"
-	"github.com/opencontainers/image-spec/specs-go/v1"
+	v1 "github.com/opencontainers/image-spec/specs-go/v1"
 )

 type mockBlobService struct {

@ -8,7 +8,7 @@ import (
"github.com/docker/distribution" "github.com/docker/distribution"
"github.com/docker/distribution/manifest" "github.com/docker/distribution/manifest"
"github.com/opencontainers/go-digest" "github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/specs-go/v1" v1 "github.com/opencontainers/image-spec/specs-go/v1"
) )
var ( var (
@ -22,6 +22,9 @@ var (
func init() { func init() {
ocischemaFunc := func(b []byte) (distribution.Manifest, distribution.Descriptor, error) { ocischemaFunc := func(b []byte) (distribution.Manifest, distribution.Descriptor, error) {
if err := validateManifest(b); err != nil {
return nil, distribution.Descriptor{}, err
}
m := new(DeserializedManifest) m := new(DeserializedManifest)
err := m.UnmarshalJSON(b) err := m.UnmarshalJSON(b)
if err != nil { if err != nil {
@ -87,7 +90,7 @@ func FromStruct(m Manifest) (*DeserializedManifest, error) {
// UnmarshalJSON populates a new Manifest struct from JSON data. // UnmarshalJSON populates a new Manifest struct from JSON data.
func (m *DeserializedManifest) UnmarshalJSON(b []byte) error { func (m *DeserializedManifest) UnmarshalJSON(b []byte) error {
m.canonical = make([]byte, len(b), len(b)) m.canonical = make([]byte, len(b))
// store manifest in canonical // store manifest in canonical
copy(m.canonical, b) copy(m.canonical, b)
@ -122,3 +125,22 @@ func (m *DeserializedManifest) MarshalJSON() ([]byte, error) {
func (m DeserializedManifest) Payload() (string, []byte, error) { func (m DeserializedManifest) Payload() (string, []byte, error) {
return v1.MediaTypeImageManifest, m.canonical, nil return v1.MediaTypeImageManifest, m.canonical, nil
} }
// unknownDocument represents a manifest, manifest list, or index that has not
// yet been validated
type unknownDocument struct {
Manifests interface{} `json:"manifests,omitempty"`
}
// validateManifest returns an error if the byte slice is invalid JSON or if it
// contains fields that belong to an index
func validateManifest(b []byte) error {
var doc unknownDocument
if err := json.Unmarshal(b, &doc); err != nil {
return err
}
if doc.Manifests != nil {
return errors.New("ocimanifest: expected manifest but found index")
}
return nil
}

View file

@ -8,7 +8,9 @@ import (
"github.com/docker/distribution" "github.com/docker/distribution"
"github.com/docker/distribution/manifest" "github.com/docker/distribution/manifest"
"github.com/opencontainers/image-spec/specs-go/v1" "github.com/docker/distribution/manifest/manifestlist"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
) )
var expectedManifestSerialization = []byte(`{ var expectedManifestSerialization = []byte(`{
@ -182,3 +184,33 @@ func TestMediaTypes(t *testing.T) {
mediaTypeTest(t, v1.MediaTypeImageManifest, false) mediaTypeTest(t, v1.MediaTypeImageManifest, false)
mediaTypeTest(t, v1.MediaTypeImageManifest+"XXX", true) mediaTypeTest(t, v1.MediaTypeImageManifest+"XXX", true)
} }
func TestValidateManifest(t *testing.T) {
manifest := Manifest{
Config: distribution.Descriptor{Size: 1},
Layers: []distribution.Descriptor{{Size: 2}},
}
index := manifestlist.ManifestList{
Manifests: []manifestlist.ManifestDescriptor{
{Descriptor: distribution.Descriptor{Size: 3}},
},
}
t.Run("valid", func(t *testing.T) {
b, err := json.Marshal(manifest)
if err != nil {
t.Fatal("unexpected error marshaling manifest", err)
}
if err := validateManifest(b); err != nil {
t.Error("manifest should be valid", err)
}
})
t.Run("invalid", func(t *testing.T) {
b, err := json.Marshal(index)
if err != nil {
t.Fatal("unexpected error marshaling index", err)
}
if err := validateManifest(b); err == nil {
t.Error("index should not be valid")
}
})
}

View file

@ -108,7 +108,7 @@ type SignedManifest struct {
// UnmarshalJSON populates a new SignedManifest struct from JSON data. // UnmarshalJSON populates a new SignedManifest struct from JSON data.
func (sm *SignedManifest) UnmarshalJSON(b []byte) error { func (sm *SignedManifest) UnmarshalJSON(b []byte) error {
sm.all = make([]byte, len(b), len(b)) sm.all = make([]byte, len(b))
// store manifest and signatures in all // store manifest and signatures in all
copy(sm.all, b) copy(sm.all, b)
@ -124,7 +124,7 @@ func (sm *SignedManifest) UnmarshalJSON(b []byte) error {
} }
// sm.Canonical stores the canonical manifest JSON // sm.Canonical stores the canonical manifest JSON
sm.Canonical = make([]byte, len(bytes), len(bytes)) sm.Canonical = make([]byte, len(bytes))
copy(sm.Canonical, bytes) copy(sm.Canonical, bytes)
// Unmarshal canonical JSON into Manifest object // Unmarshal canonical JSON into Manifest object

View file

@ -58,7 +58,7 @@ func (mb *referenceManifestBuilder) Build(ctx context.Context) (distribution.Man
func (mb *referenceManifestBuilder) AppendReference(d distribution.Describable) error { func (mb *referenceManifestBuilder) AppendReference(d distribution.Describable) error {
r, ok := d.(Reference) r, ok := d.(Reference)
if !ok { if !ok {
return fmt.Errorf("Unable to add non-reference type to v1 builder") return fmt.Errorf("unable to add non-reference type to v1 builder")
} }
// Entries need to be prepended // Entries need to be prepended

View file

@ -106,7 +106,7 @@ func FromStruct(m Manifest) (*DeserializedManifest, error) {
// UnmarshalJSON populates a new Manifest struct from JSON data. // UnmarshalJSON populates a new Manifest struct from JSON data.
func (m *DeserializedManifest) UnmarshalJSON(b []byte) error { func (m *DeserializedManifest) UnmarshalJSON(b []byte) error {
m.canonical = make([]byte, len(b), len(b)) m.canonical = make([]byte, len(b))
// store manifest in canonical // store manifest in canonical
copy(m.canonical, b) copy(m.canonical, b)

View file

@ -87,7 +87,7 @@ func ManifestMediaTypes() (mediaTypes []string) {
// UnmarshalFunc implements manifest unmarshalling a given MediaType // UnmarshalFunc implements manifest unmarshalling a given MediaType
type UnmarshalFunc func([]byte) (Manifest, Descriptor, error) type UnmarshalFunc func([]byte) (Manifest, Descriptor, error)
var mappings = make(map[string]UnmarshalFunc, 0) var mappings = make(map[string]UnmarshalFunc)
// UnmarshalManifest looks up manifest unmarshal functions based on // UnmarshalManifest looks up manifest unmarshal functions based on
// MediaType // MediaType

View file

@ -125,15 +125,6 @@ func (b *bridge) RepoDeleted(repo reference.Named) error {
return b.sink.Write(*event) return b.sink.Write(*event)
} }
func (b *bridge) createManifestEventAndWrite(action string, repo reference.Named, sm distribution.Manifest) error {
manifestEvent, err := b.createManifestEvent(action, repo, sm)
if err != nil {
return err
}
return b.sink.Write(*manifestEvent)
}
func (b *bridge) createManifestDeleteEventAndWrite(action string, repo reference.Named, dgst digest.Digest) error { func (b *bridge) createManifestDeleteEventAndWrite(action string, repo reference.Named, dgst digest.Digest) error {
event := b.createEvent(action) event := b.createEvent(action)
event.Target.Repository = repo.Name() event.Target.Repository = repo.Name()

View file

@ -6,7 +6,7 @@ import (
"github.com/docker/distribution" "github.com/docker/distribution"
"github.com/docker/distribution/manifest/schema1" "github.com/docker/distribution/manifest/schema1"
"github.com/docker/distribution/reference" "github.com/docker/distribution/reference"
"github.com/docker/distribution/registry/api/v2" v2 "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/uuid" "github.com/docker/distribution/uuid"
"github.com/docker/libtrust" "github.com/docker/libtrust"
"github.com/opencontainers/go-digest" "github.com/opencontainers/go-digest"

View file

@ -114,8 +114,7 @@ func TestEventEnvelopeJSONFormat(t *testing.T) {
prototype.Request.UserAgent = "test/0.1" prototype.Request.UserAgent = "test/0.1"
prototype.Source.Addr = "hostname.local:port" prototype.Source.Addr = "hostname.local:port"
var manifestPush Event var manifestPush = prototype
manifestPush = prototype
manifestPush.ID = "asdf-asdf-asdf-asdf-0" manifestPush.ID = "asdf-asdf-asdf-asdf-0"
manifestPush.Target.Digest = "sha256:0123456789abcdef0" manifestPush.Target.Digest = "sha256:0123456789abcdef0"
manifestPush.Target.Length = 1 manifestPush.Target.Length = 1
@ -124,8 +123,7 @@ func TestEventEnvelopeJSONFormat(t *testing.T) {
manifestPush.Target.Repository = "library/test" manifestPush.Target.Repository = "library/test"
manifestPush.Target.URL = "http://example.com/v2/library/test/manifests/latest" manifestPush.Target.URL = "http://example.com/v2/library/test/manifests/latest"
var layerPush0 Event var layerPush0 = prototype
layerPush0 = prototype
layerPush0.ID = "asdf-asdf-asdf-asdf-1" layerPush0.ID = "asdf-asdf-asdf-asdf-1"
layerPush0.Target.Digest = "sha256:3b3692957d439ac1928219a83fac91e7bf96c153725526874673ae1f2023f8d5" layerPush0.Target.Digest = "sha256:3b3692957d439ac1928219a83fac91e7bf96c153725526874673ae1f2023f8d5"
layerPush0.Target.Length = 2 layerPush0.Target.Length = 2
@ -134,8 +132,7 @@ func TestEventEnvelopeJSONFormat(t *testing.T) {
layerPush0.Target.Repository = "library/test" layerPush0.Target.Repository = "library/test"
layerPush0.Target.URL = "http://example.com/v2/library/test/manifests/latest" layerPush0.Target.URL = "http://example.com/v2/library/test/manifests/latest"
var layerPush1 Event var layerPush1 = prototype
layerPush1 = prototype
layerPush1.ID = "asdf-asdf-asdf-asdf-2" layerPush1.ID = "asdf-asdf-asdf-asdf-2"
layerPush1.Target.Digest = "sha256:3b3692957d439ac1928219a83fac91e7bf96c153725526874673ae1f2023f8d6" layerPush1.Target.Digest = "sha256:3b3692957d439ac1928219a83fac91e7bf96c153725526874673ae1f2023f8d6"
layerPush1.Target.Length = 3 layerPush1.Target.Length = 3

View file

@ -133,8 +133,7 @@ type headerRoundTripper struct {
} }
func (hrt *headerRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) { func (hrt *headerRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
var nreq http.Request var nreq = *req
nreq = *req
nreq.Header = make(http.Header) nreq.Header = make(http.Header)
merge := func(headers http.Header) { merge := func(headers http.Header) {

View file

@ -136,11 +136,10 @@ func checkExerciseRepository(t *testing.T, repository distribution.Repository, r
var blobDigests []digest.Digest var blobDigests []digest.Digest
blobs := repository.Blobs(ctx) blobs := repository.Blobs(ctx)
for i := 0; i < 2; i++ { for i := 0; i < 2; i++ {
rs, ds, err := testutil.CreateRandomTarFile() rs, dgst, err := testutil.CreateRandomTarFile()
if err != nil { if err != nil {
t.Fatalf("error creating test layer: %v", err) t.Fatalf("error creating test layer: %v", err)
} }
dgst := digest.Digest(ds)
blobDigests = append(blobDigests, dgst) blobDigests = append(blobDigests, dgst)
wr, err := blobs.Create(ctx) wr, err := blobs.Create(ctx)

View file

@ -284,11 +284,6 @@ type retryingSink struct {
} }
} }
type retryingSinkListener interface {
active(events ...Event)
retry(events ...Event)
}
// TODO(stevvooe): We are using circuit break here, which actually doesn't // TODO(stevvooe): We are using circuit break here, which actually doesn't
// make a whole lot of sense for this use case, since we always retry. Move // make a whole lot of sense for this use case, since we always retry. Move
// this to use bounded exponential backoff. // this to use bounded exponential backoff.

View file

@ -17,4 +17,4 @@ RUN wget https://golang.org/dl/go$GOLANG_VERSION.linux-amd64.tar.gz --quiet && \
tar -C /usr/local -xzf go$GOLANG_VERSION.linux-amd64.tar.gz && \ tar -C /usr/local -xzf go$GOLANG_VERSION.linux-amd64.tar.gz && \
rm go${GOLANG_VERSION}.linux-amd64.tar.gz rm go${GOLANG_VERSION}.linux-amd64.tar.gz
RUN go get github.com/axw/gocov/gocov github.com/mattn/goveralls github.com/golang/lint/golint RUN go install github.com/axw/gocov/gocov@latest github.com/mattn/goveralls@latest github.com/golang/lint/golint@latest

View file

@ -56,6 +56,35 @@ func ParseNormalizedNamed(s string) (Named, error) {
return named, nil return named, nil
} }
// ParseDockerRef normalizes the image reference following the docker convention. This is added
// mainly for backward compatibility.
// The reference returned can only be either tagged or digested. For a reference that contains
// both a tag and a digest, the function returns the digested reference, e.g. docker.io/library/busybox:latest@
// sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa will be returned as
// docker.io/library/busybox@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa.
func ParseDockerRef(ref string) (Named, error) {
named, err := ParseNormalizedNamed(ref)
if err != nil {
return nil, err
}
if _, ok := named.(NamedTagged); ok {
if canonical, ok := named.(Canonical); ok {
// The reference is both tagged and digested, only
// return digested.
newNamed, err := WithName(canonical.Name())
if err != nil {
return nil, err
}
newCanonical, err := WithDigest(newNamed, canonical.Digest())
if err != nil {
return nil, err
}
return newCanonical, nil
}
}
return TagNameOnly(named), nil
}
// splitDockerDomain splits a repository name to domain and remotename string. // splitDockerDomain splits a repository name to domain and remotename string.
// If no valid domain is found, the default domain is used. Repository name // If no valid domain is found, the default domain is used. Repository name
// needs to be already validated before. // needs to be already validated before.

View file

@ -623,3 +623,83 @@ func TestMatch(t *testing.T) {
} }
} }
} }
func TestParseDockerRef(t *testing.T) {
testcases := []struct {
name string
input string
expected string
}{
{
name: "nothing",
input: "busybox",
expected: "docker.io/library/busybox:latest",
},
{
name: "tag only",
input: "busybox:latest",
expected: "docker.io/library/busybox:latest",
},
{
name: "digest only",
input: "busybox@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582",
expected: "docker.io/library/busybox@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582",
},
{
name: "path only",
input: "library/busybox",
expected: "docker.io/library/busybox:latest",
},
{
name: "hostname only",
input: "docker.io/busybox",
expected: "docker.io/library/busybox:latest",
},
{
name: "no tag",
input: "docker.io/library/busybox",
expected: "docker.io/library/busybox:latest",
},
{
name: "no path",
input: "docker.io/busybox:latest",
expected: "docker.io/library/busybox:latest",
},
{
name: "no hostname",
input: "library/busybox:latest",
expected: "docker.io/library/busybox:latest",
},
{
name: "full reference with tag",
input: "docker.io/library/busybox:latest",
expected: "docker.io/library/busybox:latest",
},
{
name: "gcr reference without tag",
input: "gcr.io/library/busybox",
expected: "gcr.io/library/busybox:latest",
},
{
name: "both tag and digest",
input: "gcr.io/library/busybox:latest@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582",
expected: "gcr.io/library/busybox@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582",
},
}
for _, test := range testcases {
t.Run(test.name, func(t *testing.T) {
normalized, err := ParseDockerRef(test.input)
if err != nil {
t.Fatal(err)
}
output := normalized.String()
if output != test.expected {
t.Fatalf("expected %q to be parsed as %v, got %v", test.input, test.expected, output)
}
_, err = Parse(output)
if err != nil {
t.Fatalf("%q should be a valid reference, but got an error: %v", output, err)
}
})
}
}

View file

@ -205,7 +205,7 @@ func Parse(s string) (Reference, error) {
var repo repository var repo repository
nameMatch := anchoredNameRegexp.FindStringSubmatch(matches[1]) nameMatch := anchoredNameRegexp.FindStringSubmatch(matches[1])
if nameMatch != nil && len(nameMatch) == 3 { if len(nameMatch) == 3 {
repo.domain = nameMatch[1] repo.domain = nameMatch[1]
repo.path = nameMatch[2] repo.path = nameMatch[2]
} else { } else {

View file

@ -639,7 +639,7 @@ func TestParseNamed(t *testing.T) {
failf("error parsing name: %s", err) failf("error parsing name: %s", err)
continue continue
} else if err == nil && testcase.err != nil { } else if err == nil && testcase.err != nil {
failf("parsing succeded: expected error %v", testcase.err) failf("parsing succeeded: expected error %v", testcase.err)
continue continue
} else if err != testcase.err { } else if err != testcase.err {
failf("unexpected error %v, expected %v", err, testcase.err) failf("unexpected error %v, expected %v", err, testcase.err)

View file

@ -207,11 +207,11 @@ func (errs Errors) MarshalJSON() ([]byte, error) {
for _, daErr := range errs { for _, daErr := range errs {
var err Error var err Error
switch daErr.(type) { switch daErr := daErr.(type) {
case ErrorCode: case ErrorCode:
err = daErr.(ErrorCode).WithDetail(nil) err = daErr.WithDetail(nil)
case Error: case Error:
err = daErr.(Error) err = daErr
default: default:
err = ErrorCodeUnknown.WithDetail(daErr) err = ErrorCodeUnknown.WithDetail(daErr)

View file

@ -134,6 +134,19 @@ var (
}, },
} }
invalidPaginationResponseDescriptor = ResponseDescriptor{
Name: "Invalid pagination number",
Description: "The received parameter n was invalid in some way, as described by the error code. The client should resolve the issue and retry the request.",
StatusCode: http.StatusBadRequest,
Body: BodyDescriptor{
ContentType: "application/json",
Format: errorsBody,
},
ErrorCodes: []errcode.ErrorCode{
ErrorCodePaginationNumberInvalid,
},
}
repositoryNotFoundResponseDescriptor = ResponseDescriptor{ repositoryNotFoundResponseDescriptor = ResponseDescriptor{
Name: "No Such Repository Error", Name: "No Such Repository Error",
StatusCode: http.StatusNotFound, StatusCode: http.StatusNotFound,
@ -490,6 +503,7 @@ var routeDescriptors = []RouteDescriptor{
}, },
}, },
Failures: []ResponseDescriptor{ Failures: []ResponseDescriptor{
invalidPaginationResponseDescriptor,
unauthorizedResponseDescriptor, unauthorizedResponseDescriptor,
repositoryNotFoundResponseDescriptor, repositoryNotFoundResponseDescriptor,
deniedResponseDescriptor, deniedResponseDescriptor,
@ -1578,6 +1592,9 @@ var routeDescriptors = []RouteDescriptor{
}, },
}, },
}, },
Failures: []ResponseDescriptor{
invalidPaginationResponseDescriptor,
},
}, },
}, },
}, },

View file

@ -133,4 +133,13 @@ var (
longer proceed.`, longer proceed.`,
HTTPStatusCode: http.StatusNotFound, HTTPStatusCode: http.StatusNotFound,
}) })
ErrorCodePaginationNumberInvalid = errcode.Register(errGroup, errcode.ErrorDescriptor{
Value: "PAGINATION_NUMBER_INVALID",
Message: "invalid number of results requested",
Description: `Returned when the "n" parameter (number of results
to return) is not an integer, "n" is negative or "n" is bigger than
the maximum allowed.`,
HTTPStatusCode: http.StatusBadRequest,
})
) )

View file

@ -252,15 +252,3 @@ func appendValuesURL(u *url.URL, values ...url.Values) *url.URL {
u.RawQuery = merged.Encode() u.RawQuery = merged.Encode()
return u return u
} }
// appendValues appends the parameters to the url. Panics if the string is not
// a url.
func appendValues(u string, values ...url.Values) string {
up, err := url.Parse(u)
if err != nil {
panic(err) // should never happen
}
return appendValuesURL(up, values...).String()
}

View file

@ -182,11 +182,6 @@ func TestURLBuilderWithPrefix(t *testing.T) {
doTest(false) doTest(false)
} }
type builderFromRequestTestCase struct {
request *http.Request
base string
}
func TestBuilderFromRequest(t *testing.T) { func TestBuilderFromRequest(t *testing.T) {
u, err := url.Parse("http://example.com") u, err := url.Parse("http://example.com")
if err != nil { if err != nil {

View file

@ -29,7 +29,6 @@
// } // }
// } // }
// } // }
//
package auth package auth
import ( import (

View file

@ -162,11 +162,14 @@ func checkOptions(options map[string]interface{}) (tokenAccessOptions, error) {
opts.realm, opts.issuer, opts.service, opts.rootCertBundle = vals[0], vals[1], vals[2], vals[3] opts.realm, opts.issuer, opts.service, opts.rootCertBundle = vals[0], vals[1], vals[2], vals[3]
autoRedirect, ok := options["autoredirect"].(bool) autoRedirectVal, ok := options["autoredirect"]
if ok {
autoRedirect, ok := autoRedirectVal.(bool)
if !ok { if !ok {
return opts, fmt.Errorf("token auth requires a valid option bool: autoredirect") return opts, fmt.Errorf("token auth requires a valid option bool: autoredirect")
} }
opts.autoRedirect = autoRedirect opts.autoRedirect = autoRedirect
}
return opts, nil return opts, nil
} }

View file

@ -185,6 +185,7 @@ func (t *Token) Verify(verifyOpts VerifyOptions) error {
// VerifySigningKey attempts to get the key which was used to sign this token. // VerifySigningKey attempts to get the key which was used to sign this token.
// The token header should contain either of these 3 fields: // The token header should contain either of these 3 fields:
//
// `x5c` - The x509 certificate chain for the signing key. Needs to be // `x5c` - The x509 certificate chain for the signing key. Needs to be
// verified. // verified.
// `jwk` - The JSON Web Key representation of the signing key. // `jwk` - The JSON Web Key representation of the signing key.
@ -192,6 +193,7 @@ func (t *Token) Verify(verifyOpts VerifyOptions) error {
// `kid` - The unique identifier for the key. This library interprets it // `kid` - The unique identifier for the key. This library interprets it
// as a libtrust fingerprint. The key itself can be looked up in // as a libtrust fingerprint. The key itself can be looked up in
// the trustedKeys field of the given verify options. // the trustedKeys field of the given verify options.
//
// Each of these methods are tried in that order of preference until the // Each of these methods are tried in that order of preference until the
// signing key is found or an error is returned. // signing key is found or an error is returned.
func (t *Token) VerifySigningKey(verifyOpts VerifyOptions) (signingKey libtrust.PublicKey, err error) { func (t *Token) VerifySigningKey(verifyOpts VerifyOptions) (signingKey libtrust.PublicKey, err error) {

View file

@ -117,8 +117,8 @@ func init() {
var t octetType var t octetType
isCtl := c <= 31 || c == 127 isCtl := c <= 31 || c == 127
isChar := 0 <= c && c <= 127 isChar := 0 <= c && c <= 127
isSeparator := strings.IndexRune(" \t\"(),/:;<=>?@[]\\{}", rune(c)) >= 0 isSeparator := strings.ContainsRune(" \t\"(),/:;<=>?@[]\\{}", rune(c))
if strings.IndexRune(" \t\r\n", rune(c)) >= 0 { if strings.ContainsRune(" \t\r\n", rune(c)) {
t |= isSpace t |= isSpace
} }
if isChar && !isCtl && !isSeparator { if isChar && !isCtl && !isSeparator {

View file

@ -466,7 +466,7 @@ func TestEndpointAuthorizeTokenBasic(t *testing.T) {
}, },
}) })
authenicate1 := fmt.Sprintf("Basic realm=localhost") authenicate1 := "Basic realm=localhost"
basicCheck := func(a string) bool { basicCheck := func(a string) bool {
return a == fmt.Sprintf("Basic %s", basicAuth(username, password)) return a == fmt.Sprintf("Basic %s", basicAuth(username, password))
} }
@ -546,7 +546,7 @@ func TestEndpointAuthorizeTokenBasicWithExpiresIn(t *testing.T) {
}, },
}) })
authenicate1 := fmt.Sprintf("Basic realm=localhost") authenicate1 := "Basic realm=localhost"
tokenExchanges := 0 tokenExchanges := 0
basicCheck := func(a string) bool { basicCheck := func(a string) bool {
tokenExchanges = tokenExchanges + 1 tokenExchanges = tokenExchanges + 1
@ -706,7 +706,7 @@ func TestEndpointAuthorizeTokenBasicWithExpiresInAndIssuedAt(t *testing.T) {
}, },
}) })
authenicate1 := fmt.Sprintf("Basic realm=localhost") authenicate1 := "Basic realm=localhost"
tokenExchanges := 0 tokenExchanges := 0
basicCheck := func(a string) bool { basicCheck := func(a string) bool {
tokenExchanges = tokenExchanges + 1 tokenExchanges = tokenExchanges + 1
@ -835,7 +835,7 @@ func TestEndpointAuthorizeBasic(t *testing.T) {
username := "user1" username := "user1"
password := "funSecretPa$$word" password := "funSecretPa$$word"
authenicate := fmt.Sprintf("Basic realm=localhost") authenicate := "Basic realm=localhost"
validCheck := func(a string) bool { validCheck := func(a string) bool {
return a == fmt.Sprintf("Basic %s", basicAuth(username, password)) return a == fmt.Sprintf("Basic %s", basicAuth(username, password))
} }

View file

@ -8,7 +8,7 @@ import (
"github.com/docker/distribution" "github.com/docker/distribution"
"github.com/docker/distribution/registry/api/errcode" "github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2" v2 "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/testutil" "github.com/docker/distribution/testutil"
) )

View file

@ -55,6 +55,8 @@ func parseHTTPErrorResponse(statusCode int, r io.Reader) error {
switch statusCode { switch statusCode {
case http.StatusUnauthorized: case http.StatusUnauthorized:
return errcode.ErrorCodeUnauthorized.WithMessage(detailsErr.Details) return errcode.ErrorCodeUnauthorized.WithMessage(detailsErr.Details)
case http.StatusForbidden:
return errcode.ErrorCodeDenied.WithMessage(detailsErr.Details)
case http.StatusTooManyRequests: case http.StatusTooManyRequests:
return errcode.ErrorCodeTooManyRequests.WithMessage(detailsErr.Details) return errcode.ErrorCodeTooManyRequests.WithMessage(detailsErr.Details)
default: default:

View file

@ -102,3 +102,18 @@ func TestHandleErrorResponseUnexpectedStatusCode501(t *testing.T) {
t.Errorf("Expected \"%s\", got: \"%s\"", expectedMsg, err.Error()) t.Errorf("Expected \"%s\", got: \"%s\"", expectedMsg, err.Error())
} }
} }
func TestHandleErrorResponseInsufficientPrivileges403(t *testing.T) {
json := `{"details":"requesting higher privileges than access token allows"}`
response := &http.Response{
Status: "403 Forbidden",
StatusCode: 403,
Body: nopCloser{bytes.NewBufferString(json)},
}
err := HandleErrorResponse(response)
expectedMsg := "denied: requesting higher privileges than access token allows"
if !strings.Contains(err.Error(), expectedMsg) {
t.Errorf("Expected \"%s\", got: \"%s\"", expectedMsg, err.Error())
}
}

View file

@ -16,7 +16,7 @@ import (
"github.com/docker/distribution" "github.com/docker/distribution"
"github.com/docker/distribution/reference" "github.com/docker/distribution/reference"
"github.com/docker/distribution/registry/api/v2" v2 "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/client/transport" "github.com/docker/distribution/registry/client/transport"
"github.com/docker/distribution/registry/storage/cache" "github.com/docker/distribution/registry/storage/cache"
"github.com/docker/distribution/registry/storage/cache/memory" "github.com/docker/distribution/registry/storage/cache/memory"
@ -114,9 +114,7 @@ func (r *registry) Repositories(ctx context.Context, entries []string, last stri
return 0, err return 0, err
} }
for cnt := range ctlg.Repositories { copy(entries, ctlg.Repositories)
entries[cnt] = ctlg.Repositories[cnt]
}
numFilled = len(ctlg.Repositories) numFilled = len(ctlg.Repositories)
link := resp.Header.Get("Link") link := resp.Header.Get("Link")
@ -736,7 +734,12 @@ func (bs *blobs) Create(ctx context.Context, options ...distribution.BlobCreateO
return nil, err return nil, err
} }
resp, err := bs.client.Post(u, "", nil) req, err := http.NewRequest("POST", u, nil)
if err != nil {
return nil, err
}
resp, err := bs.client.Do(req)
if err != nil { if err != nil {
return nil, err return nil, err
} }

View file

@ -22,7 +22,7 @@ import (
"github.com/docker/distribution/manifest/schema1" "github.com/docker/distribution/manifest/schema1"
"github.com/docker/distribution/reference" "github.com/docker/distribution/reference"
"github.com/docker/distribution/registry/api/errcode" "github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2" v2 "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/testutil" "github.com/docker/distribution/testutil"
"github.com/docker/distribution/uuid" "github.com/docker/distribution/uuid"
"github.com/docker/libtrust" "github.com/docker/libtrust"
@ -152,7 +152,7 @@ func TestBlobFetch(t *testing.T) {
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
if bytes.Compare(b, b1) != 0 { if !bytes.Equal(b, b1) {
t.Fatalf("Wrong bytes values fetched: [%d]byte != [%d]byte", len(b), len(b1)) t.Fatalf("Wrong bytes values fetched: [%d]byte != [%d]byte", len(b), len(b1))
} }

View file

@ -180,7 +180,6 @@ func (hrs *httpReadSeeker) reader() (io.Reader, error) {
// context.GetLogger(hrs.context).Infof("Range: %s", req.Header.Get("Range")) // context.GetLogger(hrs.context).Infof("Range: %s", req.Header.Get("Range"))
} }
req.Header.Add("Accept-Encoding", "identity")
resp, err := hrs.client.Do(req) resp, err := hrs.client.Do(req)
if err != nil { if err != nil {
return nil, err return nil, err

View file

@ -28,7 +28,7 @@ import (
"github.com/docker/distribution/manifest/schema2" "github.com/docker/distribution/manifest/schema2"
"github.com/docker/distribution/reference" "github.com/docker/distribution/reference"
"github.com/docker/distribution/registry/api/errcode" "github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2" v2 "github.com/docker/distribution/registry/api/v2"
storagedriver "github.com/docker/distribution/registry/storage/driver" storagedriver "github.com/docker/distribution/registry/storage/driver"
"github.com/docker/distribution/registry/storage/driver/factory" "github.com/docker/distribution/registry/storage/driver/factory"
_ "github.com/docker/distribution/registry/storage/driver/testdriver" _ "github.com/docker/distribution/registry/storage/driver/testdriver"
@ -81,21 +81,23 @@ func TestCheckAPI(t *testing.T) {
// TestCatalogAPI tests the /v2/_catalog endpoint // TestCatalogAPI tests the /v2/_catalog endpoint
func TestCatalogAPI(t *testing.T) { func TestCatalogAPI(t *testing.T) {
chunkLen := 2
env := newTestEnv(t, false) env := newTestEnv(t, false)
defer env.Shutdown() defer env.Shutdown()
values := url.Values{ maxEntries := env.config.Catalog.MaxEntries
"last": []string{""}, allCatalog := []string{
"n": []string{strconv.Itoa(chunkLen)}} "foo/aaaa", "foo/bbbb", "foo/cccc", "foo/dddd", "foo/eeee", "foo/ffff",
}
catalogURL, err := env.builder.BuildCatalogURL(values) chunkLen := maxEntries - 1
catalogURL, err := env.builder.BuildCatalogURL()
if err != nil { if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err) t.Fatalf("unexpected error building catalog url: %v", err)
} }
// ----------------------------------- // -----------------------------------
// try to get an empty catalog // Case No. 1: Empty catalog
resp, err := http.Get(catalogURL) resp, err := http.Get(catalogURL)
if err != nil { if err != nil {
t.Fatalf("unexpected error issuing request: %v", err) t.Fatalf("unexpected error issuing request: %v", err)
@ -113,23 +115,22 @@ func TestCatalogAPI(t *testing.T) {
t.Fatalf("error decoding fetched manifest: %v", err) t.Fatalf("error decoding fetched manifest: %v", err)
} }
// we haven't pushed anything to the registry yet // No images pushed = no image returned
if len(ctlg.Repositories) != 0 { if len(ctlg.Repositories) != 0 {
t.Fatalf("repositories has unexpected values") t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", 0, len(ctlg.Repositories))
} }
// No pagination should be returned
if resp.Header.Get("Link") != "" { if resp.Header.Get("Link") != "" {
t.Fatalf("repositories has more data when none expected") t.Fatalf("repositories has more data when none expected")
} }
// ----------------------------------- for _, image := range allCatalog {
// push something to the registry and try again
images := []string{"foo/aaaa", "foo/bbbb", "foo/cccc"}
for _, image := range images {
createRepository(env, t, image, "sometag") createRepository(env, t, image, "sometag")
} }
// -----------------------------------
// Case No. 2: Catalog populated & n not provided (n internally will be min(100, maxEntries))
resp, err = http.Get(catalogURL) resp, err = http.Get(catalogURL)
 if err != nil {
 	t.Fatalf("unexpected error issuing request: %v", err)
@@ -143,27 +144,30 @@ func TestCatalogAPI(t *testing.T) {
 	t.Fatalf("error decoding fetched manifest: %v", err)
 }

-	if len(ctlg.Repositories) != chunkLen {
-		t.Fatalf("repositories has unexpected values")
+	// it must match max entries
+	if len(ctlg.Repositories) != maxEntries {
+		t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", maxEntries, len(ctlg.Repositories))
 	}

-	for _, image := range images[:chunkLen] {
+	// it must return the first maxEntries entries from the catalog
+	for _, image := range allCatalog[:maxEntries] {
 		if !contains(ctlg.Repositories, image) {
 			t.Fatalf("didn't find our repository '%s' in the catalog", image)
 		}
 	}

+	// fail if there's no pagination
 	link := resp.Header.Get("Link")
 	if link == "" {
 		t.Fatalf("repositories has less data than expected")
 	}

-	newValues := checkLink(t, link, chunkLen, ctlg.Repositories[len(ctlg.Repositories)-1])

 	// -----------------------------------
-	// get the last chunk of data
-	catalogURL, err = env.builder.BuildCatalogURL(newValues)
+	// Case No. 2.1: Second page (n internally will be min(100, maxEntries))
+	// build pagination link
+	values := checkLink(t, link, maxEntries, ctlg.Repositories[len(ctlg.Repositories)-1])
+	catalogURL, err = env.builder.BuildCatalogURL(values)
 	if err != nil {
 		t.Fatalf("unexpected error building catalog url: %v", err)
 	}
@@ -181,18 +185,269 @@ func TestCatalogAPI(t *testing.T) {
 	t.Fatalf("error decoding fetched manifest: %v", err)
 }

-	if len(ctlg.Repositories) != 1 {
-		t.Fatalf("repositories has unexpected values")
+	expectedRemainder := len(allCatalog) - maxEntries
+	if len(ctlg.Repositories) != expectedRemainder {
+		t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", expectedRemainder, len(ctlg.Repositories))
 	}

-	lastImage := images[len(images)-1]
-	if !contains(ctlg.Repositories, lastImage) {
-		t.Fatalf("didn't find our repository '%s' in the catalog", lastImage)
+	// -----------------------------------
+	// Case No. 3: request n = maxentries
+	values = url.Values{
+		"last": []string{""},
+		"n":    []string{strconv.Itoa(maxEntries)},
 	}
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusOK)
dec = json.NewDecoder(resp.Body)
if err = dec.Decode(&ctlg); err != nil {
t.Fatalf("error decoding fetched manifest: %v", err)
}
if len(ctlg.Repositories) != maxEntries {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", maxEntries, len(ctlg.Repositories))
}
// fail if there's no pagination
 	link = resp.Header.Get("Link")
-	if link != "" {
-		t.Fatalf("catalog has unexpected data")
+	if link == "" {
+		t.Fatalf("repositories has less data than expected")
}
// -----------------------------------
// Case No. 3.1: Second (last) page
// build pagination link
values = checkLink(t, link, maxEntries, ctlg.Repositories[len(ctlg.Repositories)-1])
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusOK)
dec = json.NewDecoder(resp.Body)
if err = dec.Decode(&ctlg); err != nil {
t.Fatalf("error decoding fetched manifest: %v", err)
}
expectedRemainder = len(allCatalog) - maxEntries
if len(ctlg.Repositories) != expectedRemainder {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", expectedRemainder, len(ctlg.Repositories))
}
// -----------------------------------
// Case No. 4: request n < maxentries
values = url.Values{
"n": []string{strconv.Itoa(chunkLen)},
}
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusOK)
dec = json.NewDecoder(resp.Body)
if err = dec.Decode(&ctlg); err != nil {
t.Fatalf("error decoding fetched manifest: %v", err)
}
// returns the requested amount
if len(ctlg.Repositories) != chunkLen {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", expectedRemainder, len(ctlg.Repositories))
}
// fail if there's no pagination
link = resp.Header.Get("Link")
if link == "" {
t.Fatalf("repositories has less data than expected")
}
// -----------------------------------
// Case No. 4.1: request n < maxentries (second page)
// build pagination link
values = checkLink(t, link, chunkLen, ctlg.Repositories[len(ctlg.Repositories)-1])
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusOK)
dec = json.NewDecoder(resp.Body)
if err = dec.Decode(&ctlg); err != nil {
t.Fatalf("error decoding fetched manifest: %v", err)
}
expectedRemainder = len(allCatalog) - chunkLen
if len(ctlg.Repositories) != expectedRemainder {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", expectedRemainder, len(ctlg.Repositories))
}
// -----------------------------------
// Case No. 5: request n > maxentries | return err: ErrorCodePaginationNumberInvalid
values = url.Values{
"n": []string{strconv.Itoa(maxEntries + 10)},
}
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusBadRequest)
checkBodyHasErrorCodes(t, "invalid number of results requested", resp, v2.ErrorCodePaginationNumberInvalid)
// -----------------------------------
// Case No. 6: request n > maxentries but <= total catalog | return err: ErrorCodePaginationNumberInvalid
values = url.Values{
"n": []string{strconv.Itoa(len(allCatalog))},
}
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusBadRequest)
checkBodyHasErrorCodes(t, "invalid number of results requested", resp, v2.ErrorCodePaginationNumberInvalid)
// -----------------------------------
// Case No. 7: n = 0 | n is set to max(0, min(defaultEntries, maxEntries))
values = url.Values{
"n": []string{"0"},
}
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusOK)
dec = json.NewDecoder(resp.Body)
if err = dec.Decode(&ctlg); err != nil {
t.Fatalf("error decoding fetched manifest: %v", err)
}
// it must be empty
if len(ctlg.Repositories) != 0 {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", 0, len(ctlg.Repositories))
}
// -----------------------------------
// Case No. 8: n = -1 | n is set to max(0, min(defaultEntries, maxEntries))
values = url.Values{
"n": []string{"-1"},
}
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusOK)
dec = json.NewDecoder(resp.Body)
if err = dec.Decode(&ctlg); err != nil {
t.Fatalf("error decoding fetched manifest: %v", err)
}
// it must match max entries
if len(ctlg.Repositories) != maxEntries {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", expectedRemainder, len(ctlg.Repositories))
}
// -----------------------------------
// Case No. 9: n = 5, max = 5, total catalog = 4
values = url.Values{
"n": []string{strconv.Itoa(maxEntries)},
}
envWithLessImages := newTestEnv(t, false)
for _, image := range allCatalog[0:(maxEntries - 1)] {
createRepository(envWithLessImages, t, image, "sometag")
}
catalogURL, err = envWithLessImages.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusOK)
dec = json.NewDecoder(resp.Body)
if err = dec.Decode(&ctlg); err != nil {
t.Fatalf("error decoding fetched manifest: %v", err)
}
// it must match max entries
if len(ctlg.Repositories) != maxEntries-1 {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", maxEntries-1, len(ctlg.Repositories))
 	}
 }
@@ -207,7 +462,7 @@ func checkLink(t *testing.T, urlStr string, numEntries int, last string) url.Val
 	urlValues := linkURL.Query()

 	if urlValues.Get("n") != strconv.Itoa(numEntries) {
-		t.Fatalf("Catalog link entry size is incorrect")
+		t.Fatalf("Catalog link entry size is incorrect (expected: %v, returned: %v)", urlValues.Get("n"), strconv.Itoa(numEntries))
 	}

 	if urlValues.Get("last") != last {
@@ -959,7 +1214,6 @@ func testManifestWithStorageError(t *testing.T, env *testEnv, imageName referenc
 	defer resp.Body.Close()
 	checkResponse(t, "getting non-existent manifest", resp, expectedStatusCode)
 	checkBodyHasErrorCodes(t, "getting non-existent manifest", resp, expectedErrorCode)
-	return
 }

 func testManifestAPISchema1(t *testing.T, env *testEnv, imageName reference.Named) manifestArgs {
@@ -1066,12 +1320,11 @@ func testManifestAPISchema1(t *testing.T, env *testEnv, imageName reference.Name
 	expectedLayers := make(map[digest.Digest]io.ReadSeeker)

 	for i := range unsignedManifest.FSLayers {
-		rs, dgstStr, err := testutil.CreateRandomTarFile()
+		rs, dgst, err := testutil.CreateRandomTarFile()
 		if err != nil {
 			t.Fatalf("error creating random layer %d: %v", i, err)
 		}
-		dgst := digest.Digest(dgstStr)

 		expectedLayers[dgst] = rs
 		unsignedManifest.FSLayers[i].BlobSum = dgst
@@ -1405,12 +1658,11 @@ func testManifestAPISchema2(t *testing.T, env *testEnv, imageName reference.Name
 	expectedLayers := make(map[digest.Digest]io.ReadSeeker)

 	for i := range manifest.Layers {
-		rs, dgstStr, err := testutil.CreateRandomTarFile()
+		rs, dgst, err := testutil.CreateRandomTarFile()
 		if err != nil {
 			t.Fatalf("error creating random layer %d: %v", i, err)
 		}
-		dgst := digest.Digest(dgstStr)

 		expectedLayers[dgst] = rs
 		manifest.Layers[i].Digest = dgst
@@ -2026,6 +2278,9 @@ func newTestEnvMirror(t *testing.T, deleteEnabled bool) *testEnv {
 		Proxy: configuration.Proxy{
 			RemoteURL: "http://example.com",
 		},
+		Catalog: configuration.Catalog{
+			MaxEntries: 5,
+		},
 	}
 	config.Compatibility.Schema1.Enabled = true
@@ -2042,6 +2297,9 @@ func newTestEnv(t *testing.T, deleteEnabled bool) *testEnv {
 			"enabled": false,
 		}},
 		},
+		Catalog: configuration.Catalog{
+			MaxEntries: 5,
+		},
 	}
 	config.Compatibility.Schema1.Enabled = true
@@ -2294,7 +2552,6 @@ func checkResponse(t *testing.T, msg string, resp *http.Response, expectedStatus
 	if resp.StatusCode != expectedStatus {
 		t.Logf("unexpected status %s: %v != %v", msg, resp.StatusCode, expectedStatus)
 		maybeDumpResponse(t, resp)
 		t.FailNow()
 	}
@@ -2357,7 +2614,7 @@ func checkBodyHasErrorCodes(t *testing.T, msg string, resp *http.Response, error
 	// Ensure that counts of expected errors were all non-zero
 	for code := range expected {
 		if counts[code] == 0 {
-			t.Fatalf("expected error code %v not encounterd during %s: %s", code, msg, string(p))
+			t.Fatalf("expected error code %v not encountered during %s: %s", code, msg, string(p))
 		}
 	}
@@ -2432,11 +2689,10 @@ func createRepository(env *testEnv, t *testing.T, imageName string, tag string)
 	expectedLayers := make(map[digest.Digest]io.ReadSeeker)

 	for i := range unsignedManifest.FSLayers {
-		rs, dgstStr, err := testutil.CreateRandomTarFile()
+		rs, dgst, err := testutil.CreateRandomTarFile()
 		if err != nil {
 			t.Fatalf("error creating random layer %d: %v", i, err)
 		}
-		dgst := digest.Digest(dgstStr)

 		expectedLayers[dgst] = rs
 		unsignedManifest.FSLayers[i].BlobSum = dgst

View file

@@ -2,10 +2,11 @@ package handlers

 import (
 	"context"
-	cryptorand "crypto/rand"
+	"crypto/rand"
 	"expvar"
 	"fmt"
-	"math/rand"
+	"math"
+	"math/big"
 	"net"
 	"net/http"
 	"net/url"
@@ -24,7 +25,7 @@ import (
 	"github.com/docker/distribution/notifications"
 	"github.com/docker/distribution/reference"
 	"github.com/docker/distribution/registry/api/errcode"
-	"github.com/docker/distribution/registry/api/v2"
+	v2 "github.com/docker/distribution/registry/api/v2"
 	"github.com/docker/distribution/registry/auth"
 	registrymiddleware "github.com/docker/distribution/registry/middleware/registry"
 	repositorymiddleware "github.com/docker/distribution/registry/middleware/repository"
@@ -610,7 +611,7 @@ func (app *App) configureLogHook(configuration *configuration.Configuration) {

 func (app *App) configureSecret(configuration *configuration.Configuration) {
 	if configuration.HTTP.Secret == "" {
 		var secretBytes [randomSecretSize]byte
-		if _, err := cryptorand.Read(secretBytes[:]); err != nil {
+		if _, err := rand.Read(secretBytes[:]); err != nil {
 			panic(fmt.Sprintf("could not generate random bytes for HTTP secret: %v", err))
 		}
 		configuration.HTTP.Secret = string(secretBytes[:])
@@ -753,20 +754,18 @@ func (app *App) logError(ctx context.Context, errors errcode.Errors) {
 	for _, e1 := range errors {
 		var c context.Context

-		switch e1.(type) {
+		switch e := e1.(type) {
 		case errcode.Error:
-			e, _ := e1.(errcode.Error)
 			c = context.WithValue(ctx, errCodeKey{}, e.Code)
 			c = context.WithValue(c, errMessageKey{}, e.Message)
 			c = context.WithValue(c, errDetailKey{}, e.Detail)
 		case errcode.ErrorCode:
-			e, _ := e1.(errcode.ErrorCode)
 			c = context.WithValue(ctx, errCodeKey{}, e)
 			c = context.WithValue(c, errMessageKey{}, e.Message())
 		default:
 			// just normal go 'error'
 			c = context.WithValue(ctx, errCodeKey{}, errcode.ErrorCodeUnknown)
-			c = context.WithValue(c, errMessageKey{}, e1.Error())
+			c = context.WithValue(c, errMessageKey{}, e.Error())
 		}

 		c = dcontext.WithLogger(c, dcontext.GetLogger(c,
@@ -1062,8 +1061,13 @@ func startUploadPurger(ctx context.Context, storageDriver storagedriver.StorageD
 	}

 	go func() {
-		rand.Seed(time.Now().Unix())
-		jitter := time.Duration(rand.Int()%60) * time.Minute
+		randInt, err := rand.Int(rand.Reader, new(big.Int).SetInt64(math.MaxInt64))
+		if err != nil {
+			log.Infof("Failed to generate random jitter: %v", err)
+			// sleep 30min for failure case
+			randInt = big.NewInt(30)
+		}
+		jitter := time.Duration(randInt.Int64()%60) * time.Minute
 		log.Infof("Starting upload purge in %s", jitter)
 		time.Sleep(jitter)

View file

@@ -11,7 +11,7 @@ import (
 	"github.com/docker/distribution/configuration"
 	"github.com/docker/distribution/context"
 	"github.com/docker/distribution/registry/api/errcode"
-	"github.com/docker/distribution/registry/api/v2"
+	v2 "github.com/docker/distribution/registry/api/v2"
 	"github.com/docker/distribution/registry/auth"
 	_ "github.com/docker/distribution/registry/auth/silly"
 	"github.com/docker/distribution/registry/storage"

View file

@@ -1,3 +1,4 @@
+//go:build go1.4
 // +build go1.4

 package handlers

View file

@@ -1,3 +1,4 @@
+//go:build !go1.4
 // +build !go1.4

 package handlers

View file

@@ -6,7 +6,7 @@ import (
 	"github.com/docker/distribution"
 	"github.com/docker/distribution/context"
 	"github.com/docker/distribution/registry/api/errcode"
-	"github.com/docker/distribution/registry/api/v2"
+	v2 "github.com/docker/distribution/registry/api/v2"
 	"github.com/gorilla/handlers"
 	"github.com/opencontainers/go-digest"
 )

View file

@@ -9,7 +9,7 @@ import (
 	dcontext "github.com/docker/distribution/context"
 	"github.com/docker/distribution/reference"
 	"github.com/docker/distribution/registry/api/errcode"
-	"github.com/docker/distribution/registry/api/v2"
+	v2 "github.com/docker/distribution/registry/api/v2"
 	"github.com/docker/distribution/registry/storage"
 	"github.com/gorilla/handlers"
 	"github.com/opencontainers/go-digest"
@@ -172,7 +172,7 @@ func (buh *blobUploadHandler) PatchBlobData(w http.ResponseWriter, r *http.Reque
 	ct := r.Header.Get("Content-Type")
 	if ct != "" && ct != "application/octet-stream" {
-		buh.Errors = append(buh.Errors, errcode.ErrorCodeUnknown.WithDetail(fmt.Errorf("Bad Content-Type")))
+		buh.Errors = append(buh.Errors, errcode.ErrorCodeUnknown.WithDetail(fmt.Errorf("bad Content-Type")))
 		// TODO(dmcgowan): encode error
 		return
 	}

View file

@@ -9,11 +9,13 @@ import (
 	"strconv"

 	"github.com/docker/distribution/registry/api/errcode"
+	v2 "github.com/docker/distribution/registry/api/v2"
 	"github.com/docker/distribution/registry/storage/driver"
 	"github.com/gorilla/handlers"
 )

-const maximumReturnedEntries = 100
+const defaultReturnedEntries = 100

 func catalogDispatcher(ctx *Context, r *http.Request) http.Handler {
 	catalogHandler := &catalogHandler{
@@ -38,29 +40,55 @@ func (ch *catalogHandler) GetCatalog(w http.ResponseWriter, r *http.Request) {
 	q := r.URL.Query()
 	lastEntry := q.Get("last")

-	maxEntries, err := strconv.Atoi(q.Get("n"))
-	if err != nil || maxEntries < 0 {
-		maxEntries = maximumReturnedEntries
+	entries := defaultReturnedEntries
+	maximumConfiguredEntries := ch.App.Config.Catalog.MaxEntries
+
+	// parse n, if n unparseable, or negative assign it to defaultReturnedEntries
+	if n := q.Get("n"); n != "" {
+		parsedMax, err := strconv.Atoi(n)
+		if err == nil {
+			if parsedMax > maximumConfiguredEntries {
+				ch.Errors = append(ch.Errors, v2.ErrorCodePaginationNumberInvalid.WithDetail(map[string]int{"n": parsedMax}))
+				return
+			} else if parsedMax >= 0 {
+				entries = parsedMax
+			}
+		}
 	}

-	repos := make([]string, maxEntries)
+	// then enforce entries to be between 0 & maximumConfiguredEntries
+	// max(0, min(entries, maximumConfiguredEntries))
+	if entries < 0 || entries > maximumConfiguredEntries {
+		entries = maximumConfiguredEntries
+	}

-	filled, err := ch.App.registry.Repositories(ch.Context, repos, lastEntry)
-	_, pathNotFound := err.(driver.PathNotFoundError)
-	if err == io.EOF || pathNotFound {
+	repos := make([]string, entries)
+	filled := 0
+	// entries is guaranteed to be >= 0 and < maximumConfiguredEntries
+	if entries == 0 {
 		moreEntries = false
-	} else if err != nil {
-		ch.Errors = append(ch.Errors, errcode.ErrorCodeUnknown.WithDetail(err))
-		return
+	} else {
+		returnedRepositories, err := ch.App.registry.Repositories(ch.Context, repos, lastEntry)
+		if err != nil {
+			_, pathNotFound := err.(driver.PathNotFoundError)
+			if err != io.EOF && !pathNotFound {
+				ch.Errors = append(ch.Errors, errcode.ErrorCodeUnknown.WithDetail(err))
+				return
+			}
+			// err is either io.EOF or not PathNotFoundError
+			moreEntries = false
+		}
+		filled = returnedRepositories
 	}

 	w.Header().Set("Content-Type", "application/json; charset=utf-8")

 	// Add a link header if there are more entries to retrieve
 	if moreEntries {
-		lastEntry = repos[len(repos)-1]
-		urlStr, err := createLinkEntry(r.URL.String(), maxEntries, lastEntry)
+		lastEntry = repos[filled-1]
+		urlStr, err := createLinkEntry(r.URL.String(), entries, lastEntry)
 		if err != nil {
 			ch.Errors = append(ch.Errors, errcode.ErrorCodeUnknown.WithDetail(err))
 			return

View file

@@ -8,7 +8,7 @@ import (
 	"github.com/docker/distribution"
 	dcontext "github.com/docker/distribution/context"
 	"github.com/docker/distribution/registry/api/errcode"
-	"github.com/docker/distribution/registry/api/v2"
+	v2 "github.com/docker/distribution/registry/api/v2"
 	"github.com/docker/distribution/registry/auth"
 	"github.com/opencontainers/go-digest"
 )

View file

@@ -20,7 +20,7 @@ type logHook struct {
 func (hook *logHook) Fire(entry *logrus.Entry) error {
 	addr := strings.Split(hook.Mail.Addr, ":")
 	if len(addr) != 2 {
-		return errors.New("Invalid Mail Address")
+		return errors.New("invalid Mail Address")
 	}
 	host := addr[0]
 	subject := fmt.Sprintf("[%s] %s: %s", entry.Level, host, entry.Message)
@@ -37,7 +37,7 @@ func (hook *logHook) Fire(entry *logrus.Entry) error {
 	if err := t.Execute(b, entry); err != nil {
 		return err
 	}
-	body := fmt.Sprintf("%s", b)
+	body := b.String()

 	return hook.Mail.sendMail(subject, body)
 }

View file

@@ -17,7 +17,7 @@ type mailer struct {
 func (mail *mailer) sendMail(subject, message string) error {
 	addr := strings.Split(mail.Addr, ":")
 	if len(addr) != 2 {
-		return errors.New("Invalid Mail Address")
+		return errors.New("invalid Mail Address")
 	}
 	host := addr[0]
 	msg := []byte("To:" + strings.Join(mail.To, ";") +

View file

@@ -14,11 +14,11 @@ import (
 	"github.com/docker/distribution/manifest/schema2"
 	"github.com/docker/distribution/reference"
 	"github.com/docker/distribution/registry/api/errcode"
-	"github.com/docker/distribution/registry/api/v2"
+	v2 "github.com/docker/distribution/registry/api/v2"
 	"github.com/docker/distribution/registry/auth"
 	"github.com/gorilla/handlers"
 	"github.com/opencontainers/go-digest"
-	"github.com/opencontainers/image-spec/specs-go/v1"
+	v1 "github.com/opencontainers/image-spec/specs-go/v1"
 )

 // These constants determine which architecture and OS to choose from a

View file

@@ -6,7 +6,7 @@ import (
 	"github.com/docker/distribution"
 	"github.com/docker/distribution/registry/api/errcode"
-	"github.com/docker/distribution/registry/api/v2"
+	v2 "github.com/docker/distribution/registry/api/v2"
 	"github.com/gorilla/handlers"
 )

View file

@@ -6,7 +6,6 @@ import (
 	"net/http"
 	"strconv"
 	"sync"
-	"time"

 	"github.com/docker/distribution"
 	dcontext "github.com/docker/distribution/context"
@@ -15,9 +14,6 @@ import (
 	"github.com/opencontainers/go-digest"
 )

-// todo(richardscothern): from cache control header or config file
-const blobTTL = 24 * 7 * time.Hour

 type proxyBlobStore struct {
 	localStore  distribution.BlobStore
 	remoteStore distribution.BlobService

View file

@@ -193,7 +193,7 @@ func makeTestEnv(t *testing.T, name string) *testEnv {
 }

 func makeBlob(size int) []byte {
-	blob := make([]byte, size, size)
+	blob := make([]byte, size)
 	for i := 0; i < size; i++ {
 		blob[i] = byte('A' + rand.Int()%48)
 	}
@@ -204,16 +204,6 @@ func init() {
 	rand.Seed(42)
 }

-func perm(m []distribution.Descriptor) []distribution.Descriptor {
-	for i := 0; i < len(m); i++ {
-		j := rand.Intn(i + 1)
-		tmp := m[i]
-		m[i] = m[j]
-		m[j] = tmp
-	}
-	return m
-}

 func populate(t *testing.T, te *testEnv, blobCount, size, numUnique int) {
 	var inRemote []distribution.Descriptor

View file

@@ -165,11 +165,10 @@ func populateRepo(ctx context.Context, t *testing.T, repository distribution.Rep
 	t.Fatalf("unexpected error creating test upload: %v", err)
 }

-	rs, ts, err := testutil.CreateRandomTarFile()
+	rs, dgst, err := testutil.CreateRandomTarFile()
 	if err != nil {
 		t.Fatalf("unexpected error generating test layer file")
 	}
-	dgst := digest.Digest(ts)

 	if _, err := io.Copy(wr, rs); err != nil {
 		t.Fatalf("unexpected error copying to upload: %v", err)
 	}

View file

@@ -118,7 +118,7 @@ func (ttles *TTLExpirationScheduler) Start() error {
 	}

 	if !ttles.stopped {
-		return fmt.Errorf("Scheduler already started")
+		return fmt.Errorf("scheduler already started")
 	}

 	dcontext.GetLogger(ttles.ctx).Infof("Starting cached object TTL expiration scheduler...")
@@ -126,7 +126,7 @@ func (ttles *TTLExpirationScheduler) Start() error {
 	// Start timer for each deserialized entry
 	for _, entry := range ttles.entries {
-		entry.timer = ttles.startTimer(entry, entry.Expiry.Sub(time.Now()))
+		entry.timer = ttles.startTimer(entry, time.Until(entry.Expiry))
 	}

 	// Start a ticker to periodically save the entries index
@@ -164,7 +164,7 @@ func (ttles *TTLExpirationScheduler) add(r reference.Reference, ttl time.Duratio
 		Expiry:    time.Now().Add(ttl),
 		EntryType: eType,
 	}
-	dcontext.GetLogger(ttles.ctx).Infof("Adding new scheduler entry for %s with ttl=%s", entry.Key, entry.Expiry.Sub(time.Now()))
+	dcontext.GetLogger(ttles.ctx).Infof("Adding new scheduler entry for %s with ttl=%s", entry.Key, time.Until(entry.Expiry))

 	if oldEntry, present := ttles.entries[entry.Key]; present && oldEntry.timer != nil {
 		oldEntry.timer.Stop()
 	}

View file

@@ -9,12 +9,14 @@ import (
 	"net/http"
 	"os"
 	"os/signal"
+	"strings"
 	"syscall"
 	"time"

 	"rsc.io/letsencrypt"

-	"github.com/Shopify/logrus-bugsnag"
+	logrus_bugsnag "github.com/Shopify/logrus-bugsnag"
 	logstash "github.com/bshuster-repo/logrus-logstash-hook"
 	"github.com/bugsnag/bugsnag-go"
 	"github.com/docker/distribution/configuration"
@@ -31,6 +33,60 @@ import (
 	"github.com/yvasiyarov/gorelic"
 )
// a map of TLS cipher suite names to constants in https://golang.org/pkg/crypto/tls/#pkg-constants
var cipherSuites = map[string]uint16{
// TLS 1.0 - 1.2 cipher suites
"TLS_RSA_WITH_RC4_128_SHA": tls.TLS_RSA_WITH_RC4_128_SHA,
"TLS_RSA_WITH_3DES_EDE_CBC_SHA": tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
"TLS_RSA_WITH_AES_128_CBC_SHA": tls.TLS_RSA_WITH_AES_128_CBC_SHA,
"TLS_RSA_WITH_AES_256_CBC_SHA": tls.TLS_RSA_WITH_AES_256_CBC_SHA,
"TLS_RSA_WITH_AES_128_CBC_SHA256": tls.TLS_RSA_WITH_AES_128_CBC_SHA256,
"TLS_RSA_WITH_AES_128_GCM_SHA256": tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
"TLS_RSA_WITH_AES_256_GCM_SHA384": tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
"TLS_ECDHE_ECDSA_WITH_RC4_128_SHA": tls.TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA": tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA": tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
"TLS_ECDHE_RSA_WITH_RC4_128_SHA": tls.TLS_ECDHE_RSA_WITH_RC4_128_SHA,
"TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA": tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA": tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA": tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256": tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256": tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256": tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256": tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384": tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384": tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256": tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256": tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
// TLS 1.3 cipher suites
"TLS_AES_128_GCM_SHA256": tls.TLS_AES_128_GCM_SHA256,
"TLS_AES_256_GCM_SHA384": tls.TLS_AES_256_GCM_SHA384,
"TLS_CHACHA20_POLY1305_SHA256": tls.TLS_CHACHA20_POLY1305_SHA256,
}
// a list of default cipher suites to use
var defaultCipherSuites = []uint16{
tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_AES_128_GCM_SHA256,
tls.TLS_CHACHA20_POLY1305_SHA256,
tls.TLS_AES_256_GCM_SHA384,
}
// maps tls version strings to constants
var defaultTLSVersionStr = "tls1.2"
var tlsVersions = map[string]uint16{
// user specified values
"tls1.0": tls.VersionTLS10,
"tls1.1": tls.VersionTLS11,
"tls1.2": tls.VersionTLS12,
"tls1.3": tls.VersionTLS13,
}
// this channel gets notified when process receives signal. It is global to ease unit testing // this channel gets notified when process receives signal. It is global to ease unit testing
var quit = make(chan os.Signal, 1) var quit = make(chan os.Signal, 1)
@ -125,6 +181,35 @@ func NewRegistry(ctx context.Context, config *configuration.Configuration) (*Reg
}, nil }, nil
} }
// takes a list of cipher suites and converts it to a list of respective tls constants
// if an empty list is provided, then the defaults will be used
func getCipherSuites(names []string) ([]uint16, error) {
if len(names) == 0 {
return defaultCipherSuites, nil
}
cipherSuiteConsts := make([]uint16, len(names))
for i, name := range names {
cipherSuiteConst, ok := cipherSuites[name]
if !ok {
return nil, fmt.Errorf("unknown TLS cipher suite '%s' specified for http.tls.cipherSuites", name)
}
cipherSuiteConsts[i] = cipherSuiteConst
}
return cipherSuiteConsts, nil
}
// takes a list of cipher suite ids and converts it to a list of respective names
func getCipherSuiteNames(ids []uint16) []string {
if len(ids) == 0 {
return nil
}
names := make([]string, len(ids))
for i, id := range ids {
names[i] = tls.CipherSuiteName(id)
}
return names
}
// ListenAndServe runs the registry's HTTP server. // ListenAndServe runs the registry's HTTP server.
func (registry *Registry) ListenAndServe() error { func (registry *Registry) ListenAndServe() error {
config := registry.config config := registry.config
@ -135,19 +220,27 @@ func (registry *Registry) ListenAndServe() error {
} }
if config.HTTP.TLS.Certificate != "" || config.HTTP.TLS.LetsEncrypt.CacheFile != "" { if config.HTTP.TLS.Certificate != "" || config.HTTP.TLS.LetsEncrypt.CacheFile != "" {
if config.HTTP.TLS.MinimumTLS == "" {
config.HTTP.TLS.MinimumTLS = defaultTLSVersionStr
}
tlsMinVersion, ok := tlsVersions[config.HTTP.TLS.MinimumTLS]
if !ok {
return fmt.Errorf("unknown minimum TLS level '%s' specified for http.tls.minimumtls", config.HTTP.TLS.MinimumTLS)
}
dcontext.GetLogger(registry.app).Infof("restricting TLS version to %s or higher", config.HTTP.TLS.MinimumTLS)
tlsCipherSuites, err := getCipherSuites(config.HTTP.TLS.CipherSuites)
if err != nil {
return err
}
dcontext.GetLogger(registry.app).Infof("restricting TLS cipher suites to: %s", strings.Join(getCipherSuiteNames(tlsCipherSuites), ","))
tlsConf := &tls.Config{ tlsConf := &tls.Config{
ClientAuth: tls.NoClientCert, ClientAuth: tls.NoClientCert,
NextProtos: nextProtos(config), NextProtos: nextProtos(config),
MinVersion: tls.VersionTLS10, MinVersion: tlsMinVersion,
PreferServerCipherSuites: true, PreferServerCipherSuites: true,
CipherSuites: []uint16{ CipherSuites: tlsCipherSuites,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
},
} }
if config.HTTP.TLS.LetsEncrypt.CacheFile != "" { if config.HTTP.TLS.LetsEncrypt.CacheFile != "" {
@ -185,7 +278,7 @@ func (registry *Registry) ListenAndServe() error {
} }
if ok := pool.AppendCertsFromPEM(caPem); !ok { if ok := pool.AppendCertsFromPEM(caPem); !ok {
return fmt.Errorf("Could not add CA to pool") return fmt.Errorf("could not add CA to pool")
} }
} }
@ -3,12 +3,24 @@ package registry
import ( import (
"bufio" "bufio"
"context" "context"
"crypto"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/pem"
"fmt" "fmt"
"io/ioutil" "io/ioutil"
"math/big"
"net" "net"
"net/http" "net/http"
"os" "os"
"path"
"reflect" "reflect"
"strings"
"testing" "testing"
"time" "time"
@ -38,18 +50,30 @@ func TestNextProtos(t *testing.T) {
} }
} }
func setupRegistry() (*Registry, error) { type registryTLSConfig struct {
cipherSuites []string
certificatePath string
privateKeyPath string
certificate *tls.Certificate
}
func setupRegistry(tlsCfg *registryTLSConfig, addr string) (*Registry, error) {
config := &configuration.Configuration{} config := &configuration.Configuration{}
// TODO: this needs to change to something ephemeral as the test will fail if there is any server // TODO: this needs to change to something ephemeral as the test will fail if there is any server
// already listening on port 5000 // already listening on port 5000
config.HTTP.Addr = ":5000" config.HTTP.Addr = addr
config.HTTP.DrainTimeout = time.Duration(10) * time.Second config.HTTP.DrainTimeout = time.Duration(10) * time.Second
if tlsCfg != nil {
config.HTTP.TLS.CipherSuites = tlsCfg.cipherSuites
config.HTTP.TLS.Certificate = tlsCfg.certificatePath
config.HTTP.TLS.Key = tlsCfg.privateKeyPath
}
config.Storage = map[string]configuration.Parameters{"inmemory": map[string]interface{}{}} config.Storage = map[string]configuration.Parameters{"inmemory": map[string]interface{}{}}
return NewRegistry(context.Background(), config) return NewRegistry(context.Background(), config)
} }
func TestGracefulShutdown(t *testing.T) { func TestGracefulShutdown(t *testing.T) {
registry, err := setupRegistry() registry, err := setupRegistry(nil, ":5000")
if err != nil { if err != nil {
t.Fatal(err) t.Fatal(err)
} }
@ -98,3 +122,227 @@ func TestGracefulShutdown(t *testing.T) {
t.Error("Body is not {}; ", string(body)) t.Error("Body is not {}; ", string(body))
} }
} }
func TestGetCipherSuite(t *testing.T) {
resp, err := getCipherSuites([]string{"TLS_RSA_WITH_AES_128_CBC_SHA"})
if err != nil || len(resp) != 1 || resp[0] != tls.TLS_RSA_WITH_AES_128_CBC_SHA {
t.Errorf("expected cipher suite %q, got %q",
"TLS_RSA_WITH_AES_128_CBC_SHA",
strings.Join(getCipherSuiteNames(resp), ","),
)
}
resp, err = getCipherSuites([]string{"TLS_RSA_WITH_AES_128_CBC_SHA", "TLS_AES_128_GCM_SHA256"})
if err != nil || len(resp) != 2 ||
resp[0] != tls.TLS_RSA_WITH_AES_128_CBC_SHA || resp[1] != tls.TLS_AES_128_GCM_SHA256 {
t.Errorf("expected cipher suites %q, got %q",
"TLS_RSA_WITH_AES_128_CBC_SHA,TLS_AES_128_GCM_SHA256",
strings.Join(getCipherSuiteNames(resp), ","),
)
}
_, err = getCipherSuites([]string{"TLS_RSA_WITH_AES_128_CBC_SHA", "bad_input"})
if err == nil {
t.Error("did not return expected error about unknown cipher suite")
}
}
func buildRegistryTLSConfig(name, keyType string, cipherSuites []string) (*registryTLSConfig, error) {
var priv interface{}
var pub crypto.PublicKey
var err error
switch keyType {
case "rsa":
priv, err = rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
return nil, fmt.Errorf("failed to create rsa private key: %v", err)
}
rsaKey := priv.(*rsa.PrivateKey)
pub = rsaKey.Public()
case "ecdsa":
priv, err = ecdsa.GenerateKey(elliptic.P384(), rand.Reader)
if err != nil {
return nil, fmt.Errorf("failed to create ecdsa private key: %v", err)
}
ecdsaKey := priv.(*ecdsa.PrivateKey)
pub = ecdsaKey.Public()
default:
return nil, fmt.Errorf("unsupported key type: %v", keyType)
}
notBefore := time.Now()
notAfter := notBefore.Add(time.Minute)
serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128)
serialNumber, err := rand.Int(rand.Reader, serialNumberLimit)
if err != nil {
return nil, fmt.Errorf("failed to create serial number: %v", err)
}
cert := x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
Organization: []string{"registry_test"},
},
NotBefore: notBefore,
NotAfter: notAfter,
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
BasicConstraintsValid: true,
IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
DNSNames: []string{"localhost"},
IsCA: true,
}
derBytes, err := x509.CreateCertificate(rand.Reader, &cert, &cert, pub, priv)
if err != nil {
return nil, fmt.Errorf("failed to create certificate: %v", err)
}
if _, err := os.Stat(os.TempDir()); os.IsNotExist(err) {
os.Mkdir(os.TempDir(), os.ModeSticky|0o777)
}
certPath := path.Join(os.TempDir(), name+".pem")
certOut, err := os.Create(certPath)
if err != nil {
return nil, fmt.Errorf("failed to create pem: %v", err)
}
if err := pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: derBytes}); err != nil {
return nil, fmt.Errorf("failed to write data to %s: %v", certPath, err)
}
if err := certOut.Close(); err != nil {
return nil, fmt.Errorf("error closing %s: %v", certPath, err)
}
keyPath := path.Join(os.TempDir(), name+".key")
keyOut, err := os.OpenFile(keyPath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
if err != nil {
return nil, fmt.Errorf("failed to open %s for writing: %v", keyPath, err)
}
privBytes, err := x509.MarshalPKCS8PrivateKey(priv)
if err != nil {
return nil, fmt.Errorf("unable to marshal private key: %v", err)
}
if err := pem.Encode(keyOut, &pem.Block{Type: "PRIVATE KEY", Bytes: privBytes}); err != nil {
return nil, fmt.Errorf("failed to write data to key.pem: %v", err)
}
if err := keyOut.Close(); err != nil {
return nil, fmt.Errorf("error closing %s: %v", keyPath, err)
}
tlsCert := tls.Certificate{
Certificate: [][]byte{derBytes},
PrivateKey: priv,
}
tlsTestCfg := registryTLSConfig{
cipherSuites: cipherSuites,
certificatePath: certPath,
privateKeyPath: keyPath,
certificate: &tlsCert,
}
return &tlsTestCfg, nil
}
func TestRegistrySupportedCipherSuite(t *testing.T) {
name := "registry_test_server_supported_cipher"
cipherSuites := []string{"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"}
serverTLS, err := buildRegistryTLSConfig(name, "rsa", cipherSuites)
if err != nil {
t.Fatal(err)
}
registry, err := setupRegistry(serverTLS, ":5001")
if err != nil {
t.Fatal(err)
}
// run registry server
errchan := make(chan error, 1) // buffered so the goroutine can exit even if nothing reads
go func() {
errchan <- registry.ListenAndServe()
}()
select {
case err = <-errchan:
t.Fatalf("Error listening: %v", err)
default:
}
// Wait briefly to give the server time to start listening
time.Sleep(3 * time.Second)
// send tls request with server supported cipher suite
clientCipherSuites, err := getCipherSuites(cipherSuites)
if err != nil {
t.Fatal(err)
}
clientTLS := tls.Config{
InsecureSkipVerify: true,
CipherSuites: clientCipherSuites,
}
dialer := net.Dialer{
Timeout: time.Second * 5,
}
conn, err := tls.DialWithDialer(&dialer, "tcp", "127.0.0.1:5001", &clientTLS)
if err != nil {
t.Fatal(err)
}
fmt.Fprintf(conn, "GET /v2/ HTTP/1.1\r\nHost: 127.0.0.1\r\n\r\n")
resp, err := http.ReadResponse(bufio.NewReader(conn), nil)
if err != nil {
t.Fatal(err)
}
if resp.Status != "200 OK" {
t.Error("response status is not 200 OK: ", resp.Status)
}
if body, err := ioutil.ReadAll(resp.Body); err != nil || string(body) != "{}" {
t.Error("Body is not {}; ", string(body))
}
// send stop signal
quit <- os.Interrupt
time.Sleep(100 * time.Millisecond)
}
func TestRegistryUnsupportedCipherSuite(t *testing.T) {
name := "registry_test_server_unsupported_cipher"
serverTLS, err := buildRegistryTLSConfig(name, "rsa", []string{"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA358"})
if err != nil {
t.Fatal(err)
}
registry, err := setupRegistry(serverTLS, ":5002")
if err != nil {
t.Fatal(err)
}
// run registry server
errchan := make(chan error, 1) // buffered so the goroutine can exit even if nothing reads
go func() {
errchan <- registry.ListenAndServe()
}()
select {
case err = <-errchan:
t.Fatalf("Error listening: %v", err)
default:
}
// Wait briefly to give the server time to start listening
time.Sleep(3 * time.Second)
// send tls request with server unsupported cipher suite
clientTLS := tls.Config{
InsecureSkipVerify: true,
CipherSuites: []uint16{tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256},
}
dialer := net.Dialer{
Timeout: time.Second * 5,
}
_, err = tls.DialWithDialer(&dialer, "tcp", "127.0.0.1:5002", &clientTLS)
if err == nil {
t.Error("expected TLS handshake to fail on cipher suite mismatch")
}
// send stop signal
quit <- os.Interrupt
time.Sleep(100 * time.Millisecond)
}
@ -418,7 +418,7 @@ func TestBlobMount(t *testing.T) {
bs := repository.Blobs(ctx) bs := repository.Blobs(ctx)
// Test destination for existence. // Test destination for existence.
statDesc, err = bs.Stat(ctx, desc.Digest) _, err = bs.Stat(ctx, desc.Digest)
if err == nil { if err == nil {
t.Fatalf("unexpected non-error stating unmounted blob: %v", desc) t.Fatalf("unexpected non-error stating unmounted blob: %v", desc)
} }
@ -478,12 +478,12 @@ func TestBlobMount(t *testing.T) {
t.Fatalf("Unexpected error deleting blob") t.Fatalf("Unexpected error deleting blob")
} }
d, err := bs.Stat(ctx, desc.Digest) _, err = bs.Stat(ctx, desc.Digest)
if err != nil { if err != nil {
t.Fatalf("unexpected error stating blob deleted from source repository: %v", err) t.Fatalf("unexpected error stating blob deleted from source repository: %v", err)
} }
d, err = sbs.Stat(ctx, desc.Digest) d, err := sbs.Stat(ctx, desc.Digest)
if err == nil { if err == nil {
t.Fatalf("unexpected non-error stating deleted blob: %v", d) t.Fatalf("unexpected non-error stating deleted blob: %v", d)
} }
@ -152,16 +152,6 @@ func (bs *blobStore) readlink(ctx context.Context, path string) (digest.Digest,
return linked, nil return linked, nil
} }
// resolve reads the digest link at path and returns the blob store path.
func (bs *blobStore) resolve(ctx context.Context, path string) (string, error) {
dgst, err := bs.readlink(ctx, path)
if err != nil {
return "", err
}
return bs.path(dgst)
}
type blobStatter struct { type blobStatter struct {
driver driver.StorageDriver driver driver.StorageDriver
} }
@ -1,3 +1,4 @@
//go:build noresumabledigest
// +build noresumabledigest // +build noresumabledigest
package storage package storage
@ -1,3 +1,4 @@
//go:build !noresumabledigest
// +build !noresumabledigest // +build !noresumabledigest
package storage package storage
@ -173,8 +173,7 @@ func checkBlobDescriptorCacheClear(ctx context.Context, t *testing.T, provider c
t.Error(err) t.Error(err)
} }
desc, err = cache.Stat(ctx, localDigest) if _, err = cache.Stat(ctx, localDigest); err == nil {
if err == nil {
t.Fatalf("expected error statting deleted blob: %v", err) t.Fatalf("expected error statting deleted blob: %v", err)
} }
} }
@ -55,17 +55,17 @@ func (factory *azureDriverFactory) Create(parameters map[string]interface{}) (st
func FromParameters(parameters map[string]interface{}) (*Driver, error) { func FromParameters(parameters map[string]interface{}) (*Driver, error) {
accountName, ok := parameters[paramAccountName] accountName, ok := parameters[paramAccountName]
if !ok || fmt.Sprint(accountName) == "" { if !ok || fmt.Sprint(accountName) == "" {
return nil, fmt.Errorf("No %s parameter provided", paramAccountName) return nil, fmt.Errorf("no %s parameter provided", paramAccountName)
} }
accountKey, ok := parameters[paramAccountKey] accountKey, ok := parameters[paramAccountKey]
if !ok || fmt.Sprint(accountKey) == "" { if !ok || fmt.Sprint(accountKey) == "" {
return nil, fmt.Errorf("No %s parameter provided", paramAccountKey) return nil, fmt.Errorf("no %s parameter provided", paramAccountKey)
} }
container, ok := parameters[paramContainer] container, ok := parameters[paramContainer]
if !ok || fmt.Sprint(container) == "" { if !ok || fmt.Sprint(container) == "" {
return nil, fmt.Errorf("No %s parameter provided", paramContainer) return nil, fmt.Errorf("no %s parameter provided", paramContainer)
} }
realm, ok := parameters[paramRealm] realm, ok := parameters[paramRealm]
@ -145,7 +145,7 @@ func (r *regulator) Stat(ctx context.Context, path string) (storagedriver.FileIn
} }
// List returns a list of the objects that are direct descendants of the // List returns a list of the objects that are direct descendants of the
//given path. // given path.
func (r *regulator) List(ctx context.Context, path string) ([]string, error) { func (r *regulator) List(ctx context.Context, path string) ([]string, error) {
r.enter() r.enter()
defer r.exit() defer r.exit()
@ -36,7 +36,7 @@ func init() {
func TestFromParametersImpl(t *testing.T) { func TestFromParametersImpl(t *testing.T) {
tests := []struct { tests := []struct {
params map[string]interface{} // techincally the yaml can contain anything params map[string]interface{} // technically the yaml can contain anything
expected DriverParameters expected DriverParameters
pass bool pass bool
}{ }{
@ -1,17 +1,17 @@
//go:build include_gcs
// +build include_gcs
// Package gcs provides a storagedriver.StorageDriver implementation to // Package gcs provides a storagedriver.StorageDriver implementation to
// store blobs in Google cloud storage. // store blobs in Google cloud storage.
// //
// This package leverages the google.golang.org/cloud/storage client library // This package leverages the google.golang.org/cloud/storage client library
//for interfacing with gcs. // for interfacing with gcs.
// //
// Because gcs is a key, value store the Stat call does not support last modification // Because gcs is a key, value store the Stat call does not support last modification
// time for directories (directories are an abstraction for key, value stores) // time for directories (directories are an abstraction for key, value stores)
// //
// Note that the contents of incomplete uploads are not accessible even though // Note that the contents of incomplete uploads are not accessible even though
// Stat returns their length // Stat returns their length
//
// +build include_gcs
package gcs package gcs
import ( import (
@ -61,7 +61,6 @@ var rangeHeader = regexp.MustCompile(`^bytes=([0-9])+-([0-9]+)$`)
// driverParameters is a struct that encapsulates all of the driver parameters after all values have been set // driverParameters is a struct that encapsulates all of the driver parameters after all values have been set
type driverParameters struct { type driverParameters struct {
bucket string bucket string
config *jwt.Config
email string email string
privateKey []byte privateKey []byte
client *http.Client client *http.Client
@ -87,6 +86,8 @@ func (factory *gcsDriverFactory) Create(parameters map[string]interface{}) (stor
return FromParameters(parameters) return FromParameters(parameters)
} }
var _ storagedriver.StorageDriver = &driver{}
// driver is a storagedriver.StorageDriver implementation backed by GCS // driver is a storagedriver.StorageDriver implementation backed by GCS
// Objects are stored at absolute keys in the provided bucket. // Objects are stored at absolute keys in the provided bucket.
type driver struct { type driver struct {
@ -297,7 +298,7 @@ func (d *driver) Reader(context context.Context, path string, offset int64) (io.
if err != nil { if err != nil {
return nil, err return nil, err
} }
if offset == int64(obj.Size) { if offset == obj.Size {
return ioutil.NopCloser(bytes.NewReader([]byte{})), nil return ioutil.NopCloser(bytes.NewReader([]byte{})), nil
} }
return nil, storagedriver.InvalidOffsetError{Path: path, Offset: offset} return nil, storagedriver.InvalidOffsetError{Path: path, Offset: offset}
@ -433,7 +434,6 @@ func putContentsClose(wc *storage.Writer, contents []byte) error {
} }
} }
if err != nil { if err != nil {
wc.CloseWithError(err)
return err return err
} }
return wc.Close() return wc.Close()
@ -613,10 +613,10 @@ func (d *driver) Stat(context context.Context, path string) (storagedriver.FileI
//try to get as folder //try to get as folder
dirpath := d.pathToDirKey(path) dirpath := d.pathToDirKey(path)
var query *storage.Query query := &storage.Query{
query = &storage.Query{} Prefix: dirpath,
query.Prefix = dirpath MaxResults: 1,
query.MaxResults = 1 }
objects, err := storageListObjects(gcsContext, d.bucket, query) objects, err := storageListObjects(gcsContext, d.bucket, query)
if err != nil { if err != nil {
@ -638,12 +638,12 @@ func (d *driver) Stat(context context.Context, path string) (storagedriver.FileI
} }
// List returns a list of the objects that are direct descendants of the // List returns a list of the objects that are direct descendants of the
//given path. // given path.
func (d *driver) List(context context.Context, path string) ([]string, error) { func (d *driver) List(context context.Context, path string) ([]string, error) {
var query *storage.Query query := &storage.Query{
query = &storage.Query{} Delimiter: "/",
query.Delimiter = "/" Prefix: d.pathToDirKey(path),
query.Prefix = d.pathToDirKey(path) }
list := make([]string, 0, 64) list := make([]string, 0, 64)
for { for {
objects, err := storageListObjects(d.context(context), d.bucket, query) objects, err := storageListObjects(d.context(context), d.bucket, query)
@ -1,3 +1,4 @@
//go:build include_gcs
// +build include_gcs // +build include_gcs
package gcs package gcs
@ -58,7 +59,7 @@ func init() {
panic(fmt.Sprintf("Error reading JWT config : %s", err)) panic(fmt.Sprintf("Error reading JWT config : %s", err))
} }
email = jwtConfig.Email email = jwtConfig.Email
privateKey = []byte(jwtConfig.PrivateKey) privateKey = jwtConfig.PrivateKey
if len(privateKey) == 0 { if len(privateKey) == 0 {
panic("Error reading JWT config : missing private_key property") panic("Error reading JWT config : missing private_key property")
} }
@ -259,6 +260,9 @@ func TestEmptyRootList(t *testing.T) {
} }
}() }()
keys, err := emptyRootDriver.List(ctx, "/") keys, err := emptyRootDriver.List(ctx, "/")
if err != nil {
t.Fatalf("unexpected error listing empty root content: %v", err)
}
for _, path := range keys { for _, path := range keys {
if !storagedriver.PathRegexp.MatchString(path) { if !storagedriver.PathRegexp.MatchString(path) {
t.Fatalf("unexpected string in path: %q != %q", path, storagedriver.PathRegexp) t.Fatalf("unexpected string in path: %q != %q", path, storagedriver.PathRegexp)
@ -266,6 +270,9 @@ func TestEmptyRootList(t *testing.T) {
} }
keys, err = slashRootDriver.List(ctx, "/") keys, err = slashRootDriver.List(ctx, "/")
if err != nil {
t.Fatalf("unexpected error listing slash root content: %v", err)
}
for _, path := range keys { for _, path := range keys {
if !storagedriver.PathRegexp.MatchString(path) { if !storagedriver.PathRegexp.MatchString(path) {
t.Fatalf("unexpected string in path: %q != %q", path, storagedriver.PathRegexp) t.Fatalf("unexpected string in path: %q != %q", path, storagedriver.PathRegexp)
@ -252,20 +252,6 @@ func (d *dir) delete(p string) error {
return nil return nil
} }
// dump outputs a primitive directory structure to stdout.
func (d *dir) dump(indent string) {
fmt.Println(indent, d.name()+"/")
for _, child := range d.children {
if child.isdir() {
child.(*dir).dump(indent + "\t")
} else {
fmt.Println(indent, child.name())
}
}
}
func (d *dir) String() string { func (d *dir) String() string {
return fmt.Sprintf("&dir{path: %v, children: %v}", d.p, d.children) return fmt.Sprintf("&dir{path: %v, children: %v}", d.p, d.children)
} }
@ -293,6 +279,9 @@ func (f *file) sectionReader(offset int64) io.Reader {
} }
func (f *file) ReadAt(p []byte, offset int64) (n int, err error) { func (f *file) ReadAt(p []byte, offset int64) (n int, err error) {
if offset >= int64(len(f.data)) {
return 0, io.EOF
}
return copy(p, f.data[offset:]), nil return copy(p, f.data[offset:]), nil
} }
@ -1,6 +1,5 @@
// Package middleware - cloudfront wrapper for storage libs // Package middleware - cloudfront wrapper for storage libs
// N.B. currently only works with S3, not arbitrary sites // N.B. currently only works with S3, not arbitrary sites
//
package middleware package middleware
import ( import (
@ -16,7 +15,7 @@ import (
"github.com/aws/aws-sdk-go/service/cloudfront/sign" "github.com/aws/aws-sdk-go/service/cloudfront/sign"
dcontext "github.com/docker/distribution/context" dcontext "github.com/docker/distribution/context"
storagedriver "github.com/docker/distribution/registry/storage/driver" storagedriver "github.com/docker/distribution/registry/storage/driver"
"github.com/docker/distribution/registry/storage/driver/middleware" storagemiddleware "github.com/docker/distribution/registry/storage/driver/middleware"
) )
// cloudFrontStorageMiddleware provides a simple implementation of layerHandler that // cloudFrontStorageMiddleware provides a simple implementation of layerHandler that
@ -38,7 +37,9 @@ var _ storagedriver.StorageDriver = &cloudFrontStorageMiddleware{}
// Optional options: ipFilteredBy, awsregion // Optional options: ipFilteredBy, awsregion
// ipfilteredby: valid value "none|aws|awsregion". "none", do not filter any IP, default value. "aws", only aws IP goes // ipfilteredby: valid value "none|aws|awsregion". "none", do not filter any IP, default value. "aws", only aws IP goes
//
// to S3 directly. "awsregion", only regions listed in awsregion options goes to S3 directly // to S3 directly. "awsregion", only regions listed in awsregion options goes to S3 directly
//
// awsregion: a comma separated string of AWS regions. // awsregion: a comma separated string of AWS regions.
func newCloudFrontStorageMiddleware(storageDriver storagedriver.StorageDriver, options map[string]interface{}) (storagedriver.StorageDriver, error) { func newCloudFrontStorageMiddleware(storageDriver storagedriver.StorageDriver, options map[string]interface{}) (storagedriver.StorageDriver, error) {
// parse baseurl // parse baseurl
@ -138,15 +139,17 @@ func newCloudFrontStorageMiddleware(storageDriver storagedriver.StorageDriver, o
// parse ipfilteredby // parse ipfilteredby
var awsIPs *awsIPs var awsIPs *awsIPs
if ipFilteredBy := options["ipfilteredby"].(string); ok { if i, ok := options["ipfilteredby"]; ok {
if ipFilteredBy, ok := i.(string); ok {
switch strings.ToLower(strings.TrimSpace(ipFilteredBy)) { switch strings.ToLower(strings.TrimSpace(ipFilteredBy)) {
case "", "none": case "", "none":
awsIPs = nil awsIPs = nil
case "aws": case "aws":
newAWSIPs(ipRangesURL, updateFrequency, nil) awsIPs = newAWSIPs(ipRangesURL, updateFrequency, nil)
case "awsregion": case "awsregion":
var awsRegion []string var awsRegion []string
if regions, ok := options["awsregion"].(string); ok { if i, ok := options["awsregion"]; ok {
if regions, ok := i.(string); ok {
for _, awsRegions := range strings.Split(regions, ",") { for _, awsRegions := range strings.Split(regions, ",") {
awsRegion = append(awsRegion, strings.ToLower(strings.TrimSpace(awsRegions))) awsRegion = append(awsRegion, strings.ToLower(strings.TrimSpace(awsRegions)))
} }
@ -154,12 +157,16 @@ func newCloudFrontStorageMiddleware(storageDriver storagedriver.StorageDriver, o
} else { } else {
return nil, fmt.Errorf("awsRegion must be a comma separated string of valid aws regions") return nil, fmt.Errorf("awsRegion must be a comma separated string of valid aws regions")
} }
} else {
return nil, fmt.Errorf("awsRegion is not defined")
}
default: default:
return nil, fmt.Errorf("ipfilteredby only allows a string the following value: none|aws|awsregion") return nil, fmt.Errorf("ipfilteredby only allows a string the following value: none|aws|awsregion")
} }
} else { } else {
return nil, fmt.Errorf("ipfilteredby only allows a string with the following value: none|aws|awsregion") return nil, fmt.Errorf("ipfilteredby only allows a string with the following value: none|aws|awsregion")
} }
}
return &cloudFrontStorageMiddleware{ return &cloudFrontStorageMiddleware{
StorageDriver: storageDriver, StorageDriver: storageDriver,
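The cloudfront hunk above replaces an unchecked `options["ipfilteredby"].(string)` assertion, which panics when the key is missing or holds a non-string value, with the two-value comma-ok form. The pattern in isolation (`ipFilteredBy` is an illustrative helper, not part of the patch):

```go
package main

import "fmt"

// ipFilteredBy safely extracts a string option from a generic
// options map; the two-value assertions cannot panic.
func ipFilteredBy(options map[string]interface{}) (string, bool) {
	v, ok := options["ipfilteredby"]
	if !ok {
		return "", false // key absent
	}
	s, ok := v.(string)
	return s, ok // ok is false when the value is not a string
}

func main() {
	fmt.Println(ipFilteredBy(map[string]interface{}{"ipfilteredby": "aws"}))
	fmt.Println(ipFilteredBy(map[string]interface{}{})) // missing key: no panic
}
```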
@ -1,3 +1,6 @@
//go:build include_oss
// +build include_oss
// Package oss provides a storagedriver.StorageDriver implementation to // Package oss provides a storagedriver.StorageDriver implementation to
// store blobs in Aliyun OSS cloud storage. // store blobs in Aliyun OSS cloud storage.
// //
@ -6,9 +9,6 @@
// //
// Because OSS is a key, value store the Stat call does not support last modification // Because OSS is a key, value store the Stat call does not support last modification
// time for directories (directories are an abstraction for key, value stores) // time for directories (directories are an abstraction for key, value stores)
//
// +build include_oss
package oss package oss
import ( import (
@ -37,12 +37,11 @@ const driverName = "oss"
const minChunkSize = 5 << 20 const minChunkSize = 5 << 20
const defaultChunkSize = 2 * minChunkSize const defaultChunkSize = 2 * minChunkSize
const defaultTimeout = 2 * time.Minute // 2 minute timeout per chunk
// listMax is the largest amount of objects you can request from OSS in a list call // listMax is the largest amount of objects you can request from OSS in a list call
const listMax = 1000 const listMax = 1000
//DriverParameters A struct that encapsulates all of the driver parameters after all values have been set // DriverParameters A struct that encapsulates all of the driver parameters after all values have been set
type DriverParameters struct { type DriverParameters struct {
AccessKeyID string AccessKeyID string
AccessKeySecret string AccessKeySecret string
@ -67,6 +66,8 @@ func (factory *ossDriverFactory) Create(parameters map[string]interface{}) (stor
return FromParameters(parameters) return FromParameters(parameters)
} }
var _ storagedriver.StorageDriver = &driver{}
type driver struct { type driver struct {
Client *oss.Client Client *oss.Client
Bucket *oss.Bucket Bucket *oss.Bucket
@ -497,11 +498,6 @@ func parseError(path string, err error) error {
return err return err
} }
func hasCode(err error, code string) bool {
ossErr, ok := err.(*oss.Error)
return ok && ossErr.Code == code
}
func (d *driver) getOptions() oss.Options { func (d *driver) getOptions() oss.Options {
return oss.Options{ServerSideEncryption: d.Encrypt} return oss.Options{ServerSideEncryption: d.Encrypt}
} }
@ -1,3 +1,4 @@
//go:build include_oss
// +build include_oss // +build include_oss
package oss package oss
@ -127,6 +128,9 @@ func TestEmptyRootList(t *testing.T) {
defer rootedDriver.Delete(ctx, filename) defer rootedDriver.Delete(ctx, filename)
keys, err := emptyRootDriver.List(ctx, "/") keys, err := emptyRootDriver.List(ctx, "/")
if err != nil {
t.Fatalf("unexpected error listing empty root content: %v", err)
}
for _, path := range keys { for _, path := range keys {
if !storagedriver.PathRegexp.MatchString(path) { if !storagedriver.PathRegexp.MatchString(path) {
t.Fatalf("unexpected string in path: %q != %q", path, storagedriver.PathRegexp) t.Fatalf("unexpected string in path: %q != %q", path, storagedriver.PathRegexp)
@ -134,6 +138,9 @@ func TestEmptyRootList(t *testing.T) {
} }
keys, err = slashRootDriver.List(ctx, "/") keys, err = slashRootDriver.List(ctx, "/")
if err != nil {
t.Fatalf("unexpected error listing slash root content: %v", err)
}
for _, path := range keys { for _, path := range keys {
if !storagedriver.PathRegexp.MatchString(path) { if !storagedriver.PathRegexp.MatchString(path) {
t.Fatalf("unexpected string in path: %q != %q", path, storagedriver.PathRegexp) t.Fatalf("unexpected string in path: %q != %q", path, storagedriver.PathRegexp)
@@ -82,7 +82,7 @@ var validRegions = map[string]struct{}{}
 // validObjectACLs contains known s3 object Acls
 var validObjectACLs = map[string]struct{}{}

-//DriverParameters A struct that encapsulates all of the driver parameters after all values have been set
+// DriverParameters A struct that encapsulates all of the driver parameters after all values have been set
 type DriverParameters struct {
     AccessKey string
     SecretKey string
@@ -137,6 +137,8 @@ func (factory *s3DriverFactory) Create(parameters map[string]interface{}) (stora
     return FromParameters(parameters)
 }

+var _ storagedriver.StorageDriver = &driver{}
+
 type driver struct {
     S3     *s3.S3
     Bucket string
@@ -188,19 +190,19 @@ func FromParameters(parameters map[string]interface{}) (*Driver, error) {
     regionName := parameters["region"]
     if regionName == nil || fmt.Sprint(regionName) == "" {
-        return nil, fmt.Errorf("No region parameter provided")
+        return nil, fmt.Errorf("no region parameter provided")
     }
     region := fmt.Sprint(regionName)
     // Don't check the region value if a custom endpoint is provided.
     if regionEndpoint == "" {
         if _, ok := validRegions[region]; !ok {
-            return nil, fmt.Errorf("Invalid region provided: %v", region)
+            return nil, fmt.Errorf("invalid region provided: %v", region)
         }
     }

     bucket := parameters["bucket"]
     if bucket == nil || fmt.Sprint(bucket) == "" {
-        return nil, fmt.Errorf("No bucket parameter provided")
+        return nil, fmt.Errorf("no bucket parameter provided")
     }

     encryptBool := false
@@ -209,7 +211,7 @@ func FromParameters(parameters map[string]interface{}) (*Driver, error) {
     case string:
         b, err := strconv.ParseBool(encrypt)
         if err != nil {
-            return nil, fmt.Errorf("The encrypt parameter should be a boolean")
+            return nil, fmt.Errorf("the encrypt parameter should be a boolean")
         }
         encryptBool = b
     case bool:
@@ -217,7 +219,7 @@ func FromParameters(parameters map[string]interface{}) (*Driver, error) {
     case nil:
         // do nothing
     default:
-        return nil, fmt.Errorf("The encrypt parameter should be a boolean")
+        return nil, fmt.Errorf("the encrypt parameter should be a boolean")
     }

     secureBool := true
@@ -226,7 +228,7 @@ func FromParameters(parameters map[string]interface{}) (*Driver, error) {
     case string:
         b, err := strconv.ParseBool(secure)
         if err != nil {
-            return nil, fmt.Errorf("The secure parameter should be a boolean")
+            return nil, fmt.Errorf("the secure parameter should be a boolean")
         }
         secureBool = b
     case bool:
@@ -234,7 +236,7 @@ func FromParameters(parameters map[string]interface{}) (*Driver, error) {
     case nil:
         // do nothing
     default:
-        return nil, fmt.Errorf("The secure parameter should be a boolean")
+        return nil, fmt.Errorf("the secure parameter should be a boolean")
     }

     skipVerifyBool := false
@@ -243,7 +245,7 @@ func FromParameters(parameters map[string]interface{}) (*Driver, error) {
     case string:
         b, err := strconv.ParseBool(skipVerify)
         if err != nil {
-            return nil, fmt.Errorf("The skipVerify parameter should be a boolean")
+            return nil, fmt.Errorf("the skipVerify parameter should be a boolean")
         }
         skipVerifyBool = b
     case bool:
@@ -251,7 +253,7 @@ func FromParameters(parameters map[string]interface{}) (*Driver, error) {
     case nil:
         // do nothing
     default:
-        return nil, fmt.Errorf("The skipVerify parameter should be a boolean")
+        return nil, fmt.Errorf("the skipVerify parameter should be a boolean")
     }

     v4Bool := true
@@ -260,7 +262,7 @@ func FromParameters(parameters map[string]interface{}) (*Driver, error) {
     case string:
         b, err := strconv.ParseBool(v4auth)
         if err != nil {
-            return nil, fmt.Errorf("The v4auth parameter should be a boolean")
+            return nil, fmt.Errorf("the v4auth parameter should be a boolean")
        }
         v4Bool = b
     case bool:
@@ -268,7 +270,7 @@ func FromParameters(parameters map[string]interface{}) (*Driver, error) {
     case nil:
         // do nothing
     default:
-        return nil, fmt.Errorf("The v4auth parameter should be a boolean")
+        return nil, fmt.Errorf("the v4auth parameter should be a boolean")
     }

     keyID := parameters["keyid"]
@@ -306,7 +308,7 @@ func FromParameters(parameters map[string]interface{}) (*Driver, error) {
     if storageClassParam != nil {
         storageClassString, ok := storageClassParam.(string)
         if !ok {
-            return nil, fmt.Errorf("The storageclass parameter must be one of %v, %v invalid",
+            return nil, fmt.Errorf("the storageclass parameter must be one of %v, %v invalid",
                 []string{s3.StorageClassStandard, s3.StorageClassReducedRedundancy}, storageClassParam)
         }
         // All valid storage class parameters are UPPERCASE, so be a bit more flexible here
@@ -314,7 +316,7 @@ func FromParameters(parameters map[string]interface{}) (*Driver, error) {
         if storageClassString != noStorageClass &&
             storageClassString != s3.StorageClassStandard &&
             storageClassString != s3.StorageClassReducedRedundancy {
-            return nil, fmt.Errorf("The storageclass parameter must be one of %v, %v invalid",
+            return nil, fmt.Errorf("the storageclass parameter must be one of %v, %v invalid",
                 []string{noStorageClass, s3.StorageClassStandard, s3.StorageClassReducedRedundancy}, storageClassParam)
         }
         storageClass = storageClassString
@@ -330,11 +332,11 @@ func FromParameters(parameters map[string]interface{}) (*Driver, error) {
     if objectACLParam != nil {
         objectACLString, ok := objectACLParam.(string)
         if !ok {
-            return nil, fmt.Errorf("Invalid value for objectacl parameter: %v", objectACLParam)
+            return nil, fmt.Errorf("invalid value for objectacl parameter: %v", objectACLParam)
         }

         if _, ok = validObjectACLs[objectACLString]; !ok {
-            return nil, fmt.Errorf("Invalid value for objectacl parameter: %v", objectACLParam)
+            return nil, fmt.Errorf("invalid value for objectacl parameter: %v", objectACLParam)
         }
         objectACL = objectACLString
     }
@@ -366,7 +368,7 @@ func FromParameters(parameters map[string]interface{}) (*Driver, error) {
     return New(params)
 }

-// getParameterAsInt64 converts paramaters[name] to an int64 value (using
+// getParameterAsInt64 converts parameters[name] to an int64 value (using
 // defaultt if nil), verifies it is no smaller than min, and returns it.
 func getParameterAsInt64(parameters map[string]interface{}, name string, defaultt int64, min int64, max int64) (int64, error) {
     rv := defaultt
@@ -389,7 +391,7 @@ func getParameterAsInt64(parameters map[string]interface{}, name string, default
     }

     if rv < min || rv > max {
-        return 0, fmt.Errorf("The %s %#v parameter should be a number between %d and %d (inclusive)", name, rv, min, max)
+        return 0, fmt.Errorf("the %s %#v parameter should be a number between %d and %d (inclusive)", name, rv, min, max)
     }

     return rv, nil
@@ -401,7 +403,7 @@ func New(params DriverParameters) (*Driver, error) {
     if !params.V4Auth &&
         (params.RegionEndpoint == "" ||
             strings.Contains(params.RegionEndpoint, "s3.amazonaws.com")) {
-        return nil, fmt.Errorf("On Amazon S3 this storage driver can only be used with v4 authentication")
+        return nil, fmt.Errorf("on Amazon S3 this storage driver can only be used with v4 authentication")
     }

     awsConfig := aws.NewConfig()
@@ -549,9 +551,9 @@ func (d *driver) Reader(ctx context.Context, path string, offset int64) (io.Read

 // Writer returns a FileWriter which will store the content written to it
 // at the location designated by "path" after the call to Commit.
-func (d *driver) Writer(ctx context.Context, path string, append bool) (storagedriver.FileWriter, error) {
+func (d *driver) Writer(ctx context.Context, path string, appendParam bool) (storagedriver.FileWriter, error) {
     key := d.s3Path(path)
-    if !append {
+    if !appendParam {
         // TODO (brianbland): cancel other uploads at this path
         resp, err := d.S3.CreateMultipartUpload(&s3.CreateMultipartUploadInput{
             Bucket: aws.String(d.Bucket),
@@ -574,7 +576,7 @@ func (d *driver) Writer(ctx context.Context, path string, append bool) (storaged
     if err != nil {
         return nil, parseError(path, err)
     }
+    var allParts []*s3.Part
     for _, multi := range resp.Uploads {
         if key != *multi.Key {
             continue
@@ -587,11 +589,20 @@ func (d *driver) Writer(ctx context.Context, path string, append bool) (storaged
         if err != nil {
             return nil, parseError(path, err)
         }
-        var multiSize int64
-        for _, part := range resp.Parts {
-            multiSize += *part.Size
+        allParts = append(allParts, resp.Parts...)
+        for *resp.IsTruncated {
+            resp, err = d.S3.ListParts(&s3.ListPartsInput{
+                Bucket:           aws.String(d.Bucket),
+                Key:              aws.String(key),
+                UploadId:         multi.UploadId,
+                PartNumberMarker: resp.NextPartNumberMarker,
+            })
+            if err != nil {
+                return nil, parseError(path, err)
+            }
+            allParts = append(allParts, resp.Parts...)
         }
-        return d.newWriter(key, *multi.UploadId, resp.Parts), nil
+        return d.newWriter(key, *multi.UploadId, allParts), nil
     }
     return nil, storagedriver.PathNotFoundError{Path: path}
 }
@@ -878,7 +889,7 @@ func (d *driver) URLFor(ctx context.Context, path string, options map[string]int
     if ok {
         et, ok := expires.(time.Time)
         if ok {
-            expiresIn = et.Sub(time.Now())
+            expiresIn = time.Until(et)
         }
     }
@@ -970,8 +981,19 @@ func (d *driver) doWalk(parentCtx context.Context, objectCount *int64, path, pre
     defer done("s3aws.ListObjectsV2Pages(%s)", path)

     listObjectErr := d.S3.ListObjectsV2PagesWithContext(ctx, listObjectsInput, func(objects *s3.ListObjectsV2Output, lastPage bool) bool {
-        *objectCount += *objects.KeyCount
-        walkInfos := make([]walkInfoContainer, 0, *objects.KeyCount)
+        var count int64
+        // KeyCount was introduced with version 2 of the GET Bucket operation in S3.
+        // Some S3 implementations don't support V2 now, so we fall back to manual
+        // calculation of the key count if required
+        if objects.KeyCount != nil {
+            count = *objects.KeyCount
+            *objectCount += *objects.KeyCount
+        } else {
+            count = int64(len(objects.Contents) + len(objects.CommonPrefixes))
+            *objectCount += count
+        }
+
+        walkInfos := make([]walkInfoContainer, 0, count)
         for _, dir := range objects.CommonPrefixes {
             commonPrefix := *dir.Prefix

@@ -39,12 +39,6 @@ import (
     log "github.com/sirupsen/logrus"
 )

-const (
-    signatureVersion = "2"
-    signatureMethod  = "HmacSHA1"
-    timeFormat       = "2006-01-02T15:04:05Z"
-)
-
 type signer struct {
     // Values that must be populated from the request
     Request *http.Request


@@ -160,23 +160,23 @@ func FromParameters(parameters map[string]interface{}) (*Driver, error) {
     }

     if params.Username == "" {
-        return nil, fmt.Errorf("No username parameter provided")
+        return nil, fmt.Errorf("no username parameter provided")
     }

     if params.Password == "" {
-        return nil, fmt.Errorf("No password parameter provided")
+        return nil, fmt.Errorf("no password parameter provided")
     }

     if params.AuthURL == "" {
-        return nil, fmt.Errorf("No authurl parameter provided")
+        return nil, fmt.Errorf("no authurl parameter provided")
     }

     if params.Container == "" {
-        return nil, fmt.Errorf("No container parameter provided")
+        return nil, fmt.Errorf("no container parameter provided")
     }

     if params.ChunkSize < minChunkSize {
-        return nil, fmt.Errorf("The chunksize %#v parameter should be a number that is larger than or equal to %d", params.ChunkSize, minChunkSize)
+        return nil, fmt.Errorf("the chunksize %#v parameter should be a number that is larger than or equal to %d", params.ChunkSize, minChunkSize)
     }

     return New(params)
@@ -211,15 +211,15 @@ func New(params Parameters) (*Driver, error) {
     }
     err := ct.Authenticate()
     if err != nil {
-        return nil, fmt.Errorf("Swift authentication failed: %s", err)
+        return nil, fmt.Errorf("swift authentication failed: %s", err)
     }

     if _, _, err := ct.Container(params.Container); err == swift.ContainerNotFound {
         if err := ct.ContainerCreate(params.Container, nil); err != nil {
-            return nil, fmt.Errorf("Failed to create container %s (%s)", params.Container, err)
+            return nil, fmt.Errorf("failed to create container %s (%s)", params.Container, err)
         }
     } else if err != nil {
-        return nil, fmt.Errorf("Failed to retrieve info about container %s (%s)", params.Container, err)
+        return nil, fmt.Errorf("failed to retrieve info about container %s (%s)", params.Container, err)
     }

     d := &driver{
@@ -258,7 +258,7 @@ func New(params Parameters) (*Driver, error) {
     if d.TempURLContainerKey {
         _, containerHeaders, err := d.Conn.Container(d.Container)
         if err != nil {
-            return nil, fmt.Errorf("Failed to fetch container info %s (%s)", d.Container, err)
+            return nil, fmt.Errorf("failed to fetch container info %s (%s)", d.Container, err)
         }

         d.SecretKey = containerHeaders["X-Container-Meta-Temp-Url-Key"]
@@ -273,7 +273,7 @@ func New(params Parameters) (*Driver, error) {
         // Use the account secret key
         _, accountHeaders, err := d.Conn.Account()
         if err != nil {
-            return nil, fmt.Errorf("Failed to fetch account info (%s)", err)
+            return nil, fmt.Errorf("failed to fetch account info (%s)", err)
         }

         d.SecretKey = accountHeaders["X-Account-Meta-Temp-Url-Key"]
@@ -350,7 +350,7 @@ func (d *driver) Reader(ctx context.Context, path string, offset int64) (io.Read
     }
     if isDLO && size == 0 {
         if time.Now().Add(waitingTime).After(endTime) {
-            return nil, fmt.Errorf("Timeout expired while waiting for segments of %s to show up", path)
+            return nil, fmt.Errorf("timeout expired while waiting for segments of %s to show up", path)
         }
         time.Sleep(waitingTime)
         waitingTime *= 2
@@ -456,7 +456,7 @@ func (d *driver) Stat(ctx context.Context, path string) (storagedriver.FileInfo,
     _, isDLO := headers["X-Object-Manifest"]
     if isDLO && info.Bytes == 0 {
         if time.Now().Add(waitingTime).After(endTime) {
-            return nil, fmt.Errorf("Timeout expired while waiting for segments of %s to show up", path)
+            return nil, fmt.Errorf("timeout expired while waiting for segments of %s to show up", path)
         }
         time.Sleep(waitingTime)
         waitingTime *= 2
@@ -755,7 +755,7 @@ func chunkFilenames(slice []string, maxSize int) (chunks [][]string, err error)
             chunks = append(chunks, slice[offset:offset+chunkSize])
         }
     } else {
-        return nil, fmt.Errorf("Max chunk size must be > 0")
+        return nil, fmt.Errorf("max chunk size must be > 0")
     }
     return
 }
@@ -894,7 +894,7 @@ func (w *writer) waitForSegmentsToShowUp() error {
             if info.Bytes == w.size {
                 break
             }
-            err = fmt.Errorf("Timeout expired while waiting for segments of %s to show up", w.path)
+            err = fmt.Errorf("timeout expired while waiting for segments of %s to show up", w.path)
         }
         if time.Now().Add(waitingTime).After(endTime) {
             break

Some files were not shown because too many files have changed in this diff.