Compare commits

...

110 commits

Author SHA1 Message Date
Milos Gajdos
27206bcd3b
Merge pull request #4009 from thaJeztah/2.8_backport_enable_build_tags
[release/2.8 backport] Enable Go build tags
2023-08-22 15:10:59 +01:00
Milos Gajdos
110cb7538d
Enable build tags in 2.8
It would appear we were missing the Go build tags on the 2.8.x branch, so the
images would not have the necessary support for some storage drivers,
causing breakages for end users trying to use them.

This commit fixes both the build and linting issues.

Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-08-21 13:58:10 +02:00
Sebastiaan van Stijn
2d62a4027a
s3: add interface assertion
This was added for the other drivers in 6b388b1ba6,
but it missed the s3 storage driver.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5b3be39870)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-08-21 13:57:02 +02:00
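For readers unfamiliar with the pattern, the change above is a compile-time interface assertion. A minimal sketch, with stand-in type names rather than the actual distribution interfaces:

```go
package s3

// Minimal sketch of a compile-time interface assertion. The names below are
// illustrative stand-ins, not the actual distribution types.

// StorageDriver is a stand-in for the storage driver interface.
type StorageDriver interface {
	Name() string
}

// driver is a stand-in for the s3 driver's concrete type.
type driver struct{}

func (d *driver) Name() string { return "s3" }

// The assertion itself: compilation fails if *driver ever stops satisfying
// StorageDriver, which is what the commit above adds for the s3 driver.
var _ StorageDriver = (*driver)(nil)
```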
Milos Gajdos
2548973b1d
Enable Go build tags
This enables Go build tags so that GCS and OSS driver support is
available in the binary distributed via the image built by the Dockerfile.

This led to quite a few fixes in the GCS and OSS packages, raised as
warnings by the golangci-lint linter.

Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
(cherry picked from commit 6b388b1ba6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-08-21 13:50:24 +02:00
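As background, a Go build tag gates a source file so it is only compiled when that tag is passed to `go build`. A small illustrative sketch, not the actual driver code, assuming the `include_gcs` tag used by this repository:

```go
//go:build include_gcs
// +build include_gcs

// This file is compiled only when the tag is supplied, e.g.:
//
//	go build -tags "include_oss,include_gcs" ./...
//
// Without the tag, everything in this file is absent from the binary,
// which is the kind of breakage the commits above describe.
package gcs

import "log"

func init() {
	// A real driver would register itself with the driver factory here;
	// logging stands in for that registration in this sketch.
	log.Println("GCS storage driver compiled in")
}
```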
Milos Gajdos
8728c52ef2
Merge pull request #3926 from marcusirgens/use-build-tags
Pass `BUILDTAGS` argument to `go build`
2023-06-07 09:53:15 +01:00
Marcus Pettersen Irgens
ab7178cc0a
Pass BUILDTAGS argument to go build
Signed-off-by: Marcus Pettersen Irgens <m@mrcus.dev>
2023-05-19 18:38:27 +02:00
Milos Gajdos
7c354a4b40
Merge pull request #3915 from distribution/2.8.2-release-notes
Add v2.8.2 release notes
2023-05-11 11:11:57 +01:00
Milos Gajdos
a173a9c625
Add v2.8.2 release notes
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-05-11 10:47:17 +01:00
Milos Gajdos
4894d35ecc
Merge pull request #3914 from vvoland/handle-forbidden-28
[release/2.8 backport] registry/errors: Parse http forbidden as denied
2023-05-11 10:00:25 +01:00
Milos Gajdos
f067f66d3d
Merge pull request #3783 from ndeloof/accept-encoding-28
[release/2.8 backport] revert "registry/client: set Accept: identity header when getting layers"
2023-05-11 09:54:18 +01:00
Paweł Gronowski
483ad69da3
registry/errors: Parse http forbidden as denied
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 5f1df02149)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2023-05-11 10:45:46 +02:00
Nicolas De Loof
2b0f84df21
Revert "registry/client: set Accept: identity header when getting layers"
This reverts commit 16f086a0ec.

Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
2023-05-10 23:00:15 +02:00
Milos Gajdos
320d6a141f
Merge pull request #3912 from distribution/2.8.2-beta.2-release-notes
Add 2.8.2 beta.2 release notes
2023-05-10 00:16:38 +01:00
Milos Gajdos
5f3ca1b2fb
Add release notes for 2.8.2-beta.2 release
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-05-10 00:12:20 +01:00
Milos Gajdos
cb840f63b3
Merge pull request #3911 from thaJeztah/2.8_backport_fix_releaser_filenames
[release/2.8 backport] Dockerfile: fix filenames of artifacts
2023-05-09 23:43:34 +01:00
Sebastiaan van Stijn
e884644fff
Dockerfile: fix filenames of artifacts
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 435c7b9a7b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-10 00:27:45 +02:00
Milos Gajdos
963c19952a
Merge pull request #3909 from distribution/2.8.2-beta-release-notes
Add 2.8.2-beta.1 release notes
2023-05-09 22:39:59 +01:00
Milos Gajdos
ac6c72b25f
Add 2.8.2-beta.1 release notes
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-05-09 22:22:05 +01:00
Milos Gajdos
dcb637d6ea
Merge pull request from GHSA-hqxw-f8mx-cpmw
[release/2.8] Fix runaway allocation on /v2/_catalog
2023-05-09 21:21:54 +01:00
Milos Gajdos
08f5645587
Merge pull request #3893 from pluralsh/part-pagination
[release/2.8] Add code to handle pagination of parts. Fixes max layer size of 10GB bug
2023-05-09 20:58:24 +01:00
Milos Gajdos
4a35c451a0
Merge pull request #3908 from thaJeztah/2.8_backport_bump_go1.19.9
[release/2.8 backport] update to go1.19.9
2023-05-09 19:16:47 +01:00
Milos Gajdos
ae58bde985
Fix gofmt warnings
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-05-09 18:58:38 +01:00
Sebastiaan van Stijn
3f2a4e24a7
update to go1.19.9
Added back minor versions in these, so that we have a somewhat more
reproducible state in the repository when tagging releases.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 322eb4eecf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 17:57:57 +02:00
Sebastiaan van Stijn
9c04409fdb
[release/2.8] ignore deprecation of io/ioutil
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 17:57:28 +02:00
Milos Gajdos
b791fdc2c6
Merge pull request #3907 from thaJeztah/2.8_backport_update_xx
[release/2.8 backport] Dockerfile: update xx to v1.2.1
2023-05-09 15:58:05 +01:00
Sebastiaan van Stijn
3d8f3cc4a5
Dockerfile: update xx to v1.2.1
full diff: https://github.com/tonistiigi/xx/compare/v1.1.1...v1.2.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8c4d2b9d65)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 15:32:28 +02:00
Milos Gajdos
d3fac541b1
Merge pull request #3903 from thaJeztah/2.8_bump_go_118
[release/2.8] bump up golang version (alternative)
2023-05-09 13:59:02 +01:00
Wang Yan
70db3a46d9
bump up golang version
upgrade go version to v1.18.8

Signed-off-by: Wang Yan <wangyan@vmware.com>
2023-05-09 10:59:43 +02:00
CrazyMax
db1389e043
dockerfiles: formatting
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 0e17e54091)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 10:59:43 +02:00
CrazyMax
018472de2d
dockerfiles: set ALPINE_VERSION
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit b066451b40)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 10:59:42 +02:00
CrazyMax
19b3feb5df
Update to xx 1.1.1
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 52a88c596b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 10:59:42 +02:00
CrazyMax
14bd72bcf8
Dockerfile: switch to xx
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit 87f93ede9e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 10:59:42 +02:00
Wang Yan
2392893bcf
bump up golang v1.17
Signed-off-by: Wang Yan <wangyan@vmware.com>
(cherry picked from commit 3f4c558dac)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 10:59:38 +02:00
Sebastiaan van Stijn
092a2197ff
[release/2.8] fix package name in Dockerfile
The 2.8 release is still named github.com/docker/distribution.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-05-09 10:53:15 +02:00
David van der Spek
22a805033a fix(ci): use go install instead of go get
Signed-off-by: David van der Spek <vanderspek.david@gmail.com>
2023-05-08 23:21:18 -05:00
Derek McGowan
1d52366d2c Merge pull request #2815 from bainsy88/issue_2814
Add code to handle pagination of parts. Fixes max layer size of 10GB bug

Signed-off-by: David van der Spek <vanderspek.david@gmail.com>
2023-05-08 23:21:18 -05:00
Jose D. Gomez R
521ea3d973
Fix runaway allocation on /v2/_catalog
Introduced a Catalog entry in the configuration struct. With it,
it's possible to control the maximum number of entries returned
by /v2/_catalog (`GetCatalog` in registry/handlers/catalog.go).

It defaults to 1000 entries.

`GetCatalog` returns 100 entries by default if no `n` is
provided. When provided, `n` is validated to be between `0`
and `MaxEntries` defined in Configuration. When `n` falls outside
that boundary, ErrorCodePaginationNumberInvalid is returned.

`GetCatalog` now also handles `n=0` gracefully with an empty
response.

Signed-off-by: José D. Gómez R. <1josegomezr@gmail.com>
Co-authored-by: Cory Snider <corhere@gmail.com>
2023-04-24 18:53:43 +02:00
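A minimal sketch of the pagination rule described in the commit message (the helper and error names here are simplified stand-ins, not the actual registry handler): default to 100 entries when `n` is absent, reject `n` outside `[0, MaxEntries]`, and let `n=0` produce an empty page.

```go
package main

import (
	"errors"
	"fmt"
	"net/url"
	"strconv"
)

const defaultEntries = 100

var errPaginationNumberInvalid = errors.New("invalid number of results requested")

// entriesToReturn validates the optional "n" query parameter against the
// configured maximum and falls back to the default page size when absent.
func entriesToReturn(query url.Values, maxEntries int) (int, error) {
	n := defaultEntries
	if s := query.Get("n"); s != "" {
		parsed, err := strconv.Atoi(s)
		if err != nil || parsed < 0 || parsed > maxEntries {
			return 0, errPaginationNumberInvalid
		}
		n = parsed // n == 0 is allowed and yields an empty response
	}
	return n, nil
}

func main() {
	q, _ := url.ParseQuery("n=5000")
	if _, err := entriesToReturn(q, 1000); err != nil {
		fmt.Println("rejected:", err) // 5000 exceeds MaxEntries (1000)
	}
}
```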
Milos Gajdos
82d6c3d007
Merge pull request #3815 from wy65701436/release/2.8-cp-3615
[release/2.8] Fix panic in inmemory driver
2023-04-17 15:58:21 +01:00
Shengjing Zhu
ad5991de09 Fix panic in inmemory driver
Signed-off-by: Shengjing Zhu <zhsj@debian.org>
2022-12-04 22:47:15 +08:00
Hayley Swimelar
dc5b207fdd
Merge pull request #3650 from thaJeztah/2.8_bump_alpine
[release/2.8 backport] Fix CVE-2022-28391 by bumping alpine from 3.14 to 3.16
2022-05-26 09:32:25 -07:00
Silvin Lubecki
38018aeb5d
Fix CVE-2022-28391 by bumping alpine from 3.15 to 3.16
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9f2bc25b7a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2022-05-26 13:25:35 +02:00
Milos Gajdos
b5ca020cfb
Merge pull request #3605 from milosgajdos/update-release-notes
Update 2.8.1. release notes
2022-03-08 17:52:36 +00:00
Milos Gajdos
1b5f094086
Merge pull request #3604 from crazy-max/2.8-go-1.16.15
go 1.16.15
2022-03-08 17:15:10 +00:00
Milos Gajdos
96cc1fdb3c
Fix typo
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2022-03-08 17:14:24 +00:00
Milos Gajdos
e744906f09
Update 2.8.1. release notes
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2022-03-08 17:11:29 +00:00
CrazyMax
3df9fce2be
go 1.16.15
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
2022-03-08 17:54:16 +01:00
Milos Gajdos
9a0196b801
Merge pull request #3596 from milosgajdos/fix-go-mod-v2.8.1
Prepare for v2.8.1 release
2022-03-01 11:37:47 +00:00
Milos Gajdos
6736d1881a
Prepare for v2.8.1 release
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2022-02-24 13:44:40 +00:00
Milos Gajdos
e4a447d0d7
Merge pull request #3595 from crazy-max/2.8-ci-gitref
[2.8 backport] ci: use proper git ref for versioning
2022-02-23 08:59:59 +00:00
CrazyMax
80acbdf0a2
ci: use proper git ref for versioning
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
(cherry picked from commit fabf9cd4e9)
2022-02-22 22:05:10 +01:00
Milos Gajdos
dcf66392d6
Update README so the release pipeline works properly.
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2022-02-07 15:40:21 +00:00
Milos Gajdos
212b38ed22
Merge pull request #3552 from milosgajdos/v2.8.0-release
Prepare for v2.8.0 release
2022-01-21 12:46:32 +00:00
Milos Gajdos
359b97a75a
Merge pull request #3568 from crazy-max/2.8-artifacts
[2.8] Release artifacts
2022-01-21 12:11:22 +00:00
Milos Gajdos
d5d89a46a3
Make this release a beta release first.
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2022-01-21 11:36:41 +00:00
CrazyMax
6241e099e1
[2.8] Release artifacts
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
2022-01-19 16:54:30 +01:00
Milos Gajdos
1840415ca8
Merge pull request #3565 from crazy-max/2.8-gha
[2.8] Release workflow
2022-01-13 16:56:37 +00:00
CrazyMax
65ca39e605
release workflow
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
2022-01-12 16:34:14 +01:00
Milos Gajdos
1ddad0bad8
Apply suggestions from code review
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2021-12-22 09:13:32 +00:00
Milos Gajdos
3960a560bb
Prepare for v2.8.0 release
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2021-12-21 13:24:39 +00:00
Milos Gajdos
3b7b534569
Merge pull request from GHSA-qq97-vm5h-rrhg
[release/2.7] manifest: validate document type before unmarshal
2021-11-23 19:16:40 +00:00
Milos Gajdos
afe85428bb
Merge pull request #3466 from thaJeztah/2.7_update_jwt
[release/2.7] github.com/golang-jwt/jwt v3.2.2
2021-11-23 09:10:53 +00:00
Milos Gajdos
f7365390ef
Merge pull request #3535 from thaJeztah/2.7_bump_oci_specs
2021-11-18 08:34:49 +00:00
Sebastiaan van Stijn
97f6daced4
[release/2.7] vendor: github.com/opencontainers/image-spec v1.0.2
(previous version vendored was v1.0.0)

full diff: ab7389ef9f...v1.0.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-11-17 22:31:14 +01:00
Milos Gajdos
4313c14723
Merge pull request #3531 from wy65701436/fix-rand
[release/2.7]fix go check issues
2021-11-17 20:14:46 +00:00
Wang Yan
9a3ff11330 fix go check issues
G404: Replace math rand with crypto rand

Signed-off-by: Wang Yan <wangyan@vmware.com>
2021-11-16 17:46:08 +08:00
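The gosec G404 finding mentioned above concerns using math/rand for security-sensitive values. A small sketch of the crypto/rand replacement pattern (the token length and character set here are arbitrary):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

const characters = "abcdefghijklmnopqrstuvwxyz0123456789"

// randomToken draws each character index from crypto/rand instead of
// math/rand, so the result is unpredictable.
func randomToken(length int) (string, error) {
	out := make([]byte, length)
	max := big.NewInt(int64(len(characters)))
	for i := range out {
		n, err := rand.Int(rand.Reader, max)
		if err != nil {
			return "", err
		}
		out[i] = characters[n.Int64()]
	}
	return string(out), nil
}

func main() {
	token, err := randomToken(15)
	fmt.Println(token, err)
}
```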
Samuel Karp
10ade61de9
manifest: validate document type before unmarshal
Signed-off-by: Samuel Karp <skarp@amazon.com>
2021-11-05 10:16:09 -07:00
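A sketch of the idea behind this advisory fix, not the actual manifest code: decode only a small envelope first and reject unexpected document types before unmarshalling the full, potentially attacker-controlled payload. The envelope struct and expected media type below are assumptions for illustration.

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// envelope captures just enough of the document to identify its type.
type envelope struct {
	MediaType     string `json:"mediaType"`
	SchemaVersion int    `json:"schemaVersion"`
}

func validateType(raw []byte, want string) error {
	var env envelope
	if err := json.Unmarshal(raw, &env); err != nil {
		return err
	}
	if env.MediaType != want {
		return errors.New("unexpected manifest media type: " + env.MediaType)
	}
	return nil
}

func main() {
	doc := []byte(`{"mediaType":"application/vnd.oci.image.index.v1+json","schemaVersion":2}`)
	err := validateType(doc, "application/vnd.docker.distribution.manifest.v2+json")
	fmt.Println(err) // rejected before any full unmarshal happens
}
```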
Milos Gajdos
691e62e7ef
Merge pull request #3495 from thaJeztah/2.7_backport_must
[release/2.7 backport] Change should to must in v2 spec
2021-09-08 14:44:47 +01:00
Justin Cormack
19b573a6f7
Change should to must in v2 spec
We found some examples of manifests with URLs specififed that did
not provide a digest or size. This breaks the security model by allowing
the content to change, as it no longer provides a Merkle tree. This
was not intended, so explicitly disallow by tightening wording.

Signed-off-by: Justin Cormack <justin.cormack@docker.com>
(cherry picked from commit 1660df4b60)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-09-08 15:24:07 +02:00
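The requirement being tightened is that externally hosted content must still be verifiable against the digest and size recorded in the manifest. A short sketch of that check using the opencontainers/go-digest package; the verify helper itself is illustrative.

```go
package main

import (
	"fmt"

	"github.com/opencontainers/go-digest"
)

// verify rejects fetched content whose size or digest does not match what
// the manifest declared, preserving the Merkle-tree property.
func verify(content []byte, declared digest.Digest, declaredSize int64) error {
	if int64(len(content)) != declaredSize {
		return fmt.Errorf("size mismatch: got %d, want %d", len(content), declaredSize)
	}
	if actual := digest.FromBytes(content); actual != declared {
		return fmt.Errorf("digest mismatch: got %s, want %s", actual, declared)
	}
	return nil
}

func main() {
	content := []byte("layer data")
	d := digest.FromBytes(content)
	fmt.Println(verify(content, d, int64(len(content)))) // <nil>
}
```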
Sebastiaan van Stijn
c5679da3a1
[release/2.7] vendor: github.com/golang-jwt/jwt v3.2.1
to address CVE-2020-26160

full diff: a601269ab7...v3.2.2

3.2.1 release notes
---------------------------------------

- Import Path Change: See MIGRATION_GUIDE.md for tips on updating your code
  Changed the import path from github.com/dgrijalva/jwt-go to github.com/golang-jwt/jwt
- Fixed type confusion issue between string and []string in VerifyAudience.
  This fixes CVE-2020-26160

3.2.2 release notes
---------------------------------------

- Starting from this release, we are adopting the policy to support the most 2
  recent versions of Go currently available. By the time of this release, this
  is Go 1.15 and 1.16.
- Fixed a potential issue that could occur when the verification of exp, iat
  or nbf was not required and contained invalid contents, i.e. non-numeric/date.
  Thanks for @thaJeztah for making us aware of that and @giorgos-f3 for originally
  reporting it to the formtech fork.
- Added support for EdDSA / ED25519.
- Optimized allocations.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-08-10 13:05:39 +02:00
Wang Yan
61e7e20823
Merge pull request #3472 from thaJeztah/2.7_update_go116
[release/2.7] update to go1.16
2021-08-10 18:59:49 +08:00
Sebastiaan van Stijn
d836b23fc2
[release/2.7] update to go1.16
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-08-10 11:32:03 +02:00
Milos Gajdos
18230b7b34
Merge pull request #3384 from wy65701436/release/2.7-cp-3169
[backport release/2.7]Added flag for user configurable cipher suites
2021-03-23 15:23:04 +00:00
Milos Gajdos
51636a6711
Merge pull request #3385 from wy65701436/release/2.7-ci
enable ci for release/2.7
2021-03-23 15:22:46 +00:00
Derek McGowan
09109ab50a Fix gosimple checks
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Signed-off-by: Wang Yan <wangyan@vmware.com>
2021-03-23 21:03:20 +08:00
Manish Tomar
89e6568e34 Remove err nil check
since type checking a nil value will not panic and returns appropriately

Signed-off-by: Manish Tomar <manish.tomar@docker.com>
Signed-off-by: wang yan <wangyan@vmware.com>
2021-03-23 21:03:16 +08:00
Manish Tomar
3c64ff10bb Fix gometalint errors
Signed-off-by: Manish Tomar <manish.tomar@docker.com>
Signed-off-by: wang yan <wangyan@vmware.com>
2021-03-23 21:03:10 +08:00
sayboras
f807afbf85 Migrate to golangci-lint
Signed-off-by: Tam Mach <sayboras@yahoo.com>
Signed-off-by: wang yan <wangyan@vmware.com>
2021-03-23 21:02:54 +08:00
Wang Yan
9142de99fa enable ci for release/2.7
Signed-off-by: Wang Yan <wangyan@vmware.com>
2021-03-23 18:46:17 +08:00
David Luu
cc341b0110 Added flag for user configurable cipher suites
Configuring the list of cipher suites allows a user to disable the use
of weak ciphers, or to continue to support them for legacy usage if they
so choose.

List of available cipher suites at:
https://golang.org/pkg/crypto/tls/#pkg-constants

Default cipher suites have been updated to:
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_AES_128_GCM_SHA256
- TLS_CHACHA20_POLY1305_SHA256
- TLS_AES_256_GCM_SHA384

MinimumTLS has also been updated to include TLS 1.3 as an option
and now defaults to TLS 1.2 since 1.0 and 1.1 have been deprecated.

Signed-off-by: David Luu <david@davidluu.info>
2021-03-23 18:42:12 +08:00
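A sketch of how cipher-suite names like the ones listed above can be mapped onto crypto/tls identifiers and combined with a TLS 1.2 minimum; the lookup helper is an assumption for illustration, while `tls.CipherSuites` and the config fields are standard library.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

// cipherSuiteIDs resolves configured suite names to crypto/tls IDs.
func cipherSuiteIDs(names []string) ([]uint16, error) {
	byName := make(map[string]uint16)
	for _, s := range tls.CipherSuites() {
		byName[s.Name] = s.ID
	}
	ids := make([]uint16, 0, len(names))
	for _, n := range names {
		id, ok := byName[n]
		if !ok {
			return nil, fmt.Errorf("unknown or unsupported cipher suite: %s", n)
		}
		ids = append(ids, id)
	}
	return ids, nil
}

func main() {
	ids, err := cipherSuiteIDs([]string{
		"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
		"TLS_AES_128_GCM_SHA256",
	})
	if err != nil {
		log.Fatal(err)
	}
	cfg := &tls.Config{
		MinVersion:   tls.VersionTLS12, // the new default described above
		CipherSuites: ids,
	}
	fmt.Printf("%d suites configured\n", len(cfg.CipherSuites))
}
```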
Milos Gajdos
cc866a5bf3
Merge pull request #3370 from wy65701436/release/2.7-cp-3309
[cherry pick]close the io.ReadCloser from storage driver
2021-02-26 09:00:00 +00:00
Wang Yan
3fe1d67ace close the io.ReadCloser from storage driver
Backport PR #3309 to release/2.7

Signed-off-by: Wang Yan <wangyan@vmware.com>
2021-02-23 18:48:00 +08:00
Wang Yan
6300300270
Merge pull request #3347 from wy65701436/release/2.7-cp-ci
[backport release/2.7] First draft of actions based ci
2021-02-16 23:19:12 +08:00
Chris Patterson
f1bd655119 First draft of actions based ci
Signed-off-by: Chris Patterson <chrispat@github.com>
2021-02-01 11:04:54 +08:00
João Pereira
d7362d7e3a
Merge pull request #3297 from thaJeztah/2.7_backport_fix_header
Remove empty Content-Type header
2021-01-30 10:28:10 +00:00
Smasherr
cf8615dedf
Remove empty Content-Type header
Fixes #3288

Signed-off-by: Smasherr <soundcracker@gmail.com>
(cherry picked from commit c8d90f904f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-11-16 11:15:10 +01:00
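A sketch of the behavior this backport restores: only attach a Content-Type header when there is a value to send, so requests do not carry an empty header. The upload helper below is illustrative, not the actual registry client.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func newUploadRequest(url string, body io.Reader, contentType string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPatch, url, body)
	if err != nil {
		return nil, err
	}
	// Skip the header entirely rather than sending an empty Content-Type;
	// some servers and proxies reject an empty value.
	if contentType != "" {
		req.Header.Set("Content-Type", contentType)
	}
	return req, nil
}

func main() {
	req, _ := newUploadRequest("http://registry.example/v2/foo/blobs/uploads/",
		strings.NewReader("data"), "application/octet-stream")
	fmt.Println(req.Header.Get("Content-Type"))
}
```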
Derek McGowan
70e0022e42
Merge pull request #3197 from thaJeztah/2.7_backport_add_redirect
[release/2.7 backport] docs: add redirect for old URL
2020-07-08 16:08:40 -07:00
Sebastiaan van Stijn
48eeac88e9
docs: add redirect for old URL
Looks like there are some projects referring to this old URL:
https://grep.app/search?q=https%3A//docs.docker.com/reference/api/registry_api/

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7728c5e445)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-07-08 12:22:22 +02:00
Derek McGowan
a45a401e97
Merge pull request #3119 from wy65701436/release/2.7-cp-2879
[release/2.7] Fix s3 driver for supporting ceph radosgw
2020-03-10 20:48:21 -07:00
Thomas Berger
e2f006ac2b S3 Driver: added comment for missing KeyCount workaround
Signed-off-by: Thomas Berger <loki@lokis-chaos.de>
Signed-off-by: wang yan <wangyan@vmware.com>
2020-03-10 22:41:10 +08:00
Eohyung Lee
0a1e4a57e2 Fix s3 driver for supporting ceph radosgw
Radosgw supports only the v1 version of the S3 `GET Bucket` API, not v2.
The v1 API is backward compatible, so most of the driver works
correctly, but `KeyCount` cannot be retrieved because it is only
available in the v2 API.

Signed-off-by: Eohyung Lee <liquidnuker@gmail.com>
2020-03-10 22:35:31 +08:00
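A minimal sketch of the workaround, assuming a stand-in response type rather than the real AWS SDK structs: when a v1-only backend such as radosgw leaves KeyCount unset, derive the count from the returned object list.

```go
package main

import "fmt"

type object struct{ Key string }

// listObjectsResponse is a stand-in for an S3 list response.
type listObjectsResponse struct {
	KeyCount *int64   // populated by the v2 API only
	Contents []object // populated by both v1 and v2
}

// keyCount prefers the v2 field but falls back to counting entries so that
// v1-only backends (e.g. radosgw) still work.
func keyCount(resp listObjectsResponse) int64 {
	if resp.KeyCount != nil {
		return *resp.KeyCount
	}
	return int64(len(resp.Contents))
}

func main() {
	resp := listObjectsResponse{Contents: []object{{Key: "a"}, {Key: "b"}}}
	fmt.Println(keyCount(resp)) // 2
}
```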
Derek McGowan
bdf503a444
Merge pull request #3088 from thaJeztah/2.7_backport_fix_cloudfront_middleware
[release/2.7 backport] Bugfix: Make ipfilteredby not required
2020-02-23 00:07:58 -08:00
Derek McGowan
be75da0ef2
Merge pull request #3002 from thaJeztah/2.7_backport_add_normalize_util
[release/2.7 backport] Add reference.ParseDockerRef utility function
2020-02-21 10:13:42 -08:00
Vishesh Jindal
afa91463d6
Bugfix: Make ipfilteredby not required
Signed-off-by: Vishesh Jindal <vishesh92@gmail.com>
(cherry picked from commit f9a0506191)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-01-28 19:41:02 +01:00
Sebastiaan van Stijn
fad36ed1a1
Add reference.ParseDockerRef utility function
ParseDockerRef normalizes an image reference following the docker
convention. This is added mainly for backward compatibility. The reference
returned can only be either tagged or digested. For a reference containing
both a tag and a digest, the function returns the digested reference, e.g.

    docker.io/library/busybox:latest@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa

will be returned as

    docker.io/library/busybox@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0ac367fd6b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-12-20 13:50:06 +01:00
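A short usage sketch of the new helper; with both a tag and a digest present, the digested reference wins, matching the example in the commit message.

```go
package main

import (
	"fmt"

	"github.com/docker/distribution/reference"
)

func main() {
	ref, err := reference.ParseDockerRef(
		"busybox:latest@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa")
	if err != nil {
		panic(err)
	}
	// Prints the normalized, digested form:
	// docker.io/library/busybox@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa
	fmt.Println(ref.String())
}
```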
Derek McGowan
cfd1309845
Merge pull request #3073 from thaJeztah/2.7_backport_table_fix
[release/2.7 backport] fix markdown issues on configuration page
2019-12-16 22:19:04 -08:00
Derek McGowan
a85caead04
Merge pull request #3001 from dmcgowan/2.7-fix-vndr-checks
[release/2.7] Fix vndr and check
2019-12-16 21:51:28 -08:00
Adrian Plata
f999f540d3
Fixing broken table
Signed-off-by: Adrian Plata <adrian.plata@docker.com>
(cherry picked from commit b4694b0d2d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-12-16 13:22:39 +01:00
Vishesh Jindal
c636ed788a
Fix cloudfront documentation formatting
Signed-off-by: Vishesh Jindal <vishesh92@gmail.com>
(cherry picked from commit e1e72e9563)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-12-16 13:22:13 +01:00
Derek McGowan
5883e2d935
Fix vndr and check
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
2019-09-03 13:19:34 -07:00
Derek McGowan
269d18d9a8
Merge pull request #2987 from adrian-plata/release/2.7
[release/2.7] Adding deprecated schema v1 page
2019-09-03 12:08:26 -07:00
Adrian Plata
a3c027e626
Adding deprecated schema instructions
Signed-off-by: Adrian Plata <adrian.plata@docker.com>
(cherry picked from commit 07a50201c9)
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
2019-09-03 11:56:53 -07:00
Derek McGowan
2461543d98
Merge pull request #2824 from dmcgowan/update-version-file-2.7.1
Update version file for 2.7.1
2019-01-17 15:19:26 -08:00
Derek McGowan
5b98226afe
Update version file for 2.7.1
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
2019-01-17 15:16:54 -08:00
Derek McGowan
2eab12df9b
Merge pull request #2805 from dmcgowan/release-2.7.1
Release notes for 2.7.1
2019-01-17 15:10:29 -08:00
Derek McGowan
445ef068dd
Release notes for 2.7.1
Release notes for a single-fix release

Signed-off-by: Derek McGowan <derek@mcgstyle.net>
2019-01-17 15:07:35 -08:00
Ryan Abrams
cbc30be414
Merge pull request #2821 from caervs/ISS-2819
Use same env var in Dockerfile and Makefile
2019-01-17 09:53:49 -08:00
Ryan Abrams
bf74e4f91d Use same env var in Dockerfile and Makefile
Ensures that build tags get set in the Dockerfile so that OSS and GCS drivers
are built into the official registry binary.

Closes #2819

Signed-off-by: Ryan Abrams <rdabrams@gmail.com>
2019-01-16 11:16:11 -08:00
Ryan Abrams
62994fdd12
Merge pull request #2804 from caervs/ISS-2793-2.7
[2.7] Add docs for autoredirect config parameter
2019-01-07 14:35:16 -08:00
Derek McGowan
e702d95cfd
Merge pull request #2802 from davidswu/2.7-autoredirect
[2.7] default autoredirect to false
2019-01-07 10:32:14 -08:00
David Wu
caf43bbcc2 default autoredirect to false
Signed-off-by: David Wu <david.wu@docker.com>
2019-01-04 13:47:17 -08:00
147 changed files with 2407 additions and 823 deletions

.dockerignore (new file, +1)

@ -0,0 +1 @@
bin/

.github/workflows/build.yml (new file, vendored, +92)

@ -0,0 +1,92 @@
name: build
on:
push:
branches:
- 'release/*'
tags:
- 'v*'
pull_request:
env:
DOCKERHUB_SLUG: distribution/distribution
jobs:
build:
runs-on: ubuntu-latest
steps:
-
name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
-
name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
images: |
${{ env.DOCKERHUB_SLUG }}
### versioning strategy
### push semver tag v2.9.0 on main (default branch)
# distribution/distribution:2.9.0
# distribution/distribution:latest
### push semver tag v2.8.0 on release/2.8 branch
# distribution/distribution:2.8.0
### push on main
# distribution/distribution:edge
tags: |
type=semver,pattern={{version}}
type=ref,event=pr
# don't create latest tag on release/2.x
flavor: |
latest=false
labels: |
org.opencontainers.image.title=Distribution
org.opencontainers.image.description=The toolkit to pack, ship, store, and deliver container content
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
-
name: Build artifacts
uses: docker/bake-action@v1
with:
targets: artifact-all
-
name: Move artifacts
run: |
mv ./bin/**/* ./bin/
-
name: Upload artifacts
uses: actions/upload-artifact@v2
with:
name: registry
path: ./bin/*
if-no-files-found: error
-
name: Login to DockerHub
if: github.event_name != 'pull_request'
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Build image
uses: docker/bake-action@v1
with:
files: |
./docker-bake.hcl
${{ steps.meta.outputs.bake-file }}
targets: image-all
push: ${{ startsWith(github.ref, 'refs/tags/') }}
-
name: GitHub Release
uses: softprops/action-gh-release@v1
if: startsWith(github.ref, 'refs/tags/')
with:
draft: true
files: |
bin/*.tar.gz
bin/*.sha256
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/ci.yml (new file, vendored, +50)

@ -0,0 +1,50 @@
name: CI
on:
push:
pull_request:
jobs:
build:
runs-on: ubuntu-latest
env:
BUILDTAGS: "include_oss,include_gcs"
CGO_ENABLED: 1
GO111MODULE: "auto"
GOPATH: ${{ github.workspace }}
GOOS: linux
COMMIT_RANGE: ${{ github.event_name == 'pull_request' && format('{0}..{1}',github.event.pull_request.base.sha, github.event.pull_request.head.sha) || github.sha }}
steps:
- uses: actions/checkout@v2
with:
path: src/github.com/docker/distribution
fetch-depth: 50
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.19.9
- name: Dependencies
run: |
sudo apt-get -q update
sudo -E apt-get -yq --no-install-suggests --no-install-recommends install python2-minimal
cd /tmp && go install github.com/vbatts/git-validation@latest
- name: Build
working-directory: ./src/github.com/docker/distribution
run: |
DCO_VERBOSITY=-q script/validate/dco
GO111MODULE=on script/setup/install-dev-tools
script/validate/vendor
go build .
make check
make build
make binaries
if [ "$GOOS" = "linux" ]; then make coverage ; fi
- uses: codecov/codecov-action@v1
with:
directory: ./src/github.com/docker/distribution

.golangci.yml (new file, +27)

@ -0,0 +1,27 @@
linters:
enable:
- structcheck
- varcheck
- staticcheck
- unconvert
- gofmt
- goimports
- golint
- ineffassign
- vet
- unused
- misspell
disable:
- errcheck
run:
deadline: 2m
skip-dirs:
- vendor
issues:
exclude-rules:
# io/ioutil is deprecated, but won't be removed until Go v2. It's safe to ignore for the release/2.8 branch.
- text: "SA1019: \"io/ioutil\" has been deprecated since Go 1.16"
linters:
- staticcheck

View file

@ -1,16 +0,0 @@
{
"Vendor": true,
"Deadline": "2m",
"Sort": ["linter", "severity", "path", "line"],
"EnableGC": true,
"Enable": [
"structcheck",
"staticcheck",
"unconvert",
"gofmt",
"goimports",
"golint",
"vet"
]
}

View file

@ -30,3 +30,22 @@ Helen Xie <xieyulin821@harmonycloud.cn> Helen-xie <xieyulin821@harmonycloud.cn>
Mike Brown <brownwm@us.ibm.com> Mike Brown <mikebrow@users.noreply.github.com>
Manish Tomar <manish.tomar@docker.com> Manish Tomar <manishtomar@users.noreply.github.com>
Sakeven Jiang <jc5930@sina.cn> sakeven <jc5930@sina.cn>
Milos Gajdos <milosgajdos83@gmail.com> Milos Gajdos <milosgajdos@users.noreply.github.com>
Derek McGowan <derek@mcgstyle.net> Derek McGowa <dmcgowan@users.noreply.github.com>
Adrian Plata <adrian.plata@docker.com> Adrian Plata <@users.noreply.github.com>
Sebastiaan van Stijn <github@gone.nl> Sebastiaan van Stijn <thaJeztah@users.noreply.github.com>
Vishesh Jindal <vishesh92@gmail.com> Vishesh Jindal <vishesh92@users.noreply.github.com>
Wang Yan <wangyan@vmware.com> Wang Yan <wy65701436@users.noreply.github.com>
Chris Patterson <chrispat@github.com> Chris Patterson <chrispat@users.noreply.github.com>
Eohyung Lee <liquidnuker@gmail.com> Eohyung Lee <leoh0@users.noreply.github.com>
João Pereira <484633+joaodrp@users.noreply.github.com>
Smasherr <soundcracker@gmail.com> Smasherr <Smasherr@users.noreply.github.com>
Thomas Berger <loki@lokis-chaos.de> Thomas Berger <tbe@users.noreply.github.com>
Samuel Karp <skarp@amazon.com> Samuel Karp <samuelkarp@users.noreply.github.com>
Justin Cormack <justin.cormack@docker.com>
sayboras <sayboras@yahoo.com>
CrazyMax <github@crazymax.dev> <1951866+crazy-max@users.noreply.github.com>
Hayley Swimelar <hswimelar@gmail.com>
Jose D. Gomez R <jose.gomez@suse.com>
Shengjing Zhu <zhsj@debian.org>
Silvin Lubecki <31478878+silvin-lubecki@users.noreply.github.com>

View file

@ -1,51 +0,0 @@
dist: trusty
sudo: required
# setup travis so that we can run containers for integration tests
services:
- docker
language: go
go:
- "1.11.x"
go_import_path: github.com/docker/distribution
addons:
apt:
packages:
- python-minimal
env:
- TRAVIS_GOOS=linux DOCKER_BUILDTAGS="include_oss include_gcs" TRAVIS_CGO_ENABLED=1
before_install:
- uname -r
- sudo apt-get -q update
install:
- go get -u github.com/vbatts/git-validation
# TODO: Add enforcement of license
# - go get -u github.com/kunalkushwaha/ltag
- cd $TRAVIS_BUILD_DIR
script:
- export GOOS=$TRAVIS_GOOS
- export CGO_ENABLED=$TRAVIS_CGO_ENABLED
- DCO_VERBOSITY=-q script/validate/dco
- GOOS=linux script/setup/install-dev-tools
- script/validate/vendor
- go build -i .
- make check
- make build
- make binaries
# Currently takes too long
#- if [ "$GOOS" = "linux" ]; then make test-race ; fi
- if [ "$GOOS" = "linux" ]; then make coverage ; fi
after_success:
- bash <(curl -s https://codecov.io/bash) -F linux
before_deploy:
# Run tests with storage driver configurations

View file

@ -114,4 +114,4 @@ the registry binary generated in the "./bin" directory:
### Optional build tags
Optional [build tags](http://golang.org/pkg/go/build/) can be provided using
the environment variable `DOCKER_BUILDTAGS`.
the environment variable `BUILDTAGS`.

View file

@ -1,22 +1,59 @@
FROM golang:1.11-alpine AS build
# syntax=docker/dockerfile:1
ENV DISTRIBUTION_DIR /go/src/github.com/docker/distribution
ENV DOCKER_BUILDTAGS include_oss include_gcs
ARG GO_VERSION=1.19.9
ARG ALPINE_VERSION=3.16
ARG XX_VERSION=1.2.1
ARG GOOS=linux
ARG GOARCH=amd64
ARG GOARM=6
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
FROM --platform=$BUILDPLATFORM golang:${GO_VERSION}-alpine${ALPINE_VERSION} AS base
COPY --from=xx / /
RUN apk add --no-cache bash coreutils file git
ENV GO111MODULE=auto
ENV CGO_ENABLED=0
WORKDIR /go/src/github.com/docker/distribution
RUN set -ex \
&& apk add --no-cache make git file
FROM base AS version
ARG PKG="github.com/docker/distribution"
RUN --mount=target=. \
VERSION=$(git describe --match 'v[0-9]*' --dirty='.m' --always --tags) REVISION=$(git rev-parse HEAD)$(if ! git diff --no-ext-diff --quiet --exit-code; then echo .m; fi); \
echo "-X ${PKG}/version.Version=${VERSION#v} -X ${PKG}/version.Revision=${REVISION} -X ${PKG}/version.Package=${PKG}" | tee /tmp/.ldflags; \
echo -n "${VERSION}" | tee /tmp/.version;
WORKDIR $DISTRIBUTION_DIR
COPY . $DISTRIBUTION_DIR
RUN CGO_ENABLED=0 make PREFIX=/go clean binaries && file ./bin/registry | grep "statically linked"
FROM base AS build
ARG TARGETPLATFORM
ARG LDFLAGS="-s -w"
ARG BUILDTAGS="include_oss,include_gcs"
RUN --mount=type=bind,target=/go/src/github.com/docker/distribution,rw \
--mount=type=cache,target=/root/.cache/go-build \
--mount=target=/go/pkg/mod,type=cache \
--mount=type=bind,source=/tmp/.ldflags,target=/tmp/.ldflags,from=version \
set -x ; xx-go build -tags "${BUILDTAGS}" -trimpath -ldflags "$(cat /tmp/.ldflags) ${LDFLAGS}" -o /usr/bin/registry ./cmd/registry \
&& xx-verify --static /usr/bin/registry
FROM alpine
FROM scratch AS binary
COPY --from=build /usr/bin/registry /
FROM base AS releaser
ARG TARGETOS
ARG TARGETARCH
ARG TARGETVARIANT
WORKDIR /work
RUN --mount=from=binary,target=/build \
--mount=type=bind,target=/src \
--mount=type=bind,source=/tmp/.version,target=/tmp/.version,from=version \
VERSION=$(cat /tmp/.version) \
&& mkdir -p /out \
&& cp /build/registry /src/README.md /src/LICENSE . \
&& tar -czvf "/out/registry_${VERSION#v}_${TARGETOS}_${TARGETARCH}${TARGETVARIANT}.tar.gz" * \
&& sha256sum -z "/out/registry_${VERSION#v}_${TARGETOS}_${TARGETARCH}${TARGETVARIANT}.tar.gz" | awk '{ print $1 }' > "/out/registry_${VERSION#v}_${TARGETOS}_${TARGETARCH}${TARGETVARIANT}.tar.gz.sha256"
FROM scratch AS artifact
COPY --from=releaser /out /
FROM alpine:${ALPINE_VERSION}
RUN apk add --no-cache ca-certificates
COPY cmd/registry/config-dev.yml /etc/docker/registry/config.yml
COPY --from=build /go/src/github.com/docker/distribution/bin/registry /bin/registry
COPY --from=binary /registry /bin/registry
VOLUME ["/var/lib/registry"]
EXPOSE 5000
ENTRYPOINT ["registry"]

View file

@ -50,7 +50,7 @@ version/version.go:
check: ## run all linters (TODO: enable "unused", "varcheck", "ineffassign", "unconvert", "staticheck", "goimports", "structcheck")
@echo "$(WHALE) $@"
gometalinter --config .gometalinter.json ./...
@GO111MODULE=off golangci-lint --build-tags "${BUILDTAGS}" run
test: ## run tests, except integration test with test.short
@echo "$(WHALE) $@"

View file

@ -2,7 +2,7 @@
The Docker toolset to pack, ship, store, and deliver content.
This repository's main product is the Docker Registry 2.0 implementation
This repository provides the Docker Registry 2.0 implementation
for storing and distributing Docker images. It supersedes the
[docker/docker-registry](https://github.com/docker/docker-registry)
project with a new API design, focused around security and performance.

View file

@ -10,7 +10,7 @@ import (
"github.com/docker/distribution/reference"
"github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/specs-go/v1"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
)
var (

View file

@ -4,7 +4,7 @@
// For example, to generate a new API specification, one would execute the
// following command from the repo root:
//
// $ registry-api-descriptor-template docs/spec/api.md.tmpl > docs/spec/api.md
// $ registry-api-descriptor-template docs/spec/api.md.tmpl > docs/spec/api.md
//
// The templates are passed in the api/v2.APIDescriptor object. Please see the
// package documentation for fields available on that object. The template
@ -21,7 +21,7 @@ import (
"text/template"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
v2 "github.com/docker/distribution/registry/api/v2"
)
var spaceRegex = regexp.MustCompile(`\n\s*`)

View file

@ -108,6 +108,12 @@ type Configuration struct {
// A file may contain multiple CA certificates encoded as PEM
ClientCAs []string `yaml:"clientcas,omitempty"`
// Specifies the lowest TLS version allowed
MinimumTLS string `yaml:"minimumtls,omitempty"`
// Specifies a list of cipher suites allowed
CipherSuites []string `yaml:"ciphersuites,omitempty"`
// LetsEncrypt is used to configuration setting up TLS through
// Let's Encrypt instead of manually specifying certificate and
// key. If a TLS certificate is specified, the Let's Encrypt
@ -187,7 +193,8 @@ type Configuration struct {
} `yaml:"pool,omitempty"`
} `yaml:"redis,omitempty"`
Health Health `yaml:"health,omitempty"`
Health Health `yaml:"health,omitempty"`
Catalog Catalog `yaml:"catalog,omitempty"`
Proxy Proxy `yaml:"proxy,omitempty"`
@ -238,6 +245,16 @@ type Configuration struct {
} `yaml:"policy,omitempty"`
}
// Catalog is composed of MaxEntries.
// Catalog endpoint (/v2/_catalog) configuration, it provides the configuration
// options to control the maximum number of entries returned by the catalog endpoint.
type Catalog struct {
// Max number of entries returned by the catalog endpoint. Requesting n entries
// to the catalog endpoint will return at most MaxEntries entries.
// An empty or a negative value will set a default of 1000 maximum entries by default.
MaxEntries int `yaml:"maxentries,omitempty"`
}
// LogHook is composed of hook Level and Type.
// After hooks configuration, it can execute the next handling automatically,
// when defined levels of log message emitted.
@ -388,7 +405,7 @@ func (loglevel *Loglevel) UnmarshalYAML(unmarshal func(interface{}) error) error
switch loglevelString {
case "error", "warn", "info", "debug":
default:
return fmt.Errorf("Invalid loglevel %s Must be one of [error, warn, info, debug]", loglevelString)
return fmt.Errorf("invalid loglevel %s Must be one of [error, warn, info, debug]", loglevelString)
}
*loglevel = Loglevel(loglevelString)
@ -463,7 +480,7 @@ func (storage *Storage) UnmarshalYAML(unmarshal func(interface{}) error) error {
}
if len(types) > 1 {
return fmt.Errorf("Must provide exactly one storage type. Provided: %v", types)
return fmt.Errorf("must provide exactly one storage type. Provided: %v", types)
}
}
*storage = storageMap
@ -578,7 +595,7 @@ type Events struct {
IncludeReferences bool `yaml:"includereferences"` // include reference data in manifest events
}
//Ignore configures mediaTypes and actions of the event, that it won't be propagated
// Ignore configures mediaTypes and actions of the event, that it won't be propagated
type Ignore struct {
MediaTypes []string `yaml:"mediatypes"` // target media types to ignore
Actions []string `yaml:"actions"` // ignore action types
@ -664,12 +681,17 @@ func Parse(rd io.Reader) (*Configuration, error) {
if v0_1.Loglevel != Loglevel("") {
v0_1.Loglevel = Loglevel("")
}
if v0_1.Catalog.MaxEntries <= 0 {
v0_1.Catalog.MaxEntries = 1000
}
if v0_1.Storage.Type() == "" {
return nil, errors.New("No storage configuration provided")
return nil, errors.New("no storage configuration provided")
}
return (*Configuration)(v0_1), nil
}
return nil, fmt.Errorf("Expected *v0_1Configuration, received %#v", c)
return nil, fmt.Errorf("expected *v0_1Configuration, received %#v", c)
},
},
})

View file

@ -71,6 +71,9 @@ var configStruct = Configuration{
},
},
},
Catalog: Catalog{
MaxEntries: 1000,
},
HTTP: struct {
Addr string `yaml:"addr,omitempty"`
Net string `yaml:"net,omitempty"`
@ -80,10 +83,12 @@ var configStruct = Configuration{
RelativeURLs bool `yaml:"relativeurls,omitempty"`
DrainTimeout time.Duration `yaml:"draintimeout,omitempty"`
TLS struct {
Certificate string `yaml:"certificate,omitempty"`
Key string `yaml:"key,omitempty"`
ClientCAs []string `yaml:"clientcas,omitempty"`
LetsEncrypt struct {
Certificate string `yaml:"certificate,omitempty"`
Key string `yaml:"key,omitempty"`
ClientCAs []string `yaml:"clientcas,omitempty"`
MinimumTLS string `yaml:"minimumtls,omitempty"`
CipherSuites []string `yaml:"ciphersuites,omitempty"`
LetsEncrypt struct {
CacheFile string `yaml:"cachefile,omitempty"`
Email string `yaml:"email,omitempty"`
Hosts []string `yaml:"hosts,omitempty"`
@ -102,10 +107,12 @@ var configStruct = Configuration{
} `yaml:"http2,omitempty"`
}{
TLS: struct {
Certificate string `yaml:"certificate,omitempty"`
Key string `yaml:"key,omitempty"`
ClientCAs []string `yaml:"clientcas,omitempty"`
LetsEncrypt struct {
Certificate string `yaml:"certificate,omitempty"`
Key string `yaml:"key,omitempty"`
ClientCAs []string `yaml:"clientcas,omitempty"`
MinimumTLS string `yaml:"minimumtls,omitempty"`
CipherSuites []string `yaml:"ciphersuites,omitempty"`
LetsEncrypt struct {
CacheFile string `yaml:"cachefile,omitempty"`
Email string `yaml:"email,omitempty"`
Hosts []string `yaml:"hosts,omitempty"`
@ -520,6 +527,7 @@ func copyConfig(config Configuration) *Configuration {
configCopy.Version = MajorMinorVersion(config.Version.Major(), config.Version.Minor())
configCopy.Loglevel = config.Loglevel
configCopy.Log = config.Log
configCopy.Catalog = config.Catalog
configCopy.Log.Fields = make(map[string]interface{}, len(config.Log.Fields))
for k, v := range config.Log.Fields {
configCopy.Log.Fields[k] = v
@ -540,9 +548,7 @@ func copyConfig(config Configuration) *Configuration {
}
configCopy.Notifications = Notifications{Endpoints: []Endpoint{}}
for _, v := range config.Notifications.Endpoints {
configCopy.Notifications.Endpoints = append(configCopy.Notifications.Endpoints, v)
}
configCopy.Notifications.Endpoints = append(configCopy.Notifications.Endpoints, config.Notifications.Endpoints...)
configCopy.HTTP.Headers = make(http.Header)
for k, v := range config.HTTP.Headers {

View file

@ -122,7 +122,7 @@ func (p *Parser) Parse(in []byte, v interface{}) error {
parseInfo, ok := p.mapping[versionedStruct.Version]
if !ok {
return fmt.Errorf("Unsupported version: %q", versionedStruct.Version)
return fmt.Errorf("unsupported version: %q", versionedStruct.Version)
}
parseAs := reflect.New(parseInfo.ParseAs)

View file

@ -4,68 +4,68 @@
//
// The easiest way to get started is to get the background context:
//
// ctx := context.Background()
// ctx := context.Background()
//
// The returned context should be passed around your application and be the
// root of all other context instances. If the application has a version, this
// line should be called before anything else:
//
// ctx := context.WithVersion(context.Background(), version)
// ctx := context.WithVersion(context.Background(), version)
//
// The above will store the version in the context and will be available to
// the logger.
//
// Logging
// # Logging
//
// The most useful aspect of this package is GetLogger. This function takes
// any context.Context interface and returns the current logger from the
// context. Canonical usage looks like this:
//
// GetLogger(ctx).Infof("something interesting happened")
// GetLogger(ctx).Infof("something interesting happened")
//
// GetLogger also takes optional key arguments. The keys will be looked up in
// the context and reported with the logger. The following example would
// return a logger that prints the version with each log message:
//
// ctx := context.Context(context.Background(), "version", version)
// GetLogger(ctx, "version").Infof("this log message has a version field")
// ctx := context.Context(context.Background(), "version", version)
// GetLogger(ctx, "version").Infof("this log message has a version field")
//
// The above would print out a log message like this:
//
// INFO[0000] this log message has a version field version=v2.0.0-alpha.2.m
// INFO[0000] this log message has a version field version=v2.0.0-alpha.2.m
//
// When used with WithLogger, we gain the ability to decorate the context with
// loggers that have information from disparate parts of the call stack.
// Following from the version example, we can build a new context with the
// configured logger such that we always print the version field:
//
// ctx = WithLogger(ctx, GetLogger(ctx, "version"))
// ctx = WithLogger(ctx, GetLogger(ctx, "version"))
//
// Since the logger has been pushed to the context, we can now get the version
// field for free with our log messages. Future calls to GetLogger on the new
// context will have the version field:
//
// GetLogger(ctx).Infof("this log message has a version field")
// GetLogger(ctx).Infof("this log message has a version field")
//
// This becomes more powerful when we start stacking loggers. Let's say we
// have the version logger from above but also want a request id. Using the
// context above, in our request scoped function, we place another logger in
// the context:
//
// ctx = context.WithValue(ctx, "http.request.id", "unique id") // called when building request context
// ctx = WithLogger(ctx, GetLogger(ctx, "http.request.id"))
// ctx = context.WithValue(ctx, "http.request.id", "unique id") // called when building request context
// ctx = WithLogger(ctx, GetLogger(ctx, "http.request.id"))
//
// When GetLogger is called on the new context, "http.request.id" will be
// included as a logger field, along with the original "version" field:
//
// INFO[0000] this log message has a version field http.request.id=unique id version=v2.0.0-alpha.2.m
// INFO[0000] this log message has a version field http.request.id=unique id version=v2.0.0-alpha.2.m
//
// Note that this only affects the new context, the previous context, with the
// version field, can be used independently. Put another way, the new logger,
// added to the request context, is unique to that context and can have
// request scoped variables.
//
// HTTP Requests
// # HTTP Requests
//
// This package also contains several methods for working with http requests.
// The concepts are very similar to those described above. We simply place the
@ -73,13 +73,13 @@
// available. GetRequestLogger can then be called to get request specific
// variables in a log line:
//
// ctx = WithRequest(ctx, req)
// GetRequestLogger(ctx).Infof("request variables")
// ctx = WithRequest(ctx, req)
// GetRequestLogger(ctx).Infof("request variables")
//
// Like above, if we want to include the request data in all log messages in
// the context, we push the logger to a new context and use that one:
//
// ctx = WithLogger(ctx, GetRequestLogger(ctx))
// ctx = WithLogger(ctx, GetRequestLogger(ctx))
//
// The concept is fairly powerful and ensures that calls throughout the stack
// can be traced in log messages. Using the fields like "http.request.id", one

View file

@ -246,11 +246,7 @@ func (ctx *muxVarsContext) Value(key interface{}) interface{} {
return ctx.vars
}
if strings.HasPrefix(keyStr, "vars.") {
keyStr = strings.TrimPrefix(keyStr, "vars.")
}
if v, ok := ctx.vars[keyStr]; ok {
if v, ok := ctx.vars[strings.TrimPrefix(keyStr, "vars.")]; ok {
return v
}
}

View file

@ -24,16 +24,16 @@ import (
//
// Here is an example of the usage:
//
// func timedOperation(ctx Context) {
// ctx, done := WithTrace(ctx)
// defer done("this will be the log message")
// // ... function body ...
// }
// func timedOperation(ctx Context) {
// ctx, done := WithTrace(ctx)
// defer done("this will be the log message")
// // ... function body ...
// }
//
// If the function ran for roughly 1s, such a usage would emit a log message
// as follows:
//
// INFO[0001] this will be the log message trace.duration=1.004575763s trace.func=github.com/docker/distribution/context.traceOperation trace.id=<id> ...
// INFO[0001] this will be the log message trace.duration=1.004575763s trace.func=github.com/docker/distribution/context.traceOperation trace.id=<id> ...
//
// Notice that the function name is automatically resolved, along with the
// package and a trace id is emitted that can be linked with parent ids.

View file

@ -2,9 +2,10 @@ package main
import (
"context"
"crypto/rand"
"encoding/json"
"flag"
"math/rand"
"math/big"
"net/http"
"strconv"
"strings"
@ -141,8 +142,15 @@ const refreshTokenLength = 15
func newRefreshToken() string {
s := make([]rune, refreshTokenLength)
max := int64(len(refreshCharacters))
for i := range s {
s[i] = refreshCharacters[rand.Intn(len(refreshCharacters))]
randInt, err := rand.Int(rand.Reader, big.NewInt(max))
// let '0' serves the failure case
if err != nil {
logrus.Infof("Error on making refersh token: %v", err)
randInt = big.NewInt(0)
}
s[i] = refreshCharacters[randInt.Int64()]
}
return string(s)
}

docker-bake.hcl (new file, +56)

@ -0,0 +1,56 @@
group "default" {
targets = ["image-local"]
}
// Special target: https://github.com/docker/metadata-action#bake-definition
target "docker-metadata-action" {
tags = ["registry:local"]
}
target "binary" {
target = "binary"
output = ["./bin"]
}
target "artifact" {
target = "artifact"
output = ["./bin"]
}
target "artifact-all" {
inherits = ["artifact"]
platforms = [
"linux/amd64",
"linux/arm/v6",
"linux/arm/v7",
"linux/arm64",
"linux/ppc64le",
"linux/s390x"
]
}
// Special target: https://github.com/docker/metadata-action#bake-definition
target "docker-metadata-action" {
tags = ["registry:local"]
}
target "image" {
inherits = ["docker-metadata-action"]
}
target "image-local" {
inherits = ["image"]
output = ["type=docker"]
}
target "image-all" {
inherits = ["image"]
platforms = [
"linux/amd64",
"linux/arm/v6",
"linux/arm/v7",
"linux/arm64",
"linux/ppc64le",
"linux/s390x"
]
}

View file

@ -703,15 +703,20 @@ interpretation of the options.
| `baseurl` | yes | The `SCHEME://HOST[/PATH]` at which Cloudfront is served. |
| `privatekey` | yes | The private key for Cloudfront, provided by AWS. |
| `keypairid` | yes | The key pair ID provided by AWS. |
| `duration` | no | An integer and unit for the duration of the Cloudfront session. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, or `h`. For example, `3000s` is valid, but `3000 s` is not. If you do not specify a `duration` or you specify an integer without a time unit, the duration defaults to `20m` (20 minutes).|
|`ipfilteredby`|no | A string with the following value `none|aws|awsregion`. |
|`awsregion`|no | A comma separated string of AWS regions, only available when `ipfilteredby` is `awsregion`. For example, `us-east-1, us-west-2`|
|`updatefrenquency`|no | The frequency to update AWS IP regions, default: `12h`|
|`iprangesurl`|no | The URL contains the AWS IP ranges information, default: `https://ip-ranges.amazonaws.com/ip-ranges.json`|
Then value of ipfilteredby:
`none`: default, do not filter by IP
`aws`: IP from AWS goes to S3 directly
`awsregion`: IP from certain AWS regions goes to S3 directly, use together with `awsregion`
| `duration` | no | An integer and unit for the duration of the Cloudfront session. Valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, or `h`. For example, `3000s` is valid, but `3000 s` is not. If you do not specify a `duration` or you specify an integer without a time unit, the duration defaults to `20m` (20 minutes). |
| `ipfilteredby` | no | A string with the following value `none`, `aws` or `awsregion`. |
| `awsregion` | no | A comma separated string of AWS regions, only available when `ipfilteredby` is `awsregion`. For example, `us-east-1, us-west-2` |
| `updatefrenquency` | no | The frequency to update AWS IP regions, default: `12h` |
| `iprangesurl` | no | The URL contains the AWS IP ranges information, default: `https://ip-ranges.amazonaws.com/ip-ranges.json` |
Value of `ipfilteredby` can be:
| Value | Description |
|-------------|------------------------------------|
| `none` | default, do not filter by IP |
| `aws` | IP from AWS goes to S3 directly |
| `awsregion` | IP from certain AWS regions goes to S3 directly, use together with `awsregion`. |
### `redirect`
@ -777,6 +782,10 @@ http:
clientcas:
- /path/to/ca.pem
- /path/to/another/ca.pem
minimumtls: tls1.2
ciphersuites:
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
letsencrypt:
cachefile: /path/to/cache-file
email: emailused@letsencrypt.com
@ -812,9 +821,49 @@ and proxy connections to the registry server.
| Parameter | Required | Description |
|-----------|----------|-------------------------------------------------------|
| `certificate` | yes | Absolute path to the x509 certificate file. |
| `key` | yes | Absolute path to the x509 private key file. |
| `clientcas` | no | An array of absolute paths to x509 CA files. |
| `certificate` | yes | Absolute path to the x509 certificate file. |
| `key` | yes | Absolute path to the x509 private key file. |
| `clientcas` | no | An array of absolute paths to x509 CA files. |
| `minimumtls` | no | Minimum TLS version allowed (tls1.0, tls1.1, tls1.2, tls1.3). Defaults to tls1.2 |
| `ciphersuites` | no | Cipher suites allowed. Please see below for allowed values and default. |
Available cipher suites:
- TLS_RSA_WITH_RC4_128_SHA
- TLS_RSA_WITH_3DES_EDE_CBC_SHA
- TLS_RSA_WITH_AES_128_CBC_SHA
- TLS_RSA_WITH_AES_256_CBC_SHA
- TLS_RSA_WITH_AES_128_CBC_SHA256
- TLS_RSA_WITH_AES_128_GCM_SHA256
- TLS_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_RC4_128_SHA
- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
- TLS_ECDHE_RSA_WITH_RC4_128_SHA
- TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
- TLS_AES_128_GCM_SHA256
- TLS_AES_256_GCM_SHA384
- TLS_CHACHA20_POLY1305_SHA256
Default cipher suites:
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_AES_128_GCM_SHA256
- TLS_CHACHA20_POLY1305_SHA256
- TLS_AES_256_GCM_SHA384
### `letsencrypt`

View file

@ -2,6 +2,8 @@
title: "HTTP API V2"
description: "Specification for the Registry API."
keywords: registry, on-prem, images, tags, repository, distribution, api, advanced
redirect_from:
- /reference/api/registry_api/
---
# Docker Registry HTTP API V2

View file

@ -2,6 +2,8 @@
title: "HTTP API V2"
description: "Specification for the Registry API."
keywords: registry, on-prem, images, tags, repository, distribution, api, advanced
redirect_from:
- /reference/api/registry_api/
---
# Docker Registry HTTP API V2

View file

@ -0,0 +1,41 @@
---
title: Update deprecated schema image manifest version 2, v1 images
description: Update deprecated schema v1 iamges
keywords: registry, on-prem, images, tags, repository, distribution, api, advanced, manifest
---
## Image manifest version 2, schema 1
With the release of image manifest version 2, schema 2, image manifest version
2, schema 1 has been deprecated. This could lead to compatibility and
vulnerability issues in images that haven't been updated to image manifest
version 2, schema 2.
This page contains information on how to update from image manifest version 2,
schema 1. However, these instructions will not ensure your new image will run
successfully. There may be several other issues to troubleshoot that are
associated with the deprecated image manifest that will block your image from
running succesfully. A list of possible methods to help update your image is
also included below.
### Update to image manifest version 2, schema 2
One way to upgrade an image from image manifest version 2, schema 1 to
schema 2 is to `docker pull` the image and then `docker push` the image with a
current version of Docker. Doing so will automatically convert the image to use
the latest image manifest specification.
Converting an image to image manifest version 2, schema 2 converts the
manifest format, but does not update the contents within the image. Images
using manifest version 2, schema 1 may contain unpatched vulnerabilities. We
recommend looking for an alternative image or rebuilding it.
### Update FROM statement
You can rebuild the image by updating the `FROM` statement in your
`Dockerfile`. If your image manifest is out-of-date, there is a chance the
image pulled from your `FROM` statement in your `Dockerfile` is also
out-of-date. See the [Dockerfile reference](https://docs.docker.com/engine/reference/builder/#from)
and the [Dockerfile best practices guide](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/)
for more information on how to update the `FROM` statement in your
`Dockerfile`.

View file

@ -220,7 +220,7 @@ image. It's the direct replacement for the schema-1 manifest.
- **`urls`** *array*
Provides a list of URLs from which the content may be fetched. Content
should be verified against the `digest` and `size`. This field is
must be verified against the `digest` and `size`. This field is
optional and uncommon.
## Example Image Manifest

View file

@ -14,7 +14,7 @@ var (
// DownHandler registers a manual_http_status that always returns an Error
func DownHandler(w http.ResponseWriter, r *http.Request) {
if r.Method == "POST" {
updater.Update(errors.New("Manual Check"))
updater.Update(errors.New("manual Check"))
} else {
w.WriteHeader(http.StatusNotFound)
}

View file

@ -13,29 +13,29 @@
// particularly useful for checks that verify upstream connectivity or
// database status, since they might take a long time to return/timeout.
//
// Installing
// # Installing
//
// To install health, just import it in your application:
//
// import "github.com/docker/distribution/health"
// import "github.com/docker/distribution/health"
//
// You can also (optionally) import "health/api" that will add two convenience
// endpoints: "/debug/health/down" and "/debug/health/up". These endpoints add
// "manual" checks that allow the service to quickly be brought in/out of
// rotation.
//
// import _ "github.com/docker/distribution/health/api"
// import _ "github.com/docker/distribution/health/api"
//
// # curl localhost:5001/debug/health
// {}
// # curl -X POST localhost:5001/debug/health/down
// # curl localhost:5001/debug/health
// {"manual_http_status":"Manual Check"}
// # curl localhost:5001/debug/health
// {}
// # curl -X POST localhost:5001/debug/health/down
// # curl localhost:5001/debug/health
// {"manual_http_status":"Manual Check"}
//
// After importing these packages to your main application, you can start
// registering checks.
//
// Registering Checks
// # Registering Checks
//
// The recommended way of registering checks is using a periodic Check.
// PeriodicChecks run on a certain schedule and asynchronously update the
@ -45,22 +45,22 @@
// A trivial example of a check that runs every 5 seconds and shuts down our
// server if the current minute is even, could be added as follows:
//
// func currentMinuteEvenCheck() error {
// m := time.Now().Minute()
// if m%2 == 0 {
// return errors.New("Current minute is even!")
// }
// return nil
// }
// func currentMinuteEvenCheck() error {
// m := time.Now().Minute()
// if m%2 == 0 {
// return errors.New("Current minute is even!")
// }
// return nil
// }
//
// health.RegisterPeriodicFunc("minute_even", currentMinuteEvenCheck, time.Second*5)
// health.RegisterPeriodicFunc("minute_even", currentMinuteEvenCheck, time.Second*5)
//
// Alternatively, you can also make use of "RegisterPeriodicThresholdFunc" to
// implement the exact same check, but add a threshold of failures after which
// the check will be unhealthy. This is particularly useful for flaky Checks,
// ensuring some stability of the service when handling them.
//
// health.RegisterPeriodicThresholdFunc("minute_even", currentMinuteEvenCheck, time.Second*5, 4)
// health.RegisterPeriodicThresholdFunc("minute_even", currentMinuteEvenCheck, time.Second*5, 4)
//
// The lowest-level way to interact with the health package is calling
// "Register" directly. Register allows you to pass in an arbitrary string and
@ -72,7 +72,7 @@
// Assuming you wish to register a method called "currentMinuteEvenCheck()
// error" you could do that by doing:
//
// health.Register("even_minute", health.CheckFunc(currentMinuteEvenCheck))
// health.Register("even_minute", health.CheckFunc(currentMinuteEvenCheck))
//
// CheckFunc is a convenience type that implements Checker.
//
@ -80,11 +80,11 @@
// and the convenience method RegisterFunc. An example that makes the status
// endpoint always return an error:
//
// health.RegisterFunc("my_check", func() error {
// return Errors.new("This is an error!")
// }))
// health.RegisterFunc("my_check", func() error {
// return Errors.new("This is an error!")
// }))
//
// Examples
// # Examples
//
// You could also use the health checker mechanism to ensure your application
// only comes up if certain conditions are met, or to allow the developer to
@ -92,35 +92,35 @@
// database connectivity and immediately takes the server out of rotation on
// err:
//
// updater = health.NewStatusUpdater()
// health.RegisterFunc("database_check", func() error {
// return updater.Check()
// }))
// updater = health.NewStatusUpdater()
// health.RegisterFunc("database_check", func() error {
// return updater.Check()
// }))
//
// conn, err := Connect(...) // database call here
// if err != nil {
// updater.Update(errors.New("Error connecting to the database: " + err.Error()))
// }
// conn, err := Connect(...) // database call here
// if err != nil {
// updater.Update(errors.New("Error connecting to the database: " + err.Error()))
// }
//
// You can also use the predefined Checkers that come included with the health
// package. First, import the checks:
//
// import "github.com/docker/distribution/health/checks
// import "github.com/docker/distribution/health/checks
//
// After that you can make use of any of the provided checks. An example of
// using a `FileChecker` to take the application out of rotation if a certain
// file exists can be done as follows:
//
// health.Register("fileChecker", health.PeriodicChecker(checks.FileChecker("/tmp/disable"), time.Second*5))
// health.Register("fileChecker", health.PeriodicChecker(checks.FileChecker("/tmp/disable"), time.Second*5))
//
// After registering the check, it is trivial to take an application out of
// rotation from the console:
//
// # curl localhost:5001/debug/health
// {}
// # touch /tmp/disable
// # curl localhost:5001/debug/health
// {"fileChecker":"file exists"}
// # curl localhost:5001/debug/health
// {}
// # touch /tmp/disable
// # curl localhost:5001/debug/health
// {"fileChecker":"file exists"}
//
// FileChecker only accepts absolute or relative file path. It does not work
// properly with tilde(~). You should make sure that the application has
@ -132,5 +132,5 @@
// "HTTPChecker", but ensure that you only mark the test unhealthy if there
// are a minimum of two failures in a row:
//
// health.Register("httpChecker", health.PeriodicThresholdChecker(checks.HTTPChecker("https://www.google.pt"), time.Second*5, 2))
// health.Register("httpChecker", health.PeriodicThresholdChecker(checks.HTTPChecker("https://www.google.pt"), time.Second*5, 2))
package health

View file

@ -8,7 +8,7 @@ import (
"github.com/docker/distribution"
"github.com/docker/distribution/manifest"
"github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/specs-go/v1"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
)
const (
@ -54,6 +54,9 @@ func init() {
}
imageIndexFunc := func(b []byte) (distribution.Manifest, distribution.Descriptor, error) {
if err := validateIndex(b); err != nil {
return nil, distribution.Descriptor{}, err
}
m := new(DeserializedManifestList)
err := m.UnmarshalJSON(b)
if err != nil {
@ -163,7 +166,7 @@ func FromDescriptorsWithMediaType(descriptors []ManifestDescriptor, mediaType st
},
}
m.Manifests = make([]ManifestDescriptor, len(descriptors), len(descriptors))
m.Manifests = make([]ManifestDescriptor, len(descriptors))
copy(m.Manifests, descriptors)
deserialized := DeserializedManifestList{
@ -177,7 +180,7 @@ func FromDescriptorsWithMediaType(descriptors []ManifestDescriptor, mediaType st
// UnmarshalJSON populates a new ManifestList struct from JSON data.
func (m *DeserializedManifestList) UnmarshalJSON(b []byte) error {
m.canonical = make([]byte, len(b), len(b))
m.canonical = make([]byte, len(b))
// store manifest list in canonical
copy(m.canonical, b)
@ -214,3 +217,23 @@ func (m DeserializedManifestList) Payload() (string, []byte, error) {
return mediaType, m.canonical, nil
}
// unknownDocument represents a manifest, manifest list, or index that has not
// yet been validated
type unknownDocument struct {
Config interface{} `json:"config,omitempty"`
Layers interface{} `json:"layers,omitempty"`
}
// validateIndex returns an error if the byte slice is invalid JSON or if it
// contains fields that belong to a manifest
func validateIndex(b []byte) error {
var doc unknownDocument
if err := json.Unmarshal(b, &doc); err != nil {
return err
}
if doc.Config != nil || doc.Layers != nil {
return errors.New("index: expected index but found manifest")
}
return nil
}
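
As a rough sketch of how this new guard surfaces to callers (illustrative only, not part of the diff): the unmarshal function registered for the OCI image index media type now rejects payloads that carry manifest-only fields such as `config` or `layers`. The payload below is a hypothetical, truncated example.

```go
package main

import (
	"fmt"

	"github.com/docker/distribution"
	// Blank import registers the manifest list / OCI index unmarshal functions.
	_ "github.com/docker/distribution/manifest/manifestlist"
	v1 "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	// A manifest-shaped payload (it has "config" and "layers") presented
	// under the OCI image index media type.
	payload := []byte(`{"schemaVersion": 2, "config": {}, "layers": []}`)

	// With validateIndex in place this fails up front instead of silently
	// producing an empty DeserializedManifestList.
	_, _, err := distribution.UnmarshalManifest(v1.MediaTypeImageIndex, payload)
	fmt.Println(err) // reports that an index was expected but a manifest was found
}
```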

View file

@ -7,7 +7,9 @@ import (
"testing"
"github.com/docker/distribution"
"github.com/opencontainers/image-spec/specs-go/v1"
"github.com/docker/distribution/manifest/ocischema"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
)
var expectedManifestListSerialization = []byte(`{
@ -303,3 +305,33 @@ func TestMediaTypes(t *testing.T) {
mediaTypeTest(t, v1.MediaTypeImageIndex, v1.MediaTypeImageIndex, false)
mediaTypeTest(t, v1.MediaTypeImageIndex, v1.MediaTypeImageIndex+"XXX", true)
}
func TestValidateManifest(t *testing.T) {
manifest := ocischema.Manifest{
Config: distribution.Descriptor{Size: 1},
Layers: []distribution.Descriptor{{Size: 2}},
}
index := ManifestList{
Manifests: []ManifestDescriptor{
{Descriptor: distribution.Descriptor{Size: 3}},
},
}
t.Run("valid", func(t *testing.T) {
b, err := json.Marshal(index)
if err != nil {
t.Fatal("unexpected error marshaling index", err)
}
if err := validateIndex(b); err != nil {
t.Error("index should be valid", err)
}
})
t.Run("invalid", func(t *testing.T) {
b, err := json.Marshal(manifest)
if err != nil {
t.Fatal("unexpected error marshaling manifest", err)
}
if err := validateIndex(b); err == nil {
t.Error("manifest should not be valid")
}
})
}

View file

@ -7,7 +7,7 @@ import (
"github.com/docker/distribution"
"github.com/docker/distribution/manifest"
"github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/specs-go/v1"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
)
// Builder is a type for constructing manifests.
@ -48,7 +48,7 @@ func NewManifestBuilder(bs distribution.BlobService, configJSON []byte, annotati
// valid media type for oci image manifests currently: "" or "application/vnd.oci.image.manifest.v1+json"
func (mb *Builder) SetMediaType(mediaType string) error {
if mediaType != "" && mediaType != v1.MediaTypeImageManifest {
return errors.New("Invalid media type for OCI image manifest")
return errors.New("invalid media type for OCI image manifest")
}
mb.mediaType = mediaType

View file

@ -7,7 +7,7 @@ import (
"github.com/docker/distribution"
"github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/specs-go/v1"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
)
type mockBlobService struct {

View file

@ -8,7 +8,7 @@ import (
"github.com/docker/distribution"
"github.com/docker/distribution/manifest"
"github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/specs-go/v1"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
)
var (
@ -22,6 +22,9 @@ var (
func init() {
ocischemaFunc := func(b []byte) (distribution.Manifest, distribution.Descriptor, error) {
if err := validateManifest(b); err != nil {
return nil, distribution.Descriptor{}, err
}
m := new(DeserializedManifest)
err := m.UnmarshalJSON(b)
if err != nil {
@ -87,7 +90,7 @@ func FromStruct(m Manifest) (*DeserializedManifest, error) {
// UnmarshalJSON populates a new Manifest struct from JSON data.
func (m *DeserializedManifest) UnmarshalJSON(b []byte) error {
m.canonical = make([]byte, len(b), len(b))
m.canonical = make([]byte, len(b))
// store manifest in canonical
copy(m.canonical, b)
@ -122,3 +125,22 @@ func (m *DeserializedManifest) MarshalJSON() ([]byte, error) {
func (m DeserializedManifest) Payload() (string, []byte, error) {
return v1.MediaTypeImageManifest, m.canonical, nil
}
// unknownDocument represents a manifest, manifest list, or index that has not
// yet been validated
type unknownDocument struct {
Manifests interface{} `json:"manifests,omitempty"`
}
// validateManifest returns an error if the byte slice is invalid JSON or if it
// contains fields that belong to a index
func validateManifest(b []byte) error {
var doc unknownDocument
if err := json.Unmarshal(b, &doc); err != nil {
return err
}
if doc.Manifests != nil {
return errors.New("ocimanifest: expected manifest but found index")
}
return nil
}

View file

@ -8,7 +8,9 @@ import (
"github.com/docker/distribution"
"github.com/docker/distribution/manifest"
"github.com/opencontainers/image-spec/specs-go/v1"
"github.com/docker/distribution/manifest/manifestlist"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
)
var expectedManifestSerialization = []byte(`{
@ -182,3 +184,33 @@ func TestMediaTypes(t *testing.T) {
mediaTypeTest(t, v1.MediaTypeImageManifest, false)
mediaTypeTest(t, v1.MediaTypeImageManifest+"XXX", true)
}
func TestValidateManifest(t *testing.T) {
manifest := Manifest{
Config: distribution.Descriptor{Size: 1},
Layers: []distribution.Descriptor{{Size: 2}},
}
index := manifestlist.ManifestList{
Manifests: []manifestlist.ManifestDescriptor{
{Descriptor: distribution.Descriptor{Size: 3}},
},
}
t.Run("valid", func(t *testing.T) {
b, err := json.Marshal(manifest)
if err != nil {
t.Fatal("unexpected error marshaling manifest", err)
}
if err := validateManifest(b); err != nil {
t.Error("manifest should be valid", err)
}
})
t.Run("invalid", func(t *testing.T) {
b, err := json.Marshal(index)
if err != nil {
t.Fatal("unexpected error marshaling index", err)
}
if err := validateManifest(b); err == nil {
t.Error("index should not be valid")
}
})
}

View file

@ -108,7 +108,7 @@ type SignedManifest struct {
// UnmarshalJSON populates a new SignedManifest struct from JSON data.
func (sm *SignedManifest) UnmarshalJSON(b []byte) error {
sm.all = make([]byte, len(b), len(b))
sm.all = make([]byte, len(b))
// store manifest and signatures in all
copy(sm.all, b)
@ -124,7 +124,7 @@ func (sm *SignedManifest) UnmarshalJSON(b []byte) error {
}
// sm.Canonical stores the canonical manifest JSON
sm.Canonical = make([]byte, len(bytes), len(bytes))
sm.Canonical = make([]byte, len(bytes))
copy(sm.Canonical, bytes)
// Unmarshal canonical JSON into Manifest object

View file

@ -58,7 +58,7 @@ func (mb *referenceManifestBuilder) Build(ctx context.Context) (distribution.Man
func (mb *referenceManifestBuilder) AppendReference(d distribution.Describable) error {
r, ok := d.(Reference)
if !ok {
return fmt.Errorf("Unable to add non-reference type to v1 builder")
return fmt.Errorf("unable to add non-reference type to v1 builder")
}
// Entries need to be prepended

View file

@ -106,7 +106,7 @@ func FromStruct(m Manifest) (*DeserializedManifest, error) {
// UnmarshalJSON populates a new Manifest struct from JSON data.
func (m *DeserializedManifest) UnmarshalJSON(b []byte) error {
m.canonical = make([]byte, len(b), len(b))
m.canonical = make([]byte, len(b))
// store manifest in canonical
copy(m.canonical, b)

View file

@ -87,7 +87,7 @@ func ManifestMediaTypes() (mediaTypes []string) {
// UnmarshalFunc implements manifest unmarshalling a given MediaType
type UnmarshalFunc func([]byte) (Manifest, Descriptor, error)
var mappings = make(map[string]UnmarshalFunc, 0)
var mappings = make(map[string]UnmarshalFunc)
// UnmarshalManifest looks up manifest unmarshal functions based on
// MediaType

View file

@ -125,15 +125,6 @@ func (b *bridge) RepoDeleted(repo reference.Named) error {
return b.sink.Write(*event)
}
func (b *bridge) createManifestEventAndWrite(action string, repo reference.Named, sm distribution.Manifest) error {
manifestEvent, err := b.createManifestEvent(action, repo, sm)
if err != nil {
return err
}
return b.sink.Write(*manifestEvent)
}
func (b *bridge) createManifestDeleteEventAndWrite(action string, repo reference.Named, dgst digest.Digest) error {
event := b.createEvent(action)
event.Target.Repository = repo.Name()

View file

@ -6,7 +6,7 @@ import (
"github.com/docker/distribution"
"github.com/docker/distribution/manifest/schema1"
"github.com/docker/distribution/reference"
"github.com/docker/distribution/registry/api/v2"
v2 "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/uuid"
"github.com/docker/libtrust"
"github.com/opencontainers/go-digest"

View file

@ -114,8 +114,7 @@ func TestEventEnvelopeJSONFormat(t *testing.T) {
prototype.Request.UserAgent = "test/0.1"
prototype.Source.Addr = "hostname.local:port"
var manifestPush Event
manifestPush = prototype
var manifestPush = prototype
manifestPush.ID = "asdf-asdf-asdf-asdf-0"
manifestPush.Target.Digest = "sha256:0123456789abcdef0"
manifestPush.Target.Length = 1
@ -124,8 +123,7 @@ func TestEventEnvelopeJSONFormat(t *testing.T) {
manifestPush.Target.Repository = "library/test"
manifestPush.Target.URL = "http://example.com/v2/library/test/manifests/latest"
var layerPush0 Event
layerPush0 = prototype
var layerPush0 = prototype
layerPush0.ID = "asdf-asdf-asdf-asdf-1"
layerPush0.Target.Digest = "sha256:3b3692957d439ac1928219a83fac91e7bf96c153725526874673ae1f2023f8d5"
layerPush0.Target.Length = 2
@ -134,8 +132,7 @@ func TestEventEnvelopeJSONFormat(t *testing.T) {
layerPush0.Target.Repository = "library/test"
layerPush0.Target.URL = "http://example.com/v2/library/test/manifests/latest"
var layerPush1 Event
layerPush1 = prototype
var layerPush1 = prototype
layerPush1.ID = "asdf-asdf-asdf-asdf-2"
layerPush1.Target.Digest = "sha256:3b3692957d439ac1928219a83fac91e7bf96c153725526874673ae1f2023f8d6"
layerPush1.Target.Length = 3

View file

@ -133,8 +133,7 @@ type headerRoundTripper struct {
}
func (hrt *headerRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
var nreq http.Request
nreq = *req
var nreq = *req
nreq.Header = make(http.Header)
merge := func(headers http.Header) {

View file

@ -136,11 +136,10 @@ func checkExerciseRepository(t *testing.T, repository distribution.Repository, r
var blobDigests []digest.Digest
blobs := repository.Blobs(ctx)
for i := 0; i < 2; i++ {
rs, ds, err := testutil.CreateRandomTarFile()
rs, dgst, err := testutil.CreateRandomTarFile()
if err != nil {
t.Fatalf("error creating test layer: %v", err)
}
dgst := digest.Digest(ds)
blobDigests = append(blobDigests, dgst)
wr, err := blobs.Create(ctx)

View file

@ -284,11 +284,6 @@ type retryingSink struct {
}
}
type retryingSinkListener interface {
active(events ...Event)
retry(events ...Event)
}
// TODO(stevvooe): We are using circuit break here, which actually doesn't
// make a whole lot of sense for this use case, since we always retry. Move
// this to use bounded exponential backoff.

View file

@ -17,4 +17,4 @@ RUN wget https://golang.org/dl/go$GOLANG_VERSION.linux-amd64.tar.gz --quiet && \
tar -C /usr/local -xzf go$GOLANG_VERSION.linux-amd64.tar.gz && \
rm go${GOLANG_VERSION}.linux-amd64.tar.gz
RUN go get github.com/axw/gocov/gocov github.com/mattn/goveralls github.com/golang/lint/golint
RUN go install github.com/axw/gocov/gocov@latest github.com/mattn/goveralls@latest github.com/golang/lint/golint@latest

View file

@ -56,6 +56,35 @@ func ParseNormalizedNamed(s string) (Named, error) {
return named, nil
}
// ParseDockerRef normalizes the image reference following the docker convention. This is added
// mainly for backward compatibility.
// The reference returned can only be either tagged or digested. For reference contains both tag
// and digest, the function returns digested reference, e.g. docker.io/library/busybox:latest@
// sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa will be returned as
// docker.io/library/busybox@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa.
func ParseDockerRef(ref string) (Named, error) {
named, err := ParseNormalizedNamed(ref)
if err != nil {
return nil, err
}
if _, ok := named.(NamedTagged); ok {
if canonical, ok := named.(Canonical); ok {
// The reference is both tagged and digested, only
// return digested.
newNamed, err := WithName(canonical.Name())
if err != nil {
return nil, err
}
newCanonical, err := WithDigest(newNamed, canonical.Digest())
if err != nil {
return nil, err
}
return newCanonical, nil
}
}
return TagNameOnly(named), nil
}
// splitDockerDomain splits a repository name to domain and remotename string.
// If no valid domain is found, the default domain is used. Repository name
// needs to be already validated before.
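
A minimal usage sketch of the `ParseDockerRef` helper added above; the inputs and expected outputs mirror the test cases that follow:

```go
package main

import (
	"fmt"

	"github.com/docker/distribution/reference"
)

func main() {
	// A bare name is normalized to the canonical docker.io form and tagged.
	named, err := reference.ParseDockerRef("busybox")
	if err != nil {
		panic(err)
	}
	fmt.Println(named) // docker.io/library/busybox:latest

	// When both a tag and a digest are present, only the digested form is kept.
	named, err = reference.ParseDockerRef("gcr.io/library/busybox:latest@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582")
	if err != nil {
		panic(err)
	}
	fmt.Println(named) // gcr.io/library/busybox@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582
}
```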

View file

@ -623,3 +623,83 @@ func TestMatch(t *testing.T) {
}
}
}
func TestParseDockerRef(t *testing.T) {
testcases := []struct {
name string
input string
expected string
}{
{
name: "nothing",
input: "busybox",
expected: "docker.io/library/busybox:latest",
},
{
name: "tag only",
input: "busybox:latest",
expected: "docker.io/library/busybox:latest",
},
{
name: "digest only",
input: "busybox@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582",
expected: "docker.io/library/busybox@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582",
},
{
name: "path only",
input: "library/busybox",
expected: "docker.io/library/busybox:latest",
},
{
name: "hostname only",
input: "docker.io/busybox",
expected: "docker.io/library/busybox:latest",
},
{
name: "no tag",
input: "docker.io/library/busybox",
expected: "docker.io/library/busybox:latest",
},
{
name: "no path",
input: "docker.io/busybox:latest",
expected: "docker.io/library/busybox:latest",
},
{
name: "no hostname",
input: "library/busybox:latest",
expected: "docker.io/library/busybox:latest",
},
{
name: "full reference with tag",
input: "docker.io/library/busybox:latest",
expected: "docker.io/library/busybox:latest",
},
{
name: "gcr reference without tag",
input: "gcr.io/library/busybox",
expected: "gcr.io/library/busybox:latest",
},
{
name: "both tag and digest",
input: "gcr.io/library/busybox:latest@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582",
expected: "gcr.io/library/busybox@sha256:e6693c20186f837fc393390135d8a598a96a833917917789d63766cab6c59582",
},
}
for _, test := range testcases {
t.Run(test.name, func(t *testing.T) {
normalized, err := ParseDockerRef(test.input)
if err != nil {
t.Fatal(err)
}
output := normalized.String()
if output != test.expected {
t.Fatalf("expected %q to be parsed as %v, got %v", test.input, test.expected, output)
}
_, err = Parse(output)
if err != nil {
t.Fatalf("%q should be a valid reference, but got an error: %v", output, err)
}
})
}
}

View file

@ -3,13 +3,13 @@
//
// Grammar
//
// reference := name [ ":" tag ] [ "@" digest ]
// reference := name [ ":" tag ] [ "@" digest ]
// name := [domain '/'] path-component ['/' path-component]*
// domain := domain-component ['.' domain-component]* [':' port-number]
// domain-component := /([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])/
// port-number := /[0-9]+/
// path-component := alpha-numeric [separator alpha-numeric]*
// alpha-numeric := /[a-z0-9]+/
// alpha-numeric := /[a-z0-9]+/
// separator := /[_.]|__|[-]*/
//
// tag := /[\w][\w.-]{0,127}/
@ -205,7 +205,7 @@ func Parse(s string) (Reference, error) {
var repo repository
nameMatch := anchoredNameRegexp.FindStringSubmatch(matches[1])
if nameMatch != nil && len(nameMatch) == 3 {
if len(nameMatch) == 3 {
repo.domain = nameMatch[1]
repo.path = nameMatch[2]
} else {

View file

@ -639,7 +639,7 @@ func TestParseNamed(t *testing.T) {
failf("error parsing name: %s", err)
continue
} else if err == nil && testcase.err != nil {
failf("parsing succeded: expected error %v", testcase.err)
failf("parsing succeeded: expected error %v", testcase.err)
continue
} else if err != testcase.err {
failf("unexpected error %v, expected %v", err, testcase.err)

View file

@ -207,11 +207,11 @@ func (errs Errors) MarshalJSON() ([]byte, error) {
for _, daErr := range errs {
var err Error
switch daErr.(type) {
switch daErr := daErr.(type) {
case ErrorCode:
err = daErr.(ErrorCode).WithDetail(nil)
err = daErr.WithDetail(nil)
case Error:
err = daErr.(Error)
err = daErr
default:
err = ErrorCodeUnknown.WithDetail(daErr)

View file

@ -134,6 +134,19 @@ var (
},
}
invalidPaginationResponseDescriptor = ResponseDescriptor{
Name: "Invalid pagination number",
Description: "The received parameter n was invalid in some way, as described by the error code. The client should resolve the issue and retry the request.",
StatusCode: http.StatusBadRequest,
Body: BodyDescriptor{
ContentType: "application/json",
Format: errorsBody,
},
ErrorCodes: []errcode.ErrorCode{
ErrorCodePaginationNumberInvalid,
},
}
repositoryNotFoundResponseDescriptor = ResponseDescriptor{
Name: "No Such Repository Error",
StatusCode: http.StatusNotFound,
@ -490,6 +503,7 @@ var routeDescriptors = []RouteDescriptor{
},
},
Failures: []ResponseDescriptor{
invalidPaginationResponseDescriptor,
unauthorizedResponseDescriptor,
repositoryNotFoundResponseDescriptor,
deniedResponseDescriptor,
@ -1578,6 +1592,9 @@ var routeDescriptors = []RouteDescriptor{
},
},
},
Failures: []ResponseDescriptor{
invalidPaginationResponseDescriptor,
},
},
},
},

View file

@ -133,4 +133,13 @@ var (
longer proceed.`,
HTTPStatusCode: http.StatusNotFound,
})
ErrorCodePaginationNumberInvalid = errcode.Register(errGroup, errcode.ErrorDescriptor{
Value: "PAGINATION_NUMBER_INVALID",
Message: "invalid number of results requested",
Description: `Returned when the "n" parameter (number of results
to return) is not an integer, "n" is negative or "n" is bigger than
the maximum allowed.`,
HTTPStatusCode: http.StatusBadRequest,
})
)

View file

@ -252,15 +252,3 @@ func appendValuesURL(u *url.URL, values ...url.Values) *url.URL {
u.RawQuery = merged.Encode()
return u
}
// appendValues appends the parameters to the url. Panics if the string is not
// a url.
func appendValues(u string, values ...url.Values) string {
up, err := url.Parse(u)
if err != nil {
panic(err) // should never happen
}
return appendValuesURL(up, values...).String()
}

View file

@ -182,11 +182,6 @@ func TestURLBuilderWithPrefix(t *testing.T) {
doTest(false)
}
type builderFromRequestTestCase struct {
request *http.Request
base string
}
func TestBuilderFromRequest(t *testing.T) {
u, err := url.Parse("http://example.com")
if err != nil {

View file

@ -8,28 +8,27 @@
// An implementation registers its access controller by name with a constructor
// which accepts an options map for configuring the access controller.
//
// options := map[string]interface{}{"sillySecret": "whysosilly?"}
// accessController, _ := auth.GetAccessController("silly", options)
// options := map[string]interface{}{"sillySecret": "whysosilly?"}
// accessController, _ := auth.GetAccessController("silly", options)
//
// This `accessController` can then be used in a request handler like so:
//
// func updateOrder(w http.ResponseWriter, r *http.Request) {
// orderNumber := r.FormValue("orderNumber")
// resource := auth.Resource{Type: "customerOrder", Name: orderNumber}
// access := auth.Access{Resource: resource, Action: "update"}
// func updateOrder(w http.ResponseWriter, r *http.Request) {
// orderNumber := r.FormValue("orderNumber")
// resource := auth.Resource{Type: "customerOrder", Name: orderNumber}
// access := auth.Access{Resource: resource, Action: "update"}
//
// if ctx, err := accessController.Authorized(ctx, access); err != nil {
// if challenge, ok := err.(auth.Challenge) {
// // Let the challenge write the response.
// challenge.SetHeaders(r, w)
// w.WriteHeader(http.StatusUnauthorized)
// return
// } else {
// // Some other error.
// }
// if ctx, err := accessController.Authorized(ctx, access); err != nil {
// if challenge, ok := err.(auth.Challenge) {
// // Let the challenge write the response.
// challenge.SetHeaders(r, w)
// w.WriteHeader(http.StatusUnauthorized)
// return
// } else {
// // Some other error.
// }
// }
//
// }
// }
package auth
import (

View file

@ -162,11 +162,14 @@ func checkOptions(options map[string]interface{}) (tokenAccessOptions, error) {
opts.realm, opts.issuer, opts.service, opts.rootCertBundle = vals[0], vals[1], vals[2], vals[3]
autoRedirect, ok := options["autoredirect"].(bool)
if !ok {
return opts, fmt.Errorf("token auth requires a valid option bool: autoredirect")
autoRedirectVal, ok := options["autoredirect"]
if ok {
autoRedirect, ok := autoRedirectVal.(bool)
if !ok {
return opts, fmt.Errorf("token auth requires a valid option bool: autoredirect")
}
opts.autoRedirect = autoRedirect
}
opts.autoRedirect = autoRedirect
return opts, nil
}
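
To illustrate the effect of this change, here is a hedged sketch (the option keys and file paths are illustrative assumptions, not taken from the diff): constructing the token access controller without an `autoredirect` option no longer fails with the "requires a valid option bool" error.

```go
package main

import (
	"log"

	"github.com/docker/distribution/registry/auth"
	// Blank import registers the "token" access controller.
	_ "github.com/docker/distribution/registry/auth/token"
)

func main() {
	// Option keys and paths below are illustrative placeholders.
	options := map[string]interface{}{
		"realm":          "https://auth.example.com/token",
		"issuer":         "auth.example.com",
		"service":        "registry.example.com",
		"rootcertbundle": "/etc/registry/root.crt",
		// "autoredirect" is now optional; when supplied it must still be a bool.
	}

	accessController, err := auth.GetAccessController("token", options)
	if err != nil {
		// Before this change, omitting "autoredirect" failed with:
		// "token auth requires a valid option bool: autoredirect"
		// (With placeholder paths this sketch will still fail later, when the
		// root certificate bundle cannot be loaded.)
		log.Fatal(err)
	}
	_ = accessController
}
```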

View file

@ -185,13 +185,15 @@ func (t *Token) Verify(verifyOpts VerifyOptions) error {
// VerifySigningKey attempts to get the key which was used to sign this token.
// The token header should contain either of these 3 fields:
// `x5c` - The x509 certificate chain for the signing key. Needs to be
// verified.
// `jwk` - The JSON Web Key representation of the signing key.
// May contain its own `x5c` field which needs to be verified.
// `kid` - The unique identifier for the key. This library interprets it
// as a libtrust fingerprint. The key itself can be looked up in
// the trustedKeys field of the given verify options.
//
// `x5c` - The x509 certificate chain for the signing key. Needs to be
// verified.
// `jwk` - The JSON Web Key representation of the signing key.
// May contain its own `x5c` field which needs to be verified.
// `kid` - The unique identifier for the key. This library interprets it
// as a libtrust fingerprint. The key itself can be looked up in
// the trustedKeys field of the given verify options.
//
// Each of these methods are tried in that order of preference until the
// signing key is found or an error is returned.
func (t *Token) VerifySigningKey(verifyOpts VerifyOptions) (signingKey libtrust.PublicKey, err error) {

View file

@ -307,10 +307,10 @@ func writeTempRootCerts(rootKeys []libtrust.PrivateKey) (filename string, err er
// TestAccessController tests complete integration of the token auth package.
// It starts by mocking the options for a token auth accessController which
// it creates. It then tries a few mock requests:
// - don't supply a token; should error with challenge
// - supply an invalid token; should error with challenge
// - supply a token with insufficient access; should error with challenge
// - supply a valid token; should not error
// - don't supply a token; should error with challenge
// - supply an invalid token; should error with challenge
// - supply a token with insufficient access; should error with challenge
// - supply a valid token; should not error
func TestAccessController(t *testing.T) {
// Make 2 keys; only the first is to be a trusted root key.
rootKeys, err := makeRootKeys(2)

View file

@ -117,8 +117,8 @@ func init() {
var t octetType
isCtl := c <= 31 || c == 127
isChar := 0 <= c && c <= 127
isSeparator := strings.IndexRune(" \t\"(),/:;<=>?@[]\\{}", rune(c)) >= 0
if strings.IndexRune(" \t\r\n", rune(c)) >= 0 {
isSeparator := strings.ContainsRune(" \t\"(),/:;<=>?@[]\\{}", rune(c))
if strings.ContainsRune(" \t\r\n", rune(c)) {
t |= isSpace
}
if isChar && !isCtl && !isSeparator {

View file

@ -466,7 +466,7 @@ func TestEndpointAuthorizeTokenBasic(t *testing.T) {
},
})
authenicate1 := fmt.Sprintf("Basic realm=localhost")
authenicate1 := "Basic realm=localhost"
basicCheck := func(a string) bool {
return a == fmt.Sprintf("Basic %s", basicAuth(username, password))
}
@ -546,7 +546,7 @@ func TestEndpointAuthorizeTokenBasicWithExpiresIn(t *testing.T) {
},
})
authenicate1 := fmt.Sprintf("Basic realm=localhost")
authenicate1 := "Basic realm=localhost"
tokenExchanges := 0
basicCheck := func(a string) bool {
tokenExchanges = tokenExchanges + 1
@ -706,7 +706,7 @@ func TestEndpointAuthorizeTokenBasicWithExpiresInAndIssuedAt(t *testing.T) {
},
})
authenicate1 := fmt.Sprintf("Basic realm=localhost")
authenicate1 := "Basic realm=localhost"
tokenExchanges := 0
basicCheck := func(a string) bool {
tokenExchanges = tokenExchanges + 1
@ -835,7 +835,7 @@ func TestEndpointAuthorizeBasic(t *testing.T) {
username := "user1"
password := "funSecretPa$$word"
authenicate := fmt.Sprintf("Basic realm=localhost")
authenicate := "Basic realm=localhost"
validCheck := func(a string) bool {
return a == fmt.Sprintf("Basic %s", basicAuth(username, password))
}

View file

@ -8,7 +8,7 @@ import (
"github.com/docker/distribution"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
v2 "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/testutil"
)

View file

@ -55,6 +55,8 @@ func parseHTTPErrorResponse(statusCode int, r io.Reader) error {
switch statusCode {
case http.StatusUnauthorized:
return errcode.ErrorCodeUnauthorized.WithMessage(detailsErr.Details)
case http.StatusForbidden:
return errcode.ErrorCodeDenied.WithMessage(detailsErr.Details)
case http.StatusTooManyRequests:
return errcode.ErrorCodeTooManyRequests.WithMessage(detailsErr.Details)
default:

View file

@ -102,3 +102,18 @@ func TestHandleErrorResponseUnexpectedStatusCode501(t *testing.T) {
t.Errorf("Expected \"%s\", got: \"%s\"", expectedMsg, err.Error())
}
}
func TestHandleErrorResponseInsufficientPrivileges403(t *testing.T) {
json := `{"details":"requesting higher privileges than access token allows"}`
response := &http.Response{
Status: "403 Forbidden",
StatusCode: 403,
Body: nopCloser{bytes.NewBufferString(json)},
}
err := HandleErrorResponse(response)
expectedMsg := "denied: requesting higher privileges than access token allows"
if !strings.Contains(err.Error(), expectedMsg) {
t.Errorf("Expected \"%s\", got: \"%s\"", expectedMsg, err.Error())
}
}

View file

@ -16,7 +16,7 @@ import (
"github.com/docker/distribution"
"github.com/docker/distribution/reference"
"github.com/docker/distribution/registry/api/v2"
v2 "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/client/transport"
"github.com/docker/distribution/registry/storage/cache"
"github.com/docker/distribution/registry/storage/cache/memory"
@ -114,9 +114,7 @@ func (r *registry) Repositories(ctx context.Context, entries []string, last stri
return 0, err
}
for cnt := range ctlg.Repositories {
entries[cnt] = ctlg.Repositories[cnt]
}
copy(entries, ctlg.Repositories)
numFilled = len(ctlg.Repositories)
link := resp.Header.Get("Link")
@ -736,7 +734,12 @@ func (bs *blobs) Create(ctx context.Context, options ...distribution.BlobCreateO
return nil, err
}
resp, err := bs.client.Post(u, "", nil)
req, err := http.NewRequest("POST", u, nil)
if err != nil {
return nil, err
}
resp, err := bs.client.Do(req)
if err != nil {
return nil, err
}

View file

@ -22,7 +22,7 @@ import (
"github.com/docker/distribution/manifest/schema1"
"github.com/docker/distribution/reference"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
v2 "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/testutil"
"github.com/docker/distribution/uuid"
"github.com/docker/libtrust"
@ -152,7 +152,7 @@ func TestBlobFetch(t *testing.T) {
if err != nil {
t.Fatal(err)
}
if bytes.Compare(b, b1) != 0 {
if !bytes.Equal(b, b1) {
t.Fatalf("Wrong bytes values fetched: [%d]byte != [%d]byte", len(b), len(b1))
}

View file

@ -180,7 +180,6 @@ func (hrs *httpReadSeeker) reader() (io.Reader, error) {
// context.GetLogger(hrs.context).Infof("Range: %s", req.Header.Get("Range"))
}
req.Header.Add("Accept-Encoding", "identity")
resp, err := hrs.client.Do(req)
if err != nil {
return nil, err

View file

@ -28,7 +28,7 @@ import (
"github.com/docker/distribution/manifest/schema2"
"github.com/docker/distribution/reference"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
v2 "github.com/docker/distribution/registry/api/v2"
storagedriver "github.com/docker/distribution/registry/storage/driver"
"github.com/docker/distribution/registry/storage/driver/factory"
_ "github.com/docker/distribution/registry/storage/driver/testdriver"
@ -81,21 +81,23 @@ func TestCheckAPI(t *testing.T) {
// TestCatalogAPI tests the /v2/_catalog endpoint
func TestCatalogAPI(t *testing.T) {
chunkLen := 2
env := newTestEnv(t, false)
defer env.Shutdown()
values := url.Values{
"last": []string{""},
"n": []string{strconv.Itoa(chunkLen)}}
maxEntries := env.config.Catalog.MaxEntries
allCatalog := []string{
"foo/aaaa", "foo/bbbb", "foo/cccc", "foo/dddd", "foo/eeee", "foo/ffff",
}
catalogURL, err := env.builder.BuildCatalogURL(values)
chunkLen := maxEntries - 1
catalogURL, err := env.builder.BuildCatalogURL()
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
// -----------------------------------
// try to get an empty catalog
// Case No. 1: Empty catalog
resp, err := http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
@ -113,23 +115,22 @@ func TestCatalogAPI(t *testing.T) {
t.Fatalf("error decoding fetched manifest: %v", err)
}
// we haven't pushed anything to the registry yet
// No images pushed = no image returned
if len(ctlg.Repositories) != 0 {
t.Fatalf("repositories has unexpected values")
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", 0, len(ctlg.Repositories))
}
// No pagination should be returned
if resp.Header.Get("Link") != "" {
t.Fatalf("repositories has more data when none expected")
}
// -----------------------------------
// push something to the registry and try again
images := []string{"foo/aaaa", "foo/bbbb", "foo/cccc"}
for _, image := range images {
for _, image := range allCatalog {
createRepository(env, t, image, "sometag")
}
// -----------------------------------
// Case No. 2: Catalog populated & n is not provided nil (n internally will be min(100, maxEntries))
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
@ -143,27 +144,30 @@ func TestCatalogAPI(t *testing.T) {
t.Fatalf("error decoding fetched manifest: %v", err)
}
if len(ctlg.Repositories) != chunkLen {
t.Fatalf("repositories has unexpected values")
// it must match max entries
if len(ctlg.Repositories) != maxEntries {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", maxEntries, len(ctlg.Repositories))
}
for _, image := range images[:chunkLen] {
// it must return the first maxEntries entries from the catalog
for _, image := range allCatalog[:maxEntries] {
if !contains(ctlg.Repositories, image) {
t.Fatalf("didn't find our repository '%s' in the catalog", image)
}
}
// fail if there's no pagination
link := resp.Header.Get("Link")
if link == "" {
t.Fatalf("repositories has less data than expected")
}
newValues := checkLink(t, link, chunkLen, ctlg.Repositories[len(ctlg.Repositories)-1])
// -----------------------------------
// get the last chunk of data
// Case No. 2.1: Second page (n internally will be min(100, maxEntries))
catalogURL, err = env.builder.BuildCatalogURL(newValues)
// build pagination link
values := checkLink(t, link, maxEntries, ctlg.Repositories[len(ctlg.Repositories)-1])
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
@ -181,18 +185,269 @@ func TestCatalogAPI(t *testing.T) {
t.Fatalf("error decoding fetched manifest: %v", err)
}
if len(ctlg.Repositories) != 1 {
t.Fatalf("repositories has unexpected values")
expectedRemainder := len(allCatalog) - maxEntries
if len(ctlg.Repositories) != expectedRemainder {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", expectedRemainder, len(ctlg.Repositories))
}
lastImage := images[len(images)-1]
if !contains(ctlg.Repositories, lastImage) {
t.Fatalf("didn't find our repository '%s' in the catalog", lastImage)
// -----------------------------------
// Case No. 3: request n = maxentries
values = url.Values{
"last": []string{""},
"n": []string{strconv.Itoa(maxEntries)},
}
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusOK)
dec = json.NewDecoder(resp.Body)
if err = dec.Decode(&ctlg); err != nil {
t.Fatalf("error decoding fetched manifest: %v", err)
}
if len(ctlg.Repositories) != maxEntries {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", maxEntries, len(ctlg.Repositories))
}
// fail if there's no pagination
link = resp.Header.Get("Link")
if link != "" {
t.Fatalf("catalog has unexpected data")
if link == "" {
t.Fatalf("repositories has less data than expected")
}
// -----------------------------------
// Case No. 3.1: Second (last) page
// build pagination link
values = checkLink(t, link, maxEntries, ctlg.Repositories[len(ctlg.Repositories)-1])
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusOK)
dec = json.NewDecoder(resp.Body)
if err = dec.Decode(&ctlg); err != nil {
t.Fatalf("error decoding fetched manifest: %v", err)
}
expectedRemainder = len(allCatalog) - maxEntries
if len(ctlg.Repositories) != expectedRemainder {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", expectedRemainder, len(ctlg.Repositories))
}
// -----------------------------------
// Case No. 4: request n < maxentries
values = url.Values{
"n": []string{strconv.Itoa(chunkLen)},
}
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusOK)
dec = json.NewDecoder(resp.Body)
if err = dec.Decode(&ctlg); err != nil {
t.Fatalf("error decoding fetched manifest: %v", err)
}
// returns the requested amount
if len(ctlg.Repositories) != chunkLen {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", expectedRemainder, len(ctlg.Repositories))
}
// fail if there's no pagination
link = resp.Header.Get("Link")
if link == "" {
t.Fatalf("repositories has less data than expected")
}
// -----------------------------------
// Case No. 4.1: request n < maxentries (second page)
// build pagination link
values = checkLink(t, link, chunkLen, ctlg.Repositories[len(ctlg.Repositories)-1])
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusOK)
dec = json.NewDecoder(resp.Body)
if err = dec.Decode(&ctlg); err != nil {
t.Fatalf("error decoding fetched manifest: %v", err)
}
expectedRemainder = len(allCatalog) - chunkLen
if len(ctlg.Repositories) != expectedRemainder {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", expectedRemainder, len(ctlg.Repositories))
}
// -----------------------------------
// Case No. 5: request n > maxentries | return err: ErrorCodePaginationNumberInvalid
values = url.Values{
"n": []string{strconv.Itoa(maxEntries + 10)},
}
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusBadRequest)
checkBodyHasErrorCodes(t, "invalid number of results requested", resp, v2.ErrorCodePaginationNumberInvalid)
// -----------------------------------
// Case No. 6: request n > maxentries but <= total catalog | return err: ErrorCodePaginationNumberInvalid
values = url.Values{
"n": []string{strconv.Itoa(len(allCatalog))},
}
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusBadRequest)
checkBodyHasErrorCodes(t, "invalid number of results requested", resp, v2.ErrorCodePaginationNumberInvalid)
// -----------------------------------
// Case No. 7: n = 0 | n is set to max(0, min(defaultEntries, maxEntries))
values = url.Values{
"n": []string{"0"},
}
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusOK)
dec = json.NewDecoder(resp.Body)
if err = dec.Decode(&ctlg); err != nil {
t.Fatalf("error decoding fetched manifest: %v", err)
}
// it must be empty
if len(ctlg.Repositories) != 0 {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", 0, len(ctlg.Repositories))
}
// -----------------------------------
// Case No. 8: n = -1 | n is set to max(0, min(defaultEntries, maxEntries))
values = url.Values{
"n": []string{"-1"},
}
catalogURL, err = env.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusOK)
dec = json.NewDecoder(resp.Body)
if err = dec.Decode(&ctlg); err != nil {
t.Fatalf("error decoding fetched manifest: %v", err)
}
// it must match max entries
if len(ctlg.Repositories) != maxEntries {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", expectedRemainder, len(ctlg.Repositories))
}
// -----------------------------------
// Case No. 9: n = 5, max = 5, total catalog = 4
values = url.Values{
"n": []string{strconv.Itoa(maxEntries)},
}
envWithLessImages := newTestEnv(t, false)
for _, image := range allCatalog[0:(maxEntries - 1)] {
createRepository(envWithLessImages, t, image, "sometag")
}
catalogURL, err = envWithLessImages.builder.BuildCatalogURL(values)
if err != nil {
t.Fatalf("unexpected error building catalog url: %v", err)
}
resp, err = http.Get(catalogURL)
if err != nil {
t.Fatalf("unexpected error issuing request: %v", err)
}
defer resp.Body.Close()
checkResponse(t, "issuing catalog api check", resp, http.StatusOK)
dec = json.NewDecoder(resp.Body)
if err = dec.Decode(&ctlg); err != nil {
t.Fatalf("error decoding fetched manifest: %v", err)
}
// it must match max entries
if len(ctlg.Repositories) != maxEntries-1 {
t.Fatalf("repositories returned unexpected entries (expected: %d, returned: %d)", maxEntries-1, len(ctlg.Repositories))
}
}
@ -207,7 +462,7 @@ func checkLink(t *testing.T, urlStr string, numEntries int, last string) url.Val
urlValues := linkURL.Query()
if urlValues.Get("n") != strconv.Itoa(numEntries) {
t.Fatalf("Catalog link entry size is incorrect")
t.Fatalf("Catalog link entry size is incorrect (expected: %v, returned: %v)", urlValues.Get("n"), strconv.Itoa(numEntries))
}
if urlValues.Get("last") != last {
@ -959,7 +1214,6 @@ func testManifestWithStorageError(t *testing.T, env *testEnv, imageName referenc
defer resp.Body.Close()
checkResponse(t, "getting non-existent manifest", resp, expectedStatusCode)
checkBodyHasErrorCodes(t, "getting non-existent manifest", resp, expectedErrorCode)
return
}
func testManifestAPISchema1(t *testing.T, env *testEnv, imageName reference.Named) manifestArgs {
@ -1066,12 +1320,11 @@ func testManifestAPISchema1(t *testing.T, env *testEnv, imageName reference.Name
expectedLayers := make(map[digest.Digest]io.ReadSeeker)
for i := range unsignedManifest.FSLayers {
rs, dgstStr, err := testutil.CreateRandomTarFile()
rs, dgst, err := testutil.CreateRandomTarFile()
if err != nil {
t.Fatalf("error creating random layer %d: %v", i, err)
}
dgst := digest.Digest(dgstStr)
expectedLayers[dgst] = rs
unsignedManifest.FSLayers[i].BlobSum = dgst
@ -1405,12 +1658,11 @@ func testManifestAPISchema2(t *testing.T, env *testEnv, imageName reference.Name
expectedLayers := make(map[digest.Digest]io.ReadSeeker)
for i := range manifest.Layers {
rs, dgstStr, err := testutil.CreateRandomTarFile()
rs, dgst, err := testutil.CreateRandomTarFile()
if err != nil {
t.Fatalf("error creating random layer %d: %v", i, err)
}
dgst := digest.Digest(dgstStr)
expectedLayers[dgst] = rs
manifest.Layers[i].Digest = dgst
@ -2026,6 +2278,9 @@ func newTestEnvMirror(t *testing.T, deleteEnabled bool) *testEnv {
Proxy: configuration.Proxy{
RemoteURL: "http://example.com",
},
Catalog: configuration.Catalog{
MaxEntries: 5,
},
}
config.Compatibility.Schema1.Enabled = true
@ -2042,6 +2297,9 @@ func newTestEnv(t *testing.T, deleteEnabled bool) *testEnv {
"enabled": false,
}},
},
Catalog: configuration.Catalog{
MaxEntries: 5,
},
}
config.Compatibility.Schema1.Enabled = true
@ -2294,7 +2552,6 @@ func checkResponse(t *testing.T, msg string, resp *http.Response, expectedStatus
if resp.StatusCode != expectedStatus {
t.Logf("unexpected status %s: %v != %v", msg, resp.StatusCode, expectedStatus)
maybeDumpResponse(t, resp)
t.FailNow()
}
@ -2357,7 +2614,7 @@ func checkBodyHasErrorCodes(t *testing.T, msg string, resp *http.Response, error
// Ensure that counts of expected errors were all non-zero
for code := range expected {
if counts[code] == 0 {
t.Fatalf("expected error code %v not encounterd during %s: %s", code, msg, string(p))
t.Fatalf("expected error code %v not encountered during %s: %s", code, msg, string(p))
}
}
@ -2432,11 +2689,10 @@ func createRepository(env *testEnv, t *testing.T, imageName string, tag string)
expectedLayers := make(map[digest.Digest]io.ReadSeeker)
for i := range unsignedManifest.FSLayers {
rs, dgstStr, err := testutil.CreateRandomTarFile()
rs, dgst, err := testutil.CreateRandomTarFile()
if err != nil {
t.Fatalf("error creating random layer %d: %v", i, err)
}
dgst := digest.Digest(dgstStr)
expectedLayers[dgst] = rs
unsignedManifest.FSLayers[i].BlobSum = dgst

View file

@ -2,10 +2,11 @@ package handlers
import (
"context"
cryptorand "crypto/rand"
"crypto/rand"
"expvar"
"fmt"
"math/rand"
"math"
"math/big"
"net"
"net/http"
"net/url"
@ -24,7 +25,7 @@ import (
"github.com/docker/distribution/notifications"
"github.com/docker/distribution/reference"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
v2 "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/auth"
registrymiddleware "github.com/docker/distribution/registry/middleware/registry"
repositorymiddleware "github.com/docker/distribution/registry/middleware/repository"
@ -610,7 +611,7 @@ func (app *App) configureLogHook(configuration *configuration.Configuration) {
func (app *App) configureSecret(configuration *configuration.Configuration) {
if configuration.HTTP.Secret == "" {
var secretBytes [randomSecretSize]byte
if _, err := cryptorand.Read(secretBytes[:]); err != nil {
if _, err := rand.Read(secretBytes[:]); err != nil {
panic(fmt.Sprintf("could not generate random bytes for HTTP secret: %v", err))
}
configuration.HTTP.Secret = string(secretBytes[:])
@ -753,20 +754,18 @@ func (app *App) logError(ctx context.Context, errors errcode.Errors) {
for _, e1 := range errors {
var c context.Context
switch e1.(type) {
switch e := e1.(type) {
case errcode.Error:
e, _ := e1.(errcode.Error)
c = context.WithValue(ctx, errCodeKey{}, e.Code)
c = context.WithValue(c, errMessageKey{}, e.Message)
c = context.WithValue(c, errDetailKey{}, e.Detail)
case errcode.ErrorCode:
e, _ := e1.(errcode.ErrorCode)
c = context.WithValue(ctx, errCodeKey{}, e)
c = context.WithValue(c, errMessageKey{}, e.Message())
default:
// just normal go 'error'
c = context.WithValue(ctx, errCodeKey{}, errcode.ErrorCodeUnknown)
c = context.WithValue(c, errMessageKey{}, e1.Error())
c = context.WithValue(c, errMessageKey{}, e.Error())
}
c = dcontext.WithLogger(c, dcontext.GetLogger(c,
@ -1062,8 +1061,13 @@ func startUploadPurger(ctx context.Context, storageDriver storagedriver.StorageD
}
go func() {
rand.Seed(time.Now().Unix())
jitter := time.Duration(rand.Int()%60) * time.Minute
randInt, err := rand.Int(rand.Reader, new(big.Int).SetInt64(math.MaxInt64))
if err != nil {
log.Infof("Failed to generate random jitter: %v", err)
// sleep 30min for failure case
randInt = big.NewInt(30)
}
jitter := time.Duration(randInt.Int64()%60) * time.Minute
log.Infof("Starting upload purge in %s", jitter)
time.Sleep(jitter)

View file

@ -11,7 +11,7 @@ import (
"github.com/docker/distribution/configuration"
"github.com/docker/distribution/context"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
v2 "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/auth"
_ "github.com/docker/distribution/registry/auth/silly"
"github.com/docker/distribution/registry/storage"

View file

@ -1,3 +1,4 @@
//go:build go1.4
// +build go1.4
package handlers

View file

@ -1,3 +1,4 @@
//go:build !go1.4
// +build !go1.4
package handlers

View file

@ -6,7 +6,7 @@ import (
"github.com/docker/distribution"
"github.com/docker/distribution/context"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
v2 "github.com/docker/distribution/registry/api/v2"
"github.com/gorilla/handlers"
"github.com/opencontainers/go-digest"
)

View file

@ -9,7 +9,7 @@ import (
dcontext "github.com/docker/distribution/context"
"github.com/docker/distribution/reference"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
v2 "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/storage"
"github.com/gorilla/handlers"
"github.com/opencontainers/go-digest"
@ -172,7 +172,7 @@ func (buh *blobUploadHandler) PatchBlobData(w http.ResponseWriter, r *http.Reque
ct := r.Header.Get("Content-Type")
if ct != "" && ct != "application/octet-stream" {
buh.Errors = append(buh.Errors, errcode.ErrorCodeUnknown.WithDetail(fmt.Errorf("Bad Content-Type")))
buh.Errors = append(buh.Errors, errcode.ErrorCodeUnknown.WithDetail(fmt.Errorf("bad Content-Type")))
// TODO(dmcgowan): encode error
return
}

View file

@ -9,11 +9,13 @@ import (
"strconv"
"github.com/docker/distribution/registry/api/errcode"
v2 "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/storage/driver"
"github.com/gorilla/handlers"
)
const maximumReturnedEntries = 100
const defaultReturnedEntries = 100
func catalogDispatcher(ctx *Context, r *http.Request) http.Handler {
catalogHandler := &catalogHandler{
@ -38,29 +40,55 @@ func (ch *catalogHandler) GetCatalog(w http.ResponseWriter, r *http.Request) {
q := r.URL.Query()
lastEntry := q.Get("last")
maxEntries, err := strconv.Atoi(q.Get("n"))
if err != nil || maxEntries < 0 {
maxEntries = maximumReturnedEntries
entries := defaultReturnedEntries
maximumConfiguredEntries := ch.App.Config.Catalog.MaxEntries
// parse n, if n unparseable, or negative assign it to defaultReturnedEntries
if n := q.Get("n"); n != "" {
parsedMax, err := strconv.Atoi(n)
if err == nil {
if parsedMax > maximumConfiguredEntries {
ch.Errors = append(ch.Errors, v2.ErrorCodePaginationNumberInvalid.WithDetail(map[string]int{"n": parsedMax}))
return
} else if parsedMax >= 0 {
entries = parsedMax
}
}
}
repos := make([]string, maxEntries)
// then enforce entries to be between 0 & maximumConfiguredEntries
// max(0, min(entries, maximumConfiguredEntries))
if entries < 0 || entries > maximumConfiguredEntries {
entries = maximumConfiguredEntries
}
filled, err := ch.App.registry.Repositories(ch.Context, repos, lastEntry)
_, pathNotFound := err.(driver.PathNotFoundError)
repos := make([]string, entries)
filled := 0
if err == io.EOF || pathNotFound {
// entries is guaranteed to be >= 0 and < maximumConfiguredEntries
if entries == 0 {
moreEntries = false
} else if err != nil {
ch.Errors = append(ch.Errors, errcode.ErrorCodeUnknown.WithDetail(err))
return
} else {
returnedRepositories, err := ch.App.registry.Repositories(ch.Context, repos, lastEntry)
if err != nil {
_, pathNotFound := err.(driver.PathNotFoundError)
if err != io.EOF && !pathNotFound {
ch.Errors = append(ch.Errors, errcode.ErrorCodeUnknown.WithDetail(err))
return
}
// err is either io.EOF or not PathNotFoundError
moreEntries = false
}
filled = returnedRepositories
}
w.Header().Set("Content-Type", "application/json; charset=utf-8")
// Add a link header if there are more entries to retrieve
if moreEntries {
lastEntry = repos[len(repos)-1]
urlStr, err := createLinkEntry(r.URL.String(), maxEntries, lastEntry)
lastEntry = repos[filled-1]
urlStr, err := createLinkEntry(r.URL.String(), entries, lastEntry)
if err != nil {
ch.Errors = append(ch.Errors, errcode.ErrorCodeUnknown.WithDetail(err))
return
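
The parameter handling above reduces to a small clamping rule. Below is a standalone sketch (a hypothetical helper, not code from this change) of how `n` is resolved against the configured maximum:

```go
package main

import (
	"fmt"
	"strconv"
)

// resolvePageSize mirrors the handler's rules: an unparseable or negative n
// falls back to the default, n greater than the configured maximum is an
// error, and the result is clamped to [0, maxEntries].
func resolvePageSize(raw string, defaultEntries, maxEntries int) (int, error) {
	entries := defaultEntries
	if raw != "" {
		if parsed, err := strconv.Atoi(raw); err == nil {
			if parsed > maxEntries {
				return 0, fmt.Errorf("invalid number of results requested: %d", parsed)
			}
			if parsed >= 0 {
				entries = parsed
			}
		}
	}
	if entries < 0 || entries > maxEntries {
		entries = maxEntries
	}
	return entries, nil
}

func main() {
	fmt.Println(resolvePageSize("", 100, 5))   // 5 <nil>  (default clamped to the maximum)
	fmt.Println(resolvePageSize("2", 100, 5))  // 2 <nil>
	fmt.Println(resolvePageSize("-1", 100, 5)) // 5 <nil>  (negative n falls back, then clamps)
	fmt.Println(resolvePageSize("7", 100, 5))  // 0 invalid number of results requested: 7
}
```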

View file

@ -8,7 +8,7 @@ import (
"github.com/docker/distribution"
dcontext "github.com/docker/distribution/context"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
v2 "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/auth"
"github.com/opencontainers/go-digest"
)

View file

@ -20,7 +20,7 @@ type logHook struct {
func (hook *logHook) Fire(entry *logrus.Entry) error {
addr := strings.Split(hook.Mail.Addr, ":")
if len(addr) != 2 {
return errors.New("Invalid Mail Address")
return errors.New("invalid Mail Address")
}
host := addr[0]
subject := fmt.Sprintf("[%s] %s: %s", entry.Level, host, entry.Message)
@ -37,7 +37,7 @@ func (hook *logHook) Fire(entry *logrus.Entry) error {
if err := t.Execute(b, entry); err != nil {
return err
}
body := fmt.Sprintf("%s", b)
body := b.String()
return hook.Mail.sendMail(subject, body)
}

View file

@ -17,7 +17,7 @@ type mailer struct {
func (mail *mailer) sendMail(subject, message string) error {
addr := strings.Split(mail.Addr, ":")
if len(addr) != 2 {
return errors.New("Invalid Mail Address")
return errors.New("invalid Mail Address")
}
host := addr[0]
msg := []byte("To:" + strings.Join(mail.To, ";") +

View file

@ -14,11 +14,11 @@ import (
"github.com/docker/distribution/manifest/schema2"
"github.com/docker/distribution/reference"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
v2 "github.com/docker/distribution/registry/api/v2"
"github.com/docker/distribution/registry/auth"
"github.com/gorilla/handlers"
"github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/specs-go/v1"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
)
// These constants determine which architecture and OS to choose from a


@ -6,7 +6,7 @@ import (
"github.com/docker/distribution"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/distribution/registry/api/v2"
v2 "github.com/docker/distribution/registry/api/v2"
"github.com/gorilla/handlers"
)


@ -6,7 +6,6 @@ import (
"net/http"
"strconv"
"sync"
"time"
"github.com/docker/distribution"
dcontext "github.com/docker/distribution/context"
@ -15,9 +14,6 @@ import (
"github.com/opencontainers/go-digest"
)
// todo(richardscothern): from cache control header or config file
const blobTTL = 24 * 7 * time.Hour
type proxyBlobStore struct {
localStore distribution.BlobStore
remoteStore distribution.BlobService


@ -193,7 +193,7 @@ func makeTestEnv(t *testing.T, name string) *testEnv {
}
func makeBlob(size int) []byte {
blob := make([]byte, size, size)
blob := make([]byte, size)
for i := 0; i < size; i++ {
blob[i] = byte('A' + rand.Int()%48)
}
@ -204,16 +204,6 @@ func init() {
rand.Seed(42)
}
func perm(m []distribution.Descriptor) []distribution.Descriptor {
for i := 0; i < len(m); i++ {
j := rand.Intn(i + 1)
tmp := m[i]
m[i] = m[j]
m[j] = tmp
}
return m
}
func populate(t *testing.T, te *testEnv, blobCount, size, numUnique int) {
var inRemote []distribution.Descriptor


@ -165,11 +165,10 @@ func populateRepo(ctx context.Context, t *testing.T, repository distribution.Rep
t.Fatalf("unexpected error creating test upload: %v", err)
}
rs, ts, err := testutil.CreateRandomTarFile()
rs, dgst, err := testutil.CreateRandomTarFile()
if err != nil {
t.Fatalf("unexpected error generating test layer file")
}
dgst := digest.Digest(ts)
if _, err := io.Copy(wr, rs); err != nil {
t.Fatalf("unexpected error copying to upload: %v", err)
}


@ -118,7 +118,7 @@ func (ttles *TTLExpirationScheduler) Start() error {
}
if !ttles.stopped {
return fmt.Errorf("Scheduler already started")
return fmt.Errorf("scheduler already started")
}
dcontext.GetLogger(ttles.ctx).Infof("Starting cached object TTL expiration scheduler...")
@ -126,7 +126,7 @@ func (ttles *TTLExpirationScheduler) Start() error {
// Start timer for each deserialized entry
for _, entry := range ttles.entries {
entry.timer = ttles.startTimer(entry, entry.Expiry.Sub(time.Now()))
entry.timer = ttles.startTimer(entry, time.Until(entry.Expiry))
}
// Start a ticker to periodically save the entries index
@ -164,7 +164,7 @@ func (ttles *TTLExpirationScheduler) add(r reference.Reference, ttl time.Duratio
Expiry: time.Now().Add(ttl),
EntryType: eType,
}
dcontext.GetLogger(ttles.ctx).Infof("Adding new scheduler entry for %s with ttl=%s", entry.Key, entry.Expiry.Sub(time.Now()))
dcontext.GetLogger(ttles.ctx).Infof("Adding new scheduler entry for %s with ttl=%s", entry.Key, time.Until(entry.Expiry))
if oldEntry, present := ttles.entries[entry.Key]; present && oldEntry.timer != nil {
oldEntry.timer.Stop()
}

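The scheduler hunk above swaps entry.Expiry.Sub(time.Now()) for time.Until(entry.Expiry). Both expressions yield the remaining duration until expiry; time.Until is simply the stdlib helper the linter prefers. A tiny illustration, with variable names chosen only for the example:

// assumes: import "time"
expiry := time.Now().Add(7 * 24 * time.Hour)
d1 := expiry.Sub(time.Now()) // old spelling
d2 := time.Until(expiry)     // equivalent, idiomatic spelling used above
_, _ = d1, d2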

@ -9,12 +9,14 @@ import (
"net/http"
"os"
"os/signal"
"strings"
"syscall"
"time"
"rsc.io/letsencrypt"
"github.com/Shopify/logrus-bugsnag"
logrus_bugsnag "github.com/Shopify/logrus-bugsnag"
logstash "github.com/bshuster-repo/logrus-logstash-hook"
"github.com/bugsnag/bugsnag-go"
"github.com/docker/distribution/configuration"
@ -31,6 +33,60 @@ import (
"github.com/yvasiyarov/gorelic"
)
// a map of TLS cipher suite names to constants in https://golang.org/pkg/crypto/tls/#pkg-constants
var cipherSuites = map[string]uint16{
// TLS 1.0 - 1.2 cipher suites
"TLS_RSA_WITH_RC4_128_SHA": tls.TLS_RSA_WITH_RC4_128_SHA,
"TLS_RSA_WITH_3DES_EDE_CBC_SHA": tls.TLS_RSA_WITH_3DES_EDE_CBC_SHA,
"TLS_RSA_WITH_AES_128_CBC_SHA": tls.TLS_RSA_WITH_AES_128_CBC_SHA,
"TLS_RSA_WITH_AES_256_CBC_SHA": tls.TLS_RSA_WITH_AES_256_CBC_SHA,
"TLS_RSA_WITH_AES_128_CBC_SHA256": tls.TLS_RSA_WITH_AES_128_CBC_SHA256,
"TLS_RSA_WITH_AES_128_GCM_SHA256": tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
"TLS_RSA_WITH_AES_256_GCM_SHA384": tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
"TLS_ECDHE_ECDSA_WITH_RC4_128_SHA": tls.TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA": tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA": tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
"TLS_ECDHE_RSA_WITH_RC4_128_SHA": tls.TLS_ECDHE_RSA_WITH_RC4_128_SHA,
"TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA": tls.TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA": tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA": tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256": tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256": tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256": tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256": tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384": tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384": tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256": tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256": tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
// TLS 1.3 cipher suites
"TLS_AES_128_GCM_SHA256": tls.TLS_AES_128_GCM_SHA256,
"TLS_AES_256_GCM_SHA384": tls.TLS_AES_256_GCM_SHA384,
"TLS_CHACHA20_POLY1305_SHA256": tls.TLS_CHACHA20_POLY1305_SHA256,
}
// a list of default ciphersuites to utilize
var defaultCipherSuites = []uint16{
tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_AES_128_GCM_SHA256,
tls.TLS_CHACHA20_POLY1305_SHA256,
tls.TLS_AES_256_GCM_SHA384,
}
// maps tls version strings to constants
var defaultTLSVersionStr = "tls1.2"
var tlsVersions = map[string]uint16{
// user specified values
"tls1.0": tls.VersionTLS10,
"tls1.1": tls.VersionTLS11,
"tls1.2": tls.VersionTLS12,
"tls1.3": tls.VersionTLS13,
}
// this channel gets notified when process receives signal. It is global to ease unit testing
var quit = make(chan os.Signal, 1)
@ -125,6 +181,35 @@ func NewRegistry(ctx context.Context, config *configuration.Configuration) (*Reg
}, nil
}
// takes a list of cipher suites and converts it to a list of respective tls constants
// if an empty list is provided, then the defaults will be used
func getCipherSuites(names []string) ([]uint16, error) {
if len(names) == 0 {
return defaultCipherSuites, nil
}
cipherSuiteConsts := make([]uint16, len(names))
for i, name := range names {
cipherSuiteConst, ok := cipherSuites[name]
if !ok {
return nil, fmt.Errorf("unknown TLS cipher suite '%s' specified for http.tls.cipherSuites", name)
}
cipherSuiteConsts[i] = cipherSuiteConst
}
return cipherSuiteConsts, nil
}
// takes a list of cipher suite ids and converts it to a list of respective names
func getCipherSuiteNames(ids []uint16) []string {
if len(ids) == 0 {
return nil
}
names := make([]string, len(ids))
for i, id := range ids {
names[i] = tls.CipherSuiteName(id)
}
return names
}
// ListenAndServe runs the registry's HTTP server.
func (registry *Registry) ListenAndServe() error {
config := registry.config
@ -135,19 +220,27 @@ func (registry *Registry) ListenAndServe() error {
}
if config.HTTP.TLS.Certificate != "" || config.HTTP.TLS.LetsEncrypt.CacheFile != "" {
if config.HTTP.TLS.MinimumTLS == "" {
config.HTTP.TLS.MinimumTLS = defaultTLSVersionStr
}
tlsMinVersion, ok := tlsVersions[config.HTTP.TLS.MinimumTLS]
if !ok {
return fmt.Errorf("unknown minimum TLS level '%s' specified for http.tls.minimumtls", config.HTTP.TLS.MinimumTLS)
}
dcontext.GetLogger(registry.app).Infof("restricting TLS version to %s or higher", config.HTTP.TLS.MinimumTLS)
tlsCipherSuites, err := getCipherSuites(config.HTTP.TLS.CipherSuites)
if err != nil {
return err
}
dcontext.GetLogger(registry.app).Infof("restricting TLS cipher suites to: %s", strings.Join(getCipherSuiteNames(tlsCipherSuites), ","))
tlsConf := &tls.Config{
ClientAuth: tls.NoClientCert,
NextProtos: nextProtos(config),
MinVersion: tls.VersionTLS10,
MinVersion: tlsMinVersion,
PreferServerCipherSuites: true,
CipherSuites: []uint16{
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
},
CipherSuites: tlsCipherSuites,
}
if config.HTTP.TLS.LetsEncrypt.CacheFile != "" {
@ -185,7 +278,7 @@ func (registry *Registry) ListenAndServe() error {
}
if ok := pool.AppendCertsFromPEM(caPem); !ok {
return fmt.Errorf("Could not add CA to pool")
return fmt.Errorf("could not add CA to pool")
}
}

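Taken together, the additions above let the operator pin both the minimum TLS version (http.tls.minimumtls) and the cipher suites (http.tls.cipherSuites) instead of relying on the previously hard-coded list. A minimal sketch of how the new helpers feed the server's tls.Config, assuming the tlsVersions map and getCipherSuites function shown above; error handling is reduced to a panic for brevity:

// assumes the surrounding package's tlsVersions map and getCipherSuites helper,
// plus: import "crypto/tls"
suites, err := getCipherSuites([]string{
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_AES_128_GCM_SHA256",
})
if err != nil {
    panic(err) // sketch only; ListenAndServe returns the error instead
}
tlsConf := &tls.Config{
    ClientAuth:   tls.NoClientCert,
    MinVersion:   tlsVersions["tls1.2"], // defaultTLSVersionStr when minimumtls is unset
    CipherSuites: suites,                // defaultCipherSuites when no suites are configured
}
_ = tlsConf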

@ -3,12 +3,24 @@ package registry
import (
"bufio"
"context"
"crypto"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/pem"
"fmt"
"io/ioutil"
"math/big"
"net"
"net/http"
"os"
"path"
"reflect"
"strings"
"testing"
"time"
@ -38,18 +50,30 @@ func TestNextProtos(t *testing.T) {
}
}
func setupRegistry() (*Registry, error) {
type registryTLSConfig struct {
cipherSuites []string
certificatePath string
privateKeyPath string
certificate *tls.Certificate
}
func setupRegistry(tlsCfg *registryTLSConfig, addr string) (*Registry, error) {
config := &configuration.Configuration{}
// TODO: this needs to change to something ephemeral as the test will fail if there is any server
// already listening on port 5000
config.HTTP.Addr = ":5000"
config.HTTP.Addr = addr
config.HTTP.DrainTimeout = time.Duration(10) * time.Second
if tlsCfg != nil {
config.HTTP.TLS.CipherSuites = tlsCfg.cipherSuites
config.HTTP.TLS.Certificate = tlsCfg.certificatePath
config.HTTP.TLS.Key = tlsCfg.privateKeyPath
}
config.Storage = map[string]configuration.Parameters{"inmemory": map[string]interface{}{}}
return NewRegistry(context.Background(), config)
}
func TestGracefulShutdown(t *testing.T) {
registry, err := setupRegistry()
registry, err := setupRegistry(nil, ":5000")
if err != nil {
t.Fatal(err)
}
@ -98,3 +122,227 @@ func TestGracefulShutdown(t *testing.T) {
t.Error("Body is not {}; ", string(body))
}
}
func TestGetCipherSuite(t *testing.T) {
resp, err := getCipherSuites([]string{"TLS_RSA_WITH_AES_128_CBC_SHA"})
if err != nil || len(resp) != 1 || resp[0] != tls.TLS_RSA_WITH_AES_128_CBC_SHA {
t.Errorf("expected cipher suite %q, got %q",
"TLS_RSA_WITH_AES_128_CBC_SHA",
strings.Join(getCipherSuiteNames(resp), ","),
)
}
resp, err = getCipherSuites([]string{"TLS_RSA_WITH_AES_128_CBC_SHA", "TLS_AES_128_GCM_SHA256"})
if err != nil || len(resp) != 2 ||
resp[0] != tls.TLS_RSA_WITH_AES_128_CBC_SHA || resp[1] != tls.TLS_AES_128_GCM_SHA256 {
t.Errorf("expected cipher suites %q, got %q",
"TLS_RSA_WITH_AES_128_CBC_SHA,TLS_AES_128_GCM_SHA256",
strings.Join(getCipherSuiteNames(resp), ","),
)
}
_, err = getCipherSuites([]string{"TLS_RSA_WITH_AES_128_CBC_SHA", "bad_input"})
if err == nil {
t.Error("did not return expected error about unknown cipher suite")
}
}
func buildRegistryTLSConfig(name, keyType string, cipherSuites []string) (*registryTLSConfig, error) {
var priv interface{}
var pub crypto.PublicKey
var err error
switch keyType {
case "rsa":
priv, err = rsa.GenerateKey(rand.Reader, 2048)
if err != nil {
return nil, fmt.Errorf("failed to create rsa private key: %v", err)
}
rsaKey := priv.(*rsa.PrivateKey)
pub = rsaKey.Public()
case "ecdsa":
priv, err = ecdsa.GenerateKey(elliptic.P384(), rand.Reader)
if err != nil {
return nil, fmt.Errorf("failed to create ecdsa private key: %v", err)
}
ecdsaKey := priv.(*ecdsa.PrivateKey)
pub = ecdsaKey.Public()
default:
return nil, fmt.Errorf("unsupported key type: %v", keyType)
}
notBefore := time.Now()
notAfter := notBefore.Add(time.Minute)
serialNumberLimit := new(big.Int).Lsh(big.NewInt(1), 128)
serialNumber, err := rand.Int(rand.Reader, serialNumberLimit)
if err != nil {
return nil, fmt.Errorf("failed to create serial number: %v", err)
}
cert := x509.Certificate{
SerialNumber: serialNumber,
Subject: pkix.Name{
Organization: []string{"registry_test"},
},
NotBefore: notBefore,
NotAfter: notAfter,
KeyUsage: x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
BasicConstraintsValid: true,
IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
DNSNames: []string{"localhost"},
IsCA: true,
}
derBytes, err := x509.CreateCertificate(rand.Reader, &cert, &cert, pub, priv)
if err != nil {
return nil, fmt.Errorf("failed to create certificate: %v", err)
}
if _, err := os.Stat(os.TempDir()); os.IsNotExist(err) {
os.Mkdir(os.TempDir(), 1777)
}
certPath := path.Join(os.TempDir(), name+".pem")
certOut, err := os.Create(certPath)
if err != nil {
return nil, fmt.Errorf("failed to create pem: %v", err)
}
if err := pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: derBytes}); err != nil {
return nil, fmt.Errorf("failed to write data to %s: %v", certPath, err)
}
if err := certOut.Close(); err != nil {
return nil, fmt.Errorf("error closing %s: %v", certPath, err)
}
keyPath := path.Join(os.TempDir(), name+".key")
keyOut, err := os.OpenFile(keyPath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
if err != nil {
return nil, fmt.Errorf("failed to open %s for writing: %v", keyPath, err)
}
privBytes, err := x509.MarshalPKCS8PrivateKey(priv)
if err != nil {
return nil, fmt.Errorf("unable to marshal private key: %v", err)
}
if err := pem.Encode(keyOut, &pem.Block{Type: "PRIVATE KEY", Bytes: privBytes}); err != nil {
return nil, fmt.Errorf("failed to write data to key.pem: %v", err)
}
if err := keyOut.Close(); err != nil {
return nil, fmt.Errorf("error closing %s: %v", keyPath, err)
}
tlsCert := tls.Certificate{
Certificate: [][]byte{derBytes},
PrivateKey: priv,
}
tlsTestCfg := registryTLSConfig{
cipherSuites: cipherSuites,
certificatePath: certPath,
privateKeyPath: keyPath,
certificate: &tlsCert,
}
return &tlsTestCfg, nil
}
func TestRegistrySupportedCipherSuite(t *testing.T) {
name := "registry_test_server_supported_cipher"
cipherSuites := []string{"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"}
serverTLS, err := buildRegistryTLSConfig(name, "rsa", cipherSuites)
if err != nil {
t.Fatal(err)
}
registry, err := setupRegistry(serverTLS, ":5001")
if err != nil {
t.Fatal(err)
}
// run registry server
var errchan chan error
go func() {
errchan <- registry.ListenAndServe()
}()
select {
case err = <-errchan:
t.Fatalf("Error listening: %v", err)
default:
}
// Wait for some unknown random time for server to start listening
time.Sleep(3 * time.Second)
// send tls request with server supported cipher suite
clientCipherSuites, err := getCipherSuites(cipherSuites)
if err != nil {
t.Fatal(err)
}
clientTLS := tls.Config{
InsecureSkipVerify: true,
CipherSuites: clientCipherSuites,
}
dialer := net.Dialer{
Timeout: time.Second * 5,
}
conn, err := tls.DialWithDialer(&dialer, "tcp", "127.0.0.1:5001", &clientTLS)
if err != nil {
t.Fatal(err)
}
fmt.Fprintf(conn, "GET /v2/ HTTP/1.1\r\nHost: 127.0.0.1\r\n\r\n")
resp, err := http.ReadResponse(bufio.NewReader(conn), nil)
if err != nil {
t.Fatal(err)
}
if resp.Status != "200 OK" {
t.Error("response status is not 200 OK: ", resp.Status)
}
if body, err := ioutil.ReadAll(resp.Body); err != nil || string(body) != "{}" {
t.Error("Body is not {}; ", string(body))
}
// send stop signal
quit <- os.Interrupt
time.Sleep(100 * time.Millisecond)
}
func TestRegistryUnsupportedCipherSuite(t *testing.T) {
name := "registry_test_server_unsupported_cipher"
serverTLS, err := buildRegistryTLSConfig(name, "rsa", []string{"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA358"})
if err != nil {
t.Fatal(err)
}
registry, err := setupRegistry(serverTLS, ":5002")
if err != nil {
t.Fatal(err)
}
// run registry server
var errchan chan error
go func() {
errchan <- registry.ListenAndServe()
}()
select {
case err = <-errchan:
t.Fatalf("Error listening: %v", err)
default:
}
// Wait for some unknown random time for server to start listening
time.Sleep(3 * time.Second)
// send tls request with server unsupported cipher suite
clientTLS := tls.Config{
InsecureSkipVerify: true,
CipherSuites: []uint16{tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256},
}
dialer := net.Dialer{
Timeout: time.Second * 5,
}
_, err = tls.DialWithDialer(&dialer, "tcp", "127.0.0.1:5002", &clientTLS)
if err == nil {
t.Error("expected TLS connection to timeout")
}
// send stop signal
quit <- os.Interrupt
time.Sleep(100 * time.Millisecond)
}


@ -418,7 +418,7 @@ func TestBlobMount(t *testing.T) {
bs := repository.Blobs(ctx)
// Test destination for existence.
statDesc, err = bs.Stat(ctx, desc.Digest)
_, err = bs.Stat(ctx, desc.Digest)
if err == nil {
t.Fatalf("unexpected non-error stating unmounted blob: %v", desc)
}
@ -478,12 +478,12 @@ func TestBlobMount(t *testing.T) {
t.Fatalf("Unexpected error deleting blob")
}
d, err := bs.Stat(ctx, desc.Digest)
_, err = bs.Stat(ctx, desc.Digest)
if err != nil {
t.Fatalf("unexpected error stating blob deleted from source repository: %v", err)
}
d, err = sbs.Stat(ctx, desc.Digest)
d, err := sbs.Stat(ctx, desc.Digest)
if err == nil {
t.Fatalf("unexpected non-error stating deleted blob: %v", d)
}


@ -152,16 +152,6 @@ func (bs *blobStore) readlink(ctx context.Context, path string) (digest.Digest,
return linked, nil
}
// resolve reads the digest link at path and returns the blob store path.
func (bs *blobStore) resolve(ctx context.Context, path string) (string, error) {
dgst, err := bs.readlink(ctx, path)
if err != nil {
return "", err
}
return bs.path(dgst)
}
type blobStatter struct {
driver driver.StorageDriver
}


@ -1,3 +1,4 @@
//go:build noresumabledigest
// +build noresumabledigest
package storage


@ -1,3 +1,4 @@
//go:build !noresumabledigest
// +build !noresumabledigest
package storage

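The two hunks above add the modern //go:build line alongside the legacy // +build comment; both spell the same constraint, and gofmt keeps the two lines in sync. A file guarded this way is only compiled when the tag is supplied, e.g. go build -tags noresumabledigest. A minimal sketch of the pattern as it appears in these files:

//go:build noresumabledigest
// +build noresumabledigest

// Without the noresumabledigest tag this file is excluded from the build
// entirely; its !noresumabledigest twin is compiled instead.
package storage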

@ -173,8 +173,7 @@ func checkBlobDescriptorCacheClear(ctx context.Context, t *testing.T, provider c
t.Error(err)
}
desc, err = cache.Stat(ctx, localDigest)
if err == nil {
if _, err = cache.Stat(ctx, localDigest); err == nil {
t.Fatalf("expected error statting deleted blob: %v", err)
}
}


@ -55,17 +55,17 @@ func (factory *azureDriverFactory) Create(parameters map[string]interface{}) (st
func FromParameters(parameters map[string]interface{}) (*Driver, error) {
accountName, ok := parameters[paramAccountName]
if !ok || fmt.Sprint(accountName) == "" {
return nil, fmt.Errorf("No %s parameter provided", paramAccountName)
return nil, fmt.Errorf("no %s parameter provided", paramAccountName)
}
accountKey, ok := parameters[paramAccountKey]
if !ok || fmt.Sprint(accountKey) == "" {
return nil, fmt.Errorf("No %s parameter provided", paramAccountKey)
return nil, fmt.Errorf("no %s parameter provided", paramAccountKey)
}
container, ok := parameters[paramContainer]
if !ok || fmt.Sprint(container) == "" {
return nil, fmt.Errorf("No %s parameter provided", paramContainer)
return nil, fmt.Errorf("no %s parameter provided", paramContainer)
}
realm, ok := parameters[paramRealm]


@ -6,14 +6,14 @@
// struct such that calls are proxied through this implementation. First,
// declare the internal driver, as follows:
//
// type driver struct { ... internal ...}
// type driver struct { ... internal ...}
//
// The resulting type should implement StorageDriver such that it can be the
// target of a Base struct. The exported type can then be declared as follows:
//
// type Driver struct {
// Base
// }
// type Driver struct {
// Base
// }
//
// Because Driver embeds Base, it effectively implements Base. If the driver
// needs to intercept a call, before going to base, Driver should implement
@ -23,15 +23,15 @@
// To further shield the embed from other packages, it is recommended to
// employ a private embed struct:
//
// type baseEmbed struct {
// base.Base
// }
// type baseEmbed struct {
// base.Base
// }
//
// Then, declare driver to embed baseEmbed, rather than Base directly:
//
// type Driver struct {
// baseEmbed
// }
// type Driver struct {
// baseEmbed
// }
//
// The type now implements StorageDriver, proxying through Base, without
// exporting an unnecessary field.


@ -145,7 +145,7 @@ func (r *regulator) Stat(ctx context.Context, path string) (storagedriver.FileIn
}
// List returns a list of the objects that are direct descendants of the
//given path.
// given path.
func (r *regulator) List(ctx context.Context, path string) ([]string, error) {
r.enter()
defer r.exit()


@ -36,7 +36,7 @@ func init() {
func TestFromParametersImpl(t *testing.T) {
tests := []struct {
params map[string]interface{} // techincally the yaml can contain anything
params map[string]interface{} // technically the yaml can contain anything
expected DriverParameters
pass bool
}{


@ -1,17 +1,17 @@
//go:build include_gcs
// +build include_gcs
// Package gcs provides a storagedriver.StorageDriver implementation to
// store blobs in Google cloud storage.
//
// This package leverages the google.golang.org/cloud/storage client library
//for interfacing with gcs.
// for interfacing with gcs.
//
// Because gcs is a key, value store the Stat call does not support last modification
// time for directories (directories are an abstraction for key, value stores)
//
// Note that the contents of incomplete uploads are not accessible even though
// Stat returns their length
//
// +build include_gcs
package gcs
import (
@ -61,7 +61,6 @@ var rangeHeader = regexp.MustCompile(`^bytes=([0-9])+-([0-9]+)$`)
// driverParameters is a struct that encapsulates all of the driver parameters after all values have been set
type driverParameters struct {
bucket string
config *jwt.Config
email string
privateKey []byte
client *http.Client
@ -87,6 +86,8 @@ func (factory *gcsDriverFactory) Create(parameters map[string]interface{}) (stor
return FromParameters(parameters)
}
var _ storagedriver.StorageDriver = &driver{}
// driver is a storagedriver.StorageDriver implementation backed by GCS
// Objects are stored at absolute keys in the provided bucket.
type driver struct {
@ -297,7 +298,7 @@ func (d *driver) Reader(context context.Context, path string, offset int64) (io.
if err != nil {
return nil, err
}
if offset == int64(obj.Size) {
if offset == obj.Size {
return ioutil.NopCloser(bytes.NewReader([]byte{})), nil
}
return nil, storagedriver.InvalidOffsetError{Path: path, Offset: offset}
@ -433,7 +434,6 @@ func putContentsClose(wc *storage.Writer, contents []byte) error {
}
}
if err != nil {
wc.CloseWithError(err)
return err
}
return wc.Close()
@ -613,10 +613,10 @@ func (d *driver) Stat(context context.Context, path string) (storagedriver.FileI
//try to get as folder
dirpath := d.pathToDirKey(path)
var query *storage.Query
query = &storage.Query{}
query.Prefix = dirpath
query.MaxResults = 1
query := &storage.Query{
Prefix: dirpath,
MaxResults: 1,
}
objects, err := storageListObjects(gcsContext, d.bucket, query)
if err != nil {
@ -638,12 +638,12 @@ func (d *driver) Stat(context context.Context, path string) (storagedriver.FileI
}
// List returns a list of the objects that are direct descendants of the
//given path.
// given path.
func (d *driver) List(context context.Context, path string) ([]string, error) {
var query *storage.Query
query = &storage.Query{}
query.Delimiter = "/"
query.Prefix = d.pathToDirKey(path)
query := &storage.Query{
Delimiter: "/",
Prefix: d.pathToDirKey(path),
}
list := make([]string, 0, 64)
for {
objects, err := storageListObjects(d.context(context), d.bucket, query)

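The var _ storagedriver.StorageDriver = &driver{} line added above is a compile-time interface assertion: it costs nothing at runtime and exists only so the build fails immediately if *driver ever stops satisfying the interface. A generic sketch of the idiom against a stdlib interface:

package example

import "io"

type discard struct{}

func (discard) Write(p []byte) (int, error) { return len(p), nil }

// Compile-time check: if discard ever loses its Write method, this declaration
// stops compiling and points straight at the broken contract.
var _ io.Writer = discard{}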

@ -1,3 +1,4 @@
//go:build include_gcs
// +build include_gcs
package gcs
@ -58,7 +59,7 @@ func init() {
panic(fmt.Sprintf("Error reading JWT config : %s", err))
}
email = jwtConfig.Email
privateKey = []byte(jwtConfig.PrivateKey)
privateKey = jwtConfig.PrivateKey
if len(privateKey) == 0 {
panic("Error reading JWT config : missing private_key property")
}
@ -259,6 +260,9 @@ func TestEmptyRootList(t *testing.T) {
}
}()
keys, err := emptyRootDriver.List(ctx, "/")
if err != nil {
t.Fatalf("unexpected error listing empty root content: %v", err)
}
for _, path := range keys {
if !storagedriver.PathRegexp.MatchString(path) {
t.Fatalf("unexpected string in path: %q != %q", path, storagedriver.PathRegexp)
@ -266,6 +270,9 @@ func TestEmptyRootList(t *testing.T) {
}
keys, err = slashRootDriver.List(ctx, "/")
if err != nil {
t.Fatalf("unexpected error listing slash root content: %v", err)
}
for _, path := range keys {
if !storagedriver.PathRegexp.MatchString(path) {
t.Fatalf("unexpected string in path: %q != %q", path, storagedriver.PathRegexp)


@ -252,20 +252,6 @@ func (d *dir) delete(p string) error {
return nil
}
// dump outputs a primitive directory structure to stdout.
func (d *dir) dump(indent string) {
fmt.Println(indent, d.name()+"/")
for _, child := range d.children {
if child.isdir() {
child.(*dir).dump(indent + "\t")
} else {
fmt.Println(indent, child.name())
}
}
}
func (d *dir) String() string {
return fmt.Sprintf("&dir{path: %v, children: %v}", d.p, d.children)
}
@ -293,6 +279,9 @@ func (f *file) sectionReader(offset int64) io.Reader {
}
func (f *file) ReadAt(p []byte, offset int64) (n int, err error) {
if offset >= int64(len(f.data)) {
return 0, io.EOF
}
return copy(p, f.data[offset:]), nil
}

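The ReadAt fix above brings the in-memory file in line with the io.ReaderAt contract: a read that starts at or beyond the end of the data must report io.EOF rather than succeed with zero bytes. The stdlib's bytes.Reader behaves the same way, which makes for a small illustration:

package example

import (
    "bytes"
    "fmt"
    "io"
)

func main() {
    r := bytes.NewReader([]byte("abc"))
    buf := make([]byte, 2)
    // Offset 5 is past the 3-byte payload: n == 0 and err == io.EOF, the signal
    // callers such as io.SectionReader rely on to stop reading.
    n, err := r.ReadAt(buf, 5)
    fmt.Println(n, err == io.EOF) // 0 true
}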

@ -1,6 +1,5 @@
// Package middleware - cloudfront wrapper for storage libs
// N.B. currently only works with S3, not arbitrary sites
//
package middleware
import (
@ -16,7 +15,7 @@ import (
"github.com/aws/aws-sdk-go/service/cloudfront/sign"
dcontext "github.com/docker/distribution/context"
storagedriver "github.com/docker/distribution/registry/storage/driver"
"github.com/docker/distribution/registry/storage/driver/middleware"
storagemiddleware "github.com/docker/distribution/registry/storage/driver/middleware"
)
// cloudFrontStorageMiddleware provides a simple implementation of layerHandler that
@ -38,7 +37,9 @@ var _ storagedriver.StorageDriver = &cloudFrontStorageMiddleware{}
// Optional options: ipFilteredBy, awsregion
// ipfilteredby: valid value "none|aws|awsregion". "none", do not filter any IP, default value. "aws", only aws IP goes
// to S3 directly. "awsregion", only regions listed in awsregion options goes to S3 directly
//
// to S3 directly. "awsregion", only regions listed in awsregion options goes to S3 directly
//
// awsregion: a comma separated string of AWS regions.
func newCloudFrontStorageMiddleware(storageDriver storagedriver.StorageDriver, options map[string]interface{}) (storagedriver.StorageDriver, error) {
// parse baseurl
@ -138,27 +139,33 @@ func newCloudFrontStorageMiddleware(storageDriver storagedriver.StorageDriver, o
// parse ipfilteredby
var awsIPs *awsIPs
if ipFilteredBy := options["ipfilteredby"].(string); ok {
switch strings.ToLower(strings.TrimSpace(ipFilteredBy)) {
case "", "none":
awsIPs = nil
case "aws":
newAWSIPs(ipRangesURL, updateFrequency, nil)
case "awsregion":
var awsRegion []string
if regions, ok := options["awsregion"].(string); ok {
for _, awsRegions := range strings.Split(regions, ",") {
awsRegion = append(awsRegion, strings.ToLower(strings.TrimSpace(awsRegions)))
if i, ok := options["ipfilteredby"]; ok {
if ipFilteredBy, ok := i.(string); ok {
switch strings.ToLower(strings.TrimSpace(ipFilteredBy)) {
case "", "none":
awsIPs = nil
case "aws":
awsIPs = newAWSIPs(ipRangesURL, updateFrequency, nil)
case "awsregion":
var awsRegion []string
if i, ok := options["awsregion"]; ok {
if regions, ok := i.(string); ok {
for _, awsRegions := range strings.Split(regions, ",") {
awsRegion = append(awsRegion, strings.ToLower(strings.TrimSpace(awsRegions)))
}
awsIPs = newAWSIPs(ipRangesURL, updateFrequency, awsRegion)
} else {
return nil, fmt.Errorf("awsRegion must be a comma separated string of valid aws regions")
}
} else {
return nil, fmt.Errorf("awsRegion is not defined")
}
awsIPs = newAWSIPs(ipRangesURL, updateFrequency, awsRegion)
} else {
return nil, fmt.Errorf("awsRegion must be a comma separated string of valid aws regions")
default:
return nil, fmt.Errorf("ipfilteredby only allows a string the following value: none|aws|awsregion")
}
default:
return nil, fmt.Errorf("ipfilteredby only allows a string the following value: none|aws|awsregion")
} else {
return nil, fmt.Errorf("ipfilteredby only allows a string with the following value: none|aws|awsregion")
}
} else {
return nil, fmt.Errorf("ipfilteredby only allows a string with the following value: none|aws|awsregion")
}
return &cloudFrontStorageMiddleware{

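The rewritten block above replaces an unchecked type assertion on the "ipfilteredby" option with the two-step comma-ok pattern: look the key up, assert its type, and only then branch on the value, so a missing key and a wrong type produce distinct errors instead of a silent misconfiguration. A minimal sketch of that pattern outside the middleware; the helper name is illustrative:

// assumes: import "fmt"
//
// readStringOption sketches the lookup-then-assert pattern used above for
// "ipfilteredby" and "awsregion".
func readStringOption(options map[string]interface{}, key string) (string, error) {
    raw, ok := options[key]
    if !ok {
        return "", fmt.Errorf("%s is not defined", key)
    }
    s, ok := raw.(string)
    if !ok {
        return "", fmt.Errorf("%s must be a string", key)
    }
    return s, nil
}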

@ -1,3 +1,6 @@
//go:build include_oss
// +build include_oss
// Package oss provides a storagedriver.StorageDriver implementation to
// store blobs in Aliyun OSS cloud storage.
//
@ -6,9 +9,6 @@
//
// Because OSS is a key, value store the Stat call does not support last modification
// time for directories (directories are an abstraction for key, value stores)
//
// +build include_oss
package oss
import (
@ -37,12 +37,11 @@ const driverName = "oss"
const minChunkSize = 5 << 20
const defaultChunkSize = 2 * minChunkSize
const defaultTimeout = 2 * time.Minute // 2 minute timeout per chunk
// listMax is the largest amount of objects you can request from OSS in a list call
const listMax = 1000
//DriverParameters A struct that encapsulates all of the driver parameters after all values have been set
// DriverParameters A struct that encapsulates all of the driver parameters after all values have been set
type DriverParameters struct {
AccessKeyID string
AccessKeySecret string
@ -67,6 +66,8 @@ func (factory *ossDriverFactory) Create(parameters map[string]interface{}) (stor
return FromParameters(parameters)
}
var _ storagedriver.StorageDriver = &driver{}
type driver struct {
Client *oss.Client
Bucket *oss.Bucket
@ -497,11 +498,6 @@ func parseError(path string, err error) error {
return err
}
func hasCode(err error, code string) bool {
ossErr, ok := err.(*oss.Error)
return ok && ossErr.Code == code
}
func (d *driver) getOptions() oss.Options {
return oss.Options{ServerSideEncryption: d.Encrypt}
}


@ -1,3 +1,4 @@
//go:build include_oss
// +build include_oss
package oss
@ -127,6 +128,9 @@ func TestEmptyRootList(t *testing.T) {
defer rootedDriver.Delete(ctx, filename)
keys, err := emptyRootDriver.List(ctx, "/")
if err != nil {
t.Fatalf("unexpected error listing empty root content: %v", err)
}
for _, path := range keys {
if !storagedriver.PathRegexp.MatchString(path) {
t.Fatalf("unexpected string in path: %q != %q", path, storagedriver.PathRegexp)
@ -134,6 +138,9 @@ func TestEmptyRootList(t *testing.T) {
}
keys, err = slashRootDriver.List(ctx, "/")
if err != nil {
t.Fatalf("unexpected error listing slash root content: %v", err)
}
for _, path := range keys {
if !storagedriver.PathRegexp.MatchString(path) {
t.Fatalf("unexpected string in path: %q != %q", path, storagedriver.PathRegexp)

Some files were not shown because too many files have changed in this diff Show more