Compare commits

...

189 commits

Author SHA1 Message Date
9969ec1039 Sync with upstream distribution v3.0.0-beta.1
Reviewed-on: TrueCloudLab/distribution#12
Reviewed-by: Denis Kirillov <dkirillov@noreply.frostfs.info>
Reviewed-by: pogpp <pogpp@noreply.frostfs.info>
2024-08-19 12:13:19 +00:00
f2abe6a1ec [#11] Update frostfs-sdk-go version
This is necessary to eliminate the dependency error.
See TrueCloudLab/distribution#12 (comment).

Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-08-07 11:39:27 +03:00
8ceca80274 [#11] Update tcl/master with v3.0.0-beta.1 commits
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-08-05 13:28:26 +03:00
bdf2cf14b8 [#6] Add Forgejo Actions
Signed-off-by: Pavel Pogodaev <p.pogodaev@yadro.com>
2024-07-25 16:34:25 +03:00
Milos Gajdos
c709432b91
Prep for v3-beta1 release (#4399) 2024-07-10 08:35:47 +01:00
Milos Gajdos
c72db4109c
Prep for v3-beta1 release
Created a changelog file
Updated mailmap
Updated version

Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2024-07-09 19:31:16 +01:00
Milos Gajdos
60da1934b6
Bump Go and golang linter (#4389) 2024-07-09 07:59:01 +01:00
Milos Gajdos
948a39d358
Update docs: JWKS credentials and AZ identity (#4397) 2024-07-09 06:39:26 +01:00
Milos Gajdos
d3cc664fa2
Update docs: JWKS credentials and AZ identity
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2024-07-06 10:13:29 +01:00
Milos Gajdos
4dd0ac977e
feat: implement 'rewrite' storage middleware (#4146) 2024-07-04 16:16:29 +01:00
Milos Gajdos
306f4ff71e
Replace custom Redis config struct with go-redis UniversalOptions (adds sentinel & cluster support) (#4306) 2024-07-04 16:00:37 +01:00
Andrey Smirnov
558ace1391
feat: implement 'rewrite' storage middleware
This allows rewriting the storage driver's 'URLFor' result to use a specific
host and/or trim the base path.

It differs from the 'redirect' middleware in that it still calls the
storage driver's URLFor.

For example, with the Azure storage provider, this allows transforming the
SAS Azure Blob Storage URL into a URL compatible with Azure Front
Door.

Signed-off-by: Andrey Smirnov <andrey.smirnov@siderolabs.com>
2024-07-04 18:49:25 +04:00
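For context, a storage middleware like this is configured under the registry's middleware.storage section. A minimal sketch, assuming option names scheme, host and trimpathprefix (illustrative only; check the released configuration reference for the exact keys):

    middleware:
      storage:
        - name: rewrite
          options:
            scheme: https
            host: cdn.example.com          # rewrite URLFor results to use this host
            trimpathprefix: /base/path     # strip this prefix from generated URLs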
Milos Gajdos
6d5911900a
Update Redis configuration docs with TLS options
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2024-07-04 15:44:41 +01:00
Milos Gajdos
3a8499541a
docs: disable base element override (#4391) 2024-07-04 09:00:57 +01:00
Milos Gajdos
10d90f7290
remove layer's link file by gc (#4344) 2024-07-02 18:08:56 +01:00
Liang Zheng
d9050bb917 remove layer's link file by gc
The garbage collector should remove unused layer link files.

P.S. This was originally contributed by @m-masataka; I would now like to take it over.
Thanks to @m-masataka for the efforts in PR https://github.com/distribution/distribution/pull/2288

Signed-off-by: Liang Zheng <zhengliang0901@gmail.com>
2024-07-03 00:16:11 +08:00
Milos Gajdos
2b036a9fc1
Update dockerhub.md (#4394) 2024-07-01 19:04:39 +01:00
Mahmoud Kandil
43a64480ef
Update dockerhub.md
Signed-off-by: Mahmoud Kandil <47168819+MahmoudKKandil@users.noreply.github.com>
2024-07-01 13:53:43 +03:00
David Karlsson
f36b44ff73 docs: disable base element override
Setting the HTML <base> element causes page-internal links to point to
the root of the website, rather than local anchors on the same page.

Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-07-01 10:07:44 +02:00
Milos Gajdos
83a071e98a
Bump alpine version
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2024-06-30 16:59:12 +01:00
Milos Gajdos
5316d3bda2
Bump Go and golang linter
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2024-06-30 16:50:09 +01:00
Milos Gajdos
a008d360b4
Create type alias for redis.UniversalOptions
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2024-06-30 11:20:51 +01:00
Milos Gajdos
f27799d1aa
Add custom TLS config to Redis
We also update the Redis TLS config initialization in the app.

Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2024-06-28 22:03:22 +01:00
Milos Gajdos
5f804a9df7
build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity from 1.3.0 to 1.6.0 (#4380) 2024-06-26 09:39:21 +01:00
Anders Ingemann
b63cbb3318
Replace custom Redis config struct with go-redis UniversalOptions
Huge help from @milosgajdos who figured out how to do the entire
marshalling/unmarshalling for the configs

Signed-off-by: Anders Ingemann <aim@orbit.online>
2024-06-14 10:31:09 +02:00
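For illustration, a registry redis section backed by go-redis UniversalOptions could look roughly like the sketch below; the lowercase key names (addrs, mastername, ...) are assumptions mirroring the UniversalOptions fields and are not confirmed against the released schema:

    redis:
      addrs:
        - "redis-0.example.com:6379"
        - "redis-1.example.com:6379"
      password: "s3cret"
      db: 0
      # mastername: "mymaster"   # for Sentinel deployments
      dialtimeout: 10ms
      readtimeout: 10ms
      writetimeout: 10ms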
dependabot[bot]
050e1a3ee7
build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity
Bumps [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go) from 1.3.0 to 1.6.0.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.3.0...sdk/azcore/v1.6.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-11 20:09:16 +00:00
Milos Gajdos
e1ec19ae60
New path for distribution config (#4365) 2024-06-11 12:19:40 +01:00
Milos Gajdos
675d7e27f5
feature: Bump go-jose and require signing algorithms in auth (#4349) 2024-05-30 20:54:20 +01:00
Milos Gajdos
52d68216c0
feature: Bump go-jose and require signing algorithms in auth
This bumps go-jose to the latest available version: v4.0.3.
This slightly breaks backwards compatibility with existing
registry deployments but brings more security with it.

We now require users to specify the list of token signing algorithms in
the configuration. We do strive to maintain backwards compatibility by providing
a list of supported algorithms, though this isn't something we
recommend due to security issues; see:
* https://github.com/go-jose/go-jose/issues/64
* https://github.com/go-jose/go-jose/pull/69

As part of this change we now return to the original flow of the token
signature validation:
1. X2C (tls) headers
2. JWKS
3. KeyID

Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2024-05-30 20:44:35 +01:00
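As a hedged sketch of the resulting token-auth configuration (the signingalgorithms key and the algorithm list are assumptions for illustration; realm, service and certificate paths are placeholders):

    auth:
      token:
        realm: https://auth.example.com/token
        service: registry.example.com
        issuer: example-auth
        rootcertbundle: /etc/registry/tokenbundle.pem
        signingalgorithms:      # explicit allow-list of JWT signing algorithms
          - RS256
          - ES256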
Milos Gajdos
975613d4a0
New path for distribution config
The original path was referencing a docker directory which no longer
makes much sense.

Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2024-05-29 22:05:22 +01:00
Milos Gajdos
37b83869a9
Add option to enable sparse indexes (#3536) 2024-05-28 10:15:02 +01:00
James Hewitt
c40c4b289a
Enable configuration of index dependency validation
Add configuration options that can selectively disable the validation that
dependencies of an image index exist within the registry before the index
is uploaded.

This enables sparse indexes, where a registry holds a manifest index that
could be signed (so the digest must not change) but does not hold every
referenced image in the index. The use case for this is when a registry
mirror does not need to mirror all platforms, but does need to maintain
the digests of all manifests either because they are signed or because
they are pulled by digest.

The registry administrator can also select specific image architectures
that must exist in the registry, enabling a registry operator to select
only the platforms they care about and ensure all image indexes uploaded
to the registry are valid for those platforms.

Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2024-05-28 09:56:14 +01:00
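As a rough illustration only, such a validation block might be expressed along the following lines; the key names (validation.manifests.indexes, platforms, platformlist) are hypothetical here and should be verified against the shipped configuration docs:

    validation:
      manifests:
        indexes:
          platforms: list            # only require the platforms listed below to exist
          platformlist:
            - architecture: amd64
              os: linux
            - architecture: arm64
              os: linux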
Milos Gajdos
e0a54de7fc
Add a go.mod toolchain version (#4347) 2024-05-16 19:51:27 +01:00
Milos Gajdos
ad69db3fd5
docs: update location of filesystem.md (#4355) 2024-05-16 14:14:00 +01:00
Emmanuel Ferdman
119c608fad
docs: update location of filesystem.md
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2024-05-16 15:43:41 +03:00
Milos Gajdos
2c6b6482fc
Include headers when serving blob through proxy (#4273) 2024-05-14 14:27:09 +01:00
Milos Gajdos
6a9b0cfb71
Add support for Basic Authentication to proxyingRegistry (#4263)
Merging despite CodeQL warnings. See this issue for more details on why we decided to merge: https://github.com/github/codeql/issues/16486
2024-05-14 10:43:56 +01:00
Milos Gajdos
56a020f7f1
Stop proxy scheduler on system exit (#4293) 2024-05-13 17:31:23 +01:00
Dimitar Kostadinov
062309c08b Stop proxy scheduler on system exit
Signed-off-by: Dimitar Kostadinov <dimitar.kostadinov@sap.com>
2024-05-13 17:01:35 +03:00
James Hewitt
421a359b26
Add a go.mod toolchain version
go 1.21 added toolchain support. We should now specify a toolchain
version in go.mod.

https://go.dev/doc/toolchain

Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2024-05-13 14:47:07 +01:00
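For reference, a go.mod carrying such a toolchain directive looks like the sketch below (the module path matches the project; the toolchain version shown is a placeholder):

    module github.com/distribution/distribution/v3

    go 1.21

    toolchain go1.21.8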
Milos Gajdos
c49220d492
Fix #2902: ‘autoRedirect’ hardcode ‘https’ scheme (#2903) 2024-05-04 15:32:25 +01:00
Milos Gajdos
cb3a2010c4
Set readStartAtFile context aware for purge uploads (#4339) 2024-05-02 19:00:43 +01:00
Sylvain DESGRAIS
f1875862cf Set readStartAtFile context aware for purge uploads
Signed-off-by: Sylvain DESGRAIS <sylvain.desgrais@gmail.com>
2024-05-02 11:06:39 +02:00
Milos Gajdos
c8e22f6723
Add Shutdown method to registry.Registry (#4338) 2024-05-01 15:05:44 +01:00
Robin Ketelbuters
16a305ebaf Add registry.Shutdown method for graceful shutdown of embedded registry
Signed-off-by: Robin Ketelbuters <robin.ketelbuters@gmail.com>
2024-04-29 20:18:58 +02:00
Milos Gajdos
e0795fcfe3
add bounded concurrency for tag lookup and untag (#4329) 2024-04-26 19:59:59 +01:00
Liang Zheng
a2afe23f38 add concurrency limits for tag lookup and untag
Harbor uses distribution for its registry component (harbor-registry).
The Harbor GC calls into the registry to delete a manifest, which in turn
does a lookup for all tags that reference the deleted manifest.
To find the tag references, the registry iterates every tag in the repository
and reads its link file to check whether it matches the deleted manifest (i.e. to see
if it uses the same sha256 digest). So the more tags in a repository, the worse the
performance will be (as there will be more S3 API calls for the tag
directory lookups and tag file reads).

Therefore, we can use concurrent lookup and untag to optimize performance, as described in https://github.com/goharbor/harbor/issues/12948.

P.S. This optimization was originally contributed by @Antiarchitect; I would now like to take it over.
Thanks to @Antiarchitect for the efforts in PR https://github.com/distribution/distribution/pull/3890.

Signed-off-by: Liang Zheng <zhengliang0901@gmail.com>
2024-04-26 22:32:21 +08:00
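A minimal Go sketch of the bounded-concurrency pattern described above, using golang.org/x/sync/errgroup; checkTag is a hypothetical stand-in for reading a tag's link file and comparing digests, not the registry's real code:

    package main

    import (
        "context"
        "fmt"

        "golang.org/x/sync/errgroup"
    )

    // checkTag stands in for reading a tag's link file and comparing its digest.
    func checkTag(ctx context.Context, tag string) (bool, error) { return false, nil }

    // tagsReferencing fans out the per-tag checks with a fixed concurrency limit.
    func tagsReferencing(ctx context.Context, tags []string, limit int) ([]string, error) {
        g, ctx := errgroup.WithContext(ctx)
        g.SetLimit(limit) // at most `limit` link-file reads in flight

        results := make(chan string, len(tags))
        for _, tag := range tags {
            tag := tag
            g.Go(func() error {
                ok, err := checkTag(ctx, tag)
                if err == nil && ok {
                    results <- tag
                }
                return err
            })
        }
        if err := g.Wait(); err != nil {
            return nil, err
        }
        close(results)
        var matches []string
        for tag := range results {
            matches = append(matches, tag)
        }
        return matches, nil
    }

    func main() {
        tags, _ := tagsReferencing(context.Background(), []string{"v1", "v2", "latest"}, 8)
        fmt.Println(tags)
    }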
Liang Zheng
a5882d6646 vendor: update manifest dependencies
Signed-off-by: Liang Zheng <zhengliang0901@gmail.com>
2024-04-26 22:22:49 +08:00
Kyle Squizzato
47a9dac250
fix: ignore error of manifest tag path not found in gc (#4331) 2024-04-25 10:25:54 -07:00
Liang Zheng
112156321f fix: ignore error of manifest tag path not found in gc
It is reasonable to ignore the error that a manifest tag path does not exist when querying
all tags of the specified repository during garbage collection.

Signed-off-by: Liang Zheng <zhengliang0901@gmail.com>
2024-04-25 17:13:06 +08:00
Milos Gajdos
e6d1d182bf
Allow setting s3 forcepathstyle without regionendpoint (#4291) 2024-04-24 08:34:01 +01:00
Milos Gajdos
03e58dfcf8
chore: fix some typos in comments (#4335) 2024-04-24 08:33:32 +01:00
Milos Gajdos
d61d8ebc16
build(deps): bump golang.org/x/net from 0.20.0 to 0.23.0 (#4333) 2024-04-23 16:18:48 +01:00
guoguangwu
2fe3442035 chore: fix some typos in comments
Signed-off-by: guoguangwu <guoguangwug@gmail.com>
2024-04-23 17:48:53 +08:00
Milos Gajdos
e8ea4e5951
chore: fix some typos in comments (#4332) 2024-04-23 09:03:51 +01:00
Milos Gajdos
bdd3d31fae
proxy: Do not configure HTTP secret for proxy registry (#4305) 2024-04-23 08:17:50 +01:00
goodactive
e0a1ce14a8 chore: fix some typos in comments
Signed-off-by: goodactive <goodactive@qq.com>
2024-04-23 12:04:03 +08:00
Milos Gajdos
df98374764
Fix garbage-collect --delete-untagged to handle schema 2 manifest list and OCI image index (#4285) 2024-04-21 09:18:41 +01:00
Anthony Ramahay
601b37d98b Handle OCI image index and V2 manifest list during garbage collection
Signed-off-by: Anthony Ramahay <thewolt@gmail.com>
2024-04-20 16:41:50 +02:00
dependabot[bot]
2db0a598cc
build(deps): bump golang.org/x/net from 0.20.0 to 0.23.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.20.0 to 0.23.0.
- [Commits](https://github.com/golang/net/compare/v0.20.0...v0.23.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-04-19 12:59:08 +00:00
Milos Gajdos
bc6e81e1b9
Add Go 1.22 support to CI (#4314) 2024-04-08 12:15:39 +01:00
Wang Yan
0947c654e9
chore: bump distribution/reference dependency (#4312) 2024-04-08 19:13:55 +08:00
Milos Gajdos
dde4f2a6db
chore: remove repetitive words in comments (#4313) 2024-04-08 12:04:43 +01:00
Benjamin Schanzel
8654a0ee45
Allow setting s3 forcepathstyle without regionendpoint
Currently, the `forcepathstyle` parameter for the s3 storage driver is
considered only if the `regionendpoint` parameter is set. Since setting
a region endpoint explicitly is discouraged with AWS s3, it is not clear
how to enforce path style URLs with AWS s3.
This also means that the default value (true) only applies if a region
endpoint is configured.

This change makes sure we always forward the `forcepathstyle` parameter
to the aws-sdk if present in the config. This is a breaking change where
a `regionendpoint` is configured but no explicit `forcepathstyle` value
is set.

Signed-off-by: Benjamin Schanzel <benjamin.schanzel@bmw.de>
2024-04-08 12:45:26 +02:00
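For illustration, the behaviour above concerns an s3 storage section like the following sketch (bucket, region and endpoint values are placeholders); after this change forcepathstyle is forwarded to the AWS SDK even when regionendpoint is unset:

    storage:
      s3:
        bucket: my-registry-bucket
        region: us-east-1
        forcepathstyle: true
        # regionendpoint: http://minio.internal:9000   # only for S3-compatible endpoints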
b8de0a6caf [#9] Update frostfs driver config
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-04-05 18:22:25 +03:00
1720b860fd [#9] Update frostfs-sdk-go version
Signed-off-by: Roman Loginov <r.loginov@yadro.com>
2024-04-05 18:21:55 +03:00
Milos Gajdos
0d1792f55f
build(deps): bump fossa-contrib/fossa-action from 2 to 3 (#4232) 2024-04-02 10:11:05 +01:00
Milos Gajdos
f525c27f55
build(deps): bump ossf/scorecard-action from 2.0.6 to 2.3.1 (#4231) 2024-04-02 10:10:51 +01:00
Austin Vazquez
21c718d58c
Add Go 1.22 support to CI
This change adds Go 1.22 to the Go version matrix in CI and updates all
Dockerfiles to use Go 1.21.8.

Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-03-27 15:59:13 +00:00
xiaoxiangxianzi
2446e1102d chore: remove repetitive words in comments
Signed-off-by: xiaoxiangxianzi <zhaoyizheng@outlook.com>
2024-03-27 17:34:22 +08:00
Milos Gajdos
167d7996be
chore: bump distribution/reference dependency
We've made a new release https://github.com/distribution/reference

Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2024-03-26 20:19:28 +00:00
Milos Gajdos
9d36624563
Upgrade Scorecard Action version to fix error (#4311) 2024-03-26 14:49:01 +00:00
Joyce Brum
fdbb3a8288
fix: upgrade scorecard version
Signed-off-by: Joyce Brum <joycebrum@google.com>
2024-03-26 11:28:03 -03:00
Milos Gajdos
94146f53d8
Don't try to parse error responses with no body (#4307) 2024-03-20 16:36:20 +00:00
Markus Thömmes
e8820b2564 Don't try to parse error responses with no body
HEAD requests, for instance, return no body while still having all the relevant Content-Type headers set, causing unnecessary parsing errors. This skips further parsing for any response that has no body to begin with.

Signed-off-by: Markus Thömmes <markusthoemmes@me.com>
2024-03-20 11:46:14 +01:00
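An illustrative Go sketch of that guard; the function name and error text are hypothetical rather than the client's real code:

    package main

    import (
        "fmt"
        "net/http"
    )

    // maybeParseErrorBody skips body parsing when there is nothing to parse,
    // e.g. for responses to HEAD requests that only carry headers.
    func maybeParseErrorBody(resp *http.Response) error {
        if (resp.Request != nil && resp.Request.Method == http.MethodHead) || resp.ContentLength == 0 {
            // No body to decode: report the status line only.
            return fmt.Errorf("unexpected status %q with empty body", resp.Status)
        }
        // ... decode the JSON error payload from resp.Body here ...
        return fmt.Errorf("unexpected status %q", resp.Status)
    }

    func main() {
        resp, err := http.Head("https://registry.example.com/v2/missing/blob")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 400 {
            fmt.Println(maybeParseErrorBody(resp))
        }
    }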
Milos Gajdos
3cb985cac0
Initialize proxy prometheus counters values to 0 (#4283) 2024-03-18 14:34:28 +00:00
Milos Gajdos
1e3de58231
Update go versions (#4303) 2024-03-18 14:08:00 +00:00
Milos Gajdos
7c7517493c
build(deps): bump github.com/go-jose/go-jose/v3 from 3.0.1 to 3.0.3 (#4297) 2024-03-17 10:38:34 +00:00
Ismail Alidzhikov
127fa7e057 proxy: Do not configure HTTP secret for proxy registry
Signed-off-by: Ismail Alidzhikov <i.alidjikov@gmail.com>
2024-03-15 18:27:08 +02:00
Ismail Alidzhikov
1cb89e3e0e Update go versions
Signed-off-by: Ismail Alidzhikov <i.alidjikov@gmail.com>
2024-03-15 10:57:53 +02:00
Milos Gajdos
3783a79518
build(deps): bump google.golang.org/protobuf from 1.31.0 to 1.33.0 (#4301) 2024-03-14 11:13:50 +00:00
dependabot[bot]
cb2b51cac9
build(deps): bump google.golang.org/protobuf from 1.31.0 to 1.33.0
Bumps google.golang.org/protobuf from 1.31.0 to 1.33.0.

---
updated-dependencies:
- dependency-name: google.golang.org/protobuf
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-13 23:16:02 +00:00
Milos Gajdos
d9815da9cb
Support redirects in gcs storage with default credentials (#4295) 2024-03-11 22:29:57 +00:00
Tadeusz Dudkiewicz
de450c903a update: support redirects in gcs storage with default credentials
Signed-off-by: Tadeusz Dudkiewicz <tadeusz.dudkiewicz@rtbhouse.com>
2024-03-11 21:05:03 +01:00
dependabot[bot]
1c5fe22dec
build(deps): bump github.com/go-jose/go-jose/v3 from 3.0.1 to 3.0.3
Bumps [github.com/go-jose/go-jose/v3](https://github.com/go-jose/go-jose) from 3.0.1 to 3.0.3.
- [Release notes](https://github.com/go-jose/go-jose/releases)
- [Changelog](https://github.com/go-jose/go-jose/blob/v3.0.3/CHANGELOG.md)
- [Commits](https://github.com/go-jose/go-jose/compare/v3.0.1...v3.0.3)

---
updated-dependencies:
- dependency-name: github.com/go-jose/go-jose/v3
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-03-07 23:01:05 +00:00
Milos Gajdos
663b430ccc
fix: typo (#4296) 2024-03-07 10:18:20 +00:00
guoguangwu
6465b4cd08 fix: typo
Signed-off-by: guoguangwu <guoguangwug@gmail.com>
2024-03-07 10:08:58 +08:00
Milos Gajdos
5c662eb1c2
Standardize OTEL error logging format to match application logs (#4292) 2024-03-05 17:22:26 +00:00
icefed
63eb22d74b
Fix: ‘autoRedirect’ hardcode ‘https’ scheme
Signed-off-by: icefed <zlwangel@gmail.com>
2024-03-05 20:50:09 +08:00
gotgelf
71a069dc38 Standardize OTEL error logging format to match application logs
Signed-off-by: gotgelf <gotgelf@gmail.com>
2024-03-05 07:22:10 +01:00
Milos Gajdos
51a72c2aef
[otel-tracing] Added Tracing to Base package (driver) (#4196) 2024-03-04 17:06:07 +00:00
gotgelf
f690b3ebe2 Added Open Telemetry Tracing to Filesystem package
Signed-off-by: gotgelf <gotgelf@gmail.com>
2024-03-04 13:31:22 +01:00
Milos Gajdos
95077fda37
fix: typo (#4290) 2024-03-04 09:03:44 +00:00
guoguangwu
a4918b67bb fix: typo
Signed-off-by: guoguangwu <guoguangwug@gmail.com>
2024-03-04 11:00:08 +08:00
Milos Gajdos
38beeee2c8
Update notifications.md (#4287) 2024-03-01 22:23:16 +00:00
Milos Gajdos
a2b608a15c
build(deps): bump codecov/codecov-action from 3 to 4 (#4271) 2024-03-01 21:27:07 +00:00
João Pereira
6a568c100f
Do not write manifests on HEAD requests (#4286) 2024-02-29 07:52:56 +00:00
Jaime Martinez
2763ba1eae
Do not write manifests on HEAD requests
Signed-off-by: Jaime Martinez <jmartinez@gitlab.com>
2024-02-29 11:16:11 +11:00
Chad Faragher
1c3d44eccd
Update notifications.md
_setup_ is a noun, _set up_ is the verb.

Signed-off-by: Chad Faragher <wyckster@hotmail.com>
2024-02-28 13:32:59 -05:00
Dimitar Kostadinov
6ca646caad Initialize proxy prometheus counters values to 0 to prevent gaps after registry restart
Signed-off-by: Dimitar Kostadinov <dimitar.kostadinov@sap.com>
2024-02-21 14:35:49 +02:00
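As a general illustration of the pattern (sketched here with prometheus/client_golang rather than the registry's own metrics wrapper, and with a made-up metric name), touching every expected label combination with Add(0) makes the series exist right after a restart:

    package main

    import "github.com/prometheus/client_golang/prometheus"

    var proxyRequests = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "registry_proxy_requests_total",
            Help: "Proxied requests, by outcome.",
        },
        []string{"outcome"},
    )

    func init() {
        prometheus.MustRegister(proxyRequests)
        // Pre-create each label value so the counters are exported as 0
        // instead of appearing only after the first hit/miss/error.
        for _, outcome := range []string{"hit", "miss", "error"} {
            proxyRequests.WithLabelValues(outcome).Add(0)
        }
    }

    func main() {}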
Milos Gajdos
62aa44edfd
Add a trademarks and docs license link (#4276) 2024-02-15 14:01:22 +07:00
oliver-goetz
1e8ea03173
Add support for Basic Authentication to proxyingRegistry
Signed-off-by: oliver-goetz <o.goetz@sap.com>
2024-02-07 03:08:12 +01:00
James Hewitt
5bebd152be
Add a trademarks and docs license link
Fixes #4264

Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2024-02-06 16:36:46 +00:00
Mikel Rychliski
041824555c Include headers when serving blob through proxy
In commit 17952924f3 we updated ServeBlob() to use an io.MultiWriter to
write simultaneously to the local store and the HTTP response.

However, copyContent was using a type assertion to only add headers if
the io.Writer was a http.ResponseWriter. Therefore, this change caused
us to stop sending the expected headers (i.e. Content-Length, Etag,
etc.) on the first request for a blob.

Resolve the issue by explicitly passing in http.Header and setting it
unconditionally.

Signed-off-by: Mikel Rychliski <mikel@mikelr.com>
2024-02-01 19:31:53 -05:00
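An illustrative Go sketch of the shape of that fix: the headers are passed in explicitly and set unconditionally instead of being recovered from the writer via a type assertion (names here are hypothetical):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/http/httptest"
        "strconv"
        "strings"
    )

    // copyContent streams blob data to w and, when headers are supplied,
    // sets response headers regardless of the concrete writer type.
    func copyContent(h http.Header, length int64, etag string, w io.Writer, r io.Reader) error {
        if h != nil {
            h.Set("Content-Length", strconv.FormatInt(length, 10))
            h.Set("Etag", etag)
            h.Set("Content-Type", "application/octet-stream")
        }
        _, err := io.Copy(w, r)
        return err
    }

    func main() {
        rec := httptest.NewRecorder()
        var local strings.Builder // stands in for the simultaneous local-store write

        blob := "hello"
        // Writing through an io.MultiWriter no longer loses the headers,
        // because they are passed explicitly rather than type-asserted.
        mw := io.MultiWriter(rec, &local)
        _ = copyContent(rec.Header(), int64(len(blob)), `"abc123"`, mw, strings.NewReader(blob))

        fmt.Println(rec.Header().Get("Content-Length"), local.String())
    }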
dependabot[bot]
939061d102
build(deps): bump codecov/codecov-action from 3 to 4
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 3 to 4.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-01 01:51:49 +00:00
Milos Gajdos
9b3eac8f08
build(deps): bump peter-evans/dockerhub-description from 3 to 4 (#4267) 2024-01-29 07:33:29 +07:00
dependabot[bot]
e5f5ff7a11
build(deps): bump peter-evans/dockerhub-description from 3 to 4
Bumps [peter-evans/dockerhub-description](https://github.com/peter-evans/dockerhub-description) from 3 to 4.
- [Release notes](https://github.com/peter-evans/dockerhub-description/releases)
- [Commits](https://github.com/peter-evans/dockerhub-description/compare/v3...v4)

---
updated-dependencies:
- dependency-name: peter-evans/dockerhub-description
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-26 02:05:37 +00:00
Milos Gajdos
6bc70e640d
build(deps): bump actions/upload-artifact from 4.1.0 to 4.3.0 (#4265) 2024-01-24 11:00:41 +07:00
dependabot[bot]
ee58e3438f
build(deps): bump actions/upload-artifact from 4.1.0 to 4.3.0
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.1.0 to 4.3.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v4.1.0...v4.3.0)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-24 01:42:26 +00:00
Milos Gajdos
945eed71e1
feat: Add HTTP2 for unencrypted HTTP (v3) (#4248) 2024-01-18 20:51:58 +07:00
Milos Gajdos
0b21cc06b0
refactor(storage/s3): remove redundant len check (#4259) 2024-01-18 17:29:46 +07:00
erezrokah
11f50c034e
feat: Add HTTP2 for unencrypted HTTP
Signed-off-by: erezrokah <erezrokah@users.noreply.github.com>
2024-01-17 20:59:02 +00:00
Eng Zer Jun
41161a6e12
refactor(storage/s3): remove redundant len check
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2024-01-17 18:27:05 +08:00
Milos Gajdos
01b4555d59
docs: add rendering hook and fix broken links (#4247) 2024-01-17 08:18:02 +07:00
Milos Gajdos
1611bd2fc4
chore: Migrate PR labeler config to v5 (#4258) 2024-01-17 08:03:46 +07:00
Erez Rokah
c78c156139
Update labeler.yml
Signed-off-by: Erez Rokah <erezrokah@users.noreply.github.com>
2024-01-16 18:55:32 +02:00
Erez Rokah
65c6a6d377
Update .github/labeler.yml
Co-authored-by: James Hewitt <james.hewitt@gmail.com>
Signed-off-by: Erez Rokah <erezrokah@users.noreply.github.com>
2024-01-16 18:54:54 +02:00
erezrokah
b1d1be8e87
chore: Migrate PR labeler config to v5
Signed-off-by: erezrokah <erezrokah@users.noreply.github.com>
2024-01-16 15:22:02 +00:00
Milos Gajdos
969bc4a125
chore: Remove area/config duplicate entry in labeler.yml (#4257) 2024-01-16 21:10:59 +07:00
erezrokah
a626871f12
chore: Sort entries
Signed-off-by: erezrokah <erezrokah@users.noreply.github.com>
2024-01-16 14:07:24 +00:00
erezrokah
d2c57396e0
chore: Remove area/config duplicate entry in labeler.yml
Signed-off-by: erezrokah <erezrokah@users.noreply.github.com>
2024-01-16 13:34:45 +00:00
Milos Gajdos
781d03682c
chore: Remove duplicate area/ci entry in PR labeler (#4256) 2024-01-16 20:28:17 +07:00
Erez Rokah
45cea887eb
chore: Remove duplicate area/ci entry in PR labeler
Signed-off-by: Erez Rokah <erezrokah@users.noreply.github.com>
2024-01-16 15:07:19 +02:00
Milos Gajdos
bf6f5c3f74
fix: add labeler action (#4213) 2024-01-16 17:23:39 +07:00
Milos Gajdos
dd32792bc0
fix: update Dockerfile version output (#4212) 2024-01-16 17:07:04 +07:00
Milos Gajdos
6926aea0ee
vendor: github.com/gorilla/handlers v1.5.2 (#4211) 2024-01-16 17:06:16 +07:00
Milos Gajdos
435d1b9483
remove deprecated ReadSeekCloser interfaces (#4245) 2024-01-15 19:44:57 +07:00
Milos Gajdos
0c13e046ae
build(deps): bump actions/upload-artifact from 3.0.0 to 4.1.0 (#4254) 2024-01-15 16:53:14 +07:00
dependabot[bot]
ef1db8ac26
build(deps): bump actions/upload-artifact from 3.0.0 to 4.1.0
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3.0.0 to 4.1.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v3...v4.1.0)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-15 09:40:46 +00:00
Wang Yan
88d854269f
build(deps): bump docker/bake-action from 2 to 4 (#4253) 2024-01-15 17:39:41 +08:00
David Karlsson
5e75227fb2 docs: fix broken links and improve link resolution
Update the formatting of links and add a Markdown render hook for
handling relative internal links. Cross-references between markdown
files are now resolved the same way in both GitHub and Hugo.

Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-01-14 21:33:55 +01:00
CrazyMax
6b14735dbf
ci: disable provenance when generating docs
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-01-12 12:35:51 +01:00
CrazyMax
f09bf31f3e
ci: handle provenance for built artifacts
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2024-01-12 12:35:51 +01:00
Wang Yan
14366a2dff
fix: load gcs credentials and client inside DriverConstructor (#4218) 2024-01-12 18:32:28 +08:00
dependabot[bot]
f4a3149a2f
build(deps): bump docker/bake-action from 2 to 4
Bumps [docker/bake-action](https://github.com/docker/bake-action) from 2 to 4.
- [Release notes](https://github.com/docker/bake-action/releases)
- [Commits](https://github.com/docker/bake-action/compare/v2...v4)

---
updated-dependencies:
- dependency-name: docker/bake-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-12 10:03:05 +00:00
Wang Yan
9dfead3d9a
build(deps): bump docker/setup-buildx-action from 2 to 3 (#4230) 2024-01-12 18:01:57 +08:00
Wang Yan
e780c8bb24
update to alpine 3.19 (#4210) 2024-01-11 14:54:10 +08:00
Wang Yan
9d04a0fcd1
build(deps): bump docker/metadata-action from 4 to 5 (#4240) 2024-01-11 14:48:06 +08:00
Sebastiaan van Stijn
5033279355
remove deprecated ReadSeekCloser interfaces
These were deprecated in 019ead86f5 and
d71ad5b3a6, and are no longer in use in
our code.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-06 12:08:21 +01:00
dependabot[bot]
5c585db74e
build(deps): bump docker/metadata-action from 4 to 5
Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 4 to 5.
- [Release notes](https://github.com/docker/metadata-action/releases)
- [Upgrade guide](https://github.com/docker/metadata-action/blob/master/UPGRADE.md)
- [Commits](https://github.com/docker/metadata-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: docker/metadata-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-04 01:55:09 +00:00
Milos Gajdos
1d2895f2bf
build(deps): bump docker/login-action from 2 to 3 (#4239) 2024-01-03 08:49:58 +00:00
dependabot[bot]
5c5d8d3ddd
build(deps): bump docker/login-action from 2 to 3
Bumps [docker/login-action](https://github.com/docker/login-action) from 2 to 3.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-03 02:03:24 +00:00
Milos Gajdos
2fcf2091e2
build(deps): bump actions/upload-pages-artifact from 2 to 3 (#4234) 2024-01-02 19:55:33 +00:00
David Karlsson
fc992dfef7 build(deps): bump actions/upload-pages-artifact from 2 to 3
Fixes artifact fetching failure by ensuring compatibility with actions/artifact@v4

Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
2024-01-02 13:17:06 +01:00
Milos Gajdos
e9995cdb3f
chore: use no-cache-filter for outdated stage (#4216) 2024-01-01 11:33:33 +00:00
dependabot[bot]
87ae3eb8d4
build(deps): bump fossa-contrib/fossa-action from 2 to 3
Bumps [fossa-contrib/fossa-action](https://github.com/fossa-contrib/fossa-action) from 2 to 3.
- [Release notes](https://github.com/fossa-contrib/fossa-action/releases)
- [Changelog](https://github.com/fossa-contrib/fossa-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/fossa-contrib/fossa-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: fossa-contrib/fossa-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-01 01:54:25 +00:00
dependabot[bot]
053fd16ae9
build(deps): bump ossf/scorecard-action from 2.0.6 to 2.3.1
Bumps [ossf/scorecard-action](https://github.com/ossf/scorecard-action) from 2.0.6 to 2.3.1.
- [Release notes](https://github.com/ossf/scorecard-action/releases)
- [Changelog](https://github.com/ossf/scorecard-action/blob/main/RELEASE.md)
- [Commits](99c53751e0...0864cf1902)

---
updated-dependencies:
- dependency-name: ossf/scorecard-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-01 01:54:22 +00:00
dependabot[bot]
f234296646
build(deps): bump docker/setup-buildx-action from 2 to 3
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 2 to 3.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-01 01:54:07 +00:00
Sebastiaan van Stijn
4382e4bb20
chore: generate authors and update mailmap (#4215) 2023-12-31 22:07:25 +01:00
Milos Gajdos
a808a5bb0e
build(deps): bump actions/configure-pages from 3 to 4 (#4227) 2023-12-30 09:54:08 +00:00
Milos Gajdos
ec0a477324
build(deps): bump actions/setup-go from 3 to 5 (#4228) 2023-12-30 09:53:57 +00:00
Milos Gajdos
51a7c2bdf8
build(deps): bump actions/checkout from 3 to 4 (#4226) 2023-12-30 09:53:48 +00:00
Milos Gajdos
8ab33dd8ad
build(deps): bump actions/deploy-pages from 2 to 4 (#4224) 2023-12-30 09:53:37 +00:00
Milos Gajdos
f73bcf0700
build(deps): bump github/codeql-action from 1.0.26 to 3.22.12 (#4225) 2023-12-30 00:02:11 +00:00
dependabot[bot]
78a6be85ee
build(deps): bump actions/setup-go from 3 to 5
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 3 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v3...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-29 23:06:07 +00:00
dependabot[bot]
f0a669540e
build(deps): bump actions/configure-pages from 3 to 4
Bumps [actions/configure-pages](https://github.com/actions/configure-pages) from 3 to 4.
- [Release notes](https://github.com/actions/configure-pages/releases)
- [Commits](https://github.com/actions/configure-pages/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/configure-pages
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-29 23:06:03 +00:00
dependabot[bot]
38a2d53c7b
build(deps): bump actions/checkout from 3 to 4
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-29 23:06:01 +00:00
dependabot[bot]
ba702e1d7c
build(deps): bump github/codeql-action from 1.0.26 to 3.22.12
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 1.0.26 to 3.22.12.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Commits](https://github.com/github/codeql-action/compare/v1.0.26...v3.22.12)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-29 23:05:55 +00:00
dependabot[bot]
af2fa0ff4d
build(deps): bump actions/deploy-pages from 2 to 4
Bumps [actions/deploy-pages](https://github.com/actions/deploy-pages) from 2 to 4.
- [Release notes](https://github.com/actions/deploy-pages/releases)
- [Commits](https://github.com/actions/deploy-pages/compare/v2...v4)

---
updated-dependencies:
- dependency-name: actions/deploy-pages
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-29 23:05:37 +00:00
Milos Gajdos
7a9e0ea014
chore: dependabot to keep gha up to date (#4217) 2023-12-29 23:05:03 +00:00
Milos Gajdos
2cc6bd73e6
vendor: github.com/mitchellh/mapstructure v1.5.0 (#4222) 2023-12-29 23:04:28 +00:00
CrazyMax
587f9e286d
chore: generate authors
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2023-12-29 12:13:49 +01:00
CrazyMax
befbaa680c
chore: update mailmap
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2023-12-29 12:13:00 +01:00
Milos Gajdos
316e4099b1
fix: add missing skip in s3 driver test (#4219) 2023-12-27 13:21:12 +00:00
Sebastiaan van Stijn
bdfa8324a0
vendor: github.com/mitchellh/mapstructure v1.5.0
Note that this repository will be sunset, and the "endorsed" fork will be
maintained by "go-viper". Updating the dependency to the latest version in
preparation.

full diff: https://github.com/mitchellh/mapstructure/compare/v1.1.2...v1.5.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-12-27 12:28:10 +01:00
Paul Meyer
5bd7f25880 fix: load gcs credentials and client inside DriverConstructor
Signed-off-by: Paul Meyer <49727155+katexochen@users.noreply.github.com>
2023-12-27 11:22:27 +01:00
Paul Meyer
6908e0d5fa fix: add missing skip in s3 driver test
Signed-off-by: Paul Meyer <49727155+katexochen@users.noreply.github.com>
2023-12-26 13:55:18 +01:00
CrazyMax
b2bd724b52
chore: sort and fix mailmap
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2023-12-24 11:47:04 +01:00
Milos Gajdos
ea02d9c42e
fix: add labeler action
Whilst we had added labels to the GHA config, we forgot to add the actual
action doing the labeling.

Co-authored-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-12-23 20:39:31 +00:00
CrazyMax
7838a369a3
chore: dependabot to keep gha up to date
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2023-12-23 15:10:52 +01:00
CrazyMax
55e91b39e4
chore: use no-cache-filter for outdated stage
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
2023-12-23 14:41:23 +01:00
Milos Gajdos
5bd45551b4
fix: update Dockerfile version output
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-12-22 09:36:48 +00:00
Milos Gajdos
012adcae7d
feat: add PR labeler (#4205) 2023-12-22 09:35:39 +00:00
Sebastiaan van Stijn
4f9fe183c3
vendor: github.com/gorilla/handlers v1.5.2
full diff: https://github.com/gorilla/handlers/compare/v1.5.1...v1.5.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-12-22 10:23:09 +01:00
Milos Gajdos
e96fce1703
feat: add PR labeler
This is an initial commit to kickstart a conversation about how we want
the new PRs to be labeled. TBC.

Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-12-22 09:22:15 +00:00
Sebastiaan van Stijn
5f397b877d
update to alpine 3.19
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-12-22 10:06:51 +01:00
Milos Gajdos
fb6ccc33d1
update: readme cleanup and fixes (#4208) 2023-12-21 22:18:07 +00:00
Milos Gajdos
e29a5c8e68
update: readme cleanup and fixes
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-12-21 22:05:56 +00:00
Milos Gajdos
c8f17009c4
docs: remove legacy kramdown options from link (#4209) 2023-12-21 13:55:13 +00:00
Steven Kalt
0e0d74b037
docs: remove legacy kramdown options from link
I was reading https://distribution.github.io/distribution/recipes/mirror/#gotcha when I noticed some unexpected annotations after the "fair use policy" link. According to [Stack Overflow](https://stackoverflow.com/a/4705645/6571327), these are kramdown options that the current hugo documentation site isn't respecting. I searched the hugo docs and couldn't find an easy way to preserve `rel="noopener" target="_blank"` behavior, so I removed the annotation.

Signed-off-by: Steven Kalt <SKalt@users.noreply.github.com>
2023-12-21 08:00:21 -05:00
Milos Gajdos
d830076a49
fix: build status badge (#4207) 2023-12-20 16:28:29 +00:00
Milos Gajdos
2306ab8aed
feat: add GH issue template (#4206) 2023-12-20 16:15:20 +00:00
Milos Gajdos
5992903182
fix: build status badge
At some point we renamed the build workflow from CI to build but forgot
to update the build status badge link in the readme. This fixes it.

Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-12-20 15:34:13 +00:00
Milos Gajdos
535b65869b
feat: add GH issue template
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-12-20 14:24:15 +00:00
Milos Gajdos
c5a887217e
version: export getter functions (#4204) 2023-12-19 23:24:35 +00:00
Hayley Swimelar
ec617ca6d2
update: set User-Agent header in GCS storage driver (#4203) 2023-12-19 10:04:10 -08:00
Cory Snider
a74cacff04 version: export getter functions
Future-proof the version package's exported interface by only making the
data available through getter functions. This affords us the flexibility
to e.g. implement them in terms of "runtime/debug".ReadBuildInfo() in
the future.

Signed-off-by: Cory Snider <csnider@mirantis.com>
2023-12-19 13:02:44 -05:00
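A minimal sketch of the getter-based shape described above (identifier names are assumptions, not the actual package contents):

    package version

    // These values are stamped at build time, e.g. via
    //   go build -ldflags "-X <module>/version.version=v3.0.0-beta.1"
    var (
        version  = "v3.0.0-beta.1+unknown"
        revision = ""
        pkg      = "example.com/registry"
    )

    // Version returns the version of the binary.
    func Version() string { return version }

    // Revision returns the VCS revision the binary was built from.
    func Revision() string { return revision }

    // Package returns the module path used for version stamping.
    func Package() string { return pkg }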
Cory Snider
ab27c9d5f1 version: use go list -m
It appears that the value of Package is intended to be what is nowadays
called the module path, not the path to the version package. This also
fixes the issue of the version file being regenerated incorrectly under
shell redirection as the go list command no longer attempts to parse .go
files under the version package.

    $ ./version.sh > version.go
    version.go:1:1: expected 'package', found 'EOF'

Signed-off-by: Cory Snider <csnider@mirantis.com>
2023-12-19 13:00:22 -05:00
Milos Gajdos
d59a570c3d
update: set User-Agent header in GCS storage driver
Signed-off-by: Milos Gajdos <milosthegajdos@gmail.com>
2023-12-19 14:39:13 +00:00
124 changed files with 4312 additions and 1409 deletions

@@ -0,0 +1,23 @@
on: [pull_request]
jobs:
  builds:
    name: Builds
    runs-on: ubuntu-latest
    strategy:
      matrix:
        go_versions: [ '1.21', '1.22' ]
      fail-fast: false
    steps:
      - uses: actions/checkout@v3
      - name: Set up Go
        uses: actions/setup-go@v3
        with:
          go-version: '${{ matrix.go_versions }}'
      - name: Build binary
        run: make
      - name: Check dirty suffix
        run: if [[ $(make version) == *"dirty"* ]]; then echo "Version has dirty suffix" && exit 1; fi

@@ -0,0 +1,20 @@
on: [pull_request]
jobs:
  dco:
    name: DCO
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Setup Go
        uses: actions/setup-go@v3
        with:
          go-version: '1.22'
      - name: Run commit format checker
        uses: https://git.frostfs.info/TrueCloudLab/dco-go@v3
        with:
          from: 'origin/${{ github.event.pull_request.base.ref }}'

@@ -0,0 +1,21 @@
on: [pull_request]
jobs:
  vulncheck:
    name: Vulncheck
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Setup Go
        uses: actions/setup-go@v3
        with:
          go-version: '1.22'
      - name: Install govulncheck
        run: go install golang.org/x/vuln/cmd/govulncheck@latest
      - name: Run govulncheck
        run: govulncheck ./...

.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file
@@ -0,0 +1,48 @@
name: Bug report
description: Create a report to help us improve
labels:
  - kind/bug
body:
  - type: markdown
    attributes:
      value: |
        Thank you for taking the time to report a bug!
        If this is a security issue please report it to the [Distributions Security Mailing List](mailto:cncf-distribution-security@lists.cncf.io).
  - type: textarea
    id: description
    attributes:
      label: Description
      description: Please give a clear and concise description of the bug
    validations:
      required: true
  - type: textarea
    id: repro
    attributes:
      label: Reproduce
      description: Steps to reproduce the bug
      placeholder: |
        1. start registry version X ...
        2. `docker push image:tag` ...
    validations:
      required: true
  - type: textarea
    id: expected
    attributes:
      label: Expected behavior
      description: What is the expected behavior?
      placeholder: |
        E.g. "registry returns an incorrect API error"
  - type: textarea
    id: version
    attributes:
      label: registry version
      description: Output of `registry --version`. Alternatively tell us the docker image tag.
    validations:
      required: true
  - type: textarea
    id: additional
    attributes:
      label: Additional Info
      description: Additional info you want to provide such as logs, system info, environment, etc.
    validations:
      required: false

.github/ISSUE_TEMPLATE/config.yml vendored Normal file
@@ -0,0 +1,8 @@
blank_issues_enabled: false
contact_links:
  - name: Security and Vulnerabilities
    url: https://github.com/distribution/distribution/blob/main/SECURITY.md
    about: Please report any security issues or vulnerabilities responsibly to the distribution maintainers team. Please do not use the public issue tracker.
  - name: Questions and Discussions
    url: https://github.com/distribution/distribution/discussions/new/choose
    about: Use Github Discussions to ask questions and/or open discussion topics.

@@ -0,0 +1,12 @@
name: Feature request
description: Missing functionality? Come tell us about it!
labels:
  - kind/feature
body:
  - type: textarea
    id: description
    attributes:
      label: Description
      description: What is the feature you want to see?
    validations:
      required: true

.github/dependabot.yml vendored Normal file
@@ -0,0 +1,8 @@
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "daily"
    labels:
      - "dependencies"

.github/labeler.yml vendored Normal file
@@ -0,0 +1,61 @@
area/api:
  - changed-files:
      - any-glob-to-any-file:
          - registry/api/**
          - registry/handlers/**
area/auth:
  - changed-files:
      - any-glob-to-any-file:
          - registry/auth/**
area/build:
  - changed-files:
      - any-glob-to-any-file:
          - Makefile
          - Dockerfile
          - docker-bake.hcl
          - dockerfiles/**
area/cache:
  - changed-files:
      - any-glob-to-any-file:
          - registry/storage/cache/**
area/ci:
  - changed-files:
      - any-glob-to-any-file:
          - .github/**
          - tests/**
          - testutil/**
area/config:
  - changed-files:
      - any-glob-to-any-file:
          - configuration/**
area/docs:
  - changed-files:
      - any-glob-to-any-file:
          - README.md
          - docs/**/*.md
area/proxy:
  - changed-files:
      - any-glob-to-any-file:
          - registry/proxy/**
area/storage:
  - changed-files:
      - any-glob-to-any-file:
          - registry/storage/**
area/storage/azure:
  - changed-files:
      - any-glob-to-any-file:
          - registry/storage/driver/azure/**
area/storage/gcs:
  - changed-files:
      - any-glob-to-any-file:
          - registry/storage/driver/gcs/**
area/storage/s3:
  - changed-files:
      - any-glob-to-any-file:
          - registry/storage/driver/s3-aws/**
dependencies:
  - changed-files:
      - any-glob-to-any-file:
          - vendor/**
          - go.mod
          - go.sum

@@ -27,18 +27,18 @@ jobs:
       fail-fast: false
       matrix:
         go:
-          - 1.20.12
-          - 1.21.5
+          - 1.21.8
+          - 1.22.1
         target:
           - test-coverage
           - test-cloud-storage
     steps:
       -
         name: Checkout
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
       -
         name: Set up Go
-        uses: actions/setup-go@v3
+        uses: actions/setup-go@v5
         with:
           go-version: ${{ matrix.go }}
       -
@@ -47,7 +47,7 @@ jobs:
           make ${{ matrix.target }}
       -
         name: Codecov
-        uses: codecov/codecov-action@v3
+        uses: codecov/codecov-action@v4
         with:
           directory: ./
@@ -62,13 +62,13 @@ jobs:
     steps:
       -
         name: Checkout
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
       -
         name: Docker meta
         id: meta
-        uses: docker/metadata-action@v4
+        uses: docker/metadata-action@v5
         with:
           images: |
             ${{ env.DOCKERHUB_SLUG }}
@@ -94,43 +94,53 @@ jobs:
             org.opencontainers.image.description=The toolkit to pack, ship, store, and distribute container content
       -
         name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v2
+        uses: docker/setup-buildx-action@v3
       -
         name: Login to DockerHub
         if: github.event_name != 'pull_request'
-        uses: docker/login-action@v2
+        uses: docker/login-action@v3
         with:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}
       -
         name: Log in to GitHub Container registry
         if: github.event_name != 'pull_request'
-        uses: docker/login-action@v2
+        uses: docker/login-action@v3
         with:
           registry: ghcr.io
           username: ${{ github.actor }}
           password: ${{ secrets.GITHUB_TOKEN }}
       -
         name: Build artifacts
-        uses: docker/bake-action@v2
+        uses: docker/bake-action@v4
         with:
           targets: artifact-all
       -
-        name: Move artifacts
+        name: Rename provenance
+        run: |
+          for pdir in ./bin/*/; do
+            (
+              cd "$pdir"
+              binname=$(find . -name '*.tar.gz')
+              filename=$(basename "${binname%.tar.gz}")
+              mv "provenance.json" "${filename}.provenance.json"
+            )
+          done
+      -
+        name: Move and list artifacts
         run: |
           mv ./bin/**/* ./bin/
+          tree -nh ./bin
       -
         name: Upload artifacts
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4.3.0
         with:
           name: registry
           path: ./bin/*
           if-no-files-found: error
       -
         name: Build image
-        uses: docker/bake-action@v2
+        uses: docker/bake-action@v4
         with:
           files: |
             ./docker-bake.hcl
@@ -145,6 +155,7 @@ jobs:
           draft: true
           files: |
             bin/*.tar.gz
+            bin/*.provenance.json
             bin/*.sha256
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

@@ -34,7 +34,7 @@ jobs:
     steps:
       -
         name: Checkout
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
         with:
           fetch-depth: 2
       -
@@ -44,12 +44,12 @@ jobs:
           git checkout HEAD^2
       -
         name: Initialize CodeQL
-        uses: github/codeql-action/init@v2
+        uses: github/codeql-action/init@v3.22.12
         with:
           languages: ${{ matrix.language }}
       -
         name: Autobuild
-        uses: github/codeql-action/autobuild@v2
+        uses: github/codeql-action/autobuild@v3.22.12
       -
         name: Perform CodeQL Analysis
-        uses: github/codeql-action/analyze@v2
+        uses: github/codeql-action/analyze@v3.22.12

@@ -17,12 +17,12 @@ jobs:
     steps:
       -
         name: Checkout
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
       -
         name: Build image
-        uses: docker/bake-action@v2
+        uses: docker/bake-action@v4
         with:
           targets: image-local
       -
@@ -49,7 +49,7 @@ jobs:
         run: mkdir -p .out/ && mv {report.html,junit.xml} .out/
       -
         name: Upload test results
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4.3.0
         with:
           name: oci-test-results-${{ github.sha }}
           path: .out/

@@ -27,7 +27,7 @@ jobs:
         uses: actions/checkout@v4
       -
         name: Update Docker Hub README
-        uses: peter-evans/dockerhub-description@v3
+        uses: peter-evans/dockerhub-description@v4
         with:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}

@@ -26,27 +26,26 @@ jobs:
         uses: actions/checkout@v4
       - name: Setup Pages
         id: pages
-        uses: actions/configure-pages@v3
+        uses: actions/configure-pages@v4
       - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v2
+        uses: docker/setup-buildx-action@v3
       - name: Build docs
-        uses: docker/bake-action@v3
+        uses: docker/bake-action@v4
         with:
           files: |
             docker-bake.hcl
           targets: docs-export
+          provenance: false
           set: |
             *.cache-from=type=gha,scope=docs
             *.cache-to=type=gha,scope=docs,mode=max
-        env:
-          DOCS_BASEURL: ${{ steps.pages.outputs.base_path }}
       - name: Fix permissions
         run: |
           chmod -c -R +rX "./build/docs" | while read line; do
             echo "::warning title=Invalid file permissions automatically fixed::$line"
           done
       - name: Upload Pages artifact
-        uses: actions/upload-pages-artifact@v2
+        uses: actions/upload-pages-artifact@v3
         with:
           path: ./build/docs
@@ -70,4 +69,4 @@ jobs:
     steps:
       - name: Deploy to GitHub Pages
         id: deployment
-        uses: actions/deploy-pages@v2 # or the latest "vX.X.X" version tag for this action
+        uses: actions/deploy-pages@v4 # or the latest "vX.X.X" version tag for this action

@@ -20,12 +20,12 @@ jobs:
     steps:
       -
         name: Checkout
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
       -
         name: Build image
-        uses: docker/bake-action@v2
+        uses: docker/bake-action@v4
         with:
           targets: image-local
       -
@@ -42,7 +42,7 @@ jobs:
     steps:
       -
         name: Checkout
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
       -

@@ -17,9 +17,9 @@ jobs:
     steps:
       - name: Checkout code
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
       - name: Run FOSSA scan and upload build data
-        uses: fossa-contrib/fossa-action@v2
+        uses: fossa-contrib/fossa-action@v3
         with:
           fossa-api-key: cac3dc8d4f2ba86142f6c0f2199a160f

.github/workflows/label.yaml vendored Normal file
@@ -0,0 +1,19 @@
name: Pull Request Labeler
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
on:
  pull_request_target:
jobs:
  labeler:
    permissions:
      contents: read
      pull-requests: write
    runs-on: ubuntu-latest
    steps:
      - uses: actions/labeler@v5
        with:
          dot: true

@@ -22,12 +22,12 @@ jobs:
     steps:
       - name: "Checkout code"
-        uses: actions/checkout@a12a3943b4bdde767164f792f33f40b04645d846 # tag=v3.0.0
+        uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # tag=v4.1.1
         with:
           persist-credentials: false
       - name: "Run analysis"
-        uses: ossf/scorecard-action@99c53751e09b9529366343771cc321ec74e9bd3d # tag=v2.0.6
+        uses: ossf/scorecard-action@0864cf19026789058feabb7e87baa5f140aac736 # tag=v2.3.1
         with:
           results_file: results.sarif
           results_format: sarif
@@ -46,7 +46,7 @@ jobs:
       # Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
       # format to the repository Actions tab.
       - name: "Upload artifact"
-        uses: actions/upload-artifact@6673cd052c4cd6fcf4b4e6e60ea986c889389535 # tag=v3.0.0
+        uses: actions/upload-artifact@26f96dfa697d77e81fd5907df203aa23a56210a8 # tag=v4.3.0
         with:
           name: SARIF file
           path: results.sarif
@@ -54,7 +54,7 @@ jobs:
       # Upload the results to GitHub's code scanning dashboard.
       - name: "Upload to code-scanning"
-        uses: github/codeql-action/upload-sarif@5f532563584d71fdef14ee64d17bafb34f751ce5 # tag=v1.0.26
+        uses: github/codeql-action/upload-sarif@1500a131381b66de0c52ac28abb13cd79f4b7ecc # tag=v2.22.12
         with:
           sarif_file: results.sarif

@@ -29,7 +29,7 @@ jobs:
     steps:
       -
         name: Checkout
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
       -
         name: Run
         run: |

@@ -6,7 +6,7 @@ linters:
     - goimports
     - revive
     - ineffassign
-    - vet
+    - govet
     - unused
     - misspell
     - bodyclose
@@ -22,7 +22,7 @@ linters-settings:
       - name: unused-parameter
         disabled: true

-run:
+issues:
   deadline: 2m
-  skip-dirs:
+  exclude-dirs:
     - vendor

224
.mailmap
View file

@ -1,32 +1,194 @@
-Stephen J Day <stephen.day@docker.com> Stephen Day <stevvooe@users.noreply.github.com>
-Stephen J Day <stephen.day@docker.com> Stephen Day <stevvooe@gmail.com>
-Olivier Gambier <olivier@docker.com> Olivier Gambier <dmp42@users.noreply.github.com>
-Brian Bland <brian.bland@docker.com> Brian Bland <r4nd0m1n4t0r@gmail.com>
-Brian Bland <brian.bland@docker.com> Brian Bland <brian.t.bland@gmail.com>
-Josh Hawn <josh.hawn@docker.com> Josh Hawn <jlhawn@berkeley.edu>
-Richard Scothern <richard.scothern@docker.com> Richard <richard.scothern@gmail.com>
-Richard Scothern <richard.scothern@docker.com> Richard Scothern <richard.scothern@gmail.com>
-Andrew Meredith <andymeredith@gmail.com> Andrew Meredith <kendru@users.noreply.github.com>
-harche <p.harshal@gmail.com> harche <harche@users.noreply.github.com>
-Jessie Frazelle <jessie@docker.com> <jfrazelle@users.noreply.github.com>
-Sharif Nassar <sharif@mrwacky.com> Sharif Nassar <mrwacky42@users.noreply.github.com>
-Sven Dowideit <SvenDowideit@home.org.au> Sven Dowideit <SvenDowideit@users.noreply.github.com>
-Vincent Giersch <vincent.giersch@ovh.net> Vincent Giersch <vincent@giersch.fr>
-davidli <wenquan.li@hp.com> davidli <wenquan.li@hpe.com>
-Omer Cohen <git@omer.io> Omer Cohen <git@omerc.net>
-Eric Yang <windfarer@gmail.com> Eric Yang <Windfarer@users.noreply.github.com>
-Nikita Tarasov <nikita@mygento.ru> Nikita <luckyraul@users.noreply.github.com>
-Yu Wang <yuwa@microsoft.com> yuwaMSFT2 <yuwa@microsoft.com>
Aaron Lehmann <alehmann@netflix.com>
Aaron Lehmann <alehmann@netflix.com> <aaron.lehmann@docker.com>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.akihiro@lab.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.kyoto@gmail.com>
Alexander Morozov <lk4d4math@gmail.com>
Alexander Morozov <lk4d4math@gmail.com> <lk4d4@docker.com>
Anders Ingemann <aim@orbit.online>
Andrew Meredith <andymeredith@gmail.com>
Andrew Meredith <andymeredith@gmail.com> <kendru@users.noreply.github.com>
Andrey Smirnov <andrey.smirnov@siderolabs.com>
Andrii Soldatenko <andrii.soldatenko@gmail.com>
Andrii Soldatenko <andrii.soldatenko@gmail.com> <andrii.soldatenko@dynatrace.com>
Anthony Ramahay <thewolt@gmail.com>
Antonio Murdaca <antonio.murdaca@gmail.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <amurdaca@redhat.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <me@runcom.ninja>
Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@linux.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@redhat.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@users.noreply.github.com>
Austin Vazquez <macedonv@amazon.com>
Benjamin Schanzel <benjamin.schanzel@bmw.de>
Brian Bland <brian.t.bland@gmail.com>
Brian Bland <brian.t.bland@gmail.com> <brian.bland@docker.com>
Brian Bland <brian.t.bland@gmail.com> <r4nd0m1n4t0r@gmail.com>
Chad Faragher <wyckster@hotmail.com>
Cory Snider <csnider@mirantis.com>
CrazyMax <github@crazymax.dev>
CrazyMax <github@crazymax.dev> <1951866+crazy-max@users.noreply.github.com>
CrazyMax <github@crazymax.dev> <crazy-max@users.noreply.github.com>
Cristian Staretu <cristian.staretu@gmail.com>
Cristian Staretu <cristian.staretu@gmail.com> <unclejack@users.noreply.github.com>
Cristian Staretu <cristian.staretu@gmail.com> <unclejacksons@gmail.com>
Daniel Nephin <dnephin@gmail.com>
Daniel Nephin <dnephin@gmail.com> <dnephin@docker.com>
David Karlsson <david.karlsson@docker.com>
David Karlsson <david.karlsson@docker.com> <35727626+dvdksn@users.noreply.github.com>
David Wu <dwu7401@gmail.com>
David Wu <dwu7401@gmail.com> <david.wu@docker.com>
Derek McGowan <derek@mcg.dev>
Derek McGowan <derek@mcg.dev> <derek@mcgstyle.net>
Dimitar Kostadinov <dimitar.kostadinov@sap.com>
Doug Davis <dug@us.ibm.com>
Doug Davis <dug@us.ibm.com> <duglin@users.noreply.github.com>
Emmanuel Ferdman <emmanuelferdman@gmail.com>
Eng Zer Jun <engzerjun@gmail.com>
Eric Yang <windfarer@gmail.com>
Eric Yang <windfarer@gmail.com> <Windfarer@users.noreply.github.com>
Eric Yang <windfarer@gmail.com> <qizhao.yang@daocloud.io>
Erica Windisch <erica@windisch.us>
Erica Windisch <erica@windisch.us> <eric@windisch.us>
Guillaume J. Charmes <charmes.guillaume@gmail.com>
Guillaume J. Charmes <charmes.guillaume@gmail.com> <guillaume.charmes@dotcloud.com>
Guillaume J. Charmes <charmes.guillaume@gmail.com> <guillaume@charmes.net>
Guillaume J. Charmes <charmes.guillaume@gmail.com> <guillaume@docker.com>
Guillaume J. Charmes <charmes.guillaume@gmail.com> <guillaume@dotcloud.com>
Hayley Swimelar <hswimelar@gmail.com>
Ismail Alidzhikov <i.alidjikov@gmail.com>
Jaime Martinez <jmartinez@gitlab.com>
James Hewitt <james.hewitt@uk.ibm.com>
Jessica Frazelle <jess@oxide.computer>
Jessica Frazelle <jess@oxide.computer> <acidburn@docker.com>
Jessica Frazelle <jess@oxide.computer> <acidburn@google.com>
Jessica Frazelle <jess@oxide.computer> <acidburn@microsoft.com>
Jessica Frazelle <jess@oxide.computer> <jess@docker.com>
Jessica Frazelle <jess@oxide.computer> <jess@mesosphere.com>
Jessica Frazelle <jess@oxide.computer> <jessfraz@google.com>
Jessica Frazelle <jess@oxide.computer> <jfrazelle@users.noreply.github.com>
Jessica Frazelle <jess@oxide.computer> <me@jessfraz.com>
Jessica Frazelle <jess@oxide.computer> <princess@docker.com>
Joao Fernandes <joaofnfernandes@gmail.com>
Joao Fernandes <joaofnfernandes@gmail.com> <joao.fernandes@docker.com>
João Pereira <484633+joaodrp@users.noreply.github.com>
Joffrey F <joffrey@docker.com>
Joffrey F <joffrey@docker.com> <f.joffrey@gmail.com>
Joffrey F <joffrey@docker.com> <joffrey@dotcloud.com>
Johan Euphrosine <proppy@google.com>
Johan Euphrosine <proppy@google.com> <proppy@aminche.com>
John Howard <github@lowenna.com>
John Howard <github@lowenna.com> <jhoward@microsoft.com>
Josh Hawn <jlhawn@berkeley.edu>
Josh Hawn <jlhawn@berkeley.edu> <josh.hawn@docker.com>
Joyce Brum <joycebrumu.u@gmail.com>
Joyce Brum <joycebrumu.u@gmail.com> <joycebrum@google.com>
Justin Cormack <justin.cormack@docker.com>
Justin Cormack <justin.cormack@docker.com> <justin.cormack@unikernel.com>
Justin Cormack <justin.cormack@docker.com> <justin@specialbusservice.com>
Kirat Singh <kirat.singh@gmail.com>
Kirat Singh <kirat.singh@gmail.com> <kirat.singh@beacon.io>
Kirat Singh <kirat.singh@gmail.com> <kirat.singh@wsq.io>
Kyle Squizzato <ksquizz@gmail.com>
Liang Zheng <zhengliang0901@gmail.com>
Luca Bruno <lucab@debian.org>
Luca Bruno <lucab@debian.org> <luca.bruno@coreos.com>
Mahmoud Kandil <47168819+MahmoudKKandil@users.noreply.github.com>
Manish Tomar <manish.tomar@docker.com>
Manish Tomar <manish.tomar@docker.com> <manishtomar@users.noreply.github.com>
Maria Bermudez <bermudez.mt@gmail.com>
Maria Bermudez <bermudez.mt@gmail.com> <bermudezmt@users.noreply.github.com>
Markus Thömmes <markusthoemmes@me.com>
Matt Linville <matt@linville.me>
Matt Linville <matt@linville.me> <misty@apache.org>
Matt Linville <matt@linville.me> <misty@docker.com>
Michael Crosby <crosbymichael@gmail.com>
Michael Crosby <crosbymichael@gmail.com> <crosby.michael@gmail.com>
Michael Crosby <crosbymichael@gmail.com> <michael@crosbymichael.com>
Michael Crosby <crosbymichael@gmail.com> <michael@docker.com>
Michael Crosby <crosbymichael@gmail.com> <michael@thepasture.io>
Michal Minar <miminar@redhat.com>
Michal Minar <miminar@redhat.com> Michal Minář <miminar@redhat.com>
Mike Brown <brownwm@us.ibm.com>
Mike Brown <brownwm@us.ibm.com> <mikebrow@users.noreply.github.com>
Mikel Rychliski <mikel@mikelr.com>
Milos Gajdos <milosthegajdos@gmail.com>
Milos Gajdos <milosthegajdos@gmail.com> <1392526+milosgajdos@users.noreply.github.com>
Milos Gajdos <milosthegajdos@gmail.com> <milosgajdos83@gmail.com>
Nikita Tarasov <nikita@mygento.ru>
Nikita Tarasov <nikita@mygento.ru> <luckyraul@users.noreply.github.com>
Oleg Bulatov <oleg@bulatov.me>
Oleg Bulatov <oleg@bulatov.me> <obulatov@redhat.com>
Olivier Gambier <olivier@docker.com>
Olivier Gambier <olivier@docker.com> <dmp42@users.noreply.github.com>
Omer Cohen <git@omer.io>
Omer Cohen <git@omer.io> <git@omerc.net>
Paul Meyer <49727155+katexochen@users.noreply.github.com>
Per Lundberg <perlun@gmail.com>
Per Lundberg <perlun@gmail.com> <per.lundberg@ecraft.com>
Peter Dave Hello <hsu@peterdavehello.org>
Peter Dave Hello <hsu@peterdavehello.org> <PeterDaveHello@users.noreply.github.com>
Phil Estes <estesp@gmail.com>
Phil Estes <estesp@gmail.com> <estesp@amazon.com>
Phil Estes <estesp@gmail.com> <estesp@linux.vnet.ibm.com>
Richard Scothern <richard.scothern@gmail.com>
Richard Scothern <richard.scothern@gmail.com> <richard.scothern@docker.com>
Rober Morales-Chaparro <rober.morales@rstor.io>
Rober Morales-Chaparro <rober.morales@rstor.io> <rober@rstor.io>
Robin Ketelbuters <robin.ketelbuters@gmail.com>
Sebastiaan van Stijn <github@gone.nl>
Sebastiaan van Stijn <github@gone.nl> <moby@example.com>
Sebastiaan van Stijn <github@gone.nl> <sebastiaan@ws-key-sebas3.dpi1.dpi>
Sebastiaan van Stijn <github@gone.nl> <thaJeztah@users.noreply.github.com>
Sharif Nassar <sharif@mrwacky.com>
Sharif Nassar <sharif@mrwacky.com> <mrwacky42@users.noreply.github.com>
Solomon Hykes <solomon@dagger.io>
Solomon Hykes <solomon@dagger.io> <s@docker.com>
Solomon Hykes <solomon@dagger.io> <solomon.hykes@dotcloud.com>
Solomon Hykes <solomon@dagger.io> <solomon@docker.com>
Solomon Hykes <solomon@dagger.io> <solomon@dotcloud.com>
Stephen Day <stevvooe@gmail.com>
Stephen Day <stevvooe@gmail.com> <stephen.day@docker.com>
Stephen Day <stevvooe@gmail.com> <stevvooe@users.noreply.github.com>
Steven Kalt <SKalt@users.noreply.github.com>
Sven Dowideit <SvenDowideit@home.org.au>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@users.noreply.github.com>
Sylvain DESGRAIS <sylvain.desgrais@gmail.com>
Tadeusz Dudkiewicz <tadeusz.dudkiewicz@rtbhouse.com>
Tibor Vass <teabee89@gmail.com>
Tibor Vass <teabee89@gmail.com> <tibor@docker.com>
Tibor Vass <teabee89@gmail.com> <tiborvass@users.noreply.github.com>
Victor Vieux <victorvieux@gmail.com>
Victor Vieux <victorvieux@gmail.com> <dev@vvieux.com>
Victor Vieux <victorvieux@gmail.com> <victor.vieux@docker.com>
Victor Vieux <victorvieux@gmail.com> <victor.vieux@dotcloud.com>
Victor Vieux <victorvieux@gmail.com> <victor@docker.com>
Victor Vieux <victorvieux@gmail.com> <victor@dotcloud.com>
Victor Vieux <victorvieux@gmail.com> <victorvieux@gmail.com>
Victor Vieux <victorvieux@gmail.com> <vieux@docker.com>
Victoria Bialas <victoria.bialas@docker.com>
Victoria Bialas <victoria.bialas@docker.com> <londoncalling@users.noreply.github.com>
Vincent Batts <vbatts@redhat.com>
Vincent Batts <vbatts@redhat.com> <vbatts@hashbangbash.com>
Vincent Demeester <vincent.demeester@docker.com>
Vincent Demeester <vincent.demeester@docker.com> <vincent+github@demeester.fr>
Vincent Demeester <vincent.demeester@docker.com> <vincent@demeester.fr>
Vincent Demeester <vincent.demeester@docker.com> <vincent@sbr.pm>
Vincent Giersch <vincent@giersch.fr>
Vincent Giersch <vincent@giersch.fr> <vincent.giersch@ovh.net>
Wang Yan <wangyan@vmware.com>
Wen-Quan Li <legendarilylwq@gmail.com>
Wen-Quan Li <legendarilylwq@gmail.com> <wenquan.li@hp.com>
Wen-Quan Li <legendarilylwq@gmail.com> <wenquan.li@hpe.com>
Yu Wang <yuwa@microsoft.com>
Yu Wang <yuwa@microsoft.com> Yu Wang (UC) <yuwa@microsoft.com>
baojiangnan <baojiangnan@meituan.com>
baojiangnan <baojiangnan@meituan.com> <baojn1998@163.com>
erezrokah <erezrokah@users.noreply.github.com>
goodactive <goodactive@qq.com>
gotgelf <gotgelf@gmail.com>
guoguangwu <guoguangwug@gmail.com>
harche <p.harshal@gmail.com>
harche <p.harshal@gmail.com> <harche@users.noreply.github.com>
icefed <zlwangel@gmail.com>
oliver-goetz <o.goetz@sap.com>
xiaoxiangxianzi <zhaoyizheng@outlook.com>
-Olivier Gambier <olivier@docker.com> dmp <dmp@loaner.local>
-Olivier Gambier <olivier@docker.com> Olivier <o+github@gambier.email>
-Olivier Gambier <olivier@docker.com> Olivier <dmp42@users.noreply.github.com>
-Elsan Li 李楠 <elsanli@tencent.com> elsanli(李楠) <elsanli@tencent.com>
-Rui Cao <ruicao@alauda.io> ruicao <ruicao@alauda.io>
-Gwendolynne Barr <gwendolynne.barr@docker.com> gbarr01 <gwendolynne.barr@docker.com>
-Haibing Zhou 周海兵 <zhouhaibing089@gmail.com> zhouhaibing089 <zhouhaibing089@gmail.com>
-Feng Honglin <tifayuki@gmail.com> tifayuki <tifayuki@gmail.com>
-Helen Xie <xieyulin821@harmonycloud.cn> Helen-xie <xieyulin821@harmonycloud.cn>
-Mike Brown <brownwm@us.ibm.com> Mike Brown <mikebrow@users.noreply.github.com>
-Manish Tomar <manish.tomar@docker.com> Manish Tomar <manishtomar@users.noreply.github.com>
-Sakeven Jiang <jc5930@sina.cn> sakeven <jc5930@sina.cn>

AUTHORS (new file)

@ -0,0 +1,530 @@
# This file lists all individuals having contributed content to the repository.
# For how it is generated, see dockerfiles/authors.Dockerfile.
a-palchikov <deemok@gmail.com>
Aaron Lehmann <alehmann@netflix.com>
Aaron Schlesinger <aschlesinger@deis.com>
Aaron Vinson <avinson.public@gmail.com>
Adam Dobrawy <ad-m@users.noreply.github.com>
Adam Duke <adam.v.duke@gmail.com>
Adam Enger <adamenger@gmail.com>
Adam Kaplan <adam.kaplan@redhat.com>
Adam Wolfe Gordon <awg@digitalocean.com>
AdamKorcz <adam@adalogics.com>
Adrian Mouat <adrian.mouat@gmail.com>
Adrian Plata <adrian.plata@docker.com>
Adrien Duermael <adrien@duermael.com>
Ahmet Alp Balkan <ahmetalpbalkan@gmail.com>
Aidan Hobson Sayers <aidanhs@cantab.net>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Aleksejs Sinicins <monder@monder.cc>
Alex <aleksandrosansan@gmail.com>
Alex Chan <alex.chan@metaswitch.com>
Alex Elman <aelman@indeed.com>
Alex Laties <agl@tumblr.com>
Alexander Larsson <alexl@redhat.com>
Alexander Morozov <lk4d4math@gmail.com>
Alexey Gladkov <gladkov.alexey@gmail.com>
Alfonso Acosta <fons@syntacticsugar.consulting>
allencloud <allen.sun@daocloud.io>
Alvin Feng <alvin4feng@yahoo.com>
amitshukla <ashukla73@hotmail.com>
Amy Lindburg <amy.lindburg@docker.com>
Andreas Hassing <andreas@famhassing.dk>
Andrew Bulford <andrew.bulford@redmatter.com>
Andrew Hsu <andrewhsu@acm.org>
Andrew Lavery <laverya@umich.edu>
Andrew Leung <anwleung@gmail.com>
Andrew Lively <andrew.lively2@gmail.com>
Andrew Meredith <andymeredith@gmail.com>
Andrew T Nguyen <andrew.nguyen@docker.com>
Andrews Medina <andrewsmedina@gmail.com>
Andrey Kostov <kostov.andrey@gmail.com>
Andrii Soldatenko <andrii.soldatenko@gmail.com>
Andy Goldstein <agoldste@redhat.com>
andyzhangx <xiazhang@microsoft.com>
Anian Z <ziegler@sicony.de>
Anil Belur <askb23@gmail.com>
Anis Elleuch <vadmeste@gmail.com>
Ankush Agarwal <ankushagarwal11@gmail.com>
Anne Henmi <41210220+ahh-docker@users.noreply.github.com>
Anton Tiurin <noxiouz@yandex.ru>
Antonio Mercado <amercado@thinknode.com>
Antonio Murdaca <antonio.murdaca@gmail.com>
Antonio Ojea <antonio.ojea.garcia@gmail.com>
Anusha Ragunathan <anusha@docker.com>
Arien Holthuizen <aholthuizen@schubergphilis.com>
Arko Dasgupta <arkodg@users.noreply.github.com>
Arnaud Porterie <arnaud.porterie@docker.com>
Arthur Baars <arthur@semmle.com>
Arthur Gautier <baloo@gandi.net>
Asuka Suzuki <hello@tanksuzuki.com>
Avi Miller <avi.miller@oracle.com>
Aviral Takkar <aviral26@users.noreply.github.com>
Ayose Cazorla <ayosec@gmail.com>
BadZen <dave.trombley@gmail.com>
baojiangnan <baojiangnan@meituan.com>
Ben Bodenmiller <bbodenmiller@hotmail.com>
Ben De St Paer-Gotch <bende@outlook.com>
Ben Emamian <ben@ictace.com>
Ben Firshman <ben@firshman.co.uk>
Ben Kochie <superq@gmail.com>
Ben Manuel <ben.manuel@procore.com>
Bhavin Gandhi <bhavin192@users.noreply.github.com>
Bill <NonCreature0714@users.noreply.github.com>
bin liu <liubin0329@gmail.com>
Bouke van der Bijl <me@bou.ke>
Bracken Dawson <abdawson@gmail.com>
Brandon Mitchell <git@bmitch.net>
Brandon Philips <brandon@ifup.co>
Brett Higgins <brhiggins@arbor.net>
Brian Bland <brian.t.bland@gmail.com>
Brian Goff <cpuguy83@gmail.com>
burnettk <burnettk@gmail.com>
Caleb Spare <cespare@gmail.com>
Carson A <ca@carsonoid.net>
Cezar Sa Espinola <cezarsa@gmail.com>
Chad Faragher <wyckster@hotmail.com>
Chaos John <chaosjohn.yjh@icloud.com>
Charles Smith <charles.smith@docker.com>
Cheng Zheng <chengzheng.apply@gmail.com>
chlins <chenyuzh@vmware.com>
Chris Aniszczyk <caniszczyk@gmail.com>
Chris Dillon <squarism@gmail.com>
Chris K. Wong <chriskw.xyz@gmail.com>
Chris Patterson <chrispat@github.com>
Christopher Yeleighton <ne01026@shark.2a.pl>
Christy Perez <christy@linux.vnet.ibm.com>
Chuanying Du <cydu@google.com>
Clayton Coleman <ccoleman@redhat.com>
Collin Shoop <cshoop@digitalocean.com>
Corey Quon <corey.quon@gmail.com>
Cory Snider <csnider@mirantis.com>
CrazyMax <github@crazymax.dev>
cressie176 <github@stephen-cresswell.net>
Cristian Staretu <cristian.staretu@gmail.com>
cui fliter <imcusg@gmail.com>
cuiwei13 <cuiwei13@pku.edu.cn>
cyli <cyli@twistedmatrix.com>
Daehyeok Mun <daehyeok@gmail.com>
Daisuke Fujita <dtanshi45@gmail.com>
Damien Mathieu <dmathieu@salesforce.com>
Dan Fredell <furtchet@gmail.com>
Dan Walsh <dwalsh@redhat.com>
Daniel Helfand <helfand.4@gmail.com>
Daniel Huhn <daniel@danielhuhn.de>
Daniel Menet <membership@sontags.ch>
Daniel Mizyrycki <mzdaniel@glidelink.net>
Daniel Nephin <dnephin@gmail.com>
Daniel, Dao Quang Minh <dqminh89@gmail.com>
Danila Fominykh <dancheg97@fmnx.su>
Darren Shepherd <darren@rancher.com>
Dave <david.warshaw@gmail.com>
Dave Trombley <dave.trombley@gmail.com>
Dave Tucker <dt@docker.com>
David Calavera <david.calavera@gmail.com>
David Justice <david@devigned.com>
David Karlsson <david.karlsson@docker.com>
David Lawrence <david.lawrence@docker.com>
David Luu <david@davidluu.info>
David Mackey <tdmackey@booleanhaiku.com>
David van der Spek <vanderspek.david@gmail.com>
David Verhasselt <david@crowdway.com>
David Wu <dwu7401@gmail.com>
David Xia <dxia@spotify.com>
Dawn W Docker <dawn.wood@users.noreply.github.com>
ddelange <14880945+ddelange@users.noreply.github.com>
Dejan Golja <dejan@golja.org>
Denis Andrejew <da.colonel@gmail.com>
dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Derek <crq@kernel.org>
Derek McGowan <derek@mcg.dev>
Deshi Xiao <xiaods@gmail.com>
Dimitar Kostadinov <dimitar.kostadinov@sap.com>
Diogo Mónica <diogo.monica@gmail.com>
DJ Enriquez <dj.enriquez@infospace.com>
Djibril Koné <kone.djibril@gmail.com>
dmp <dmp@loaner.local>
Don Bowman <don@agilicus.com>
Don Kjer <don.kjer@gmail.com>
Donald Huang <don.hcd@gmail.com>
Doug Davis <dug@us.ibm.com>
drornir <drornir@users.noreply.github.com>
duanhongyi <duanhongyi@doopai.com>
ducksecops <daniel@ducksecops.uk>
E. M. Bray <erik.m.bray@gmail.com>
Edgar Lee <edgar.lee@docker.com>
Elliot Pahl <elliot.pahl@gmail.com>
elsanli(李楠) <elsanli@tencent.com>
Elton Stoneman <elton@sixeyed.com>
Emmanuel Briney <emmanuel.briney@docker.com>
Eng Zer Jun <engzerjun@gmail.com>
Eohyung Lee <liquidnuker@gmail.com>
Eric Yang <windfarer@gmail.com>
Erica Windisch <erica@windisch.us>
Erik Hollensbe <github@hollensbe.org>
Etki <etki@etki.me>
Eugene Lubarsky <eug48@users.noreply.github.com>
eyjhb <eyjhbb@gmail.com>
eyjhbb@gmail.com <eyjhbb@gmail.com>
Fabio Berchtold <jamesclonk@jamesclonk.ch>
Fabio Falci <fabiofalci@gmail.com>
Fabio Huser <fabio@fh1.ch>
farmerworking <farmerworking@gmail.com>
fate-grand-order <chenjg@harmonycloud.cn>
Felix Bünemann <buenemann@louis.info>
Felix Yan <felixonmars@archlinux.org>
Feng Honglin <tifayuki@gmail.com>
Fernando Mayo Fernandez <fernando@undefinedlabs.com>
Flavian Missi <fmissi@redhat.com>
Florentin Raud <florentin.raud@gmail.com>
forkbomber <forkbomber@users.noreply.github.com>
Frank Chen <frankchn@gmail.com>
Frederick F. Kautz IV <fkautz@alumni.cmu.edu>
Gabor Nagy <mail@aigeruth.hu>
gabriell nascimento <gabriell@bluesoft.com.br>
Gaetan <gdevillele@gmail.com>
gary schaetz <gary@schaetzkc.com>
gbarr01 <gwendolynne.barr@docker.com>
Geoffrey Hausheer <rc2012@pblue.org>
ghodsizadeh <mehdi.ghodsizadeh@gmail.com>
Giovanni Toraldo <giovanni.toraldo@eng.it>
Gladkov Alexey <agladkov@redhat.com>
Gleb M Borisov <borisov.gleb@gmail.com>
Gleb Schukin <gschukin@ptsecurity.com>
glefloch <glfloch@gmail.com>
Glyn Owen Hanmer <1295698+glynternet@users.noreply.github.com>
gotgelf <gotgelf@gmail.com>
Grachev Mikhail <work@mgrachev.com>
Grant Watters <grant.watters@docker.com>
Greg Rebholz <gregrebholz@gmail.com>
Guillaume J. Charmes <charmes.guillaume@gmail.com>
Guillaume Rose <guillaume.rose@docker.com>
Gábor Lipták <gliptak@gmail.com>
harche <p.harshal@gmail.com>
hasheddan <georgedanielmangum@gmail.com>
Hayley Swimelar <hswimelar@gmail.com>
Helen-xie <xieyulin821@harmonycloud.cn>
Henri Gomez <henri.gomez@gmail.com>
Honglin Feng <tifayuki@gmail.com>
Hu Keping <hukeping@huawei.com>
Hua Wang <wanghua.humble@gmail.com>
HuKeping <hukeping@huawei.com>
Huu Nguyen <whoshuu@gmail.com>
ialidzhikov <i.alidjikov@gmail.com>
Ian Babrou <ibobrik@gmail.com>
iasoon <ilion.beyst@gmail.com>
igayoso <igayoso@gmail.com>
Igor Dolzhikov <bluesriverz@gmail.com>
Igor Morozov <igmorv@gmail.com>
Ihor Dvoretskyi <ihor@linux.com>
Ilion Beyst <ilion.beyst@gmail.com>
Ina Panova <ipanova@redhat.com>
Irene Diez <idiez@redhat.com>
Ismail Alidzhikov <i.alidjikov@gmail.com>
Jack Baines <jack.baines@uk.ibm.com>
Jack Griffin <jackpg14@gmail.com>
Jacob Atzen <jatzen@gmail.com>
Jake Moshenko <jake@devtable.com>
Jakob Ackermann <das7pad@outlook.com>
Jakub Mikulas <jakub@mikul.as>
James Findley <jfindley@fastmail.com>
James Hewitt <james.hewitt@uk.ibm.com>
James Lal <james@lightsofapollo.com>
Jason Freidman <jason.freidman@gmail.com>
Jason Heiss <jheiss@aput.net>
Javier Palomo Almena <javier.palomo.almena@gmail.com>
jdolitsky <393494+jdolitsky@users.noreply.github.com>
Jeff Nickoloff <jeff@allingeek.com>
Jeffrey van Gogh <jvg@google.com>
jerae-duffin <83294991+jerae-duffin@users.noreply.github.com>
Jeremy THERIN <jtherin@scaleway.com>
Jesse Brown <jabrown85@gmail.com>
Jesse Haka <haka.jesse@gmail.com>
Jessica Frazelle <jess@oxide.computer>
jhaohai <jhaohai@foxmail.com>
Jianqing Wang <tsing@jianqing.org>
Jihoon Chung <jihoon@gmail.com>
Jim Galasyn <jim.galasyn@docker.com>
Joao Fernandes <joaofnfernandes@gmail.com>
Joffrey F <joffrey@docker.com>
Johan Euphrosine <proppy@google.com>
John Howard <github@lowenna.com>
John Mulhausen <john@docker.com>
John Starks <jostarks@microsoft.com>
Jon Johnson <jonjohnson@google.com>
Jon Poler <jonathan.poler@apcera.com>
Jonas Hecht <jonas.hecht@codecentric.de>
Jonathan Boulle <jonathanboulle@gmail.com>
Jonathan Lee <jonjohn1232009@gmail.com>
Jonathan Rudenberg <jonathan@titanous.com>
Jordan Liggitt <jliggitt@redhat.com>
Jose D. Gomez R <jose.gomez@suse.com>
Josh Chorlton <josh.chorlton@docker.com>
Josh Dolitsky <josh@dolit.ski>
Josh Hawn <jlhawn@berkeley.edu>
Josiah Kiehl <jkiehl@riotgames.com>
Joyce Brum <joycebrumu.u@gmail.com>
João Pereira <484633+joaodrp@users.noreply.github.com>
Julien Bordellier <1444415+jstoja@users.noreply.github.com>
Julien Fernandez <julien.fernandez@gmail.com>
Justas Brazauskas <brazauskasjustas@gmail.com>
Justin Cormack <justin.cormack@docker.com>
Justin I. Nevill <JustinINevill@users.noreply.github.com>
Justin Santa Barbara <justin@fathomdb.com>
kaiwentan <kaiwentan@harmonycloud.cn>
Ke Xu <leonhartx.k@gmail.com>
Keerthan Mala <kmala@engineyard.com>
Kelsey Hightower <kelsey.hightower@gmail.com>
Ken Cochrane <KenCochrane@gmail.com>
Kenneth Lim <kennethlimcp@gmail.com>
Kenny Leung <kleung@google.com>
Kevin Lin <kevin@kelda.io>
Kevin Robatel <kevinrob2@gmail.com>
Kira <me@imkira.com>
Kirat Singh <kirat.singh@gmail.com>
L-Hudson <44844738+L-Hudson@users.noreply.github.com>
Lachlan Cooper <lachlancooper@gmail.com>
Laura Brehm <laurabrehm@hey.com>
Lei Jitang <leijitang@huawei.com>
Lenny Linux <tippexs91@googlemail.com>
Leonardo Azize Martins <lazize@users.noreply.github.com>
leonstrand <leonstrand@gmail.com>
Li Yi <denverdino@gmail.com>
Liam White <liamwhite@uk.ibm.com>
libo.huang <huanglibo2010@gmail.com>
LingFaKe <lingfake@huawei.com>
Liron Levin <liron@twistlock.com>
lisong <lisong@cdsunrise.net>
Littlemoon917 <18084421+Littlemoon917@users.noreply.github.com>
Liu Hua <sdu.liu@huawei.com>
liuchang0812 <liuchang0812@gmail.com>
liyongxin <yxli@alauda.io>
Lloyd Ramey <lnr0626@gmail.com>
lostsquirrel <lostsquirreli@hotmail.com>
Louis Kottmann <louis.kottmann@gmail.com>
Luca Bruno <lucab@debian.org>
Lucas França de Oliveira <lucasfdo@palantir.com>
Lucas Santos <lhs.santoss@gmail.com>
Luis Lobo Borobia <luislobo@gmail.com>
Luke Carpenter <x@rubynerd.net>
Ma Shimiao <mashimiao.fnst@cn.fujitsu.com>
Makoto Oda <truth_jp_4133@yahoo.co.jp>
mallchin <mallchin@mac.com>
Manish Tomar <manish.tomar@docker.com>
Marco Hennings <marco.hennings@freiheit.com>
Marcus Martins <marcus@docker.com>
Maria Bermudez <bermudez.mt@gmail.com>
Mark Sagi-Kazar <mark.sagikazar@gmail.com>
Mary Anthony <mary@docker.com>
Masataka Mizukoshi <m.mizukoshi.wakuwaku@gmail.com>
Matin Rahmanian <itsmatinx@gmail.com>
MATSUMOTO TAKEAKI <takeaki.matsumoto@linecorp.com>
Matt Bentley <mbentley@mbentley.net>
Matt Duch <matt@learnmetrics.com>
Matt Linville <matt@linville.me>
Matt Moore <mattmoor@google.com>
Matt Robenolt <matt@ydekproductions.com>
Matt Tescher <matthew.tescher@docker.com>
Matthew Balvanz <matthew.balvanz@workiva.com>
Matthew Green <greenmr@live.co.uk>
Matthew Riley <mattdr@google.com>
Maurice Sotzny <ailuridae@users.noreply.github.com>
Meaglith Ma <genedna@gmail.com>
Michael Bonfils <bonfils.michael@protonmail.com>
Michael Crosby <crosbymichael@gmail.com>
Michael Prokop <mika@grml.org>
Michael Vetter <jubalh@iodoru.org>
Michal Fojtik <mfojtik@redhat.com>
Michal Gebauer <mishak@mishak.net>
Michal Guerquin <michalg@allenai.org>
Michal Minar <miminar@redhat.com>
Mike Brown <brownwm@us.ibm.com>
Mike Lundy <mike@fluffypenguin.org>
Mike Truman <miketruman42@gmail.com>
Milos Gajdos <milosthegajdos@gmail.com>
Miquel Sabaté <msabate@suse.com>
mlmhl <409107750@qq.com>
Monika Katiyar <monika@jeavio.com>
Morgan Bauer <mbauer@us.ibm.com>
moxiegirl <mary@docker.com>
mqliang <mqliang.zju@gmail.com>
Muesli <solom.emmanuel@gmail.com>
Nan Monnand Deng <monnand@gmail.com>
Nat Zimmermann <ntzm@users.noreply.github.com>
Nathan Sullivan <nathan@nightsys.net>
Naveed Jamil <naveed.jamil@tenpearl.com>
Neil Wilson <neil@aldur.co.uk>
nevermosby <robolwq@qq.com>
Nghia Tran <tcnghia@gmail.com>
Nicolas De Loof <nicolas.deloof@gmail.com>
Nikita Tarasov <nikita@mygento.ru>
ning xie <andy.xning@gmail.com>
Nishant Totla <nishanttotla@gmail.com>
Noah Treuhaft <noah.treuhaft@docker.com>
Novak Ivanovski <novakivanovski@gmail.com>
Nuutti Kotivuori <nuutti.kotivuori@poplatek.fi>
Nycholas de Oliveira e Oliveira <nycholas@gmail.com>
Oilbeater <liumengxinfly@gmail.com>
Oleg Bulatov <oleg@bulatov.me>
olegburov <oleg.burov@outlook.com>
Olivier <o+github@gambier.email>
Olivier Gambier <olivier@docker.com>
Olivier Jacques <olivier.jacques@hp.com>
ollypom <oppomeroy@gmail.com>
Omer Cohen <git@omer.io>
Oscar Caballero <ocaballero@opensistemas.com>
Owen W. Taylor <otaylor@fishsoup.net>
paigehargrave <Paige.hargrave@docker.com>
Parth Mehrotra <parth@mehrotra.me>
Pascal Borreli <pascal@borreli.com>
Patrick Devine <patrick.devine@docker.com>
Patrick Easters <peasters@redhat.com>
Paul Cacheux <paul.cacheux@datadoghq.com>
Pavel Antonov <ddc67cd@gmail.com>
Paweł Gronowski <pawel.gronowski@docker.com>
Per Lundberg <perlun@gmail.com>
Peter Choi <reikani@Peters-MacBook-Pro.local>
Peter Dave Hello <hsu@peterdavehello.org>
Peter Kokot <peterkokot@gmail.com>
Phil Estes <estesp@gmail.com>
Philip Misiowiec <philip@atlashealth.com>
Pierre-Yves Ritschard <pyr@spootnik.org>
Pieter Scheffers <pieter.scheffers@gmail.com>
Qiang Huang <h.huangqiang@huawei.com>
Qiao Anran <qiaoanran@gmail.com>
Radon Rosborough <radon.neon@gmail.com>
Randy Barlow <randy@electronsweatshop.com>
Raphaël Enrici <raphael@root-42.com>
Ricardo Maraschini <ricardo.maraschini@gmail.com>
Richard Scothern <richard.scothern@gmail.com>
Rick Wieman <git@rickw.nl>
Rik Nijessen <rik@keefo.nl>
Riyaz Faizullabhoy <riyaz.faizullabhoy@docker.com>
Rober Morales-Chaparro <rober.morales@rstor.io>
Robert Kaussow <mail@geeklabor.de>
Robert Steward <speaktorob@users.noreply.github.com>
Roberto G. Hashioka <roberto.hashioka@docker.com>
Rodolfo Carvalho <rhcarvalho@gmail.com>
ROY <qqbuby@gmail.com>
Rui Cao <ruicao@alauda.io>
ruicao <ruicao@alauda.io>
Rusty Conover <rusty@luckydinosaur.com>
Ryan Abrams <rdabrams@gmail.com>
Ryan Thomas <rthomas@atlassian.com>
sakeven <jc5930@sina.cn>
Sam Alba <sam.alba@gmail.com>
Samuel Karp <skarp@amazon.com>
sangluo <sangluo@pinduoduo.com>
Santiago Torres <torresariass@gmail.com>
Sargun Dhillon <sargun@sargun.me>
sayboras <sayboras@yahoo.com>
Sean Boran <Boran@users.noreply.github.com>
Sean P. Kane <spkane00@gmail.com>
Sebastiaan van Stijn <github@gone.nl>
Sebastien Coavoux <s.coavoux@free.fr>
Serge Dubrouski <sergeyfd@gmail.com>
Sevki Hasirci <sevki@cloudflare.com>
Sharif Nassar <sharif@mrwacky.com>
Shawn Chen <chen8132@gmail.com>
Shawn Falkner-Horine <dreadpirateshawn@gmail.com>
Shawnpku <chen8132@gmail.com>
Shengjing Zhu <zhsj@debian.org>
Shiela M Parker <smp13@live.com>
Shishir Mahajan <shishir.mahajan@redhat.com>
Shreyas Karnik <karnik.shreyas@gmail.com>
Silvin Lubecki <31478878+silvin-lubecki@users.noreply.github.com>
Simon <crydotsnakegithub@gmail.com>
Simon Thulbourn <simon+github@thulbourn.com>
Simone Locci <simone.locci@eng.it>
Smasherr <soundcracker@gmail.com>
Solomon Hykes <solomon@dagger.io>
Sora Morimoto <sora@morimoto.io>
spacexnice <yaoyao.xyy@alibaba-inc.com>
Spencer Rinehart <anubis@overthemonkey.com>
srajmane <31947381+srajmane@users.noreply.github.com>
Srini Brahmaroutu <srbrahma@us.ibm.com>
Stan Hu <stanhu@gmail.com>
Stefan Lörwald <10850250+stefanloerwald@users.noreply.github.com>
Stefan Majewsky <stefan.majewsky@sap.com>
Stefan Nica <snica@suse.com>
Stefan Weil <sw@weilnetz.de>
Stephen Day <stevvooe@gmail.com>
Steve Lasker <stevenlasker@hotmail.com>
Steven Hanna <stevenhanna6@gmail.com>
Steven Kalt <SKalt@users.noreply.github.com>
Steven Taylor <steven.taylor@me.com>
stonezdj <stonezdj@gmail.com>
sun jian <cnhttpd@gmail.com>
Sungho Moon <sungho.moon@navercorp.com>
Sven Dowideit <SvenDowideit@home.org.au>
Sylvain Baubeau <sbaubeau@redhat.com>
syntaxkim <40621244+syntaxkim@users.noreply.github.com>
T N <tnir@users.noreply.github.com>
t-eimizu <t-eimizu@aim.ac>
Tariq Ibrahim <tariq181290@gmail.com>
TaylorKanper <tony_kanper@hotmail.com>
Ted Reed <ted.reed@gmail.com>
Terin Stock <terinjokes@gmail.com>
tgic <farmer1992@gmail.com>
Thomas Berger <loki@lokis-chaos.de>
Thomas Sjögren <konstruktoid@users.noreply.github.com>
Tianon Gravi <admwiggin@gmail.com>
Tibor Vass <teabee89@gmail.com>
tifayuki <tifayuki@gmail.com>
Tiger Kaovilai <tkaovila@redhat.com>
Tobias Fuhrimann <mastertinner@users.noreply.github.com>
Tobias Schwab <tobias.schwab@dynport.de>
Tom Hayward <thayward@infoblox.com>
Tom Hu <tomhu1096@gmail.com>
Tonis Tiigi <tonistiigi@gmail.com>
Tony Holdstock-Brown <tony@docker.com>
Tosone <i@tosone.cn>
Trapier Marshall <trapier@users.noreply.github.com>
Trevor Pounds <trevor.pounds@gmail.com>
Trevor Wood <Trevor.G.Wood@gmail.com>
Troels Thomsen <troels@thomsen.io>
uhayate <uhayate.gong@daocloud.io>
Usha Mandya <47779042+usha-mandya@users.noreply.github.com>
Usha Mandya <usha.mandya@docker.com>
Vaidas Jablonskis <jablonskis@gmail.com>
Vega Chou <VegeChou@users.noreply.github.com>
Veres Lajos <vlajos@gmail.com>
Victor Vieux <victorvieux@gmail.com>
Victoria Bialas <victoria.bialas@docker.com>
Vidar <vl@ez.no>
Viktor Stanchev <me@viktorstanchev.com>
Vincent Batts <vbatts@redhat.com>
Vincent Demeester <vincent.demeester@docker.com>
Vincent Giersch <vincent@giersch.fr>
Vishesh Jindal <vishesh92@gmail.com>
W. Trevor King <wking@tremily.us>
Wang Jie <wangjie5@chinaskycloud.com>
Wang Yan <wangyan@vmware.com>
Wassim Dhif <wassimdhif@gmail.com>
wayne <wayne.warren.s@gmail.com>
Wei Fu <fuweid89@gmail.com>
Wei Meng <wemeng@microsoft.com>
weiyuan.yl <weiyuan.yl@alibaba-inc.com>
Wen-Quan Li <legendarilylwq@gmail.com>
Wenkai Yin <yinw@vmware.com>
william wei <1342247033@qq.com>
xg.song <xg.song@venusource.com>
xiekeyang <xiekeyang@huawei.com>
Xueshan Feng <xueshan.feng@gmail.com>
Yann ROBERT <yann.robert@anantaplex.fr>
Yannick Fricke <YannickFricke@users.noreply.github.com>
yaoyao.xyy <yaoyao.xyy@alibaba-inc.com>
yixi zhang <yixi@memsql.com>
Yong Tang <yong.tang.github@outlook.com>
Yong Wen Chua <lawliet89@users.noreply.github.com>
Yongxin Li <yxli@alauda.io>
Yu Wang <yuwa@microsoft.com>
yuexiao-wang <wang.yuexiao@zte.com.cn>
YuJie <390282283@qq.com>
yuzou <zouyu7@huawei.com>
Zhang Wei <zhangwei555@huawei.com>
zhipengzuo <zuozhipeng@baidu.com>
zhouhaibing089 <zhouhaibing089@gmail.com>
zounengren <zounengren@cmss.chinamobile.com>
姜继忠 <jizhong.jiangjz@alibaba-inc.com>


@ -1,7 +1,7 @@
# syntax=docker/dockerfile:1
-ARG GO_VERSION=1.21.5
+ARG GO_VERSION=1.22.4
-ARG ALPINE_VERSION=3.18
+ARG ALPINE_VERSION=3.20
ARG XX_VERSION=1.2.1
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
@ -16,7 +16,7 @@ FROM base AS version
ARG PKG=github.com/distribution/distribution/v3
RUN --mount=target=. \
VERSION=$(git describe --match 'v[0-9]*' --dirty='.m' --always --tags) REVISION=$(git rev-parse HEAD)$(if ! git diff --no-ext-diff --quiet --exit-code; then echo .m; fi); \
-echo "-X ${PKG}/version.Version=${VERSION#v} -X ${PKG}/version.Revision=${REVISION} -X ${PKG}/version.Package=${PKG}" | tee /tmp/.ldflags; \
+echo "-X ${PKG}/version.version=${VERSION#v} -X ${PKG}/version.revision=${REVISION} -X ${PKG}/version.mainpkg=${PKG}" | tee /tmp/.ldflags; \
echo -n "${VERSION}" | tee /tmp/.version;
FROM base AS build
@ -52,9 +52,9 @@ COPY --from=releaser /out /
FROM alpine:${ALPINE_VERSION}
RUN apk add --no-cache ca-certificates
-COPY cmd/registry/config-dev.yml /etc/docker/registry/config.yml
+COPY cmd/registry/config-dev.yml /etc/distribution/config.yml
COPY --from=binary /registry /bin/registry
VOLUME ["/var/lib/registry"]
EXPOSE 5000
ENTRYPOINT ["registry"]
-CMD ["serve", "/etc/docker/registry/config.yml"]
+CMD ["serve", "/etc/distribution/config.yml"]


@ -37,7 +37,7 @@ WHALE = "+"
TESTFLAGS_RACE=
GOFILES=$(shell find . -type f -name '*.go')
GO_TAGS=$(if $(BUILDTAGS),-tags "$(BUILDTAGS)",)
-GO_LDFLAGS=-ldflags '-extldflags "-Wl,-z,now" -s -w -X $(PKG)/version.Version=$(VERSION) -X $(PKG)/version.Revision=$(REVISION) -X $(PKG)/version.Package=$(PKG) $(EXTRA_LDFLAGS)'
+GO_LDFLAGS=-ldflags '-extldflags "-Wl,-z,now" -s -w -X $(PKG)/version.version=$(VERSION) -X $(PKG)/version.revision=$(REVISION) -X $(PKG)/version.mainpkg=$(PKG) $(EXTRA_LDFLAGS)'
BINARIES=$(addprefix bin/,$(COMMANDS))
@ -45,7 +45,7 @@ BINARIES=$(addprefix bin/,$(COMMANDS))
TESTFLAGS ?= -v $(TESTFLAGS_RACE)
TESTFLAGS_PARALLEL ?= 8
-.PHONY: all build binaries clean test test-race test-full integration test-coverage validate lint validate-git validate-vendor vendor mod-outdated image
+.PHONY: all build binaries clean test test-race test-full integration test-coverage validate lint validate-git validate-vendor vendor mod-outdated image validate-authors authors
.DEFAULT: all
.PHONY: FORCE
@ -86,6 +86,9 @@ vendor: ## update vendor
mod-outdated: ## check outdated dependencies
docker buildx bake $@
authors: ## generate authors
docker buildx bake $@
##@ Test
test: ## run tests, except integration test with test.short
@ -172,6 +175,9 @@ validate-git: ## validate git
validate-vendor: ## validate vendor
docker buildx bake $@
validate-authors: ## validate authors
docker buildx bake $@
.PHONY: help
help:
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m\033[0m\n"} /^[a-zA-Z0-9_\/%-]+:.*?##/ { printf " \033[36m%-27s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)


@ -2,7 +2,7 @@
<img style="align: center; padding-left: 10px; padding-right: 10px; padding-bottom: 10px;" width="238px" height="238px" src="./distribution-logo.svg" />
</p>
-[![Build Status](https://github.com/distribution/distribution/workflows/CI/badge.svg?branch=main&event=push)](https://github.com/distribution/distribution/actions?query=workflow%3ACI)
+[![Build Status](https://github.com/distribution/distribution/workflows/build/badge.svg?branch=main&event=push)](https://github.com/distribution/distribution/actions/workflows/build.yml?query=workflow%3Abuild)
[![GoDoc](https://img.shields.io/badge/go.dev-reference-007d9c?logo=go&logoColor=white&style=flat-square)](https://pkg.go.dev/github.com/distribution/distribution)
[![License: Apache-2.0](https://img.shields.io/badge/License-Apache--2.0-blue.svg)](LICENSE)
[![codecov](https://codecov.io/gh/distribution/distribution/branch/main/graph/badge.svg)](https://codecov.io/gh/distribution/distribution)
@ -27,7 +27,7 @@ This repository contains the following components:
|--------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **registry** | An implementation of the [OCI Distribution Specification](https://github.com/opencontainers/distribution-spec). |
| **libraries** | A rich set of libraries for interacting with distribution components. Please see [godoc](https://pkg.go.dev/github.com/distribution/distribution) for details. **Note**: The interfaces for these libraries are **unstable**. |
-| **documentation** | Docker's full documentation set is available at [docs.docker.com](https://docs.docker.com). This repository [contains the subset](docs/) related just to the registry. |
+| **documentation** | Full documentation is available at [https://distribution.github.io/distribution](https://distribution.github.io/distribution/). |
### How does this integrate with Docker, containerd, and other OCI client?


@ -140,12 +140,6 @@ type BlobDescriptorServiceFactory interface {
BlobAccessController(svc BlobDescriptorService) BlobDescriptorService
}
// ReadSeekCloser is the primary reader type for blob data, combining
// io.ReadSeeker with io.Closer.
//
// Deprecated: use [io.ReadSeekCloser].
type ReadSeekCloser = io.ReadSeekCloser
// BlobProvider describes operations for getting blob data.
type BlobProvider interface {
// Get returns the entire blob identified by digest along with the descriptor.
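Since the removed alias only forwarded to the standard library, downstream callers can accept io.ReadSeekCloser directly. A minimal sketch of the replacement; the package and openBlob helper below are hypothetical, not part of this change:

package blobexample

import "io"

// openBlobSize is a stand-in for any function that previously took
// distribution.ReadSeekCloser; the standard library interface is a
// drop-in replacement for the removed alias.
func openBlobSize(rsc io.ReadSeekCloser) (int64, error) {
	// Seek to the end to learn the blob size, then rewind.
	size, err := rsc.Seek(0, io.SeekEnd)
	if err != nil {
		return 0, err
	}
	_, err = rsc.Seek(0, io.SeekStart)
	return size, err
}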


@ -12,6 +12,8 @@ storage:
maintenance:
uploadpurging:
enabled: false
tag:
concurrencylimit: 8
http:
addr: :5000
secret: asecretforlocaldevelopment
@ -20,11 +22,10 @@ http:
headers:
X-Content-Type-Options: [nosniff]
redis:
-addr: localhost:6379
-pool:
-maxidle: 16
-maxactive: 64
-idletimeout: 300s
+addrs: [localhost:6379]
+maxidleconns: 16
+poolsize: 64
+connmaxidletime: 300s
dialtimeout: 10ms
readtimeout: 10ms
writetimeout: 10ms
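For reference, a minimal sketch of how the reworked redis block above is consumed. The key names come from the diff; the import path and the surrounding version/storage stanzas are assumptions made only so Parse accepts the document:

package main

import (
	"fmt"
	"strings"

	// assumed import path for the registry configuration package
	"github.com/distribution/distribution/v3/configuration"
)

const devConfig = `
version: 0.1
storage:
  inmemory: {}
redis:
  addrs: [localhost:6379]
  maxidleconns: 16
  poolsize: 64
  connmaxidletime: 300s
`

func main() {
	cfg, err := configuration.Parse(strings.NewReader(devConfig))
	if err != nil {
		panic(err)
	}
	// The keys above land in the embedded redis.UniversalOptions value,
	// which is what enables the sentinel/cluster style addrs list.
	fmt.Println(cfg.Redis.Options.Addrs, cfg.Redis.Options.PoolSize, cfg.Redis.Options.ConnMaxIdleTime)
}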


@ -4,29 +4,12 @@ log:
fields:
service: registry
environment: development
hooks:
- type: mail
disabled: true
levels:
- panic
options:
smtp:
addr: mail.example.com:25
username: mailuser
password: password
insecure: true
from: sender@example.com
to:
- errors@example.com
storage:
delete:
enabled: true
cache:
blobdescriptor: inmemory
maintenance:
uploadpurging:
enabled: false
frostfs:
wallet:
path: /path/to/wallet.json
@ -58,40 +41,8 @@ storage:
rpc_endpoint: http://morph-chain.frostfs.devenv:30333
http:
addr: :5000
debug:
addr: :5001
prometheus:
enabled: true
path: /metrics
headers:
X-Content-Type-Options: [ nosniff ]
redis:
addr: localhost:6379
pool:
maxidle: 16
maxactive: 64
idletimeout: 300s
dialtimeout: 10ms
readtimeout: 10ms
writetimeout: 10ms
notifications:
events:
includereferences: true
endpoints:
- name: local-5003
url: http://localhost:5003/callback
headers:
Authorization: [ Bearer <an example token> ]
timeout: 1s
threshold: 10
backoff: 1s
disabled: true
- name: local-8083
url: http://localhost:8083/callback
timeout: 1s
threshold: 10
backoff: 1s
disabled: true
health:
storagedriver:
enabled: true


@ -14,6 +14,8 @@ storage:
maintenance:
uploadpurging:
enabled: false
tag:
concurrencylimit: 8
http:
addr: :5000
debug:


@ -7,6 +7,8 @@ storage:
blobdescriptor: inmemory
filesystem:
rootdirectory: /var/lib/registry
tag:
concurrencylimit: 8
http:
addr: :5000
headers:


@ -15,6 +15,7 @@ import (
_ "github.com/distribution/distribution/v3/registry/storage/driver/inmemory"
_ "github.com/distribution/distribution/v3/registry/storage/driver/middleware/cloudfront"
_ "github.com/distribution/distribution/v3/registry/storage/driver/middleware/redirect"
_ "github.com/distribution/distribution/v3/registry/storage/driver/middleware/rewrite"
_ "github.com/distribution/distribution/v3/registry/storage/driver/s3-aws"
)


@ -8,6 +8,8 @@ import (
"reflect"
"strings"
"time"
"github.com/redis/go-redis/v9"
)
// Configuration is a versioned registry configuration, intended to be provided by a yaml file, and
@ -157,9 +159,15 @@ type Configuration struct {
// HTTP2 configuration options
HTTP2 struct {
// Specifies whether the registry should disallow clients attempting
-// to connect via http2. If set to true, only http/1.1 is supported.
+// to connect via HTTP/2. If set to true, only HTTP/1.1 is supported.
Disabled bool `yaml:"disabled,omitempty"`
} `yaml:"http2,omitempty"`
H2C struct {
// Enables H2C (HTTP/2 Cleartext). Enable to support HTTP/2 without needing to configure TLS
// Useful when deploying the registry behind a load balancer (e.g. Cloud Run)
Enabled bool `yaml:"enabled,omitempty"`
} `yaml:"h2c,omitempty"`
} `yaml:"http,omitempty"`
// Notifications specifies configuration about various endpoint to which
@ -175,25 +183,7 @@ type Configuration struct {
Proxy Proxy `yaml:"proxy,omitempty"`
// Validation configures validation options for the registry.
-Validation struct {
+Validation Validation `yaml:"validation,omitempty"`
// Enabled enables the other options in this section. This field is
// deprecated in favor of Disabled.
Enabled bool `yaml:"enabled,omitempty"`
// Disabled disables the other options in this section.
Disabled bool `yaml:"disabled,omitempty"`
// Manifests configures manifest validation.
Manifests struct {
// URLs configures validation for URLs in pushed manifests.
URLs struct {
// Allow specifies regular expressions (https://godoc.org/regexp/syntax)
// that URLs in pushed manifests must match.
Allow []string `yaml:"allow,omitempty"`
// Deny specifies regular expressions (https://godoc.org/regexp/syntax)
// that URLs in pushed manifests must not match.
Deny []string `yaml:"deny,omitempty"`
} `yaml:"urls,omitempty"`
} `yaml:"manifests,omitempty"`
} `yaml:"validation,omitempty"`
// Policy configures registry policy options.
Policy struct {
@ -271,44 +261,6 @@ type FileChecker struct {
Threshold int `yaml:"threshold,omitempty"`
}
// Redis configures the redis pool available to the registry webapp.
type Redis struct {
// Addr specifies the the redis instance available to the application.
Addr string `yaml:"addr,omitempty"`
// Usernames can be used as a finer-grained permission control since the introduction of the redis 6.0.
Username string `yaml:"username,omitempty"`
// Password string to use when making a connection.
Password string `yaml:"password,omitempty"`
// DB specifies the database to connect to on the redis instance.
DB int `yaml:"db,omitempty"`
// TLS configures settings for redis in-transit encryption
TLS struct {
Enabled bool `yaml:"enabled,omitempty"`
} `yaml:"tls,omitempty"`
DialTimeout time.Duration `yaml:"dialtimeout,omitempty"` // timeout for connect
ReadTimeout time.Duration `yaml:"readtimeout,omitempty"` // timeout for reads of data
WriteTimeout time.Duration `yaml:"writetimeout,omitempty"` // timeout for writes of data
// Pool configures the behavior of the redis connection pool.
Pool struct {
// MaxIdle sets the maximum number of idle connections.
MaxIdle int `yaml:"maxidle,omitempty"`
// MaxActive sets the maximum number of connections that should be
// opened before blocking a connection request.
MaxActive int `yaml:"maxactive,omitempty"`
// IdleTimeout sets the amount time to wait before closing
// inactive connections.
IdleTimeout time.Duration `yaml:"idletimeout,omitempty"`
} `yaml:"pool,omitempty"`
}
// HTTPChecker is a type of entry in the health section for checking HTTP URIs.
type HTTPChecker struct {
// Timeout is the duration to wait before timing out the HTTP request
@ -360,6 +312,13 @@ type Health struct {
} `yaml:"storagedriver,omitempty"`
}
type Platform struct {
// Architecture is the architecture for this platform
Architecture string `yaml:"architecture,omitempty"`
// OS is the operating system for this platform
OS string `yaml:"os,omitempty"`
}
// v0_1Configuration is a Version 0.1 Configuration struct
// This is currently aliased to Configuration, as it is the current version
type v0_1Configuration Configuration
@ -435,6 +394,8 @@ func (storage Storage) Type() string {
// allow configuration of delete
case "redirect":
// allow configuration of redirect
case "tag":
// allow configuration of tag
default:
storageType = append(storageType, k)
}
@ -448,6 +409,19 @@ func (storage Storage) Type() string {
return ""
}
// TagParameters returns the Parameters map for a Storage tag configuration
func (storage Storage) TagParameters() Parameters {
return storage["tag"]
}
// setTagParameter changes the parameter at the provided key to the new value
func (storage Storage) setTagParameter(key string, value interface{}) {
if _, ok := storage["tag"]; !ok {
storage["tag"] = make(Parameters)
}
storage["tag"][key] = value
}
// Parameters returns the Parameters map for a Storage configuration
func (storage Storage) Parameters() Parameters {
return storage[storage.Type()]
@ -476,6 +450,8 @@ func (storage *Storage) UnmarshalYAML(unmarshal func(interface{}) error) error {
// allow configuration of delete
case "redirect":
// allow configuration of redirect
case "tag":
// allow configuration of tag
default:
types = append(types, k)
}
@ -630,6 +606,62 @@ type Proxy struct {
TTL *time.Duration `yaml:"ttl,omitempty"`
}
type Validation struct {
// Enabled enables the other options in this section. This field is
// deprecated in favor of Disabled.
Enabled bool `yaml:"enabled,omitempty"`
// Disabled disables the other options in this section.
Disabled bool `yaml:"disabled,omitempty"`
// Manifests configures manifest validation.
Manifests ValidationManifests `yaml:"manifests,omitempty"`
}
type ValidationManifests struct {
// URLs configures validation for URLs in pushed manifests.
URLs struct {
// Allow specifies regular expressions (https://godoc.org/regexp/syntax)
// that URLs in pushed manifests must match.
Allow []string `yaml:"allow,omitempty"`
// Deny specifies regular expressions (https://godoc.org/regexp/syntax)
// that URLs in pushed manifests must not match.
Deny []string `yaml:"deny,omitempty"`
} `yaml:"urls,omitempty"`
// ImageIndexes configures validation of image indexes
Indexes ValidationIndexes `yaml:"indexes,omitempty"`
}
type ValidationIndexes struct {
// Platforms configures the validation applies to the platform images included in an image index
Platforms Platforms `yaml:"platforms"`
// PlatformList filters the set of platforms to validate for image existence.
PlatformList []Platform `yaml:"platformlist,omitempty"`
}
// Platforms configures the validation applies to the platform images included in an image index
// This can be all, none, or list
type Platforms string
// UnmarshalYAML implements the yaml.Umarshaler interface
// Unmarshals a string into a Platforms option, lowercasing the string and validating that it represents a
// valid option
func (platforms *Platforms) UnmarshalYAML(unmarshal func(interface{}) error) error {
var platformsString string
err := unmarshal(&platformsString)
if err != nil {
return err
}
platformsString = strings.ToLower(platformsString)
switch platformsString {
case "all", "none", "list":
default:
return fmt.Errorf("invalid platforms option %s Must be one of [all, none, list]", platformsString)
}
*platforms = Platforms(platformsString)
return nil
}
// Parse parses an input configuration yaml document into a Configuration struct
// This should generally be capable of handling old configuration format versions
//
@ -682,3 +714,172 @@ func Parse(rd io.Reader) (*Configuration, error) {
return config, nil
}
type RedisOptions = redis.UniversalOptions
type RedisTLSOptions struct {
Certificate string `yaml:"certificate,omitempty"`
Key string `yaml:"key,omitempty"`
ClientCAs []string `yaml:"clientcas,omitempty"`
}
type Redis struct {
Options RedisOptions `yaml:",inline"`
TLS RedisTLSOptions `yaml:"tls,omitempty"`
}
func (c Redis) MarshalYAML() (interface{}, error) {
fields := make(map[string]interface{})
val := reflect.ValueOf(c.Options)
typ := val.Type()
for i := 0; i < val.NumField(); i++ {
field := typ.Field(i)
fieldValue := val.Field(i)
// ignore funcs fields in redis.UniversalOptions
if fieldValue.Kind() == reflect.Func {
continue
}
fields[strings.ToLower(field.Name)] = fieldValue.Interface()
}
// Add TLS fields if they're not empty
if c.TLS.Certificate != "" || c.TLS.Key != "" || len(c.TLS.ClientCAs) > 0 {
fields["tls"] = c.TLS
}
return fields, nil
}
func (c *Redis) UnmarshalYAML(unmarshal func(interface{}) error) error {
var fields map[string]interface{}
err := unmarshal(&fields)
if err != nil {
return err
}
val := reflect.ValueOf(&c.Options).Elem()
typ := val.Type()
for i := 0; i < typ.NumField(); i++ {
field := typ.Field(i)
fieldName := strings.ToLower(field.Name)
if value, ok := fields[fieldName]; ok {
fieldValue := val.Field(i)
if fieldValue.CanSet() {
switch field.Type {
case reflect.TypeOf(time.Duration(0)):
durationStr, ok := value.(string)
if !ok {
return fmt.Errorf("invalid duration value for field: %s", fieldName)
}
duration, err := time.ParseDuration(durationStr)
if err != nil {
return fmt.Errorf("failed to parse duration for field: %s, error: %v", fieldName, err)
}
fieldValue.Set(reflect.ValueOf(duration))
default:
if err := setFieldValue(fieldValue, value); err != nil {
return fmt.Errorf("failed to set value for field: %s, error: %v", fieldName, err)
}
}
}
}
}
// Handle TLS fields
if tlsData, ok := fields["tls"]; ok {
tlsMap, ok := tlsData.(map[interface{}]interface{})
if !ok {
return fmt.Errorf("invalid TLS data structure")
}
if cert, ok := tlsMap["certificate"]; ok {
var isString bool
c.TLS.Certificate, isString = cert.(string)
if !isString {
return fmt.Errorf("Redis TLS certificate must be a string")
}
}
if key, ok := tlsMap["key"]; ok {
var isString bool
c.TLS.Key, isString = key.(string)
if !isString {
return fmt.Errorf("Redis TLS (private) key must be a string")
}
}
if cas, ok := tlsMap["clientcas"]; ok {
caList, ok := cas.([]interface{})
if !ok {
return fmt.Errorf("invalid clientcas data structure")
}
for _, ca := range caList {
if caStr, ok := ca.(string); ok {
c.TLS.ClientCAs = append(c.TLS.ClientCAs, caStr)
}
}
}
}
return nil
}
func setFieldValue(field reflect.Value, value interface{}) error {
if value == nil {
return nil
}
switch field.Kind() {
case reflect.String:
stringValue, ok := value.(string)
if !ok {
return fmt.Errorf("failed to convert value to string")
}
field.SetString(stringValue)
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
intValue, ok := value.(int)
if !ok {
return fmt.Errorf("failed to convert value to integer")
}
field.SetInt(int64(intValue))
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
uintValue, ok := value.(uint)
if !ok {
return fmt.Errorf("failed to convert value to unsigned integer")
}
field.SetUint(uint64(uintValue))
case reflect.Float32, reflect.Float64:
floatValue, ok := value.(float64)
if !ok {
return fmt.Errorf("failed to convert value to float")
}
field.SetFloat(floatValue)
case reflect.Bool:
boolValue, ok := value.(bool)
if !ok {
return fmt.Errorf("failed to convert value to boolean")
}
field.SetBool(boolValue)
case reflect.Slice:
slice := reflect.MakeSlice(field.Type(), 0, 0)
valueSlice, ok := value.([]interface{})
if !ok {
return fmt.Errorf("failed to convert value to slice")
}
for _, item := range valueSlice {
sliceValue := reflect.New(field.Type().Elem()).Elem()
if err := setFieldValue(sliceValue, item); err != nil {
return err
}
slice = reflect.Append(slice, sliceValue)
}
field.Set(slice)
default:
return fmt.Errorf("unsupported field type: %v", field.Type())
}
return nil
}
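
A minimal sketch tying the new options together (the tag concurrency limit, H2C, and image-index platform validation). The key and field names come from the code above; the import path and the surrounding version/storage stanzas are assumptions:

package main

import (
	"fmt"
	"strings"

	// assumed import path for the registry configuration package
	"github.com/distribution/distribution/v3/configuration"
)

const cfg = `
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
  tag:
    concurrencylimit: 8
http:
  addr: :5000
  h2c:
    enabled: true
validation:
  manifests:
    indexes:
      platforms: none
`

func main() {
	c, err := configuration.Parse(strings.NewReader(cfg))
	if err != nil {
		panic(err)
	}
	// "tag" is skipped when resolving the driver type, so Type() still
	// reports the storage driver while TagParameters() exposes the limit.
	fmt.Println(c.Storage.Type())                               // filesystem
	fmt.Println(c.Storage.TagParameters()["concurrencylimit"])  // 8
	fmt.Println(c.HTTP.H2C.Enabled)                             // true
	fmt.Println(c.Validation.Manifests.Indexes.Platforms)       // none
}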


@ -8,6 +8,7 @@ import (
"testing"
"time"
"github.com/redis/go-redis/v9"
"github.com/stretchr/testify/suite"
"gopkg.in/yaml.v2"
)
@ -39,6 +40,9 @@ var configStruct = Configuration{
"url1": "https://foo.example.com",
"path1": "/some-path",
},
"tag": Parameters{
"concurrencylimit": 10,
},
},
Auth: Auth{
"silly": Parameters{
@ -97,6 +101,9 @@ var configStruct = Configuration{
HTTP2 struct {
Disabled bool `yaml:"disabled,omitempty"`
} `yaml:"http2,omitempty"`
H2C struct {
Enabled bool `yaml:"enabled,omitempty"`
} `yaml:"h2c,omitempty"`
}{
TLS: struct {
Certificate string `yaml:"certificate,omitempty"`
@ -121,24 +128,37 @@ var configStruct = Configuration{
}{
Disabled: false,
},
H2C: struct {
Enabled bool `yaml:"enabled,omitempty"`
}{
Enabled: true,
},
},
Redis: Redis{
-Addr: "localhost:6379",
-Username: "alice",
-Password: "123456",
-DB: 1,
-Pool: struct {
-MaxIdle int `yaml:"maxidle,omitempty"`
-MaxActive int `yaml:"maxactive,omitempty"`
-IdleTimeout time.Duration `yaml:"idletimeout,omitempty"`
-}{
-MaxIdle: 16,
-MaxActive: 64,
-IdleTimeout: time.Second * 300,
-},
-DialTimeout: time.Millisecond * 10,
-ReadTimeout: time.Millisecond * 10,
-WriteTimeout: time.Millisecond * 10,
+Options: redis.UniversalOptions{
+Addrs: []string{"localhost:6379"},
+Username: "alice",
+Password: "123456",
+DB: 1,
+MaxIdleConns: 16,
+PoolSize: 64,
+ConnMaxIdleTime: time.Second * 300,
+DialTimeout: time.Millisecond * 10,
+ReadTimeout: time.Millisecond * 10,
+WriteTimeout: time.Millisecond * 10,
+},
+TLS: RedisTLSOptions{
+Certificate: "/foo/cert.crt",
+Key: "/foo/key.pem",
+ClientCAs: []string{"/path/to/ca.pem"},
+},
},
+Validation: Validation{
+Manifests: ValidationManifests{
+Indexes: ValidationIndexes{
+Platforms: "none",
+},
+},
+},
}
@ -159,6 +179,8 @@ storage:
int1: 42 int1: 42
url1: "https://foo.example.com" url1: "https://foo.example.com"
path1: "/some-path" path1: "/some-path"
tag:
concurrencylimit: 10
auth: auth:
silly: silly:
realm: silly realm: silly
@ -177,22 +199,31 @@ notifications:
actions: actions:
- pull - pull
http: http:
clientcas: tls:
- /path/to/ca.pem clientcas:
- /path/to/ca.pem
headers: headers:
X-Content-Type-Options: [nosniff] X-Content-Type-Options: [nosniff]
redis: redis:
addr: localhost:6379 tls:
certificate: /foo/cert.crt
key: /foo/key.pem
clientcas:
- /path/to/ca.pem
addrs: [localhost:6379]
username: alice username: alice
password: 123456 password: "123456"
db: 1 db: 1
pool: maxidleconns: 16
maxidle: 16 poolsize: 64
maxactive: 64 connmaxidletime: 300s
idletimeout: 300s
dialtimeout: 10ms dialtimeout: 10ms
readtimeout: 10ms readtimeout: 10ms
writetimeout: 10ms writetimeout: 10ms
validation:
manifests:
indexes:
platforms: none
` `
// inmemoryConfigYamlV0_1 is a Version 0.1 yaml document specifying an inmemory // inmemoryConfigYamlV0_1 is a Version 0.1 yaml document specifying an inmemory
@ -222,6 +253,10 @@ notifications:
http: http:
headers: headers:
X-Content-Type-Options: [nosniff] X-Content-Type-Options: [nosniff]
validation:
manifests:
indexes:
platforms: none
` `
type ConfigSuite struct { type ConfigSuite struct {
@ -261,6 +296,7 @@ func (suite *ConfigSuite) TestParseSimple() {
func (suite *ConfigSuite) TestParseInmemory() { func (suite *ConfigSuite) TestParseInmemory() {
suite.expectedConfig.Storage = Storage{"inmemory": Parameters{}} suite.expectedConfig.Storage = Storage{"inmemory": Parameters{}}
suite.expectedConfig.Log.Fields = nil suite.expectedConfig.Log.Fields = nil
suite.expectedConfig.HTTP.TLS.ClientCAs = nil
suite.expectedConfig.Redis = Redis{} suite.expectedConfig.Redis = Redis{}
config, err := Parse(bytes.NewReader([]byte(inmemoryConfigYamlV0_1))) config, err := Parse(bytes.NewReader([]byte(inmemoryConfigYamlV0_1)))
@ -281,7 +317,9 @@ func (suite *ConfigSuite) TestParseIncomplete() {
suite.expectedConfig.Auth = Auth{"silly": Parameters{"realm": "silly"}} suite.expectedConfig.Auth = Auth{"silly": Parameters{"realm": "silly"}}
suite.expectedConfig.Notifications = Notifications{} suite.expectedConfig.Notifications = Notifications{}
suite.expectedConfig.HTTP.Headers = nil suite.expectedConfig.HTTP.Headers = nil
suite.expectedConfig.HTTP.TLS.ClientCAs = nil
suite.expectedConfig.Redis = Redis{} suite.expectedConfig.Redis = Redis{}
suite.expectedConfig.Validation.Manifests.Indexes.Platforms = ""
// Note: this also tests that REGISTRY_STORAGE and // Note: this also tests that REGISTRY_STORAGE and
// REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY can be used together // REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY can be used together
@ -534,6 +572,9 @@ func copyConfig(config Configuration) *Configuration {
for k, v := range config.Storage.Parameters() { for k, v := range config.Storage.Parameters() {
configCopy.Storage.setParameter(k, v) configCopy.Storage.setParameter(k, v)
} }
for k, v := range config.Storage.TagParameters() {
configCopy.Storage.setTagParameter(k, v)
}
configCopy.Auth = Auth{config.Auth.Type(): Parameters{}} configCopy.Auth = Auth{config.Auth.Type(): Parameters{}}
for k, v := range config.Auth.Parameters() { for k, v := range config.Auth.Parameters() {
@ -547,8 +588,20 @@ func copyConfig(config Configuration) *Configuration {
for k, v := range config.HTTP.Headers { for k, v := range config.HTTP.Headers {
configCopy.HTTP.Headers[k] = v configCopy.HTTP.Headers[k] = v
} }
configCopy.HTTP.TLS.ClientCAs = make([]string, 0, len(config.HTTP.TLS.ClientCAs))
configCopy.HTTP.TLS.ClientCAs = append(configCopy.HTTP.TLS.ClientCAs, config.HTTP.TLS.ClientCAs...)
configCopy.Redis = config.Redis configCopy.Redis = config.Redis
configCopy.Redis.TLS.Certificate = config.Redis.TLS.Certificate
configCopy.Redis.TLS.Key = config.Redis.TLS.Key
configCopy.Redis.TLS.ClientCAs = make([]string, 0, len(config.Redis.TLS.ClientCAs))
configCopy.Redis.TLS.ClientCAs = append(configCopy.Redis.TLS.ClientCAs, config.Redis.TLS.ClientCAs...)
configCopy.Validation = Validation{
Enabled: config.Validation.Enabled,
Disabled: config.Validation.Disabled,
Manifests: config.Validation.Manifests,
}
return configCopy return configCopy
} }

View file

@ -39,11 +39,7 @@ target "update-vendor" {
target "mod-outdated" { target "mod-outdated" {
dockerfile = "./dockerfiles/vendor.Dockerfile" dockerfile = "./dockerfiles/vendor.Dockerfile"
target = "outdated" target = "outdated"
args = { no-cache-filter = ["outdated"]
// used to invalidate cache for outdated run stage
// can be dropped when https://github.com/moby/buildkit/issues/1213 fixed
_RANDOM = uuidv4()
}
output = ["type=cacheonly"] output = ["type=cacheonly"]
} }
@ -95,15 +91,8 @@ target "image-all" {
] ]
} }
variable "DOCS_BASEURL" {
default = null
}
target "_common_docs" { target "_common_docs" {
dockerfile = "./dockerfiles/docs.Dockerfile" dockerfile = "./dockerfiles/docs.Dockerfile"
args = {
DOCS_BASEURL = DOCS_BASEURL
}
} }
target "docs-export" { target "docs-export" {
@ -124,3 +113,15 @@ target "docs-test" {
target = "test" target = "test"
output = ["type=cacheonly"] output = ["type=cacheonly"]
} }
target "authors" {
dockerfile = "./dockerfiles/authors.Dockerfile"
target = "update"
output = ["."]
}
target "validate-authors" {
dockerfile = "./dockerfiles/authors.Dockerfile"
target = "validate"
output = ["type=cacheonly"]
}

View file

@ -0,0 +1,34 @@
# syntax=docker/dockerfile:1
ARG ALPINE_VERSION=3.20
FROM alpine:${ALPINE_VERSION} AS gen
RUN apk add --no-cache git
WORKDIR /src
RUN --mount=type=bind,target=. <<EOT
set -e
mkdir /out
# see also ".mailmap" for how email addresses and names are deduplicated
{
echo "# This file lists all individuals having contributed content to the repository."
echo "# For how it is generated, see dockerfiles/authors.Dockerfile."
echo
git log --format='%aN <%aE>' | LC_ALL=C.UTF-8 sort -uf
} > /out/AUTHORS
cat /out/AUTHORS
EOT
FROM scratch AS update
COPY --from=gen /out /
FROM gen AS validate
RUN --mount=type=bind,target=.,rw <<EOT
set -e
git add -A
cp -rf /out/* .
if [ -n "$(git status --porcelain -- AUTHORS)" ]; then
echo >&2 'ERROR: Authors result differs. Please update with "make authors"'
git status --porcelain -- AUTHORS
exit 1
fi
EOT

View file

@ -1,7 +1,7 @@
# syntax=docker/dockerfile:1 # syntax=docker/dockerfile:1
ARG GO_VERSION=1.21.5 ARG GO_VERSION=1.22.4
ARG ALPINE_VERSION=3.18 ARG ALPINE_VERSION=3.20
FROM golang:${GO_VERSION}-alpine${ALPINE_VERSION} AS base FROM golang:${GO_VERSION}-alpine${ALPINE_VERSION} AS base
RUN apk add --no-cache git RUN apk add --no-cache git
@ -16,9 +16,8 @@ COPY --from=hugo $GOPATH/bin/hugo /bin/hugo
WORKDIR /src WORKDIR /src
FROM build-base AS build FROM build-base AS build
ARG DOCS_BASEURL=/
RUN --mount=type=bind,rw,source=docs,target=. \ RUN --mount=type=bind,rw,source=docs,target=. \
hugo --gc --minify --destination /out -b $DOCS_BASEURL hugo --gc --minify --destination /out
FROM build-base AS server FROM build-base AS server
COPY docs . COPY docs .
@ -29,8 +28,12 @@ FROM scratch AS out
COPY --from=build /out / COPY --from=build /out /
FROM wjdp/htmltest:v0.17.0 AS test FROM wjdp/htmltest:v0.17.0 AS test
# Copy the site to a public/distribution subdirectory
# This is a workaround for a limitation in htmltest, see:
# https://github.com/wjdp/htmltest/issues/45
WORKDIR /test/public/distribution
COPY --from=build /out .
WORKDIR /test WORKDIR /test
COPY --from=build /out ./public
ADD docs/.htmltest.yml .htmltest.yml ADD docs/.htmltest.yml .htmltest.yml
RUN --mount=type=cache,target=tmp/.htmltest \ RUN --mount=type=cache,target=tmp/.htmltest \
htmltest htmltest

View file

@ -1,7 +1,7 @@
# syntax=docker/dockerfile:1 # syntax=docker/dockerfile:1
ARG GO_VERSION=1.20.12 ARG GO_VERSION=1.22.4
ARG ALPINE_VERSION=3.18 ARG ALPINE_VERSION=3.20
FROM alpine:${ALPINE_VERSION} AS base FROM alpine:${ALPINE_VERSION} AS base
RUN apk add --no-cache git gpg RUN apk add --no-cache git gpg

View file

@ -1,8 +1,8 @@
# syntax=docker/dockerfile:1 # syntax=docker/dockerfile:1
ARG GO_VERSION=1.20.12 ARG GO_VERSION=1.22.4
ARG ALPINE_VERSION=3.18 ARG ALPINE_VERSION=3.20
ARG GOLANGCI_LINT_VERSION=v1.55.2 ARG GOLANGCI_LINT_VERSION=v1.59.1
ARG BUILDTAGS="" ARG BUILDTAGS=""
FROM golangci/golangci-lint:${GOLANGCI_LINT_VERSION}-alpine AS golangci-lint FROM golangci/golangci-lint:${GOLANGCI_LINT_VERSION}-alpine AS golangci-lint

View file

@ -1,7 +1,7 @@
# syntax=docker/dockerfile:1 # syntax=docker/dockerfile:1
ARG GO_VERSION=1.20.12 ARG GO_VERSION=1.22.4
ARG ALPINE_VERSION=3.18 ARG ALPINE_VERSION=3.20
ARG MODOUTDATED_VERSION=v0.8.0 ARG MODOUTDATED_VERSION=v0.8.0
FROM golang:${GO_VERSION}-alpine${ALPINE_VERSION} AS base FROM golang:${GO_VERSION}-alpine${ALPINE_VERSION} AS base
@ -40,7 +40,6 @@ EOT
FROM psampaz/go-mod-outdated:${MODOUTDATED_VERSION} AS go-mod-outdated FROM psampaz/go-mod-outdated:${MODOUTDATED_VERSION} AS go-mod-outdated
FROM base AS outdated FROM base AS outdated
ARG _RANDOM
RUN --mount=target=.,ro \ RUN --mount=target=.,ro \
--mount=target=/go/pkg/mod,type=cache \ --mount=target=/go/pkg/mod,type=cache \
--mount=from=go-mod-outdated,source=/home/go-mod-outdated,target=/usr/bin/go-mod-outdated \ --mount=from=go-mod-outdated,source=/home/go-mod-outdated,target=/usr/bin/go-mod-outdated \

View file

@ -50,7 +50,7 @@ specify it in the `docker run` command:
```bash ```bash
$ docker run -d -p 5000:5000 --restart=always --name registry \ $ docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/config.yml:/etc/docker/registry/config.yml \ -v `pwd`/config.yml:/etc/distribution/config.yml \
registry:2 registry:2
``` ```
@ -141,6 +141,8 @@ storage:
usedualstack: false usedualstack: false
loglevel: debug loglevel: debug
inmemory: # This driver takes no parameters inmemory: # This driver takes no parameters
tag:
concurrencylimit: 8
delete: delete:
enabled: false enabled: false
redirect: redirect:
@ -166,6 +168,10 @@ auth:
service: token-service service: token-service
issuer: registry-token-issuer issuer: registry-token-issuer
rootcertbundle: /root/certs/bundle rootcertbundle: /root/certs/bundle
jwks: /path/to/jwks
signingalgorithms:
- EdDSA
- HS256
htpasswd: htpasswd:
realm: basic-realm realm: basic-realm
path: /path/to/htpasswd path: /path/to/htpasswd
@ -220,6 +226,8 @@ http:
X-Content-Type-Options: [nosniff] X-Content-Type-Options: [nosniff]
http2: http2:
disabled: false disabled: false
h2c:
enabled: false
notifications: notifications:
events: events:
includereferences: true includereferences: true
@ -239,16 +247,20 @@ notifications:
actions: actions:
- pull - pull
redis: redis:
addr: localhost:6379 tls:
certificate: /path/to/cert.crt
key: /path/to/key.pem
clientcas:
- /path/to/ca.pem
addrs: [localhost:6379]
password: asecret password: asecret
db: 0 db: 0
dialtimeout: 10ms dialtimeout: 10ms
readtimeout: 10ms readtimeout: 10ms
writetimeout: 10ms writetimeout: 10ms
pool: maxidleconns: 16
maxidle: 16 poolsize: 64
maxactive: 64 connmaxidletime: 300s
idletimeout: 300s
tls: tls:
enabled: false enabled: false
health: health:
@ -284,6 +296,11 @@ validation:
- ^https?://([^/]+\.)*example\.com/ - ^https?://([^/]+\.)*example\.com/
deny: deny:
- ^https?://www\.example\.com/ - ^https?://www\.example\.com/
indexes:
platforms: List
platformlist:
- architecture: amd64
os: linux
``` ```
In some instances a configuration option is **optional** but it contains child In some instances a configuration option is **optional** but it contains child
@ -434,17 +451,17 @@ The `storage` option is **required** and defines which storage backend is in
use. You must configure exactly one backend. If you configure more, the registry use. You must configure exactly one backend. If you configure more, the registry
returns an error. You can choose any of these backend storage drivers: returns an error. You can choose any of these backend storage drivers:
| Storage driver | Description |
|----------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `filesystem` | Uses the local disk to store registry files. It is ideal for development and may be appropriate for some small-scale production applications. See the [driver's reference documentation](../storage-drivers/filesystem.md). |
| `azure` | Uses Microsoft Azure Blob Storage. See the [driver's reference documentation](../storage-drivers/azure.md). |
| `gcs` | Uses Google Cloud Storage. See the [driver's reference documentation](../storage-drivers/gcs.md). |
| `s3` | Uses Amazon Simple Storage Service (S3) and compatible Storage Services. See the [driver's reference documentation](../storage-drivers/s3.md). |
For testing only, you can use the [`inmemory` storage
driver](../storage-drivers/inmemory.md).
If you would like to run a registry from volatile memory, use the
[`filesystem` driver](../storage-drivers/filesystem.md)
on a ramdisk.
If you are deploying a registry on Windows, a Windows volume mounted from the
@ -519,6 +536,26 @@ parameter sets a limit on the number of descriptors to store in the cache.
The default value is 10000. If this parameter is set to 0, the cache is allowed The default value is 10000. If this parameter is set to 0, the cache is allowed
to grow with no size limit. to grow with no size limit.
### `tag`
The `tag` subsection provides configuration for setting a concurrency limit on tag lookups.
When a user calls the registry to delete a manifest, the registry looks up all tags that
reference the deleted manifest. To find those references, the registry iterates every tag
in the repository and reads its link file to check whether it matches the deleted manifest
(that is, whether it uses the same sha256 digest). The more tags a repository has, the worse
the performance will be (with the S3 storage driver, for example, there will be more S3 API
calls for the tag directory lookups and tag file reads).
Therefore, the `tag` section exposes a single flag, `concurrencylimit`, to set the concurrency
limit and optimize tag lookup performance. When the value is not provided or equals 0,
`GOMAXPROCS` is used.
```yaml
tag:
concurrencylimit: 8
```
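
For intuition, the limit bounds how many tag link files are read in parallel during the reverse lookup. The sketch below shows that general pattern with a stdlib counting semaphore; the `resolveDigest` helper and the tag list are placeholders for illustration, not the registry's actual storage code.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// resolveDigest stands in for reading a tag's link file; in the registry this
// would be a storage-driver read (for example, one S3 GET per tag).
func resolveDigest(tag string) string {
	return "sha256:" + tag // placeholder
}

// tagsReferencing returns the tags whose link file matches the given digest,
// reading at most `limit` link files concurrently (a limit of 0 means GOMAXPROCS).
func tagsReferencing(tags []string, digest string, limit int) []string {
	if limit <= 0 {
		limit = runtime.GOMAXPROCS(0)
	}
	sem := make(chan struct{}, limit) // counting semaphore
	var (
		mu      sync.Mutex
		matches []string
		wg      sync.WaitGroup
	)
	for _, tag := range tags {
		wg.Add(1)
		go func(tag string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it
			if resolveDigest(tag) == digest {
				mu.Lock()
				matches = append(matches, tag)
				mu.Unlock()
			}
		}(tag)
	}
	wg.Wait()
	return matches
}

func main() {
	tags := []string{"latest", "v1", "v2"}
	fmt.Println(tagsReferencing(tags, "sha256:v1", 8))
}
```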
### `redirect` ### `redirect`
The `redirect` subsection provides configuration for managing redirects from The `redirect` subsection provides configuration for managing redirects from
@ -548,6 +585,11 @@ auth:
service: token-service service: token-service
issuer: registry-token-issuer issuer: registry-token-issuer
rootcertbundle: /root/certs/bundle rootcertbundle: /root/certs/bundle
jwks: /path/to/jwks
signingalgorithms:
- EdDSA
- HS256
- ES512
htpasswd: htpasswd:
realm: basic-realm realm: basic-realm
path: /path/to/htpasswd path: /path/to/htpasswd
@ -583,17 +625,49 @@ Token-based authentication allows you to decouple the authentication system from
the registry. It is an established authentication paradigm with a high degree of the registry. It is an established authentication paradigm with a high degree of
security. security.
| Parameter | Required | Description |
|----------------------|----------|-------------------------------------------------------|
| `realm` | yes | The realm in which the registry server authenticates. |
| `service` | yes | The service being authenticated. |
| `issuer` | yes | The name of the token issuer. The issuer inserts this into the token so it must match the value configured for the issuer. |
| `rootcertbundle` | yes | The absolute path to the root certificate bundle. This bundle contains the public part of the certificates used to sign authentication tokens. |
| `autoredirect` | no | When set to `true`, `realm` will be set to the Host header of the request as the domain and a path of `/auth/token/` (or the path specified by `autoredirectpath`); the `realm` URL scheme will use the `X-Forwarded-Proto` header if set, otherwise it will be set to `https`. |
| `autoredirectpath` | no | The path to redirect to if `autoredirect` is set to `true`, default: `/auth/token/`. |
| `signingalgorithms` | no | A list of token signing algorithms to use for verifying token signatures. If left empty the default list of signing algorithms is used. Please see below for allowed values and default. |
| `jwks` | no | The absolute path to the JSON Web Key Set (JWKS) file. The JWKS file contains the trusted keys used to verify the signature of authentication tokens. |
Available `signingalgorithms`:
- EdDSA
- HS256
- HS384
- HS512
- RS256
- RS384
- RS512
- ES256
- ES384
- ES512
- PS256
- PS384
- PS512
Default `signingalgorithms`:
- EdDSA
- HS256
- HS384
- HS512
- RS256
- RS384
- RS512
- ES256
- ES384
- ES512
- PS256
- PS384
- PS512
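
To illustrate how a `signingalgorithms` allow-list constrains token verification, here is a sketch using the `github.com/golang-jwt/jwt/v5` module (not necessarily the library distribution uses internally); the `lookupKey` JWKS resolution is a placeholder for resolving a key from the configured `jwks` file.

```go
package main

import (
	"fmt"

	"github.com/golang-jwt/jwt/v5"
)

// lookupKey is a placeholder for resolving the verification key from the
// configured JWKS file by the token's "kid" header.
func lookupKey(kid string) (interface{}, error) {
	return nil, fmt.Errorf("kid %q not found in JWKS", kid)
}

// verify parses a bearer token, accepting only the configured signing algorithms.
func verify(raw string, algs []string) (*jwt.Token, error) {
	return jwt.Parse(raw, func(t *jwt.Token) (interface{}, error) {
		kid, _ := t.Header["kid"].(string)
		return lookupKey(kid)
	}, jwt.WithValidMethods(algs)) // e.g. []string{"EdDSA", "HS256"}
}

func main() {
	if _, err := verify("<token>", []string{"EdDSA", "HS256"}); err != nil {
		fmt.Println("rejected:", err)
	}
}
```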
For more information about Token based authentication configuration, see the
[specification](../spec/auth/token.md).
### `htpasswd` ### `htpasswd`
@ -724,6 +798,8 @@ http:
X-Content-Type-Options: [nosniff] X-Content-Type-Options: [nosniff]
http2: http2:
disabled: false disabled: false
h2c:
enabled: false
``` ```
The `http` option details the configuration for the HTTP server that hosts the The `http` option details the configuration for the HTTP server that hosts the
@ -870,13 +946,24 @@ registry. This header is included in the example configuration file.
### `http2` ### `http2`
The `http2` structure within `http` is **optional**. Use this to control HTTP/2 over TLS
settings for the registry.
If `tls` is not configured this option is ignored. To enable HTTP/2 over non-TLS connections, use `h2c` instead.
| Parameter | Required | Description |
|-----------|----------|-------------------------------------------------------|
| `disabled` | no | If `true`, then `http2` support is disabled. |
### `h2c`
The `h2c` structure within `http` is **optional**. Use this to control H2C (HTTP/2 Cleartext)
settings for the registry.
This is useful when deploying the registry behind a load balancer (for example, Google Cloud Run).
| Parameter | Required | Description |
|-----------|----------|-------------------------------------------------------|
| `enabled` | no | If `true`, then `h2c` support is enabled. |
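
For background, H2C means speaking HTTP/2 over a plaintext connection, which Go's standard `net/http` server does not do on its own. A minimal sketch of serving h2c with `golang.org/x/net/http2/h2c` follows; it is illustrative only and not how the registry wires its server internally.

```go
package main

import (
	"fmt"
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/v2/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "proto=%s\n", r.Proto) // reports HTTP/2.0 for h2c clients
	})

	// Wrap the handler so cleartext HTTP/2 (h2c) connections are accepted
	// alongside plain HTTP/1.1 on the same listener, without TLS.
	handler := h2c.NewHandler(mux, &http2.Server{})

	srv := &http.Server{Addr: ":5000", Handler: handler}
	if err := srv.ListenAndServe(); err != nil {
		panic(err)
	}
}
```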
## `notifications` ## `notifications`
```yaml ```yaml
@ -937,72 +1024,46 @@ The `events` structure configures the information provided in event notification
## `redis` ## `redis`
Declare parameters for constructing the `redis` connections. Registry instances
may use the Redis instance for several applications. Currently, it caches
information about immutable blobs. Most of the `redis` options control
how the registry connects to the `redis` instance.
You should configure Redis with the **allkeys-lru** eviction policy, because the
registry does not set an expiration value on keys.
Under the hood, distribution uses the [`go-redis`](https://github.com/redis/go-redis) Go module for
Redis connectivity and its [`UniversalOptions`](https://pkg.go.dev/github.com/redis/go-redis/v9#UniversalOptions)
struct.
You can optionally specify TLS configuration on top of the `UniversalOptions` settings.
Use these settings to configure Redis TLS:
| Parameter | Required | Description |
|-----------|----------|-------------------------------------------------------|
| `certificate` | yes | Absolute path to the x509 certificate file. |
| `key` | yes | Absolute path to the x509 private key file. |
| `clientcas` | no | An array of absolute paths to x509 CA files. |
```yaml
redis:
  tls:
    certificate: /path/to/cert.crt
    key: /path/to/key.pem
    clientcas:
      - /path/to/ca.pem
  addrs: [localhost:6379]
  password: asecret
  db: 0
  dialtimeout: 10ms
  readtimeout: 10ms
  writetimeout: 10ms
  maxidleconns: 16
  poolsize: 64
  connmaxidletime: 300s
```
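
Since these keys map onto go-redis `UniversalOptions`, the YAML above corresponds roughly to the following Go construction. This is a sketch only: the certificate loading is simplified, and the `clientcas` handling (which would populate a CA pool) is only noted in a comment.

```go
package main

import (
	"crypto/tls"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	// Load the client certificate/key configured under redis.tls.
	cert, err := tls.LoadX509KeyPair("/path/to/cert.crt", "/path/to/key.pem")
	if err != nil {
		panic(err)
	}

	// Roughly what the YAML example expresses, as go-redis UniversalOptions.
	// clientcas would additionally be read into an x509.CertPool and set on
	// the tls.Config's RootCAs field.
	client := redis.NewUniversalClient(&redis.UniversalOptions{
		Addrs:           []string{"localhost:6379"},
		Password:        "asecret",
		DB:              0,
		DialTimeout:     10 * time.Millisecond,
		ReadTimeout:     10 * time.Millisecond,
		WriteTimeout:    10 * time.Millisecond,
		MaxIdleConns:    16,
		PoolSize:        64,
		ConnMaxIdleTime: 300 * time.Second,
		TLSConfig:       &tls.Config{Certificates: []tls.Certificate{cert}},
	})
	defer client.Close()
}
```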
Declare parameters for constructing the `redis` connections. Registry instances
may use the Redis instance for several applications. Currently, it caches
information about immutable blobs. Most of the `redis` options control
how the registry connects to the `redis` instance. You can control the pool's
behavior with the [pool](#pool) subsection. Additionally, you can control
TLS connection settings with the [tls](#tls) subsection (in-transit encryption).
You should configure Redis with the **allkeys-lru** eviction policy, because the
registry does not set an expiration value on keys.
| Parameter | Required | Description |
|-----------|----------|-------------------------------------------------------|
| `addr` | yes | The address (host and port) of the Redis instance. |
| `password`| no | A password used to authenticate to the Redis instance.|
| `db` | no | The name of the database to use for each connection. |
| `dialtimeout` | no | The timeout for connecting to the Redis instance. |
| `readtimeout` | no | The timeout for reading from the Redis instance. |
| `writetimeout` | no | The timeout for writing to the Redis instance. |
### `pool`
```yaml
pool:
maxidle: 16
maxactive: 64
idletimeout: 300s
```
Use these settings to configure the behavior of the Redis connection pool.
| Parameter | Required | Description |
|-----------|----------|-------------------------------------------------------|
| `maxidle` | no | The maximum number of idle connections in the pool. |
| `maxactive`| no | The maximum number of connections which can be open before blocking a connection request. |
| `idletimeout`| no | How long to wait before closing inactive connections. |
### `tls`
```yaml
tls:
enabled: false
```
Use these settings to configure Redis TLS.
| Parameter | Required | Description |
|-----------|----------|-------------------------------------- |
| `enabled` | no | Whether or not to use TLS in-transit. |
## `health` ## `health`
```yaml ```yaml
@ -1100,7 +1161,7 @@ proxy:
The `proxy` structure allows a registry to be configured as a pull-through cache The `proxy` structure allows a registry to be configured as a pull-through cache
to Docker Hub. See to Docker Hub. See
[mirror](/recipes/mirror) [mirror](../recipes/mirror.md)
for more information. Pushing to a registry configured as a pull-through cache for more information. Pushing to a registry configured as a pull-through cache
is unsupported. is unsupported.
@ -1122,14 +1183,14 @@ username (such as `batman`) and the password for that username.
```yaml
validation:
  disabled: false
```
Use these settings to configure what validation the registry performs on content.
Validation is performed when content is uploaded to the registry. Changing these
settings will not validate content that has already been accepted into the registry.
### `disabled` ### `disabled`
The `disabled` flag disables the other options in the `validation` The `disabled` flag disables the other options in the `validation`
@ -1142,6 +1203,16 @@ Use the `manifests` subsection to configure validation of manifests. If
#### `urls` #### `urls`
```yaml
validation:
  manifests:
    urls:
      allow:
        - ^https?://([^/]+\.)*example\.com/
      deny:
        - ^https?://www\.example\.com/
```
The `allow` and `deny` options are each a list of
[regular expressions](https://pkg.go.dev/regexp/syntax) that restrict the URLs in
pushed manifests.
@ -1155,6 +1226,54 @@ one of the `allow` regular expressions **and** one of the following holds:
2. `deny` is set but no URLs within the manifest match any of the `deny` regular
   expressions.
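
A simplified per-URL version of this rule, sketching only the matching logic rather than the registry's actual validation code:

```go
package main

import (
	"fmt"
	"regexp"
)

// urlAllowed reports whether a manifest URL passes an allow/deny policy:
// it must match at least one allow expression and no deny expression.
func urlAllowed(url string, allow, deny []*regexp.Regexp) bool {
	allowed := false
	for _, re := range allow {
		if re.MatchString(url) {
			allowed = true
			break
		}
	}
	if !allowed {
		return false
	}
	for _, re := range deny {
		if re.MatchString(url) {
			return false
		}
	}
	return true
}

func main() {
	allow := []*regexp.Regexp{regexp.MustCompile(`^https?://([^/]+\.)*example\.com/`)}
	deny := []*regexp.Regexp{regexp.MustCompile(`^https?://www\.example\.com/`)}
	fmt.Println(urlAllowed("https://foo.example.com/layer", allow, deny)) // true
	fmt.Println(urlAllowed("https://www.example.com/layer", allow, deny)) // false
}
```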
#### `indexes`
By default the registry will validate that all platform images exist when an image
index is uploaded to the registry. Disabling this validation is experimental
because other tooling that uses the registry may expect the image index to be complete.
```yaml
validation:
  manifests:
    indexes:
      platforms: [all|none|list]
      platformlist:
        - os: linux
          architecture: amd64
```
Use these settings to configure what validation the registry performs on image
index manifests uploaded to the registry.
##### `platforms`
Set `platforms` to `all` (the default) to validate that all platform images exist.
The registry will validate that the images referenced by the index exist in the
registry before accepting the image index.
Set `platforms` to `none` to disable all validation that images exist when an
image index manifest is uploaded. This allows image lists to be uploaded to the
registry without their associated images. This setting is experimental because
other tooling that uses the registry may expect the image index to be complete.
Set `platforms` to `list` to selectively validate the existence of platforms
within image index manifests. This setting is experimental because other tooling
that uses the registry may expect the image index to be complete.
##### `platformlist`
When `platforms` is set to `list`, set `platformlist` to an array of
platforms to validate. If a platform is included in this array and in the images
contained within an index, the registry will validate that the platform-specific image
exists in the registry before accepting the index. The registry will not validate the
existence of platform-specific images in the index that do not appear in the
`platformlist` array.
This parameter does not validate that the configured platforms are included in every
index. If an image index does not include one of the platform specific images configured
in the `platformlist` array, it may still be accepted by the registry.
Each platform is a map with two keys, `os` and `architecture`, as defined in the
[OCI Image Index specification](https://github.com/opencontainers/image-spec/blob/main/image-index.md#image-index-property-descriptions).
## Example: Development configuration ## Example: Development configuration
You can use this simple example for local development: You can use this simple example for local development:

View file

@ -9,7 +9,7 @@ A registry is an instance of the `registry` image, and runs within Docker.
This topic provides basic information about deploying and configuring a This topic provides basic information about deploying and configuring a
registry. For an exhaustive list of configuration options, see the registry. For an exhaustive list of configuration options, see the
[configuration reference](../configuration). [configuration reference](configuration.md).
If you have an air-gapped datacenter, see If you have an air-gapped datacenter, see
[Considerations for air-gapped registries](#considerations-for-air-gapped-registries). [Considerations for air-gapped registries](#considerations-for-air-gapped-registries).
@ -27,7 +27,7 @@ The registry is now ready to use.
> **Warning**: These first few examples show registry configurations that are > **Warning**: These first few examples show registry configurations that are
> only appropriate for testing. A production-ready registry must be protected by > only appropriate for testing. A production-ready registry must be protected by
> TLS and should ideally use an access-control mechanism. Keep reading and then > TLS and should ideally use an access-control mechanism. Keep reading and then
> continue to the [configuration guide](../configuration) to deploy a > continue to the [configuration guide](configuration.md) to deploy a
> production-ready registry. > production-ready registry.
## Copy an image from Docker Hub to your registry ## Copy an image from Docker Hub to your registry
@ -94,7 +94,7 @@ To configure the container, you can pass additional or modified options to the
`docker run` command. `docker run` command.
The following sections provide basic guidelines for configuring your registry. The following sections provide basic guidelines for configuring your registry.
For more details, see the [registry configuration reference](../configuration). For more details, see the [registry configuration reference](configuration.md).
### Start the registry automatically ### Start the registry automatically
@ -166,8 +166,8 @@ $ docker run -d \
By default, the registry stores its data on the local filesystem, whether you By default, the registry stores its data on the local filesystem, whether you
use a bind mount or a volume. You can store the registry data in an Amazon S3 use a bind mount or a volume. You can store the registry data in an Amazon S3
bucket, Google Cloud Platform, or on another storage back-end by using bucket, Google Cloud Platform, or on another storage back-end by using
[storage drivers](/storage-drivers). For more information, see [storage drivers](../storage-drivers/_index.md). For more information, see
[storage configuration options](../configuration#storage). [storage configuration options](configuration.md#storage).
## Run an externally-accessible registry ## Run an externally-accessible registry
@ -252,13 +252,13 @@ The registry supports using Let's Encrypt to automatically obtain a
browser-trusted certificate. For more information on Let's Encrypt, see browser-trusted certificate. For more information on Let's Encrypt, see
[https://letsencrypt.org/how-it-works/](https://letsencrypt.org/how-it-works/) [https://letsencrypt.org/how-it-works/](https://letsencrypt.org/how-it-works/)
and the relevant section of the and the relevant section of the
[registry configuration](../configuration#letsencrypt). [registry configuration](configuration.md#letsencrypt).
### Use an insecure registry (testing only) ### Use an insecure registry (testing only)
It is possible to use a self-signed certificate, or to use our registry It is possible to use a self-signed certificate, or to use our registry
insecurely. Unless you have set up verification for your self-signed insecurely. Unless you have set up verification for your self-signed
certificate, this is for testing only. See [run an insecure registry](../insecure). certificate, this is for testing only. See [run an insecure registry](insecure.md).
## Run the registry as a service ## Run the registry as a service
@ -462,20 +462,20 @@ using htpasswd, all authentication attempts will fail.
{{< hint type=note title="X509 errors" >}} {{< hint type=note title="X509 errors" >}}
X509 errors usually indicate that you are attempting to use X509 errors usually indicate that you are attempting to use
a self-signed certificate without configuring the Docker daemon correctly. a self-signed certificate without configuring the Docker daemon correctly.
See [run an insecure registry](../insecure). See [run an insecure registry](insecure.md).
{{< /hint >}} {{< /hint >}}
### More advanced authentication ### More advanced authentication
You may want to leverage more advanced basic auth implementations by using a You may want to leverage more advanced basic auth implementations by using a
proxy in front of the registry. See the [recipes list](/recipes/). proxy in front of the registry. See the [recipes list](../recipes/_index.md).
The registry also supports delegated authentication which redirects users to a The registry also supports delegated authentication which redirects users to a
specific trusted token server. This approach is more complicated to set up, and specific trusted token server. This approach is more complicated to set up, and
only makes sense if you need to fully configure ACLs and need more control over only makes sense if you need to fully configure ACLs and need more control over
the registry's integration into your global authorization and authentication the registry's integration into your global authorization and authentication
systems. Refer to the following [background information](/spec/auth/token) and systems. Refer to the following [background information](../spec/auth/token.md) and
[configuration information here](../configuration#auth). [configuration information here](configuration.md#auth).
This approach requires you to implement your own authentication system or This approach requires you to implement your own authentication system or
leverage a third-party implementation. leverage a third-party implementation.
@ -572,9 +572,9 @@ artifacts.
More specific and advanced information is available in the following sections: More specific and advanced information is available in the following sections:
- [Configuration reference](configuration.md)
- [Working with notifications](notifications.md)
- [Advanced "recipes"](../recipes/_index.md)
- [Registry API](../spec/api.md)
- [Storage driver model](../storage-drivers/_index.md)
- [Token authentication](../spec/auth/token.md)

View file

@ -21,15 +21,15 @@ that certain layers no longer exist on the filesystem.
Filesystem layers are stored by their content address in the Registry. This Filesystem layers are stored by their content address in the Registry. This
has many advantages, one of which is that data is stored once and referred to by manifests. has many advantages, one of which is that data is stored once and referred to by manifests.
See [here](../compatibility#content-addressable-storage-cas) for more details. See [here](compatibility.md#content-addressable-storage-cas) for more details.
Layers are therefore shared amongst manifests; each manifest maintains a reference Layers are therefore shared amongst manifests; each manifest maintains a reference
to the layer. As long as a layer is referenced by one manifest, it cannot be garbage to the layer. As long as a layer is referenced by one manifest, it cannot be garbage
collected. collected.
Manifests and layers can be `deleted` with the registry API (refer to the API Manifests and layers can be `deleted` with the registry API (refer to the API
documentation [here](/spec/api#deleting-a-layer) and documentation [here](../spec/api.md#deleting-a-layer) and
[here](/spec/api#deleting-an-image) for details). This API removes references [here](../spec/api.md#deleting-an-image) for details). This API removes references
to the target and makes them eligible for garbage collection. It also makes them to the target and makes them eligible for garbage collection. It also makes them
unable to be read via the API. unable to be read via the API.

View file

@ -72,7 +72,7 @@ This is more secure than the insecure registry solution.
Be sure to use the name `myregistry.domain.com` as a CN. Be sure to use the name `myregistry.domain.com` as a CN.
2. Use the result to [start your registry with TLS enabled](../deploying#get-a-certificate). 2. Use the result to [start your registry with TLS enabled](deploying.md#get-a-certificate).
3. Instruct every Docker daemon to trust that certificate. The way to do this 3. Instruct every Docker daemon to trust that certificate. The way to do this
depends on your OS. depends on your OS.

View file

@ -10,7 +10,7 @@ pushes and pulls and layer pushes and pulls. These actions are serialized into
events. The events are queued into a registry-internal broadcast system which events. The events are queued into a registry-internal broadcast system which
queues and dispatches events to [_Endpoints_](#endpoints). queues and dispatches events to [_Endpoints_](#endpoints).
![Workflow of registry notifications](../../images/notifications.png) ![Workflow of registry notifications](/distribution/images/notifications.png)
## Endpoints ## Endpoints
@ -24,7 +24,7 @@ order is not guaranteed.
## Configuration ## Configuration
To setup a registry instance to send notifications to endpoints, one must add To set up a registry instance to send notifications to endpoints, one must add
them to the configuration. A simple example follows: them to the configuration. A simple example follows:
```yaml ```yaml
@ -45,7 +45,7 @@ The above would configure the registry with an endpoint to send events to
5 failures happen consecutively, the registry backs off for 1 second before 5 failures happen consecutively, the registry backs off for 1 second before
trying again. trying again.
For details on the fields, see the [configuration documentation](../configuration/#notifications). For details on the fields, see the [configuration documentation](configuration.md#notifications).
A properly configured endpoint should lead to a log message from the registry A properly configured endpoint should lead to a log message from the registry
upon startup: upon startup:

View file

@ -12,7 +12,7 @@ Usually, that includes enterprise setups using LDAP/AD on the backend and a SSO
### Alternatives ### Alternatives
If you just want authentication for your registry, and are happy maintaining users access separately, you should really consider sticking with the native [basic auth registry feature](/about/deploying#native-basic-auth). If you just want authentication for your registry, and are happy maintaining users access separately, you should really consider sticking with the native [basic auth registry feature](../about/deploying.md#native-basic-auth).
### Solution ### Solution

View file

@ -38,7 +38,7 @@ The following table shows examples of allowed and disallowed mirror URLs.
> **Note** > **Note**
> >
> Mirrors of Docker Hub are still subject to Docker's [fair usage policy](https://www.docker.com/pricing/resource-consumption-updates){: target="blank" rel="noopener" class=“”}. > Mirrors of Docker Hub are still subject to Docker's [fair usage policy](https://www.docker.com/pricing/resource-consumption-updates).
### Solution ### Solution
@ -72,7 +72,7 @@ be configured to use the `filesystem` driver for storage.
The easiest way to run a registry as a pull through cache is to run the official The easiest way to run a registry as a pull through cache is to run the official
Registry image. Registry image.
At least, you need to specify `proxy.remoteurl` within `/etc/docker/registry/config.yml` At least, you need to specify `proxy.remoteurl` within `/etc/distribution/config.yml`
as described in the following subsection. as described in the following subsection.
Multiple registry caches can be deployed over the same back-end. A single Multiple registry caches can be deployed over the same back-end. A single
@ -107,7 +107,7 @@ proxy:
> **Warning**: For the scheduler to clean up old entries, `delete` must > **Warning**: For the scheduler to clean up old entries, `delete` must
> be enabled in the registry configuration. See > be enabled in the registry configuration. See
> [Registry Configuration](/about/configuration) for more details. > [Registry Configuration](../about/configuration.md) for more details.
### Configure the Docker daemon ### Configure the Docker daemon

View file

@ -17,7 +17,7 @@ mechanism fronting their internal http portal.
If you just want authentication for your registry, and are happy maintaining If you just want authentication for your registry, and are happy maintaining
users access separately, you should really consider sticking with the native users access separately, you should really consider sticking with the native
[basic auth registry feature](/about/deploying#native-basic-auth). [basic auth registry feature](../about/deploying.md#native-basic-auth).
### Solution ### Solution

View file

@ -6,7 +6,7 @@ keywords: registry, service, images, repository, json
# Docker Registry Reference # Docker Registry Reference
* [HTTP API V2](api) * [HTTP API V2](api.md)
* [Storage Driver](/storage-drivers/) * [Storage Driver](../storage-drivers/_index.md)
* [Token Authentication Specification](auth/token) * [Token Authentication Specification](auth/token.md)
* [Token Authentication Implementation](auth/jwt) * [Token Authentication Implementation](auth/jwt.md)

View file

@ -416,7 +416,7 @@ reference may include a tag or digest.
The client should include an Accept header indicating which manifest content The client should include an Accept header indicating which manifest content
types it supports. For more details on the manifest format and content types, types it supports. For more details on the manifest format and content types,
see [Image Manifest Version 2, Schema 2](../manifest-v2-2). see [Image Manifest Version 2, Schema 2](manifest-v2-2.md).
In a successful response, the Content-Type header will indicate which manifest type is being returned. In a successful response, the Content-Type header will indicate which manifest type is being returned.
A `404 Not Found` response will be returned if the image is unknown to the A `404 Not Found` response will be returned if the image is unknown to the
@ -840,7 +840,7 @@ Content-Type: <manifest media type>
The `name` and `reference` fields of the response body must match those The `name` and `reference` fields of the response body must match those
specified in the URL. The `reference` field may be a "tag" or a "digest". The specified in the URL. The `reference` field may be a "tag" or a "digest". The
content type should match the type of the manifest being uploaded, as specified content type should match the type of the manifest being uploaded, as specified
in [Image Manifest Version 2, Schema 2](../manifest-v2-2). in [Image Manifest Version 2, Schema 2](manifest-v2-2.md).
If there is a problem with pushing the manifest, a relevant 4xx response will If there is a problem with pushing the manifest, a relevant 4xx response will
be returned with a JSON error message. Please see the be returned with a JSON error message. Please see the
@ -1088,7 +1088,7 @@ response will be issued instead.
Accept: application/vnd.docker.distribution.manifest.v2+json Accept: application/vnd.docker.distribution.manifest.v2+json
> for more details, see: [compatibility](/about/compatibility#content-addressable-storage-cas) > for more details, see: [compatibility](../about/compatibility.md#content-addressable-storage-cas)
## Detail ## Detail

View file

@ -12,7 +12,7 @@ reference for the protocol and HTTP endpoints described here.
**Note**: Not all token servers implement oauth2. If the request to the endpoint **Note**: Not all token servers implement oauth2. If the request to the endpoint
returns `404` using the HTTP `POST` method, refer to returns `404` using the HTTP `POST` method, refer to
[Token Documentation](../token) for using the HTTP `GET` method supported by all [Token Documentation](token.md) for using the HTTP `GET` method supported by all
token servers. token servers.
## Refresh token format ## Refresh token format

View file

@ -144,7 +144,7 @@ Each JWT access token may only have a single subject and audience but multiple
resource scopes. The subject and audience are put into standard JWT fields resource scopes. The subject and audience are put into standard JWT fields
`sub` and `aud`. The resource scope is put into the `access` field. The `sub` and `aud`. The resource scope is put into the `access` field. The
structure of the access field can be seen in the structure of the access field can be seen in the
[jwt documentation](../jwt). [jwt documentation](jwt.md).
## Refresh Tokens ## Refresh Tokens

View file

@ -8,7 +8,7 @@ keywords: registry, on-prem, images, tags, repository, distribution, Bearer auth
This document outlines the v2 Distribution registry authentication scheme: This document outlines the v2 Distribution registry authentication scheme:
![v2 registry auth](../../../images/v2-registry-auth.png) ![v2 registry auth](/distribution/images/v2-registry-auth.png)
1. Attempt to begin a push/pull operation with the registry. 1. Attempt to begin a push/pull operation with the registry.
2. If the registry requires authorization it will return a `401 Unauthorized` 2. If the registry requires authorization it will return a `401 Unauthorized`
@ -188,7 +188,7 @@ https://auth.docker.io/token?service=registry.docker.io&scope=repository:samalba
The token server should first attempt to authenticate the client using any The token server should first attempt to authenticate the client using any
authentication credentials provided with the request. From Docker 1.11 the authentication credentials provided with the request. From Docker 1.11 the
Docker engine supports both Basic Authentication and [OAuth2](../oauth) for Docker engine supports both Basic Authentication and [OAuth2](oauth.md) for
getting tokens. Docker 1.10 and before, the registry client in the Docker Engine getting tokens. Docker 1.10 and before, the registry client in the Docker Engine
only supports Basic Authentication. If an attempt to authenticate to the token only supports Basic Authentication. If an attempt to authenticate to the token
server fails, the token server should return a `401 Unauthorized` response server fails, the token server should return a `401 Unauthorized` response

View file

@ -71,7 +71,7 @@ image manifest based on the Content-Type returned in the HTTP response.
- **`digest`** *string* - **`digest`** *string*
The digest of the content, as defined by the The digest of the content, as defined by the
[Registry V2 HTTP API Specificiation](../api#digest-parameter). [Registry V2 HTTP API Specification](api.md#digest-parameter).
- **`platform`** *object* - **`platform`** *object*
@ -187,7 +187,7 @@ image. It's the direct replacement for the schema-1 manifest.
- **`digest`** *string* - **`digest`** *string*
The digest of the content, as defined by the The digest of the content, as defined by the
[Registry V2 HTTP API Specificiation](../api#digest-parameter). [Registry V2 HTTP API Specification](api.md#digest-parameter).
- **`layers`** *array* - **`layers`** *array*
@ -213,7 +213,7 @@ image. It's the direct replacement for the schema-1 manifest.
- **`digest`** *string* - **`digest`** *string*
The digest of the content, as defined by the The digest of the content, as defined by the
[Registry V2 HTTP API Specificiation](../api#digest-parameter). [Registry V2 HTTP API Specification](api.md#digest-parameter).
- **`urls`** *array* - **`urls`** *array*

View file

@ -20,7 +20,22 @@ An implementation of the `storagedriver.StorageDriver` interface which uses [Mic
## Related information ## Related information
* To get information about Azure Blob Storage, see [the official docs](https://azure.microsoft.com/en-us/services/storage/).
* You can use the Azure [Blob Service REST API](https://docs.microsoft.com/en-us/rest/api/storageservices/Blob-Service-REST-API) to [create a storage container](https://docs.microsoft.com/en-us/rest/api/storageservices/Create-Container).
## Azure identity
To use managed identity to access Azure Blob Storage, you can use [Microsoft Bicep](https://learn.microsoft.com/en-us/azure/templates/microsoft.app/managedenvironments/storages?pivots=deployment-language-bicep).
The following configures the credentials that the Azure storage driver uses to construct an Azure Identity for accessing the blob storage:
```
properties: {
azure: {
accountname: accountname
container: containername
credentials: {
type: default
}
}
}
```

View file

@ -17,4 +17,8 @@ An implementation of the `storagedriver.StorageDriver` interface which uses Goog
{{< hint type=note >}} {{< hint type=note >}}
Instead of a key file you can use [Google Application Default Credentials](https://developers.google.com/identity/protocols/application-default-credentials). Instead of a key file you can use [Google Application Default Credentials](https://developers.google.com/identity/protocols/application-default-credentials).
To use redirects with default credentials assigned to a virtual machine you have to enable "IAM Service Account Credentials API" and grant `iam.serviceAccounts.signBlob` permission on the used service account.
To use redirects with default credentials from Google Cloud CLI, in addition to the permissions mentioned above, you have to [impersonate the service account intended to be used by the registry](https://cloud.google.com/sdk/gcloud/reference#--impersonate-service-account).
{{< /hint >}} {{< /hint >}}

View file

@ -7,7 +7,7 @@ title: In-memory storage driver (testing only)
For purely test purposes, you can use the `inmemory` storage driver. This
driver is an implementation of the `storagedriver.StorageDriver` interface which
uses local memory for object storage. If you would like to run a registry from
volatile memory, use the [`filesystem` driver](filesystem.md) on a ramdisk.
{{< hint type=important >}} {{< hint type=important >}}
This storage driver *does not* persist data across runs. This is why it is only suitable for testing. *Never* use this driver in production. This storage driver *does not* persist data across runs. This is why it is only suitable for testing. *Never* use this driver in production.

View file

@ -0,0 +1,15 @@
---
description: Explains how to use storage middleware
keywords: registry, on-prem, images, tags, repository, distribution, storage drivers, advanced
title: Storage middleware
---
This document describes the registry storage middleware.
## Provided middleware
This storage driver package comes bundled with several middleware options:
- cloudfront
- redirect
- [rewrite](rewrite): Partially rewrites the URL returned by the storage driver.

View file

@ -0,0 +1,32 @@
---
description: Explains how to use the rewrite storage middleware
keywords: registry, service, driver, images, storage, middleware, rewrite
title: Rewrite middleware
---
A storage middleware which rewrites the URL returned by the storage driver.
For example, it can be used to rewrite the Blob Storage URL returned by the Azure Blob Storage driver to use Azure CDN.
## Parameters
* `scheme` (optional): Rewrite the returned URL scheme (if set).
* `host` (optional): Rewrite the returned URL host (if set).
* `trimpathprefix` (optional): Trim the prefix from the returned URL path (if set).
## Example configuration
```yaml
storage:
  azure:
    accountname: "ACCOUNT_NAME"
    accountkey: "******"
    container: container-name
middleware:
  storage:
    - name: rewrite
      options:
        scheme: https
        host: example-cdn-endpoint.azurefd.net
        trimpathprefix: /container-name
```
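
For illustration, the effect of these options on a URL returned by the storage driver can be modeled with Go's `net/url`. This is a simplified sketch of the transformation, not the middleware's actual implementation, and the example URL is invented.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// rewrite applies the scheme/host/trimpathprefix options to a URL returned by
// the storage driver.
func rewrite(raw, scheme, host, trimPathPrefix string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	if scheme != "" {
		u.Scheme = scheme
	}
	if host != "" {
		u.Host = host
	}
	if trimPathPrefix != "" {
		u.Path = strings.TrimPrefix(u.Path, trimPathPrefix)
	}
	return u.String(), nil
}

func main() {
	// A hypothetical signed blob URL as the Azure driver might return it.
	signed := "https://account.blob.core.windows.net/container-name/docker/registry/v2/blobs/sha256/ab/abc?sig=..."
	out, _ := rewrite(signed, "https", "example-cdn-endpoint.azurefd.net", "/container-name")
	fmt.Println(out)
}
```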

View file

@ -15,7 +15,7 @@ Amazon S3 or S3 compatible services for object storage.
| `secretkey` | no | Your AWS Secret Key. If you use [IAM roles](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html), omit to fetch temporary credentials from IAM. | | `secretkey` | no | Your AWS Secret Key. If you use [IAM roles](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html), omit to fetch temporary credentials from IAM. |
| `region` | yes | The AWS region in which your bucket exists. | | `region` | yes | The AWS region in which your bucket exists. |
| `regionendpoint` | no | Endpoint for S3 compatible storage services (Minio, etc). | | `regionendpoint` | no | Endpoint for S3 compatible storage services (Minio, etc). |
| `forcepathstyle` | no | To enable path-style addressing when the value is set to `true`. The default is `true`. | | `forcepathstyle` | no | To enable path-style addressing when the value is set to `true`. The default is `false`. |
| `bucket` | yes | The bucket name in which you want to store the registry's data. | | `bucket` | yes | The bucket name in which you want to store the registry's data. |
| `encrypt` | no | Specifies whether the registry stores the image in encrypted format or not. A boolean value. The default is `false`. | | `encrypt` | no | Specifies whether the registry stores the image in encrypted format or not. A boolean value. The default is `false`. |
| `keyid` | no | Optional KMS key ID to use for encryption (encrypt must be true, or this parameter is ignored). The default is `none`. | | `keyid` | no | Optional KMS key ID to use for encryption (encrypt must be true, or this parameter is ignored). The default is `none`. |
@ -43,7 +43,7 @@ Amazon S3 or S3 compatible services for object storage.
`regionendpoint`: (optional) Endpoint URL for S3 compatible APIs. This should not be provided when using Amazon S3. `regionendpoint`: (optional) Endpoint URL for S3 compatible APIs. This should not be provided when using Amazon S3.
`forcepathstyle`: (optional) The force path style for S3 compatible APIs. Some manufacturers only support force path style, while others only support DNS based bucket routing. Amazon S3 supports both. `forcepathstyle`: (optional) Force path style for S3 compatible APIs. Some manufacturers only support force path style, while others only support DNS based bucket routing. Amazon S3 supports both. The value of this parameter applies, regardless of the region settings.
`bucket`: The name of your S3 bucket where you wish to store objects. The bucket must exist prior to the driver initialization. `bucket`: The name of your S3 bucket where you wish to store objects. The bucket must exist prior to the driver initialization.

View file

@ -5,7 +5,7 @@ This repository provides container images for the Open Source Registry implement
<img src="https://raw.githubusercontent.com/distribution/distribution/main/distribution-logo.svg" width="200px" />
-[![Build Status](https://github.com/distribution/distribution/workflows/CI/badge.svg?branch=main&event=push)](https://github.com/distribution/distribution/actions?query=workflow%3ACI)
+[![Build Status](https://github.com/distribution/distribution/workflows/build/badge.svg?branch=main&event=push)](https://github.com/distribution/distribution/actions/workflows/build.yml?query=workflow%3Abuild)
[![OCI Conformance](https://github.com/distribution/distribution/workflows/conformance/badge.svg)](https://github.com/distribution/distribution/actions?query=workflow%3Aconformance)
[![License: Apache-2.0](https://img.shields.io/badge/License-Apache--2.0-blue.svg)](LICENSE)
@ -31,12 +31,12 @@ docker tag alpine localhost:5000/alpine
docker push localhost:5000/alpine
```
-⚠️ Beware the default configuration uses [`filesystem` storage driver](https://github.com/distribution/distribution/blob/main/docs/storage-drivers/filesystem.md)
+⚠️ Beware the default configuration uses [`filesystem` storage driver](https://github.com/distribution/distribution/blob/main/docs/content/storage-drivers/filesystem.md)
and the above example command does not mount a local filesystem volume into the running container.
If you wish to mount the local filesystem to the `rootdirectory` of the
`filesystem` storage driver run the following command:
```
-docker run -d -p 5000:5000 $PWD/FS/PATH:/var/lib/registry --restart always --name registry distribution/distribution:edge
+docker run -d -p 5000:5000 -v $PWD/FS/PATH:/var/lib/registry --restart always --name registry distribution/distribution:edge
```
### Custom configuration
@ -44,7 +44,7 @@ docker run -d -p 5000:5000 $PWD/FS/PATH:/var/lib/registry --restart always --nam
If you don't want to use the default configuration file, you can supply
your own custom configuration file as follows:
```
-docker run -d -p 5000:5000 $PWD/PATH/TO/config.yml:/etc/docker/registry/config.yml --restart always --name registry distribution/distribution:edge
+docker run -d -p 5000:5000 -v $PWD/PATH/TO/config.yml:/etc/distribution/config.yml --restart always --name registry distribution/distribution:edge
```
## Communication
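
As a hedged illustration only (not part of the diff above), a custom `config.yml` mounted at `/etc/distribution/config.yml` could be as small as the following; the log level, storage path, and port are placeholders:

```yaml
version: 0.1
log:
  level: info                          # placeholder
storage:
  filesystem:
    rootdirectory: /var/lib/registry   # matches the volume mount shown above
http:
  addr: :5000                          # placeholder listen address
```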


@ -1,4 +1,4 @@
-baseURL: /
+baseURL: https://distribution.github.io/distribution
languageCode: en-us
title: CNCF Distribution
theme: hugo-geekdoc
@ -22,3 +22,7 @@ disablePathToLower: true
params:
  geekdocRepo: "https://github.com/distribution/distribution"
  geekdocEditPath: edit/main/docs
+  geekdocLegalNotice: "https://www.linuxfoundation.org/legal/trademark-usage"
+  geekdocContentLicense:
+    name: CC BY 4.0
+    link: https://creativecommons.org/licenses/by/4.0/

docs/i18n/en.yaml (new file, 1 line)

@ -0,0 +1 @@
footer_legal_notice: Trademarks


@ -0,0 +1,5 @@
{{- if (strings.HasPrefix .Destination "http") -}}
<a href="{{ safe.URL .Destination }}" target="_blank">{{ safe.HTML .Text }}</a>
{{- else -}}
<a href="{{ ref .Page .Destination | safe.URL }}">{{ safe.HTML .Text }}</a>
{{- end -}}

go.mod (148 changed lines)

@ -5,27 +5,27 @@ go 1.21
toolchain go1.21.4 toolchain go1.21.4
require ( require (
cloud.google.com/go/storage v1.30.1 cloud.google.com/go/storage v1.36.0
git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20230825064515-46a214d065f8 git.frostfs.info/TrueCloudLab/frostfs-sdk-go v0.0.0-20240716083621-e18b91623138
git.frostfs.info/TrueCloudLab/tzhash v1.8.0 git.frostfs.info/TrueCloudLab/tzhash v1.8.0
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.6.0 github.com/Azure/azure-sdk-for-go/sdk/azcore v1.11.1
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.3.0 github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.6.0
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.0.0 github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.0.0
github.com/aws/aws-sdk-go v1.48.10 github.com/aws/aws-sdk-go v1.48.10
github.com/bshuster-repo/logrus-logstash-hook v1.0.0 github.com/bshuster-repo/logrus-logstash-hook v1.0.0
github.com/coreos/go-systemd/v22 v22.5.0 github.com/coreos/go-systemd/v22 v22.5.0
github.com/distribution/reference v0.5.0 github.com/distribution/reference v0.6.0
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c
github.com/docker/go-metrics v0.0.1 github.com/docker/go-metrics v0.0.1
github.com/go-jose/go-jose/v3 v3.0.1 github.com/go-jose/go-jose/v4 v4.0.2
github.com/google/uuid v1.6.0 github.com/google/uuid v1.6.0
github.com/gorilla/handlers v1.5.1 github.com/gorilla/handlers v1.5.2
github.com/gorilla/mux v1.8.1 github.com/gorilla/mux v1.8.1
github.com/hashicorp/golang-lru/arc/v2 v2.0.5 github.com/hashicorp/golang-lru/arc/v2 v2.0.5
github.com/klauspost/compress v1.17.4 github.com/klauspost/compress v1.17.4
github.com/mitchellh/mapstructure v1.1.2 github.com/mitchellh/mapstructure v1.5.0
github.com/nspcc-dev/neo-go v0.101.2-0.20230601131642-a0117042e8fc github.com/nspcc-dev/neo-go v0.106.2
github.com/opencontainers/go-digest v1.0.0 github.com/opencontainers/go-digest v1.0.0
github.com/opencontainers/image-spec v1.1.0 github.com/opencontainers/image-spec v1.1.0
github.com/redis/go-redis/extra/redisotel/v9 v9.0.5 github.com/redis/go-redis/extra/redisotel/v9 v9.0.5
@ -35,49 +35,69 @@ require (
github.com/stretchr/testify v1.9.0 github.com/stretchr/testify v1.9.0
github.com/testcontainers/testcontainers-go v0.29.1 github.com/testcontainers/testcontainers-go v0.29.1
go.opentelemetry.io/contrib/exporters/autoexport v0.46.1 go.opentelemetry.io/contrib/exporters/autoexport v0.46.1
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.47.0
go.opentelemetry.io/otel v1.22.0
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.21.0
go.opentelemetry.io/otel/sdk v1.21.0 go.opentelemetry.io/otel/sdk v1.21.0
golang.org/x/crypto v0.17.0 go.opentelemetry.io/otel/trace v1.22.0
golang.org/x/oauth2 v0.11.0 go.uber.org/zap v1.27.0
google.golang.org/api v0.126.0 golang.org/x/crypto v0.24.0
golang.org/x/net v0.26.0
golang.org/x/oauth2 v0.16.0
golang.org/x/sync v0.7.0
google.golang.org/api v0.162.0
google.golang.org/grpc v1.62.0
gopkg.in/yaml.v2 v2.4.0 gopkg.in/yaml.v2 v2.4.0
) )
require ( require (
cloud.google.com/go v0.110.7 // indirect cloud.google.com/go v0.112.0 // indirect
cloud.google.com/go/compute v1.23.0 // indirect cloud.google.com/go/compute v1.24.0 // indirect
cloud.google.com/go/compute/metadata v0.2.3 // indirect cloud.google.com/go/compute/metadata v0.2.3 // indirect
cloud.google.com/go/iam v1.1.1 // indirect cloud.google.com/go/iam v1.1.6 // indirect
dario.cat/mergo v1.0.0 // indirect dario.cat/mergo v1.0.0 // indirect
git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.1-0.20240327095603-491a47e7fe24 // indirect git.frostfs.info/TrueCloudLab/frostfs-api-go/v2 v2.16.1-0.20240530152826-2f6d3209e1d3 // indirect
git.frostfs.info/TrueCloudLab/frostfs-contract v0.0.0-20230307110621-19a8ef2d02fb // indirect git.frostfs.info/TrueCloudLab/frostfs-contract v0.19.3 // indirect
git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0 // indirect git.frostfs.info/TrueCloudLab/frostfs-crypto v0.6.0 // indirect
git.frostfs.info/TrueCloudLab/hrw v1.2.1 // indirect git.frostfs.info/TrueCloudLab/hrw v1.2.1 // indirect
git.frostfs.info/TrueCloudLab/rfc6979 v0.4.0 // indirect git.frostfs.info/TrueCloudLab/rfc6979 v0.4.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 // indirect github.com/Azure/azure-sdk-for-go/sdk/internal v1.8.0 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.0.0 // indirect github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 // indirect
github.com/Microsoft/go-winio v0.6.1 // indirect github.com/Microsoft/go-winio v0.6.1 // indirect
github.com/Microsoft/hcsshim v0.11.4 // indirect github.com/Microsoft/hcsshim v0.11.4 // indirect
github.com/antlr4-go/antlr/v4 v4.13.0 // indirect github.com/antlr4-go/antlr/v4 v4.13.0 // indirect
github.com/benbjohnson/clock v1.1.0 // indirect github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.2.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/containerd/containerd v1.7.12 // indirect github.com/containerd/containerd v1.7.12 // indirect
github.com/containerd/log v0.1.0 // indirect github.com/containerd/log v0.1.0 // indirect
github.com/cpuguy83/dockercfg v0.3.1 // indirect github.com/cpuguy83/dockercfg v0.3.1 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.2.0 // indirect github.com/decred/dcrd/dcrec/secp256k1/v4 v4.2.0 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/docker/docker v25.0.3+incompatible // indirect github.com/docker/docker v25.0.3+incompatible // indirect
github.com/docker/go-connections v0.5.0 // indirect github.com/docker/go-connections v0.5.0 // indirect
github.com/docker/go-units v0.5.0 // indirect github.com/docker/go-units v0.5.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/go-logr/logr v1.4.1 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect github.com/go-ole/go-ole v1.2.6 // indirect
github.com/gogo/protobuf v1.3.2 // indirect github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.5.0 // indirect github.com/golang-jwt/jwt/v5 v5.2.1 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.3 // indirect github.com/golang/protobuf v1.5.3 // indirect
github.com/google/go-cmp v0.6.0 // indirect github.com/golang/snappy v0.0.4 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.2.3 // indirect github.com/google/s2a-go v0.1.7 // indirect
github.com/googleapis/gax-go/v2 v2.11.0 // indirect github.com/googleapis/enterprise-certificate-proxy v0.3.2 // indirect
github.com/gorilla/websocket v1.5.0 // indirect github.com/googleapis/gax-go/v2 v2.12.0 // indirect
github.com/hashicorp/golang-lru v0.6.0 // indirect github.com/gorilla/websocket v1.5.1 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/magiconair/properties v1.8.7 // indirect github.com/magiconair/properties v1.8.7 // indirect
github.com/moby/patternmatcher v0.6.0 // indirect github.com/moby/patternmatcher v0.6.0 // indirect
@ -86,54 +106,31 @@ require (
github.com/moby/term v0.5.0 // indirect github.com/moby/term v0.5.0 // indirect
github.com/morikuni/aec v1.0.0 // indirect github.com/morikuni/aec v1.0.0 // indirect
github.com/mr-tron/base58 v1.2.0 // indirect github.com/mr-tron/base58 v1.2.0 // indirect
github.com/nspcc-dev/go-ordered-json v0.0.0-20220111165707-25110be27d22 // indirect github.com/nspcc-dev/go-ordered-json v0.0.0-20240301084351-0246b013f8b2 // indirect
github.com/nspcc-dev/neo-go/pkg/interop v0.0.0-20230615193820-9185820289ce // indirect github.com/nspcc-dev/neo-go/pkg/interop v0.0.0-20240521091047-78685785716d // indirect
github.com/nspcc-dev/rfc6979 v0.2.0 // indirect github.com/nspcc-dev/rfc6979 v0.2.1 // indirect
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
github.com/pkg/errors v0.9.1 // indirect github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/prometheus/client_golang v1.19.0 // indirect; updated to latest
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/common v0.48.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/redis/go-redis/extra/rediscmd/v9 v9.0.5 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/shirou/gopsutil/v3 v3.23.12 // indirect github.com/shirou/gopsutil/v3 v3.23.12 // indirect
github.com/shoenig/go-m1cpu v0.1.6 // indirect github.com/shoenig/go-m1cpu v0.1.6 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/syndtr/goleveldb v1.0.1-0.20210305035536-64b5b1c73954 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect github.com/tklauser/numcpus v0.6.1 // indirect
github.com/twmb/murmur3 v1.1.8 // indirect github.com/twmb/murmur3 v1.1.8 // indirect
github.com/urfave/cli v1.22.12 // indirect github.com/urfave/cli v1.22.12 // indirect
github.com/yusufpapurcu/wmi v1.2.3 // indirect github.com/yusufpapurcu/wmi v1.2.3 // indirect
go.uber.org/atomic v1.10.0 // indirect go.etcd.io/bbolt v1.3.9 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.24.0
golang.org/x/exp v0.0.0-20230515195305-f3d0a9c9a5cc // indirect
golang.org/x/mod v0.16.0 // indirect
golang.org/x/tools v0.13.0 // indirect
)
require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.2.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/go-logr/logr v1.3.0 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/google/s2a-go v0.1.4 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.5 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_golang v1.17.0 // indirect; updated to latest
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/common v0.44.0 // indirect
github.com/prometheus/procfs v0.11.1 // indirect
github.com/redis/go-redis/extra/rediscmd/v9 v9.0.5 // indirect
github.com/spf13/pflag v1.0.5 // indirect
go.opencensus.io v0.24.0 // indirect go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1 go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.47.0 // indirect
go.opentelemetry.io/otel v1.21.0
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v0.44.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v0.44.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v0.44.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v0.44.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 // indirect
@ -141,21 +138,20 @@ require (
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0 // indirect go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.21.0 // indirect
go.opentelemetry.io/otel/exporters/prometheus v0.44.0 // indirect go.opentelemetry.io/otel/exporters/prometheus v0.44.0 // indirect
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v0.44.0 // indirect go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v0.44.0 // indirect
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.21.0 // indirect go.opentelemetry.io/otel/metric v1.22.0 // indirect
go.opentelemetry.io/otel/metric v1.21.0 // indirect
go.opentelemetry.io/otel/sdk/metric v1.21.0 // indirect go.opentelemetry.io/otel/sdk/metric v1.21.0 // indirect
go.opentelemetry.io/otel/trace v1.21.0 // indirect
go.opentelemetry.io/proto/otlp v1.0.0 // indirect go.opentelemetry.io/proto/otlp v1.0.0 // indirect
golang.org/x/net v0.18.0 // indirect go.uber.org/multierr v1.11.0 // indirect
golang.org/x/sync v0.3.0 // indirect golang.org/x/exp v0.0.0-20240222234643-814bf88cf225 // indirect
golang.org/x/sys v0.16.0 // indirect golang.org/x/mod v0.17.0 // indirect
golang.org/x/text v0.14.0 // indirect golang.org/x/sys v0.21.0 // indirect
golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2 // indirect golang.org/x/text v0.16.0 // indirect
google.golang.org/appengine v1.6.7 // indirect golang.org/x/time v0.5.0 // indirect
google.golang.org/genproto v0.0.0-20230822172742-b8732ec3820d // indirect golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20230822172742-b8732ec3820d // indirect google.golang.org/appengine v1.6.8 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20230822172742-b8732ec3820d // indirect google.golang.org/genproto v0.0.0-20240213162025-012b6fc9bca9 // indirect
google.golang.org/grpc v1.59.0 // indirect google.golang.org/genproto/googleapis/api v0.0.0-20240205150955-31a09d347014 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240221002015-b0ce06bbee7c // indirect
google.golang.org/protobuf v1.33.0 // indirect google.golang.org/protobuf v1.33.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect
) )

go.sum (730 changed lines)

File diff suppressed because it is too large.


@ -72,7 +72,7 @@ type Manager interface {
// AddResponse adds the response to the challenge
// manager. The challenges will be parsed out of
-// the WWW-Authenicate headers and added to the
+// the WWW-Authenticate headers and added to the
// URL which was produced the response. If the
// response was authorized, any challenges for the
// endpoint will be cleared.


@ -29,9 +29,9 @@ var (
const defaultClientID = "registry-client"
// AuthenticationHandler is an interface for authorizing a request from
-// params from a "WWW-Authenicate" header for a single scheme.
+// params from a "WWW-Authenticate" header for a single scheme.
type AuthenticationHandler interface {
-// Scheme returns the scheme as expected from the "WWW-Authenicate" header.
+// Scheme returns the scheme as expected from the "WWW-Authenticate" header.
Scheme() string
// AuthorizeRequest adds the authorization header to a request (if needed)


@ -46,8 +46,14 @@ func parseHTTPErrorResponse(resp *http.Response) error {
}
statusCode := resp.StatusCode
-ctHeader := resp.Header.Get("Content-Type")
+// A HEAD request for example validly does not contain any body, while
+// still returning a JSON content-type.
+if len(body) == 0 {
+return makeError(statusCode, "")
+}
+ctHeader := resp.Header.Get("Content-Type")
if ctHeader == "" {
return makeError(statusCode, string(body))
}


@ -57,6 +57,22 @@ func TestHandleHTTPResponseError401WithInvalidBody(t *testing.T) {
}
}
+func TestHandleHTTPResponseError401WithNoBody(t *testing.T) {
+json := ""
+response := &http.Response{
+Status: "401 Unauthorized",
+StatusCode: 401,
+Body: nopCloser{bytes.NewBufferString(json)},
+Header: http.Header{"Content-Type": []string{"application/json; charset=utf-8"}},
+}
+err := HandleHTTPResponseError(response)
+expectedMsg := "unauthorized: "
+if !strings.Contains(err.Error(), expectedMsg) {
+t.Errorf("Expected %q, got: %q", expectedMsg, err.Error())
+}
+}
func TestHandleHTTPResponseErrorExpectedStatusCode400ValidBody(t *testing.T) {
json := `{"errors":[{"code":"DIGEST_INVALID","message":"provided digest does not match"}]}`
response := &http.Response{


@ -26,11 +26,6 @@ var (
ErrWrongCodeForByteRange = errors.New("expected HTTP 206 from byte range request")
)
-// ReadSeekCloser combines io.ReadSeeker with io.Closer.
-//
-// Deprecated: use [io.ReadSeekCloser].
-type ReadSeekCloser = io.ReadSeekCloser
// NewHTTPReadSeeker handles reading from an HTTP endpoint using a GET
// request. When seeking and starting a read from a non-zero offset
// the a "Range" header will be added which sets the offset.


@ -47,7 +47,7 @@ type ManifestBuilder interface {
AppendReference(dependency Describable) error
}
-// ManifestService describes operations on image manifests.
+// ManifestService describes operations on manifests.
type ManifestService interface {
// Exists returns true if the manifest exists.
Exists(ctx context.Context, dgst digest.Digest) (bool, error)


@ -269,7 +269,7 @@ type RouteDescriptor struct {
// should match.
Path string
-// Entity should be a short, human-readalbe description of the object
+// Entity should be a short, human-readable description of the object
// targeted by the endpoint.
Entity string


@ -202,7 +202,7 @@ func (ub *URLBuilder) BuildBlobUploadChunkURL(name reference.Named, uuid string,
return appendValuesURL(uploadURL, values...).String(), nil
}
-// clondedRoute returns a clone of the named route from the router. Routes
+// cloneRoute returns a clone of the named route from the router. Routes
// must be cloned to avoid modifying them during url generation.
func (ub *URLBuilder) cloneRoute(name string) clonedRoute {
route := new(mux.Route)


@ -46,7 +46,7 @@ var (
)
// InitFunc is the type of an AccessController factory function and is used
-// to register the constructor for different AccesController backends.
+// to register the constructor for different AccessController backends.
type InitFunc func(options map[string]interface{}) (AccessController, error)
var accessControllers map[string]InitFunc
@ -56,7 +56,7 @@ func init() {
}
// UserInfo carries information about
-// an autenticated/authorized client.
+// an authenticated/authorized client.
type UserInfo struct {
Name string
}
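
`InitFunc` is how access-controller backends (for example `token` or `htpasswd`) register their constructors, and the top-level `auth` section selects which one runs. A hedged sketch using the htpasswd backend, with placeholder realm and path:

```yaml
auth:
  htpasswd:
    realm: basic-realm                    # placeholder
    path: /etc/distribution/htpasswd      # placeholder
```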


@ -9,11 +9,12 @@ import (
"fmt" "fmt"
"io" "io"
"net/http" "net/http"
"net/url"
"os" "os"
"strings" "strings"
"github.com/distribution/distribution/v3/registry/auth" "github.com/distribution/distribution/v3/registry/auth"
"github.com/go-jose/go-jose/v3" "github.com/go-jose/go-jose/v4"
"github.com/sirupsen/logrus" "github.com/sirupsen/logrus"
) )
@ -83,11 +84,12 @@ var (
// authChallenge implements the auth.Challenge interface. // authChallenge implements the auth.Challenge interface.
type authChallenge struct { type authChallenge struct {
err error err error
realm string realm string
autoRedirect bool autoRedirect bool
service string autoRedirectPath string
accessSet accessSet service string
accessSet accessSet
} }
var _ auth.Challenge = authChallenge{} var _ auth.Challenge = authChallenge{}
@ -102,13 +104,28 @@ func (ac authChallenge) Status() int {
return http.StatusUnauthorized return http.StatusUnauthorized
} }
func buildAutoRedirectURL(r *http.Request, autoRedirectPath string) string {
scheme := "https"
if forwardedProto := r.Header.Get("X-Forwarded-Proto"); len(forwardedProto) > 0 {
scheme = forwardedProto
}
u := &url.URL{
Scheme: scheme,
Host: r.Host,
Path: autoRedirectPath,
}
return u.String()
}
// challengeParams constructs the value to be used in // challengeParams constructs the value to be used in
// the WWW-Authenticate response challenge header. // the WWW-Authenticate response challenge header.
// See https://tools.ietf.org/html/rfc6750#section-3 // See https://tools.ietf.org/html/rfc6750#section-3
func (ac authChallenge) challengeParams(r *http.Request) string { func (ac authChallenge) challengeParams(r *http.Request) string {
var realm string var realm string
if ac.autoRedirect { if ac.autoRedirect {
realm = fmt.Sprintf("https://%s/auth/token", r.Host) realm = buildAutoRedirectURL(r, ac.autoRedirectPath)
} else { } else {
realm = ac.realm realm = ac.realm
} }
@ -127,30 +144,38 @@ func (ac authChallenge) challengeParams(r *http.Request) string {
return str return str
} }
// SetChallenge sets the WWW-Authenticate value for the response. // SetHeaders sets the WWW-Authenticate value for the response.
func (ac authChallenge) SetHeaders(r *http.Request, w http.ResponseWriter) { func (ac authChallenge) SetHeaders(r *http.Request, w http.ResponseWriter) {
w.Header().Add("WWW-Authenticate", ac.challengeParams(r)) w.Header().Add("WWW-Authenticate", ac.challengeParams(r))
} }
// accessController implements the auth.AccessController interface. // accessController implements the auth.AccessController interface.
type accessController struct { type accessController struct {
realm string realm string
autoRedirect bool autoRedirect bool
issuer string autoRedirectPath string
service string issuer string
rootCerts *x509.CertPool service string
trustedKeys map[string]crypto.PublicKey rootCerts *x509.CertPool
trustedKeys map[string]crypto.PublicKey
signingAlgorithms []jose.SignatureAlgorithm
} }
const (
defaultAutoRedirectPath = "/auth/token"
)
// tokenAccessOptions is a convenience type for handling // tokenAccessOptions is a convenience type for handling
// options to the contstructor of an accessController. // options to the constructor of an accessController.
type tokenAccessOptions struct { type tokenAccessOptions struct {
realm string realm string
autoRedirect bool autoRedirect bool
issuer string autoRedirectPath string
service string issuer string
rootCertBundle string service string
jwks string rootCertBundle string
jwks string
signingAlgorithms []string
} }
// checkOptions gathers the necessary options // checkOptions gathers the necessary options
@ -183,10 +208,32 @@ func checkOptions(options map[string]interface{}) (tokenAccessOptions, error) {
if ok { if ok {
autoRedirect, ok := autoRedirectVal.(bool) autoRedirect, ok := autoRedirectVal.(bool)
if !ok { if !ok {
return opts, fmt.Errorf("token auth requires a valid option bool: autoredirect") return opts, errors.New("token auth requires a valid option bool: autoredirect")
} }
opts.autoRedirect = autoRedirect opts.autoRedirect = autoRedirect
} }
if opts.autoRedirect {
autoRedirectPathVal, ok := options["autoredirectpath"]
if ok {
autoRedirectPath, ok := autoRedirectPathVal.(string)
if !ok {
return opts, errors.New("token auth requires a valid option string: autoredirectpath")
}
opts.autoRedirectPath = autoRedirectPath
}
if opts.autoRedirectPath == "" {
opts.autoRedirectPath = defaultAutoRedirectPath
}
}
signingAlgos, ok := options["signingalgorithms"]
if ok {
signingAlgorithmsVals, ok := signingAlgos.([]string)
if !ok {
return opts, errors.New("signingalgorithms must be a list of signing algorithms")
}
opts.signingAlgorithms = signingAlgorithmsVals
}
return opts, nil return opts, nil
} }
@ -243,6 +290,18 @@ func getJwks(path string) (*jose.JSONWebKeySet, error) {
return &jwks, nil return &jwks, nil
} }
func getSigningAlgorithms(algos []string) ([]jose.SignatureAlgorithm, error) {
signAlgVals := make([]jose.SignatureAlgorithm, 0, len(algos))
for _, alg := range algos {
alg, ok := signingAlgorithms[alg]
if !ok {
return nil, fmt.Errorf("unsupported signing algorithm: %s", alg)
}
signAlgVals = append(signAlgVals, alg)
}
return signAlgVals, nil
}
// newAccessController creates an accessController using the given options. // newAccessController creates an accessController using the given options.
func newAccessController(options map[string]interface{}) (auth.AccessController, error) { func newAccessController(options map[string]interface{}) (auth.AccessController, error) {
config, err := checkOptions(options) config, err := checkOptions(options)
@ -253,6 +312,7 @@ func newAccessController(options map[string]interface{}) (auth.AccessController,
var ( var (
rootCerts []*x509.Certificate rootCerts []*x509.Certificate
jwks *jose.JSONWebKeySet jwks *jose.JSONWebKeySet
signAlgos []jose.SignatureAlgorithm
) )
if config.rootCertBundle != "" { if config.rootCertBundle != "" {
@ -286,13 +346,25 @@ func newAccessController(options map[string]interface{}) (auth.AccessController,
} }
} }
signAlgos, err = getSigningAlgorithms(config.signingAlgorithms)
if err != nil {
return nil, err
}
if len(signAlgos) == 0 {
// NOTE: this is to maintain backwards compat
// with existing registry deployments
signAlgos = defaultSigningAlgorithms
}
return &accessController{ return &accessController{
realm: config.realm, realm: config.realm,
autoRedirect: config.autoRedirect, autoRedirect: config.autoRedirect,
issuer: config.issuer, autoRedirectPath: config.autoRedirectPath,
service: config.service, issuer: config.issuer,
rootCerts: rootPool, service: config.service,
trustedKeys: trustedKeys, rootCerts: rootPool,
trustedKeys: trustedKeys,
signingAlgorithms: signAlgos,
}, nil }, nil
} }
@ -300,10 +372,11 @@ func newAccessController(options map[string]interface{}) (auth.AccessController,
// for actions on resources described by the given access items. // for actions on resources described by the given access items.
func (ac *accessController) Authorized(req *http.Request, accessItems ...auth.Access) (*auth.Grant, error) { func (ac *accessController) Authorized(req *http.Request, accessItems ...auth.Access) (*auth.Grant, error) {
challenge := &authChallenge{ challenge := &authChallenge{
realm: ac.realm, realm: ac.realm,
autoRedirect: ac.autoRedirect, autoRedirect: ac.autoRedirect,
service: ac.service, autoRedirectPath: ac.autoRedirectPath,
accessSet: newAccessSet(accessItems...), service: ac.service,
accessSet: newAccessSet(accessItems...),
} }
prefix, rawToken, ok := strings.Cut(req.Header.Get("Authorization"), " ") prefix, rawToken, ok := strings.Cut(req.Header.Get("Authorization"), " ")
@ -312,7 +385,7 @@ func (ac *accessController) Authorized(req *http.Request, accessItems ...auth.Ac
return nil, challenge return nil, challenge
} }
token, err := NewToken(rawToken) token, err := NewToken(rawToken, ac.signingAlgorithms)
if err != nil { if err != nil {
challenge.err = err challenge.err = err
return nil, challenge return nil, challenge
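
Putting the options parsed in `checkOptions` together, a token `auth` section that uses the new `autoredirectpath` might look like the sketch below. The YAML nesting and the host names are assumptions for illustration; only the option names (`autoredirect`, `autoredirectpath`, `realm`, `service`, `issuer`, `rootcertbundle`) come from the code above:

```yaml
auth:
  token:
    realm: https://auth.example.com/token/               # placeholder; not used for the challenge when autoredirect is on
    service: registry.example.com                        # placeholder
    issuer: auth.example.com                             # placeholder
    rootcertbundle: /etc/distribution/token-roots.pem    # placeholder path
    autoredirect: true
    autoredirectpath: /auth                              # defaults to /auth/token when omitted
```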


@ -0,0 +1,89 @@
package token
import (
"net/http"
"net/http/httptest"
"testing"
)
func TestBuildAutoRedirectURL(t *testing.T) {
cases := []struct {
name string
reqGetter func() *http.Request
autoRedirectPath string
expectedURL string
}{{
name: "http",
reqGetter: func() *http.Request {
req := httptest.NewRequest("GET", "http://example.com/", nil)
return req
},
autoRedirectPath: "/auth",
expectedURL: "https://example.com/auth",
}, {
name: "x-forwarded",
reqGetter: func() *http.Request {
req := httptest.NewRequest("GET", "http://example.com/", nil)
req.Header.Set("X-Forwarded-Proto", "http")
return req
},
autoRedirectPath: "/auth/token",
expectedURL: "http://example.com/auth/token",
}}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
req := tc.reqGetter()
result := buildAutoRedirectURL(req, tc.autoRedirectPath)
if result != tc.expectedURL {
t.Errorf("expected %s, got %s", tc.expectedURL, result)
}
})
}
}
func TestCheckOptions(t *testing.T) {
realm := "https://auth.example.com/token/"
issuer := "test-issuer.example.com"
service := "test-service.example.com"
options := map[string]interface{}{
"realm": realm,
"issuer": issuer,
"service": service,
"rootcertbundle": "",
"autoredirect": true,
"autoredirectpath": "/auth",
}
ta, err := checkOptions(options)
if err != nil {
t.Fatal(err)
}
if ta.autoRedirect != true {
t.Fatal("autoredirect should be true")
}
if ta.autoRedirectPath != "/auth" {
t.Fatal("autoredirectpath should be /auth")
}
options = map[string]interface{}{
"realm": realm,
"issuer": issuer,
"service": service,
"rootcertbundle": "",
"autoredirect": true,
"autoredirectforcetlsdisabled": true,
}
ta, err = checkOptions(options)
if err != nil {
t.Fatal(err)
}
if ta.autoRedirect != true {
t.Fatal("autoredirect should be true")
}
if ta.autoRedirectPath != "/auth/token" {
t.Fatal("autoredirectpath should be /auth/token")
}
}


@ -4,6 +4,7 @@ import (
"testing"
fuzz "github.com/AdaLogics/go-fuzz-headers"
+"github.com/go-jose/go-jose/v4"
)
func FuzzToken1(f *testing.F) {
@ -18,7 +19,7 @@ func FuzzToken1(f *testing.F) {
if err != nil {
return
}
-token, err := NewToken(rawToken)
+token, err := NewToken(rawToken, []jose.SignatureAlgorithm{jose.EdDSA, jose.RS384})
if err != nil {
return
}


@ -7,8 +7,8 @@ import (
"fmt" "fmt"
"time" "time"
"github.com/go-jose/go-jose/v3" "github.com/go-jose/go-jose/v4"
"github.com/go-jose/go-jose/v3/jwt" "github.com/go-jose/go-jose/v4/jwt"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/distribution/distribution/v3/registry/auth" "github.com/distribution/distribution/v3/registry/auth"
@ -23,6 +23,38 @@ const (
Leeway = 60 * time.Second Leeway = 60 * time.Second
) )
var signingAlgorithms = map[string]jose.SignatureAlgorithm{
"EdDSA": jose.EdDSA,
"HS256": jose.HS256,
"HS384": jose.HS384,
"HS512": jose.HS512,
"RS256": jose.RS256,
"RS384": jose.RS384,
"RS512": jose.RS512,
"ES256": jose.ES256,
"ES384": jose.ES384,
"ES512": jose.ES512,
"PS256": jose.PS256,
"PS384": jose.PS384,
"PS512": jose.PS512,
}
var defaultSigningAlgorithms = []jose.SignatureAlgorithm{
jose.EdDSA,
jose.HS256,
jose.HS384,
jose.HS512,
jose.RS256,
jose.RS384,
jose.RS512,
jose.ES256,
jose.ES384,
jose.ES512,
jose.PS256,
jose.PS384,
jose.PS512,
}
// Errors used by token parsing and verification. // Errors used by token parsing and verification.
var ( var (
ErrMalformedToken = errors.New("malformed token") ErrMalformedToken = errors.New("malformed token")
@ -69,8 +101,8 @@ type VerifyOptions struct {
// NewToken parses the given raw token string // NewToken parses the given raw token string
// and constructs an unverified JSON Web Token. // and constructs an unverified JSON Web Token.
func NewToken(rawToken string) (*Token, error) { func NewToken(rawToken string, signingAlgs []jose.SignatureAlgorithm) (*Token, error) {
token, err := jwt.ParseSigned(rawToken) token, err := jwt.ParseSigned(rawToken, signingAlgs)
if err != nil { if err != nil {
return nil, ErrMalformedToken return nil, ErrMalformedToken
} }
@ -140,6 +172,13 @@ func (t *Token) VerifySigningKey(verifyOpts VerifyOptions) (signingKey crypto.Pu
// verifying the first one in the list only at the moment. // verifying the first one in the list only at the moment.
header := t.JWT.Headers[0] header := t.JWT.Headers[0]
signingKey, err = verifyCertChain(header, verifyOpts.Roots)
// NOTE(milosgajdos): if the x5c header is missing
// the token may have been signed by a JWKS.
if err != nil && err != jose.ErrMissingX5cHeader {
return
}
switch { switch {
case header.JSONWebKey != nil: case header.JSONWebKey != nil:
signingKey, err = verifyJWK(header, verifyOpts) signingKey, err = verifyJWK(header, verifyOpts)
@ -149,7 +188,7 @@ func (t *Token) VerifySigningKey(verifyOpts VerifyOptions) (signingKey crypto.Pu
err = fmt.Errorf("token signed by untrusted key with ID: %q", header.KeyID) err = fmt.Errorf("token signed by untrusted key with ID: %q", header.KeyID)
} }
default: default:
signingKey, err = verifyCertChain(header, verifyOpts.Roots) err = ErrInvalidToken
} }
return return
@ -226,7 +265,7 @@ func getCertPubKey(chains [][]*x509.Certificate) crypto.PublicKey {
// NOTE: we dont have to verify that the public key in the leaf cert // NOTE: we dont have to verify that the public key in the leaf cert
// *is* the signing key: if it's not the signing then token claims // *is* the signing key: if it's not the signing then token claims
// verifcation with this key fails // verification with this key fails
return cert.PublicKey.(crypto.PublicKey) return cert.PublicKey.(crypto.PublicKey)
} }
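
The `signingAlgorithms` map above lists the accepted algorithm names, and `defaultSigningAlgorithms` is applied when nothing is configured. A sketch that restricts verification to two algorithms, assuming the `signingalgorithms` option is a flat list as read by `checkOptions`; all host names and paths are placeholders:

```yaml
auth:
  token:
    realm: https://auth.example.com/token/    # placeholder
    service: registry.example.com             # placeholder
    issuer: auth.example.com                  # placeholder
    jwks: /etc/distribution/jwks.json         # placeholder path
    signingalgorithms:                        # any names from the map above, e.g. EdDSA, RS256, ES256
      - RS256
      - ES256
```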


@ -19,8 +19,8 @@ import (
"time"
"github.com/distribution/distribution/v3/registry/auth"
-"github.com/go-jose/go-jose/v3"
-"github.com/go-jose/go-jose/v3/jwt"
+"github.com/go-jose/go-jose/v4"
+"github.com/go-jose/go-jose/v4/jwt"
)
func makeRootKeys(numKeys int) ([]*ecdsa.PrivateKey, error) {
@ -123,12 +123,12 @@ func makeTestToken(jwk *jose.JSONWebKey, issuer, audience string, access []*Reso
Access: access,
}
-tokenString, err := jwt.Signed(signer).Claims(claimSet).CompactSerialize()
+tokenString, err := jwt.Signed(signer).Claims(claimSet).Serialize()
if err != nil {
return nil, fmt.Errorf("unable to build token string: %v", err)
}
-return NewToken(tokenString)
+return NewToken(tokenString, []jose.SignatureAlgorithm{signingKey.Algorithm})
}
// NOTE(milosgajdos): certTemplateInfo type as well


@ -1709,6 +1709,33 @@ func testManifestAPISchema2(t *testing.T, env *testEnv, imageName reference.Name
// ------------------ // ------------------
// Fetch by tag name // Fetch by tag name
// HEAD requests should not contain a body
headReq, err := http.NewRequest(http.MethodHead, manifestURL, nil)
if err != nil {
t.Fatalf("Error constructing request: %s", err)
}
headResp, err := http.DefaultClient.Do(headReq)
if err != nil {
t.Fatalf("unexpected error head manifest: %v", err)
}
defer headResp.Body.Close()
checkResponse(t, "head uploaded manifest", headResp, http.StatusOK)
checkHeaders(t, headResp, http.Header{
"Docker-Content-Digest": []string{dgst.String()},
"ETag": []string{fmt.Sprintf(`"%s"`, dgst)},
})
headBody, err := io.ReadAll(headResp.Body)
if err != nil {
t.Fatalf("reading body for head manifest: %v", err)
}
if len(headBody) > 0 {
t.Fatalf("unexpected body length for head manifest: %d", len(headBody))
}
req, err := http.NewRequest(http.MethodGet, manifestURL, nil) req, err := http.NewRequest(http.MethodGet, manifestURL, nil)
if err != nil { if err != nil {
t.Fatalf("Error constructing request: %s", err) t.Fatalf("Error constructing request: %s", err)
@ -1744,6 +1771,32 @@ func testManifestAPISchema2(t *testing.T, env *testEnv, imageName reference.Name
// --------------- // ---------------
// Fetch by digest // Fetch by digest
// HEAD requests should not contain a body
headReq, err = http.NewRequest(http.MethodHead, manifestDigestURL, nil)
if err != nil {
t.Fatalf("Error constructing request: %s", err)
}
headResp, err = http.DefaultClient.Do(headReq)
if err != nil {
t.Fatalf("unexpected error head manifest: %v", err)
}
defer headResp.Body.Close()
checkResponse(t, "head uploaded manifest by digest", headResp, http.StatusOK)
checkHeaders(t, headResp, http.Header{
"Docker-Content-Digest": []string{dgst.String()},
"ETag": []string{fmt.Sprintf(`"%s"`, dgst)},
})
headBody, err = io.ReadAll(headResp.Body)
if err != nil {
t.Fatalf("reading body for head manifest by digest: %v", err)
}
if len(headBody) > 0 {
t.Fatalf("unexpected body length for head manifest: %d", len(headBody))
}
req, err = http.NewRequest(http.MethodGet, manifestDigestURL, nil) req, err = http.NewRequest(http.MethodGet, manifestDigestURL, nil)
if err != nil { if err != nil {
t.Fatalf("Error constructing request: %s", err) t.Fatalf("Error constructing request: %s", err)
@ -2461,7 +2514,7 @@ func pushChunk(t *testing.T, ub *v2.URLBuilder, name reference.Named, uploadURLB
func checkResponse(t *testing.T, msg string, resp *http.Response, expectedStatus int) { func checkResponse(t *testing.T, msg string, resp *http.Response, expectedStatus int) {
if resp.StatusCode != expectedStatus { if resp.StatusCode != expectedStatus {
t.Logf("unexpected status %s: %v != %v", msg, resp.StatusCode, expectedStatus) t.Logf("unexpected status %s: expected %v, got %v", msg, resp.StatusCode, expectedStatus)
maybeDumpResponse(t, resp) maybeDumpResponse(t, resp)
t.FailNow() t.FailNow()
} }
@ -2543,6 +2596,8 @@ func maybeDumpResponse(t *testing.T, resp *http.Response) {
// test will fail. If a passed in header value is "*", any non-zero value will // test will fail. If a passed in header value is "*", any non-zero value will
// suffice as a match. // suffice as a match.
func checkHeaders(t *testing.T, resp *http.Response, headers http.Header) { func checkHeaders(t *testing.T, resp *http.Response, headers http.Header) {
t.Helper()
for k, vs := range headers { for k, vs := range headers {
if resp.Header.Get(k) == "" { if resp.Header.Get(k) == "" {
t.Fatalf("response missing header %q", k) t.Fatalf("response missing header %q", k)


@ -3,6 +3,8 @@ package handlers
import ( import (
"context" "context"
"crypto/rand" "crypto/rand"
"crypto/tls"
"crypto/x509"
"expvar" "expvar"
"fmt" "fmt"
"math" "math"
@ -77,7 +79,7 @@ type App struct {
source notifications.SourceRecord source notifications.SourceRecord
} }
redis *redis.Client redis redis.UniversalClient
// isCache is true if this registry is configured as a pull through cache // isCache is true if this registry is configured as a pull through cache
isCache bool isCache bool
@ -114,7 +116,7 @@ func NewApp(ctx context.Context, config *configuration.Configuration) *App {
storageParams = make(configuration.Parameters) storageParams = make(configuration.Parameters)
} }
if storageParams["useragent"] == "" { if storageParams["useragent"] == "" {
storageParams["useragent"] = fmt.Sprintf("distribution/%s %s", version.Version, runtime.Version()) storageParams["useragent"] = fmt.Sprintf("distribution/%s %s", version.Version(), runtime.Version())
} }
var err error var err error
@ -155,7 +157,11 @@ func NewApp(ctx context.Context, config *configuration.Configuration) *App {
panic(err) panic(err)
} }
app.configureSecret(config) // Do not configure HTTP secret for a proxy registry as HTTP secret
// is only used for blob uploads and a proxy registry does not support blob uploads.
if !app.isCache {
app.configureSecret(config)
}
app.configureEvents(config) app.configureEvents(config)
app.configureRedis(config) app.configureRedis(config)
app.configureLogHook(config) app.configureLogHook(config)
@ -184,6 +190,21 @@ func NewApp(ctx context.Context, config *configuration.Configuration) *App {
} }
} }
// configure tag lookup concurrency limit
if p := config.Storage.TagParameters(); p != nil {
l, ok := p["concurrencylimit"]
if ok {
limit, ok := l.(int)
if !ok {
panic("tag lookup concurrency limit config key must have a integer value")
}
if limit < 0 {
panic("tag lookup concurrency limit should be a non-negative integer value")
}
options = append(options, storage.TagLookupConcurrencyLimit(limit))
}
}
// configure redirects // configure redirects
var redirectDisabled bool var redirectDisabled bool
if redirectConfig, ok := config.Storage["redirect"]; ok { if redirectConfig, ok := config.Storage["redirect"]; ok {
@ -236,6 +257,21 @@ func NewApp(ctx context.Context, config *configuration.Configuration) *App {
options = append(options, storage.ManifestURLsDenyRegexp(re)) options = append(options, storage.ManifestURLsDenyRegexp(re))
} }
} }
switch config.Validation.Manifests.Indexes.Platforms {
case "list":
options = append(options, storage.EnableValidateImageIndexImagesExist)
for _, platform := range config.Validation.Manifests.Indexes.PlatformList {
options = append(options, storage.AddValidateImageIndexImagesExistPlatform(platform.Architecture, platform.OS))
}
fallthrough
case "none":
dcontext.GetLogger(app).Warn("Image index completeness validation has been disabled, which is an experimental option because other container tooling might expect all image indexes to be complete")
case "all":
fallthrough
default:
options = append(options, storage.EnableValidateImageIndexImagesExist)
}
} }
// configure storage caches // configure storage caches
@ -411,6 +447,14 @@ func (app *App) RegisterHealthChecks(healthRegistries ...*health.Registry) {
} }
} }
// Shutdown close the underlying registry
func (app *App) Shutdown() error {
if r, ok := app.registry.(proxy.Closer); ok {
return r.Close()
}
return nil
}
// register a handler with the application, by route name. The handler will be // register a handler with the application, by route name. The handler will be
// passed through the application filters and context will be constructed at // passed through the application filters and context will be constructed at
// request time. // request time.
@ -487,12 +531,41 @@ func (app *App) configureEvents(configuration *configuration.Configuration) {
} }
func (app *App) configureRedis(cfg *configuration.Configuration) { func (app *App) configureRedis(cfg *configuration.Configuration) {
if cfg.Redis.Addr == "" { if len(cfg.Redis.Options.Addrs) == 0 {
dcontext.GetLogger(app).Infof("redis not configured") dcontext.GetLogger(app).Infof("redis not configured")
return return
} }
app.redis = app.createPool(cfg.Redis) // redis TLS config
if cfg.Redis.TLS.Certificate != "" || cfg.Redis.TLS.Key != "" {
var err error
tlsConf := &tls.Config{}
tlsConf.Certificates = make([]tls.Certificate, 1)
tlsConf.Certificates[0], err = tls.LoadX509KeyPair(cfg.Redis.TLS.Certificate, cfg.Redis.TLS.Key)
if err != nil {
panic(err)
}
if len(cfg.Redis.TLS.ClientCAs) != 0 {
pool := x509.NewCertPool()
for _, ca := range cfg.Redis.TLS.ClientCAs {
caPem, err := os.ReadFile(ca)
if err != nil {
dcontext.GetLogger(app).Errorf("failed reading redis client CA: %v", err)
return
}
if ok := pool.AppendCertsFromPEM(caPem); !ok {
dcontext.GetLogger(app).Error("could not add CA to pool")
return
}
}
tlsConf.ClientAuth = tls.RequireAndVerifyClientCert
tlsConf.ClientCAs = pool
}
cfg.Redis.Options.TLSConfig = tlsConf
}
app.redis = app.createPool(cfg.Redis.Options)
// Enable metrics instrumentation. // Enable metrics instrumentation.
if err := redisotel.InstrumentMetrics(app.redis); err != nil { if err := redisotel.InstrumentMetrics(app.redis); err != nil {
@ -514,25 +587,12 @@ func (app *App) configureRedis(cfg *configuration.Configuration) {
})) }))
} }
func (app *App) createPool(cfg configuration.Redis) *redis.Client { func (app *App) createPool(cfg redis.UniversalOptions) redis.UniversalClient {
return redis.NewClient(&redis.Options{ cfg.OnConnect = func(ctx context.Context, cn *redis.Conn) error {
Addr: cfg.Addr, res := cn.Ping(ctx)
OnConnect: func(ctx context.Context, cn *redis.Conn) error { return res.Err()
res := cn.Ping(ctx) }
return res.Err() return redis.NewUniversalClient(&cfg)
},
Username: cfg.Username,
Password: cfg.Password,
DB: cfg.DB,
MaxRetries: 3,
DialTimeout: cfg.DialTimeout,
ReadTimeout: cfg.ReadTimeout,
WriteTimeout: cfg.WriteTimeout,
PoolFIFO: false,
MaxIdleConns: cfg.Pool.MaxIdle,
PoolSize: cfg.Pool.MaxActive,
ConnMaxIdleTime: cfg.Pool.IdleTimeout,
})
} }
// configureLogHook prepares logging hook parameters. // configureLogHook prepares logging hook parameters.
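
The hunks above read three groups of settings: a tag lookup concurrency limit from `config.Storage.TagParameters()`, image index validation from `config.Validation.Manifests.Indexes`, and go-redis `UniversalOptions` plus TLS material from `cfg.Redis`. A combined sketch of how those sections might be written; the nesting is inferred from the Go field paths, and every address, path, and limit shown is a placeholder:

```yaml
storage:
  filesystem:
    rootdirectory: /var/lib/registry     # placeholder driver
  tag:
    concurrencylimit: 8                  # must be a non-negative integer
validation:
  manifests:
    indexes:
      platforms: list                    # assumed values: all (default), none, or list
      platformlist:
        - architecture: amd64
          os: linux
redis:
  addrs:                                 # assumes UniversalOptions fields sit directly under redis
    - redis-0.internal:6379
    - redis-1.internal:6379              # go-redis picks single-node, sentinel, or cluster mode from the options
  tls:
    certificate: /etc/distribution/redis-client.pem   # placeholder
    key: /etc/distribution/redis-client.key           # placeholder
    clientcas:
      - /etc/distribution/redis-ca.pem                # placeholder
```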


@ -6,6 +6,7 @@ import (
"mime" "mime"
"net/http" "net/http"
"strings" "strings"
"sync"
"github.com/distribution/distribution/v3" "github.com/distribution/distribution/v3"
"github.com/distribution/distribution/v3/internal/dcontext" "github.com/distribution/distribution/v3/internal/dcontext"
@ -13,11 +14,13 @@ import (
"github.com/distribution/distribution/v3/manifest/ocischema" "github.com/distribution/distribution/v3/manifest/ocischema"
"github.com/distribution/distribution/v3/manifest/schema2" "github.com/distribution/distribution/v3/manifest/schema2"
"github.com/distribution/distribution/v3/registry/api/errcode" "github.com/distribution/distribution/v3/registry/api/errcode"
"github.com/distribution/distribution/v3/registry/storage"
"github.com/distribution/distribution/v3/registry/storage/driver" "github.com/distribution/distribution/v3/registry/storage/driver"
"github.com/distribution/reference" "github.com/distribution/reference"
"github.com/gorilla/handlers" "github.com/gorilla/handlers"
"github.com/opencontainers/go-digest" "github.com/opencontainers/go-digest"
v1 "github.com/opencontainers/image-spec/specs-go/v1" v1 "github.com/opencontainers/image-spec/specs-go/v1"
"golang.org/x/sync/errgroup"
) )
const ( const (
@ -212,6 +215,11 @@ func (imh *manifestHandler) GetManifest(w http.ResponseWriter, r *http.Request)
w.Header().Set("Content-Length", fmt.Sprint(len(p))) w.Header().Set("Content-Length", fmt.Sprint(len(p)))
w.Header().Set("Docker-Content-Digest", imh.Digest.String()) w.Header().Set("Docker-Content-Digest", imh.Digest.String())
w.Header().Set("Etag", fmt.Sprintf(`"%s"`, imh.Digest)) w.Header().Set("Etag", fmt.Sprintf(`"%s"`, imh.Digest))
if r.Method == http.MethodHead {
return
}
if _, err := w.Write(p); err != nil { if _, err := w.Write(p); err != nil {
w.WriteHeader(http.StatusInternalServerError) w.WriteHeader(http.StatusInternalServerError)
} }
@ -476,12 +484,26 @@ func (imh *manifestHandler) DeleteManifest(w http.ResponseWriter, r *http.Reques
return return
} }
var (
errs []error
mu sync.Mutex
)
g := errgroup.Group{}
g.SetLimit(storage.DefaultConcurrencyLimit)
for _, tag := range referencedTags { for _, tag := range referencedTags {
if err := tagService.Untag(imh, tag); err != nil { tag := tag
imh.Errors = append(imh.Errors, err)
return g.Go(func() error {
} if err := tagService.Untag(imh, tag); err != nil {
mu.Lock()
errs = append(errs, err)
mu.Unlock()
}
return nil
})
} }
_ = g.Wait() // imh will record all errors, so ignore the error of Wait()
imh.Errors = errs
w.WriteHeader(http.StatusAccepted) w.WriteHeader(http.StatusAccepted)
} }


@ -17,14 +17,23 @@ type userpass struct {
password string password string
} }
func (u userpass) Basic(_ *url.URL) (string, string) {
return u.username, u.password
}
func (u userpass) RefreshToken(_ *url.URL, service string) string {
return ""
}
func (u userpass) SetRefreshToken(_ *url.URL, service, token string) {
}
type credentials struct { type credentials struct {
creds map[string]userpass creds map[string]userpass
} }
func (c credentials) Basic(u *url.URL) (string, string) { func (c credentials) Basic(u *url.URL) (string, string) {
up := c.creds[u.String()] return c.creds[u.String()].Basic(u)
return up.username, up.password
} }
func (c credentials) RefreshToken(u *url.URL, service string) string { func (c credentials) RefreshToken(u *url.URL, service string) string {
@ -35,12 +44,12 @@ func (c credentials) SetRefreshToken(u *url.URL, service, token string) {
} }
// configureAuth stores credentials for challenge responses // configureAuth stores credentials for challenge responses
func configureAuth(username, password, remoteURL string) (auth.CredentialStore, error) { func configureAuth(username, password, remoteURL string) (auth.CredentialStore, auth.CredentialStore, error) {
creds := map[string]userpass{} creds := map[string]userpass{}
authURLs, err := getAuthURLs(remoteURL) authURLs, err := getAuthURLs(remoteURL)
if err != nil { if err != nil {
return nil, err return nil, nil, err
} }
for _, url := range authURLs { for _, url := range authURLs {
@ -51,7 +60,7 @@ func configureAuth(username, password, remoteURL string) (auth.CredentialStore,
} }
} }
return credentials{creds: creds}, nil return credentials{creds: creds}, userpass{username: username, password: password}, nil
} }
func getAuthURLs(remoteURL string) ([]string, error) { func getAuthURLs(remoteURL string) ([]string, error) {


@ -33,22 +33,20 @@ var inflight = make(map[digest.Digest]struct{})
// mu protects inflight // mu protects inflight
var mu sync.Mutex var mu sync.Mutex
func setResponseHeaders(w http.ResponseWriter, length int64, mediaType string, digest digest.Digest) { func setResponseHeaders(h http.Header, length int64, mediaType string, digest digest.Digest) {
w.Header().Set("Content-Length", strconv.FormatInt(length, 10)) h.Set("Content-Length", strconv.FormatInt(length, 10))
w.Header().Set("Content-Type", mediaType) h.Set("Content-Type", mediaType)
w.Header().Set("Docker-Content-Digest", digest.String()) h.Set("Docker-Content-Digest", digest.String())
w.Header().Set("Etag", digest.String()) h.Set("Etag", digest.String())
} }
func (pbs *proxyBlobStore) copyContent(ctx context.Context, dgst digest.Digest, writer io.Writer) (distribution.Descriptor, error) { func (pbs *proxyBlobStore) copyContent(ctx context.Context, dgst digest.Digest, writer io.Writer, h http.Header) (distribution.Descriptor, error) {
desc, err := pbs.remoteStore.Stat(ctx, dgst) desc, err := pbs.remoteStore.Stat(ctx, dgst)
if err != nil { if err != nil {
return distribution.Descriptor{}, err return distribution.Descriptor{}, err
} }
if w, ok := writer.(http.ResponseWriter); ok { setResponseHeaders(h, desc.Size, desc.MediaType, dgst)
setResponseHeaders(w, desc.Size, desc.MediaType, dgst)
}
remoteReader, err := pbs.remoteStore.Open(ctx, dgst) remoteReader, err := pbs.remoteStore.Open(ctx, dgst)
if err != nil { if err != nil {
@ -102,7 +100,7 @@ func (pbs *proxyBlobStore) ServeBlob(ctx context.Context, w http.ResponseWriter,
// Will return the blob from the remote store directly. // Will return the blob from the remote store directly.
// TODO Maybe we could reuse the these blobs are serving remotely and caching locally. // TODO Maybe we could reuse the these blobs are serving remotely and caching locally.
mu.Unlock() mu.Unlock()
_, err := pbs.copyContent(ctx, dgst, w) _, err := pbs.copyContent(ctx, dgst, w, w.Header())
return err return err
} }
inflight[dgst] = struct{}{} inflight[dgst] = struct{}{}
@ -122,7 +120,7 @@ func (pbs *proxyBlobStore) ServeBlob(ctx context.Context, w http.ResponseWriter,
// Serving client and storing locally over same fetching request. // Serving client and storing locally over same fetching request.
// This can prevent a redundant blob fetching. // This can prevent a redundant blob fetching.
multiWriter := io.MultiWriter(w, bw) multiWriter := io.MultiWriter(w, bw)
desc, err := pbs.copyContent(ctx, dgst, multiWriter) desc, err := pbs.copyContent(ctx, dgst, multiWriter, w.Header())
if err != nil { if err != nil {
return err return err
} }


@@ -448,12 +448,22 @@ func testProxyStoreServe(t *testing.T, te *testEnv, numClients int) {
return
}
-bodyBytes := w.Body.Bytes()
+resp := w.Result()
+bodyBytes, err := io.ReadAll(resp.Body)
+resp.Body.Close()
+if err != nil {
+t.Errorf(err.Error())
+return
+}
localDigest := digest.FromBytes(bodyBytes)
if localDigest != remoteBlob.Digest {
t.Errorf("Mismatching blob fetch from proxy")
return
}
+if resp.Header.Get("Docker-Content-Digest") != localDigest.String() {
+t.Errorf("Mismatching digest in response header")
+return
+}
desc, err := te.store.localStore.Stat(te.ctx, remoteBlob.Digest)
if err != nil {

View file

@@ -62,6 +62,16 @@ func init() {
}))
metrics.Register(prometheus.ProxyNamespace)
+initPrometheusMetrics("blob")
+initPrometheusMetrics("manifest")
+}
+func initPrometheusMetrics(value string) {
+requests.WithValues(value).Inc(0)
+hits.WithValues(value).Inc(0)
+misses.WithValues(value).Inc(0)
+pulledBytes.WithValues(value).Inc(0)
+pushedBytes.WithValues(value).Inc(0)
}
// BlobPull tracks metrics about blobs pulled into the cache
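
What the Inc(0) calls buy, as a hedged standalone docker/go-metrics sketch (the namespace and counter names below are made up, not the registry's own): pre-touching every label value makes each labelled series show up with a zero value on the first Prometheus scrape, instead of appearing only after the first proxied pull.

package main

import metrics "github.com/docker/go-metrics"

var (
	ns     = metrics.NewNamespace("example", "proxy", nil)
	events = ns.NewLabeledCounter("events", "The number of proxy events", "type")
)

func init() {
	metrics.Register(ns)
	for _, t := range []string{"blob", "manifest"} {
		events.WithValues(t).Inc(0) // pre-create the series for each label value
	}
}

func main() {}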

View file

@@ -8,6 +8,8 @@ import (
"sync"
"time"
+"github.com/distribution/reference"
"github.com/distribution/distribution/v3"
"github.com/distribution/distribution/v3/configuration"
"github.com/distribution/distribution/v3/internal/client"
@@ -18,7 +20,6 @@ import (
"github.com/distribution/distribution/v3/registry/proxy/scheduler"
"github.com/distribution/distribution/v3/registry/storage"
"github.com/distribution/distribution/v3/registry/storage/driver"
-"github.com/distribution/reference"
)
var repositoryTTL = 24 * 7 * time.Hour
@@ -30,6 +31,7 @@ type proxyingRegistry struct {
ttl *time.Duration
remoteURL url.URL
authChallenger authChallenger
+basicAuth auth.CredentialStore
}
// NewRegistryPullThroughCache creates a registry acting as a pull through cache
@@ -112,7 +114,7 @@ func NewRegistryPullThroughCache(ctx context.Context, registry distribution.Name
}
}
-cs, err := configureAuth(config.Username, config.Password, config.RemoteURL)
+cs, b, err := configureAuth(config.Username, config.Password, config.RemoteURL)
if err != nil {
return nil, err
}
@@ -127,6 +129,7 @@ func NewRegistryPullThroughCache(ctx context.Context, registry distribution.Name
cm: challenge.NewSimpleManager(),
cs: cs,
},
+basicAuth: b,
}, nil
}
@@ -155,7 +158,8 @@ func (pr *proxyingRegistry) Repository(ctx context.Context, name reference.Named
tr := transport.NewTransport(http.DefaultTransport,
auth.NewAuthorizer(c.challengeManager(),
-auth.NewTokenHandlerWithOptions(tkopts)))
+auth.NewTokenHandlerWithOptions(tkopts),
+auth.NewBasicHandler(pr.basicAuth)))
localRepo, err := pr.embedded.Repository(ctx, name)
if err != nil {
@@ -211,6 +215,15 @@ func (pr *proxyingRegistry) BlobStatter() distribution.BlobStatter {
return pr.embedded.BlobStatter()
}
+type Closer interface {
+// Close release all resources used by this object
+Close() error
+}
+func (pr *proxyingRegistry) Close() error {
+return pr.scheduler.Stop()
+}
// authChallenger encapsulates a request to the upstream to establish credential challenges
type authChallenger interface {
tryEstablishChallenges(context.Context) error
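
A hedged sketch of how a caller might use the new Close method; it assumes the namespace value was built with NewRegistryPullThroughCache above, and it checks for the method structurally rather than importing the proxy package.

package main

import (
	"log"

	"github.com/distribution/distribution/v3"
)

// closeProxyRegistry stops the pull-through cache's TTL expiration scheduler
// if the namespace exposes Close() error, as the proxy registry now does.
func closeProxyRegistry(reg distribution.Namespace) {
	if closer, ok := reg.(interface{ Close() error }); ok {
		if err := closer.Close(); err != nil {
			log.Printf("closing pull-through cache: %v", err)
		}
	}
}

func main() {}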

View file

@@ -206,12 +206,13 @@ func (ttles *TTLExpirationScheduler) startTimer(entry *schedulerEntry, ttl time.
}
// Stop stops the scheduler.
-func (ttles *TTLExpirationScheduler) Stop() {
+func (ttles *TTLExpirationScheduler) Stop() error {
ttles.Lock()
defer ttles.Unlock()
-if err := ttles.writeState(); err != nil {
-dcontext.GetLogger(ttles.ctx).Errorf("Error writing scheduler state: %s", err)
+err := ttles.writeState()
+if err != nil {
+err = fmt.Errorf("error writing scheduler state: %w", err)
}
for _, entry := range ttles.entries {
@@ -221,6 +222,7 @@ func (ttles *TTLExpirationScheduler) Stop() {
close(ttles.doneChan)
ttles.saveTimer.Stop()
ttles.stopped = true
+return err
}
func (ttles *TTLExpirationScheduler) writeState() error {

View file

@@ -136,7 +136,12 @@ func TestRestoreOld(t *testing.T) {
if err != nil {
t.Fatalf("Error starting ttlExpirationScheduler: %s", err)
}
-defer s.Stop()
+defer func(s *TTLExpirationScheduler) {
+err := s.Stop()
+if err != nil {
+t.Fatalf("Error stopping ttlExpirationScheduler: %s", err)
+}
+}(s)
wg.Wait()
mu.Lock()
@@ -177,7 +182,10 @@ func TestStopRestore(t *testing.T) {
// Start and stop before all operations complete
// state will be written to fs
-s.Stop()
+err = s.Stop()
+if err != nil {
+t.Fatalf(err.Error())
+}
time.Sleep(10 * time.Millisecond)
// v2 will restore state from fs

View file

@@ -4,6 +4,7 @@ import (
"context"
"crypto/tls"
"crypto/x509"
+"errors"
"fmt"
"net/http"
"os"
@@ -20,6 +21,8 @@ import (
"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
"golang.org/x/crypto/acme"
"golang.org/x/crypto/acme/autocert"
+"golang.org/x/net/http2"
+"golang.org/x/net/http2/h2c"
"github.com/distribution/distribution/v3/configuration"
"github.com/distribution/distribution/v3/health"
@@ -79,9 +82,6 @@ var tlsVersions = map[string]uint16{
// defaultLogFormatter is the default formatter to use for logs.
const defaultLogFormatter = "text"
-// this channel gets notified when process receives signal. It is global to ease unit testing
-var quit = make(chan os.Signal, 1)
// HandlerFunc defines an http middleware
type HandlerFunc func(config *configuration.Configuration, handler http.Handler) http.Handler
@@ -99,7 +99,7 @@ var ServeCmd = &cobra.Command{
Long: "`serve` stores and distributes Docker images.",
Run: func(cmd *cobra.Command, args []string) {
// setup context
-ctx := dcontext.WithVersion(dcontext.Background(), version.Version)
+ctx := dcontext.WithVersion(dcontext.Background(), version.Version())
config, err := resolveConfiguration(args)
if err != nil {
@@ -128,6 +128,7 @@ type Registry struct {
config *configuration.Configuration
app *handlers.App
server *http.Server
+quit chan os.Signal
}
// NewRegistry creates a new registry from a context and configuration struct.
@@ -158,6 +159,9 @@ func NewRegistry(ctx context.Context, config *configuration.Configuration) (*Reg
if err != nil {
return nil, fmt.Errorf("error during open telemetry initialization: %v", err)
}
+if config.HTTP.H2C.Enabled {
+handler = h2c.NewHandler(handler, &http2.Server{})
+}
handler = otelHandler(handler)
server := &http.Server{
@@ -168,6 +172,7 @@ func NewRegistry(ctx context.Context, config *configuration.Configuration) (*Reg
app: app,
config: config,
server: server,
+quit: make(chan os.Signal, 1),
}, nil
}
@@ -308,7 +313,7 @@ func (registry *Registry) ListenAndServe() error {
}
// setup channel to get notified on SIGTERM signal
-signal.Notify(quit, syscall.SIGTERM)
+signal.Notify(registry.quit, os.Interrupt, syscall.SIGTERM)
serveErr := make(chan error)
// Start serving in goroutine and listen for stop signal in main thread
@@ -319,15 +324,24 @@ func (registry *Registry) ListenAndServe() error {
select {
case err := <-serveErr:
return err
-case <-quit:
+case <-registry.quit:
dcontext.GetLogger(registry.app).Info("stopping server gracefully. Draining connections for ", config.HTTP.DrainTimeout)
// shutdown the server with a grace period of configured timeout
c, cancel := context.WithTimeout(context.Background(), config.HTTP.DrainTimeout)
defer cancel()
-return registry.server.Shutdown(c)
+return registry.Shutdown(c)
}
}
+// Shutdown gracefully shuts down the registry's HTTP server and application object.
+func (registry *Registry) Shutdown(ctx context.Context) error {
+err := registry.server.Shutdown(ctx)
+if appErr := registry.app.Shutdown(); appErr != nil {
+err = errors.Join(err, appErr)
+}
+return err
+}
func configureDebugServer(config *configuration.Configuration) {
if config.HTTP.Debug.Addr != "" {
go func(addr string) {
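
A hedged sketch of the embedding API after this change (the configuration values below are placeholders, and a real config still needs storage and the other required sections): the quit channel is now per-instance, and Shutdown drains the HTTP server and releases the app's resources in one call.

package main

import (
	"context"
	"log"
	"time"

	"github.com/distribution/distribution/v3/configuration"
	"github.com/distribution/distribution/v3/registry"
)

func main() {
	config := &configuration.Configuration{} // placeholder config for the sketch
	config.HTTP.Addr = ":5000"
	config.HTTP.DrainTimeout = 10 * time.Second

	reg, err := registry.NewRegistry(context.Background(), config)
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		if err := reg.ListenAndServe(); err != nil {
			log.Println(err)
		}
	}()

	// ... later, stop the server and the app together with a drain deadline:
	ctx, cancel := context.WithTimeout(context.Background(), config.HTTP.DrainTimeout)
	defer cancel()
	if err := reg.Shutdown(ctx); err != nil {
		log.Println(err)
	}
}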

View file

@@ -103,7 +103,7 @@ func TestGracefulShutdown(t *testing.T) {
fmt.Fprintf(conn, "GET /v2/ ")
// send stop signal
-quit <- os.Interrupt
+registry.quit <- os.Interrupt
time.Sleep(100 * time.Millisecond)
// try connecting again. it shouldn't
@@ -325,7 +325,7 @@ func TestRegistrySupportedCipherSuite(t *testing.T) {
}
// send stop signal
-quit <- os.Interrupt
+registry.quit <- os.Interrupt
time.Sleep(100 * time.Millisecond)
}
@@ -369,7 +369,7 @@ func TestRegistryUnsupportedCipherSuite(t *testing.T) {
}
// send stop signal
-quit <- os.Interrupt
+registry.quit <- os.Interrupt
time.Sleep(100 * time.Millisecond)
}

View file

@@ -40,7 +40,7 @@ func TestWriteSeek(t *testing.T) {
}
contents := []byte{1, 2, 3}
if _, err := blobUpload.Write(contents); err != nil {
-t.Fatalf("unexpected error writing contets: %v", err)
+t.Fatalf("unexpected error writing contents: %v", err)
}
blobUpload.Close()
offset := blobUpload.Size()

View file

@@ -230,7 +230,7 @@ func (bw *blobWriter) validateBlob(ctx context.Context, desc distribution.Descri
}
if fullHash {
-// a fantastic optimization: if the the written data and the size are
+// a fantastic optimization: if the written data and the size are
// the same, we don't need to read the data from the backend. This is
// because we've written the entire file in the lifecycle of the
// current instance.

View file

@@ -25,7 +25,7 @@ import (
// Note that there is no implied relationship between these two caches. The
// layer may exist in one, both or none and the code must be written this way.
type redisBlobDescriptorService struct {
-pool *redis.Client
+pool redis.UniversalClient
// TODO(stevvooe): We use a pool because we don't have great control over
// the cache lifecycle to manage connections. A new connection if fetched
@@ -37,7 +37,7 @@ var _ distribution.BlobDescriptorService = &redisBlobDescriptorService{}
// NewRedisBlobDescriptorCacheProvider returns a new redis-based
// BlobDescriptorCacheProvider using the provided redis connection pool.
-func NewRedisBlobDescriptorCacheProvider(pool *redis.Client) cache.BlobDescriptorCacheProvider {
+func NewRedisBlobDescriptorCacheProvider(pool redis.UniversalClient) cache.BlobDescriptorCacheProvider {
return metrics.NewPrometheusCacheProvider(
&redisBlobDescriptorService{
pool: pool,
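
A hedged sketch of constructing the cache provider after the switch to redis.UniversalClient (go-redis v9 and the cache package path are assumed): the same constructor now covers single-node, Sentinel and Cluster deployments.

package main

import (
	"github.com/redis/go-redis/v9"

	rediscache "github.com/distribution/distribution/v3/registry/storage/cache/redis"
)

func main() {
	// One Addr gives a plain client; set MasterName for Sentinel or list
	// several Addrs for Cluster - the constructor is the same either way.
	client := redis.NewUniversalClient(&redis.UniversalOptions{
		Addrs: []string{"localhost:6379"},
	})
	_ = rediscache.NewRedisBlobDescriptorCacheProvider(client)
}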

View file

@@ -20,7 +20,7 @@ func init() {
// implementation.
func TestRedisBlobDescriptorCacheProvider(t *testing.T) {
if redisAddr == "" {
-// fallback to an environement variable
+// fallback to an environment variable
redisAddr = os.Getenv("TEST_REGISTRY_STORAGE_CACHE_REDIS_ADDR")
}

View file

@@ -46,12 +46,20 @@ import (
"github.com/distribution/distribution/v3/internal/dcontext"
prometheus "github.com/distribution/distribution/v3/metrics"
storagedriver "github.com/distribution/distribution/v3/registry/storage/driver"
+"github.com/distribution/distribution/v3/tracing"
"github.com/docker/go-metrics"
+"go.opentelemetry.io/otel"
+"go.opentelemetry.io/otel/attribute"
+"go.opentelemetry.io/otel/trace"
)
// storageAction is the metrics of blob related operations
var storageAction = prometheus.StorageNamespace.NewLabeledTimer("action", "The number of seconds that the storage action takes", "driver", "action")
+// tracer is the OpenTelemetry tracer utilized for tracing operations within
+// this package's code.
+var tracer = otel.Tracer("github.com/distribution/distribution/v3/registry/storage/driver/base")
func init() {
metrics.Register(prometheus.StorageNamespace)
}
@@ -89,8 +97,16 @@ func (base *Base) setDriverName(e error) error {
// GetContent wraps GetContent of underlying storage driver.
func (base *Base) GetContent(ctx context.Context, path string) ([]byte, error) {
-ctx, done := dcontext.WithTrace(ctx)
-defer done("%s.GetContent(%q)", base.Name(), path)
+attrs := []attribute.KeyValue{
+attribute.String(tracing.AttributePrefix+"storage.driver.name", base.Name()),
+attribute.String(tracing.AttributePrefix+"storage.path", path),
+}
+ctx, span := tracer.Start(
+ctx,
+"GetContent",
+trace.WithAttributes(attrs...))
+defer span.End()
if !storagedriver.PathRegexp.MatchString(path) {
return nil, storagedriver.InvalidPathError{Path: path, DriverName: base.StorageDriver.Name()}
@@ -104,8 +120,17 @@ func (base *Base) GetContent(ctx context.Context, path string) ([]byte, error) {
// PutContent wraps PutContent of underlying storage driver.
func (base *Base) PutContent(ctx context.Context, path string, content []byte) error {
-ctx, done := dcontext.WithTrace(ctx)
-defer done("%s.PutContent(%q)", base.Name(), path)
+attrs := []attribute.KeyValue{
+attribute.String(tracing.AttributePrefix+"storage.driver.name", base.Name()),
+attribute.String(tracing.AttributePrefix+"storage.path", path),
+attribute.Int(tracing.AttributePrefix+"storage.content.length", len(content)),
+}
+ctx, span := tracer.Start(
+ctx,
+"PutContent",
+trace.WithAttributes(attrs...))
+defer span.End()
if !storagedriver.PathRegexp.MatchString(path) {
return storagedriver.InvalidPathError{Path: path, DriverName: base.StorageDriver.Name()}
@@ -119,8 +144,17 @@ func (base *Base) PutContent(ctx context.Context, path string, content []byte) e
// Reader wraps Reader of underlying storage driver.
func (base *Base) Reader(ctx context.Context, path string, offset int64) (io.ReadCloser, error) {
-ctx, done := dcontext.WithTrace(ctx)
-defer done("%s.Reader(%q, %d)", base.Name(), path, offset)
+attrs := []attribute.KeyValue{
+attribute.String(tracing.AttributePrefix+"storage.driver.name", base.Name()),
+attribute.String(tracing.AttributePrefix+"storage.path", path),
+attribute.Int64(tracing.AttributePrefix+"storage.offset", offset),
+}
+ctx, span := tracer.Start(
+ctx,
+"Reader",
+trace.WithAttributes(attrs...))
+defer span.End()
if offset < 0 {
return nil, storagedriver.InvalidOffsetError{Path: path, Offset: offset, DriverName: base.StorageDriver.Name()}
@@ -136,8 +170,17 @@ func (base *Base) Reader(ctx context.Context, path string, offset int64) (io.Rea
// Writer wraps Writer of underlying storage driver.
func (base *Base) Writer(ctx context.Context, path string, append bool) (storagedriver.FileWriter, error) {
-ctx, done := dcontext.WithTrace(ctx)
-defer done("%s.Writer(%q, %v)", base.Name(), path, append)
+attrs := []attribute.KeyValue{
+attribute.String(tracing.AttributePrefix+"storage.driver.name", base.Name()),
+attribute.String(tracing.AttributePrefix+"storage.path", path),
+attribute.Bool(tracing.AttributePrefix+"storage.append", append),
+}
+ctx, span := tracer.Start(
+ctx,
+"Writer",
+trace.WithAttributes(attrs...))
+defer span.End()
if !storagedriver.PathRegexp.MatchString(path) {
return nil, storagedriver.InvalidPathError{Path: path, DriverName: base.StorageDriver.Name()}
@@ -149,8 +192,16 @@ func (base *Base) Writer(ctx context.Context, path string, append bool) (storage
// Stat wraps Stat of underlying storage driver.
func (base *Base) Stat(ctx context.Context, path string) (storagedriver.FileInfo, error) {
-ctx, done := dcontext.WithTrace(ctx)
-defer done("%s.Stat(%q)", base.Name(), path)
+attrs := []attribute.KeyValue{
+attribute.String(tracing.AttributePrefix+"storage.driver.name", base.Name()),
+attribute.String(tracing.AttributePrefix+"storage.path", path),
+}
+ctx, span := tracer.Start(
+ctx,
+"Stat",
+trace.WithAttributes(attrs...))
+defer span.End()
if !storagedriver.PathRegexp.MatchString(path) && path != "/" {
return nil, storagedriver.InvalidPathError{Path: path, DriverName: base.StorageDriver.Name()}
@@ -164,8 +215,16 @@ func (base *Base) Stat(ctx context.Context, path string) (storagedriver.FileInfo
// List wraps List of underlying storage driver.
func (base *Base) List(ctx context.Context, path string) ([]string, error) {
-ctx, done := dcontext.WithTrace(ctx)
-defer done("%s.List(%q)", base.Name(), path)
+attrs := []attribute.KeyValue{
+attribute.String(tracing.AttributePrefix+"storage.driver.name", base.Name()),
+attribute.String(tracing.AttributePrefix+"storage.path", path),
+}
+ctx, span := tracer.Start(
+ctx,
+"List",
+trace.WithAttributes(attrs...))
+defer span.End()
if !storagedriver.PathRegexp.MatchString(path) && path != "/" {
return nil, storagedriver.InvalidPathError{Path: path, DriverName: base.StorageDriver.Name()}
@@ -179,6 +238,18 @@ func (base *Base) List(ctx context.Context, path string) ([]string, error) {
// Move wraps Move of underlying storage driver.
func (base *Base) Move(ctx context.Context, sourcePath string, destPath string) error {
+attrs := []attribute.KeyValue{
+attribute.String(tracing.AttributePrefix+"storage.driver.name", base.Name()),
+attribute.String(tracing.AttributePrefix+"storage.source.path", sourcePath),
+attribute.String(tracing.AttributePrefix+"storage.dest.path", destPath),
+}
+ctx, span := tracer.Start(
+ctx,
+"Move",
+trace.WithAttributes(attrs...))
+defer span.End()
ctx, done := dcontext.WithTrace(ctx)
defer done("%s.Move(%q, %q", base.Name(), sourcePath, destPath)
@@ -196,8 +267,16 @@ func (base *Base) Move(ctx context.Context, sourcePath string, destPath string)
// Delete wraps Delete of underlying storage driver.
func (base *Base) Delete(ctx context.Context, path string) error {
-ctx, done := dcontext.WithTrace(ctx)
-defer done("%s.Delete(%q)", base.Name(), path)
+attrs := []attribute.KeyValue{
+attribute.String(tracing.AttributePrefix+"storage.driver.name", base.Name()),
+attribute.String(tracing.AttributePrefix+"storage.path", path),
+}
+ctx, span := tracer.Start(
+ctx,
+"Delete",
+trace.WithAttributes(attrs...))
+defer span.End()
if !storagedriver.PathRegexp.MatchString(path) {
return storagedriver.InvalidPathError{Path: path, DriverName: base.StorageDriver.Name()}
@@ -211,8 +290,16 @@ func (base *Base) Delete(ctx context.Context, path string) error {
// RedirectURL wraps RedirectURL of the underlying storage driver.
func (base *Base) RedirectURL(r *http.Request, path string) (string, error) {
-ctx, done := dcontext.WithTrace(r.Context())
-defer done("%s.RedirectURL(%q)", base.Name(), path)
+attrs := []attribute.KeyValue{
+attribute.String(tracing.AttributePrefix+"storage.driver.name", base.Name()),
+attribute.String(tracing.AttributePrefix+"storage.path", path),
+}
+ctx, span := tracer.Start(
+r.Context(),
+"RedirectURL",
+trace.WithAttributes(attrs...))
+defer span.End()
if !storagedriver.PathRegexp.MatchString(path) {
return "", storagedriver.InvalidPathError{Path: path, DriverName: base.StorageDriver.Name()}
@@ -226,8 +313,16 @@ func (base *Base) RedirectURL(r *http.Request, path string) (string, error) {
// Walk wraps Walk of underlying storage driver.
func (base *Base) Walk(ctx context.Context, path string, f storagedriver.WalkFn, options ...func(*storagedriver.WalkOptions)) error {
-ctx, done := dcontext.WithTrace(ctx)
-defer done("%s.Walk(%q)", base.Name(), path)
+attrs := []attribute.KeyValue{
+attribute.String(tracing.AttributePrefix+"storage.driver.name", base.Name()),
+attribute.String(tracing.AttributePrefix+"storage.path", path),
+}
+ctx, span := tracer.Start(
+ctx,
+"Walk",
+trace.WithAttributes(attrs...))
+defer span.End()
if !storagedriver.PathRegexp.MatchString(path) && path != "/" {
return storagedriver.InvalidPathError{Path: path, DriverName: base.StorageDriver.Name()}
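
The same three-step pattern repeats for every wrapper above; a hedged, standalone illustration with made-up tracer and attribute names follows: collect attributes, start a span named after the operation, and defer span.End() before delegating.

package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/trace"
)

var tracer = otel.Tracer("example/storage")

func getContent(ctx context.Context, path string) ([]byte, error) {
	attrs := []attribute.KeyValue{
		attribute.String("storage.driver.name", "inmemory"),
		attribute.String("storage.path", path),
	}
	ctx, span := tracer.Start(ctx, "GetContent", trace.WithAttributes(attrs...))
	defer span.End()

	// ... delegate to the real driver with the span-carrying ctx ...
	_ = ctx
	return nil, nil
}

func main() { _, _ = getContent(context.Background(), "/docker/registry/v2/") }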

View file

@@ -492,7 +492,7 @@ func (d *driver) formObject(path string) *object.Object {
attrTimestamp.SetValue(strconv.FormatInt(time.Now().UTC().Unix(), 10))
obj := object.New()
-obj.SetOwnerID(d.owner)
+obj.SetOwnerID(*d.owner)
obj.SetContainerID(d.containerID)
obj.SetAttributes(*attrFilePath, *attrFileName, *attrTimestamp)

View file

@@ -155,6 +155,7 @@ func FromParameters(ctx context.Context, parameters map[string]interface{}) (sto
jwtConf := new(jwt.Config)
var err error
var gcs *storage.Client
+var options []option.ClientOption
if keyfile, ok := parameters["keyfile"]; ok {
jsonKey, err := os.ReadFile(fmt.Sprint(keyfile))
if err != nil {
@@ -165,10 +166,7 @@
return nil, err
}
ts = jwtConf.TokenSource(ctx)
-gcs, err = storage.NewClient(ctx, option.WithCredentialsFile(fmt.Sprint(keyfile)))
-if err != nil {
-return nil, err
-}
+options = append(options, option.WithCredentialsFile(fmt.Sprint(keyfile)))
} else if credentials, ok := parameters["credentials"]; ok {
credentialMap, ok := credentials.(map[interface{}]interface{})
if !ok {
@@ -194,10 +192,7 @@
return nil, err
}
ts = jwtConf.TokenSource(ctx)
-gcs, err = storage.NewClient(ctx, option.WithCredentialsJSON(data))
-if err != nil {
-return nil, err
-}
+options = append(options, option.WithCredentialsJSON(data))
} else {
var err error
// DefaultTokenSource is a convenience method. It first calls FindDefaultCredentials,
@@ -207,12 +202,19 @@
if err != nil {
return nil, err
}
-gcs, err = storage.NewClient(ctx)
-if err != nil {
-return nil, err
-}
}
+if userAgent, ok := parameters["useragent"]; ok {
+if ua, ok := userAgent.(string); ok && ua != "" {
+options = append(options, option.WithUserAgent(ua))
+}
+}
+gcs, err = storage.NewClient(ctx, options...)
+if err != nil {
+return nil, err
+}
maxConcurrency, err := base.GetLimitFromParameter(parameters["maxconcurrency"], minConcurrency, defaultMaxConcurrency)
if err != nil {
return nil, fmt.Errorf("maxconcurrency config error: %s", err)
@@ -783,10 +785,6 @@ func (d *driver) Delete(ctx context.Context, path string) error {
// RedirectURL returns a URL which may be used to retrieve the content stored at
// the given path, possibly using the given options.
func (d *driver) RedirectURL(r *http.Request, path string) (string, error) {
-if d.privateKey == nil {
-return "", nil
-}
if r.Method != http.MethodGet && r.Method != http.MethodHead {
return "", nil
}
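
A hedged sketch of passing the new useragent parameter through FromParameters (the bucket, keyfile and user-agent values below are placeholders): whichever credential option is chosen, the user agent is appended to the same options slice and handed to storage.NewClient.

package main

import (
	"context"
	"log"

	"github.com/distribution/distribution/v3/registry/storage/driver/gcs"
)

func main() {
	params := map[string]interface{}{
		"bucket":    "my-registry-bucket",
		"keyfile":   "/etc/docker/registry/gcs-key.json",
		"useragent": "my-registry/1.0",
	}
	d, err := gcs.FromParameters(context.Background(), params)
	if err != nil {
		log.Fatal(err)
	}
	_ = d
}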

View file

@@ -34,40 +34,40 @@ func init() {
}
}
-jsonKey, err := os.ReadFile(credentials)
-if err != nil {
-panic(fmt.Sprintf("Error reading JSON key : %v", err))
-}
-var ts oauth2.TokenSource
-var email string
-var privateKey []byte
-ts, err = google.DefaultTokenSource(dcontext.Background(), storage.ScopeFullControl)
-if err != nil {
-// Assume that the file contents are within the environment variable since it exists
-// but does not contain a valid file path
-jwtConfig, err := google.JWTConfigFromJSON(jsonKey, storage.ScopeFullControl)
-if err != nil {
-panic(fmt.Sprintf("Error reading JWT config : %s", err))
-}
-email = jwtConfig.Email
-privateKey = jwtConfig.PrivateKey
-if len(privateKey) == 0 {
-panic("Error reading JWT config : missing private_key property")
-}
-if email == "" {
-panic("Error reading JWT config : missing client_email property")
-}
-ts = jwtConfig.TokenSource(dcontext.Background())
-}
-gcs, err := storage.NewClient(dcontext.Background(), option.WithCredentialsJSON(jsonKey))
-if err != nil {
-panic(fmt.Sprintf("Error initializing gcs client : %v", err))
-}
gcsDriverConstructor = func(rootDirectory string) (storagedriver.StorageDriver, error) {
+jsonKey, err := os.ReadFile(credentials)
+if err != nil {
+panic(fmt.Sprintf("Error reading JSON key : %v", err))
+}
+var ts oauth2.TokenSource
+var email string
+var privateKey []byte
+ts, err = google.DefaultTokenSource(dcontext.Background(), storage.ScopeFullControl)
+if err != nil {
+// Assume that the file contents are within the environment variable since it exists
+// but does not contain a valid file path
+jwtConfig, err := google.JWTConfigFromJSON(jsonKey, storage.ScopeFullControl)
+if err != nil {
+panic(fmt.Sprintf("Error reading JWT config : %s", err))
+}
+email = jwtConfig.Email
+privateKey = jwtConfig.PrivateKey
+if len(privateKey) == 0 {
+panic("Error reading JWT config : missing private_key property")
+}
+if email == "" {
+panic("Error reading JWT config : missing client_email property")
+}
+ts = jwtConfig.TokenSource(dcontext.Background())
+}
+gcs, err := storage.NewClient(dcontext.Background(), option.WithCredentialsJSON(jsonKey))
+if err != nil {
+panic(fmt.Sprintf("Error initializing gcs client : %v", err))
+}
parameters := driverParameters{
bucket: bucket,
rootDirectory: rootDirectory,

View file

@@ -50,6 +50,6 @@ pZeMRablbPQdp8/1NyIwimq1VlG0ohQ4P6qhW7E09ZMC
t.Fatal(err)
}
if storageDriver == nil {
-t.Fatal("Driver couldnt be initialized.")
+t.Fatal("Driver could not be initialized")
}
}

View file

@@ -0,0 +1,86 @@
package middleware
import (
"context"
"fmt"
"net/http"
"net/url"
"strings"
storagedriver "github.com/distribution/distribution/v3/registry/storage/driver"
storagemiddleware "github.com/distribution/distribution/v3/registry/storage/driver/middleware"
"github.com/sirupsen/logrus"
)
func init() {
if err := storagemiddleware.Register("rewrite", newRewriteStorageMiddleware); err != nil {
logrus.Errorf("failed to register rewrite storage middleware: %v", err)
}
}
type rewriteStorageMiddleware struct {
storagedriver.StorageDriver
overrideScheme string
overrideHost string
trimPathPrefix string
}
var _ storagedriver.StorageDriver = &rewriteStorageMiddleware{}
func getStringOption(key string, options map[string]interface{}) (string, error) {
o, ok := options[key]
if !ok {
return "", nil
}
s, ok := o.(string)
if !ok {
return "", fmt.Errorf("%s must be a string", key)
}
return s, nil
}
func newRewriteStorageMiddleware(ctx context.Context, sd storagedriver.StorageDriver, options map[string]interface{}) (storagedriver.StorageDriver, error) {
var err error
r := &rewriteStorageMiddleware{StorageDriver: sd}
if r.overrideScheme, err = getStringOption("scheme", options); err != nil {
return nil, err
}
if r.overrideHost, err = getStringOption("host", options); err != nil {
return nil, err
}
if r.trimPathPrefix, err = getStringOption("trimpathprefix", options); err != nil {
return nil, err
}
return r, nil
}
func (r *rewriteStorageMiddleware) RedirectURL(req *http.Request, path string) (string, error) {
storagePath, err := r.StorageDriver.RedirectURL(req, path)
if err != nil {
return "", err
}
u, err := url.Parse(storagePath)
if err != nil {
return "", err
}
if r.overrideScheme != "" {
u.Scheme = r.overrideScheme
}
if r.overrideHost != "" {
u.Host = r.overrideHost
}
if r.trimPathPrefix != "" {
u.Path = strings.TrimPrefix(u.Path, r.trimPathPrefix)
}
return u.String(), nil
}
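
A hedged illustration of what the middleware's RedirectURL does to a storage driver URL; the Azure-style URL and the option values below are invented for the sketch, mirroring the scheme, host and trimpathprefix handling above.

package main

import (
	"fmt"
	"net/url"
	"strings"
)

func main() {
	driverURL := "https://account.blob.core.windows.net/container/docker/registry/v2/blobs/sha256/ab/abc/data?sig=abc123"
	u, err := url.Parse(driverURL)
	if err != nil {
		panic(err)
	}

	u.Scheme = "https"                                // "scheme" option
	u.Host = "cdn.example.com"                        // "host" option
	u.Path = strings.TrimPrefix(u.Path, "/container") // "trimpathprefix" option

	fmt.Println(u.String())
	// https://cdn.example.com/docker/registry/v2/blobs/sha256/ab/abc/data?sig=abc123
}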

Some files were not shown because too many files have changed in this diff.