Compare commits


24 commits

Author SHA1 Message Date
Stephen Day
0bae7512b2 Merge pull request #2344 from stevvooe/prepare-2.5.2
[release/2.5] release: prepare for 2.5.2 release
2017-07-20 14:21:03 -07:00
Stephen J Day
48cb60af7c release: prepare for 2.5.2 release
Signed-off-by: Stephen J Day <stephen.day@docker.com>
2017-07-20 14:10:39 -07:00
Stephen Day
2b0952dca1 Merge pull request #2342 from stevvooe/limit-payload-size-25
[release/2.5] registry/{storage,handlers}: limit content sizes
2017-07-20 13:55:19 -07:00
Stephen J Day
58d239d723 registry/{storage,handlers}: limit content sizes
Under certain circumstances, the use of `StorageDriver.GetContent` can
result in unbounded memory allocations. In particular, this happens when
accessing a layer through the manifests endpoint.

This problem is mitigated by setting a 4MB limit when accessing
content that may have been accepted from a user. In practice, this means
setting the limit with the use of `BlobProvider.Get` by wrapping
`StorageDriver.GetContent` in a helper that uses `StorageDriver.Reader`
with a `limitReader` that returns an error.

While mitigating this security issue, we also noticed that the size of
manifests uploaded to the registry was unlimited. We apply similar
logic to the request body of payloads that are fully buffered.

Signed-off-by: Stephen J Day <stephen.day@docker.com>
(cherry picked from commit 55ea440428)
Signed-off-by: Stephen J Day <stephen.day@docker.com>
2017-07-20 13:39:13 -07:00
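A minimal sketch of the limiting approach described in the commit above, assuming hypothetical names (`readAllLimited`, `maxBlobSize`); the actual helper lives in the registry's storage code:

```go
package storage

import (
	"errors"
	"io"
	"io/ioutil"
)

// maxBlobSize is the 4MB cap discussed above (hypothetical name).
const maxBlobSize = 4 << 20 // 4MB

// readAllLimited reads at most limit bytes from rc and errors out instead
// of allocating without bound when the content is larger.
func readAllLimited(rc io.ReadCloser, limit int64) ([]byte, error) {
	defer rc.Close()
	// Read one byte past the limit so oversized content is detectable.
	p, err := ioutil.ReadAll(io.LimitReader(rc, limit+1))
	if err != nil {
		return nil, err
	}
	if int64(len(p)) > limit {
		return nil, errors.New("storage: content exceeds size limit")
	}
	return p, nil
}
```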
Derek McGowan
9bc9d212ec Merge pull request #2122 from mstanleyjones/configuration_changes_backport
Backport #2116 to releases/2.5
2017-01-03 15:43:27 -08:00
Derek McGowan
fcbea606cb Improve formatting of configuration.md
Backported from master to release/2.5

Signed-off-by: Misty Stanley-Jones <misty@docker.com>
2017-01-03 14:44:46 -08:00
Derek McGowan
6b114e6d8f Merge pull request #2081 from Windfarer/release/2.5
fix panic when using storage redirect middleware
2017-01-03 10:19:19 -08:00
Eric Yang
6c985f7f63 Update main.go
Signed-off-by: Eric Yang <windfarer@gmail.com>
2016-11-24 20:41:49 +08:00
Derek McGowan
2c3b616fee Merge pull request #2054 from mstanleyjones/2.5_metadata_fixes
2.5 metadata fixes
2016-11-14 15:23:15 -08:00
Derek McGowan
5adfbe34db Remove newlines from end of error strings
Golint now checks for newlines at the end of Go error strings;
remove these unneeded newlines.

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2016-11-14 14:10:23 -08:00
Richard Scothern
cfe7079300 Satisfy the latest go lint rules
Signed-off-by: Richard Scothern <richard.scothern@docker.com>
2016-11-14 13:48:06 -08:00
Misty Stanley-Jones
abd2d765ac Metadata and formatting fixes needed for Jekyll build
Signed-off-by: Misty Stanley-Jones <misty@docker.com>
(cherry picked from commit 49d6706ce9d952718725350d82d9ea7deb4f7326)
Signed-off-by: Misty Stanley-Jones <misty@docker.com>
2016-11-11 11:58:31 -08:00
Misty Stanley-Jones
6b3ccf9640 Convert Markdown frontmatter to YAML
Some frontmatter, such as weights, menu entries, etc., is no longer used.
'draft=true' becomes 'published: false'.

Signed-off-by: Misty Stanley-Jones <misty@docker.com>
(cherry picked from commit f180e9a934)
Signed-off-by: Misty Stanley-Jones <misty@docker.com>
(cherry picked from commit c5a8e74c562cd62db83df69ec71d9cee3e346317)
Signed-off-by: Misty Stanley-Jones <misty@docker.com>
2016-11-11 11:58:31 -08:00
Misty Stanley-Jones
a8402a2253 Merge pull request #1985 from johndmulhausen/master
Remove old documentation source, add README on migration
(cherry picked from commit c372264f17)

Signed-off-by: Misty Stanley-Jones <misty@docker.com>
(cherry picked from commit f1219102a421c15f5c6fc437c1e1ec951424d9b5)
Signed-off-by: Misty Stanley-Jones <misty@docker.com>
2016-11-11 11:58:31 -08:00
Derek McGowan
0a22649f66 Update to fix lint errors
Context should use typed values instead of strings.
Updated direct calls to WithValue; other uses of string keys remain.
Updated Acl to ACL in the s3 driver.

Cherry-picked to release/2.5 branch

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
Signed-off-by: Misty Stanley-Jones <misty@docker.com>
2016-11-11 11:58:28 -08:00
Edgar Lee
12acdf0a6c Stop ErrFinishedWalk from escaping from Repositories walk
Signed-off-by: Edgar Lee <edgar.lee@docker.com>
2016-08-26 11:20:37 -07:00
Edgar Lee
45b84c9512 Use typecast over reflect for error type checking
Signed-off-by: Edgar Lee <edgar.lee@docker.com>
2016-08-26 11:19:18 -07:00
Edgar Lee
8160a430be Handle new errors returned from catalog repository listing
Signed-off-by: Edgar Lee <edgar.lee@docker.com>
2016-08-26 11:19:06 -07:00
Edgar Lee
a405d3e88b Improve catalog enumerate runtime by an order of magnitude
Signed-off-by: Edgar Lee <edgar.lee@docker.com>
2016-08-26 11:10:53 -07:00
Stephen J Day
2aa09ff9a8 registry/storage: more efficient path compare in catalog
The previous component-wise path comparison is recursive and generates a
large amount of garbage. This more efficient version simply replaces the
separator with the zero value during the byte-wise comparison, swapping a
single character inline, so that separators sort before everything else.

The resulting implementation provides component-wise path comparison
with no allocation or stack-frame cost.

The direction of the comparison is also reversed to match Go style.

Signed-off-by: Stephen J Day <stephen.day@docker.com>
2016-08-26 11:08:59 -07:00
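A sketch of the separator-first comparison the commit above describes; the name and details are illustrative, not the exact implementation:

```go
package storage

// lessPath reports whether path a sorts before path b. The separator is
// swapped inline for the zero value, so it sorts before every other byte,
// which yields component-wise ordering without splitting the paths or
// allocating.
func lessPath(a, b string) bool {
	for i := 0; i < len(a) && i < len(b); i++ {
		ca, cb := a[i], b[i]
		if ca == '/' {
			ca = 0 // separators sort first
		}
		if cb == '/' {
			cb = 0
		}
		if ca != cb {
			return ca < cb
		}
	}
	return len(a) < len(b)
}
```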
Edgar Lee
fdc51bb1f2 Stop ErrFinishedWalk from escaping from Repositories walk
Signed-off-by: Edgar Lee <edgar.lee@docker.com>
2016-08-26 11:08:41 -07:00
Sebastien Coavoux
0567fa3c2a Fix: Compare paths properly when listing repositories in catalog. #1854
Signed-off-by: Sebastien Coavoux <alignak@pyseb.cx>
2016-08-26 11:07:41 -07:00
Edgar Lee
22a59f2512 Refactor errVal named parameter for catalog repositories to err
Signed-off-by: Edgar Lee <edgar.lee@docker.com>
2016-08-26 11:07:32 -07:00
Edgar Lee
734caef0f4 Fix storage drivers dropping non-EOF errors when listing repositories
This fixes errors other than io.EOF being dropped when a storage driver
lists repositories. For example, the filesystem driver may point to a missing
directory and return an error, which was then silently dropped.

Signed-off-by: Edgar Lee <edgar.lee@docker.com>
2016-08-26 11:07:24 -07:00
69 changed files with 1246 additions and 4853 deletions

@@ -16,3 +16,4 @@ davidli <wenquan.li@hp.com> davidli <wenquan.li@hpe.com>
Omer Cohen <git@omer.io> Omer Cohen <git@omerc.net>
Eric Yang <windfarer@gmail.com> Eric Yang <Windfarer@users.noreply.github.com>
Nikita Tarasov <nikita@mygento.ru> Nikita <luckyraul@users.noreply.github.com>
+Misty Stanley-Jones <misty@docker.com> Misty Stanley-Jones <misty@apache.org>

AUTHORS

@@ -16,9 +16,9 @@ Andrew T Nguyen <andrew.nguyen@docker.com>
Andrey Kostov <kostov.andrey@gmail.com>
Andy Goldstein <agoldste@redhat.com>
Anis Elleuch <vadmeste@gmail.com>
-Anton Tiurin <noxiouz@yandex.ru>
Antonio Mercado <amercado@thinknode.com>
Antonio Murdaca <runcom@redhat.com>
+Anton Tiurin <noxiouz@yandex.ru>
Arien Holthuizen <aholthuizen@schubergphilis.com>
Arnaud Porterie <arnaud.porterie@docker.com>
Arthur Baars <arthur@semmle.com>
@@ -31,6 +31,8 @@ bin liu <liubin0329@gmail.com>
Brian Bland <brian.bland@docker.com>
burnettk <burnettk@gmail.com>
Carson A <ca@carsonoid.net>
+Cezar Sa Espinola <cezarsa@gmail.com>
+Charles Smith <charles.smith@docker.com>
Chris Dillon <squarism@gmail.com>
cyli <cyli@twistedmatrix.com>
Daisuke Fujita <dtanshi45@gmail.com>
@@ -39,15 +41,16 @@ Darren Shepherd <darren@rancher.com>
Dave Trombley <dave.trombley@gmail.com>
Dave Tucker <dt@docker.com>
David Lawrence <david.lawrence@docker.com>
-davidli <wenquan.li@hp.com>
David Verhasselt <david@crowdway.com>
David Xia <dxia@spotify.com>
+davidli <wenquan.li@hp.com>
Dejan Golja <dejan@golja.org>
Derek McGowan <derek@mcgstyle.net>
Diogo Mónica <diogo.monica@gmail.com>
DJ Enriquez <dj.enriquez@infospace.com>
Donald Huang <don.hcd@gmail.com>
Doug Davis <dug@us.ibm.com>
+Edgar Lee <edgar.lee@docker.com>
Eric Yang <windfarer@gmail.com>
Fabio Huser <fabio@fh1.ch>
farmerworking <farmerworking@gmail.com>
@@ -58,9 +61,9 @@ gabriell nascimento <gabriell@bluesoft.com.br>
Gleb Schukin <gschukin@ptsecurity.com>
harche <p.harshal@gmail.com>
Henri Gomez <henri.gomez@gmail.com>
-Hu Keping <hukeping@huawei.com>
Hua Wang <wanghua.humble@gmail.com>
Hu Keping <hukeping@huawei.com>
+HuKeping <hukeping@huawei.com>
Ian Babrou <ibobrik@gmail.com>
igayoso <igayoso@gmail.com>
Jack Griffin <jackpg14@gmail.com>
@@ -70,20 +73,20 @@ Jessie Frazelle <jessie@docker.com>
jhaohai <jhaohai@foxmail.com>
Jianqing Wang <tsing@jianqing.org>
John Starks <jostarks@microsoft.com>
-Jonathan Boulle <jonathanboulle@gmail.com>
Jon Johnson <jonjohnson@google.com>
Jon Poler <jonathan.poler@apcera.com>
+Jonathan Boulle <jonathanboulle@gmail.com>
Jordan Liggitt <jliggitt@redhat.com>
Josh Hawn <josh.hawn@docker.com>
Julien Fernandez <julien.fernandez@gmail.com>
-Ke Xu <leonhartx.k@gmail.com>
Keerthan Mala <kmala@engineyard.com>
Kelsey Hightower <kelsey.hightower@gmail.com>
Kenneth Lim <kennethlimcp@gmail.com>
Kenny Leung <kleung@google.com>
-Li Yi <denverdino@gmail.com>
+Ke Xu <leonhartx.k@gmail.com>
-Liu Hua <sdu.liu@huawei.com>
liuchang0812 <liuchang0812@gmail.com>
+Liu Hua <sdu.liu@huawei.com>
+Li Yi <denverdino@gmail.com>
Louis Kottmann <louis.kottmann@gmail.com>
Luke Carpenter <x@rubynerd.net>
Mary Anthony <mary@docker.com>
@@ -94,6 +97,7 @@ Matt Robenolt <matt@ydekproductions.com>
Michael Prokop <mika@grml.org>
Michal Minar <miminar@redhat.com>
Miquel Sabaté <msabate@suse.com>
+Misty Stanley-Jones <misty@docker.com>
Morgan Bauer <mbauer@us.ibm.com>
moxiegirl <mary@docker.com>
Nathan Sullivan <nathan@nightsys.net>
@@ -113,6 +117,7 @@ Rodolfo Carvalho <rhcarvalho@gmail.com>
Rusty Conover <rusty@luckydinosaur.com>
Sean Boran <Boran@users.noreply.github.com>
Sebastiaan van Stijn <github@gone.nl>
+Sebastien Coavoux <s.coavoux@free.fr>
Serge Dubrouski <sergeyfd@gmail.com>
Sharif Nassar <sharif@mrwacky.com>
Shawn Falkner-Horine <dreadpirateshawn@gmail.com>
@@ -134,11 +139,12 @@ Tonis Tiigi <tonistiigi@gmail.com>
Tony Holdstock-Brown <tony@docker.com>
Trevor Pounds <trevor.pounds@gmail.com>
Troels Thomsen <troels@thomsen.io>
+Victoria Bialas <victoria.bialas@docker.com>
Vincent Batts <vbatts@redhat.com>
Vincent Demeester <vincent@sbr.pm>
Vincent Giersch <vincent.giersch@ovh.net>
-W. Trevor King <wking@tremily.us>
weiyuan.yl <weiyuan.yl@alibaba-inc.com>
+W. Trevor King <wking@tremily.us>
xg.song <xg.song@venusource.com>
xiekeyang <xiekeyang@huawei.com>
Yann ROBERT <yann.robert@anantaplex.fr>

@@ -13,6 +13,7 @@ import (
	_ "github.com/docker/distribution/registry/storage/driver/gcs"
	_ "github.com/docker/distribution/registry/storage/driver/inmemory"
	_ "github.com/docker/distribution/registry/storage/driver/middleware/cloudfront"
+	_ "github.com/docker/distribution/registry/storage/driver/middleware/redirect"
	_ "github.com/docker/distribution/registry/storage/driver/oss"
	_ "github.com/docker/distribution/registry/storage/driver/s3-aws"
	_ "github.com/docker/distribution/registry/storage/driver/s3-goamz"

@@ -176,6 +176,18 @@ func filterAccessList(ctx context.Context, scope string, requestedAccessList []a
	return grantedAccessList
}
+
+type acctSubject struct{}
+
+func (acctSubject) String() string { return "acctSubject" }
+
+type requestedAccess struct{}
+
+func (requestedAccess) String() string { return "requestedAccess" }
+
+type grantedAccess struct{}
+
+func (grantedAccess) String() string { return "grantedAccess" }
+
// getToken handles authenticating the request and authorizing access to the
// requested scopes.
func (ts *tokenServer) getToken(ctx context.Context, w http.ResponseWriter, r *http.Request) {
@@ -218,17 +230,17 @@ func (ts *tokenServer) getToken(ctx context.Context, w http.ResponseWriter, r *h
	username := context.GetStringValue(ctx, "auth.user.name")
-	ctx = context.WithValue(ctx, "acctSubject", username)
+	ctx = context.WithValue(ctx, acctSubject{}, username)
-	ctx = context.WithLogger(ctx, context.GetLogger(ctx, "acctSubject"))
+	ctx = context.WithLogger(ctx, context.GetLogger(ctx, acctSubject{}))
	context.GetLogger(ctx).Info("authenticated client")
-	ctx = context.WithValue(ctx, "requestedAccess", requestedAccessList)
+	ctx = context.WithValue(ctx, requestedAccess{}, requestedAccessList)
-	ctx = context.WithLogger(ctx, context.GetLogger(ctx, "requestedAccess"))
+	ctx = context.WithLogger(ctx, context.GetLogger(ctx, requestedAccess{}))
	grantedAccessList := filterAccessList(ctx, username, requestedAccessList)
-	ctx = context.WithValue(ctx, "grantedAccess", grantedAccessList)
+	ctx = context.WithValue(ctx, grantedAccess{}, grantedAccessList)
-	ctx = context.WithLogger(ctx, context.GetLogger(ctx, "grantedAccess"))
+	ctx = context.WithLogger(ctx, context.GetLogger(ctx, grantedAccess{}))
	token, err := ts.issuer.CreateJWT(username, service, grantedAccessList)
	if err != nil {
@@ -340,17 +352,17 @@ func (ts *tokenServer) postToken(ctx context.Context, w http.ResponseWriter, r *
		return
	}
-	ctx = context.WithValue(ctx, "acctSubject", subject)
+	ctx = context.WithValue(ctx, acctSubject{}, subject)
-	ctx = context.WithLogger(ctx, context.GetLogger(ctx, "acctSubject"))
+	ctx = context.WithLogger(ctx, context.GetLogger(ctx, acctSubject{}))
	context.GetLogger(ctx).Info("authenticated client")
-	ctx = context.WithValue(ctx, "requestedAccess", requestedAccessList)
+	ctx = context.WithValue(ctx, requestedAccess{}, requestedAccessList)
-	ctx = context.WithLogger(ctx, context.GetLogger(ctx, "requestedAccess"))
+	ctx = context.WithLogger(ctx, context.GetLogger(ctx, requestedAccess{}))
	grantedAccessList := filterAccessList(ctx, subject, requestedAccessList)
-	ctx = context.WithValue(ctx, "grantedAccess", grantedAccessList)
+	ctx = context.WithValue(ctx, grantedAccess{}, grantedAccessList)
-	ctx = context.WithLogger(ctx, context.GetLogger(ctx, "grantedAccess"))
+	ctx = context.WithLogger(ctx, context.GetLogger(ctx, grantedAccess{}))
	token, err := ts.issuer.CreateJWT(subject, service, grantedAccessList)
	if err != nil {
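The pattern adopted in the diff above, in isolation: an unexported struct type is a collision-proof context key, where a plain string key could clash with values set by other packages. A hedged sketch using the standard library's `context` (the registry code uses its own context package):

```go
package main

import (
	"context"
	"fmt"
)

// acctSubject is an unexported, zero-size key type; no other package can
// construct an identical key, unlike the string "acctSubject".
type acctSubject struct{}

func main() {
	ctx := context.WithValue(context.Background(), acctSubject{}, "alice")
	if subject, ok := ctx.Value(acctSubject{}).(string); ok {
		fmt.Println("authenticated subject:", subject)
	}
}
```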

@@ -1,9 +0,0 @@
FROM docs/base:oss
MAINTAINER Docker Docs <docs@docker.com>
ENV PROJECT=registry
# To get the git info for this repo
COPY . /src
RUN rm -rf /docs/content/$PROJECT/
COPY . /docs/content/$PROJECT/

@@ -1,38 +0,0 @@
.PHONY: all default docs docs-build docs-shell shell test
# to allow `make DOCSDIR=docs docs-shell` (to create a bind mount in docs)
DOCS_MOUNT := $(if $(DOCSDIR),-v $(CURDIR)/$(DOCSDIR):/$(DOCSDIR))
# to allow `make DOCSPORT=9000 docs`
DOCSPORT := 8000
# Get the IP ADDRESS
DOCKER_IP=$(shell python -c "import urlparse ; print urlparse.urlparse('$(DOCKER_HOST)').hostname or ''")
HUGO_BASE_URL=$(shell test -z "$(DOCKER_IP)" && echo localhost || echo "$(DOCKER_IP)")
HUGO_BIND_IP=0.0.0.0
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
GIT_BRANCH_CLEAN := $(shell echo $(GIT_BRANCH) | sed -e "s/[^[:alnum:]]/-/g")
DOCKER_DOCS_IMAGE := registry-docs$(if $(GIT_BRANCH_CLEAN),:$(GIT_BRANCH_CLEAN))
DOCKER_RUN_DOCS := docker run --rm -it $(DOCS_MOUNT) -e AWS_S3_BUCKET -e NOCACHE
# for some docs workarounds (see below in "docs-build" target)
GITCOMMIT := $(shell git rev-parse --short HEAD 2>/dev/null)
default: docs

docs: docs-build
	$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" hugo server --port=$(DOCSPORT) --baseUrl=$(HUGO_BASE_URL) --bind=$(HUGO_BIND_IP)

docs-draft: docs-build
	$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 -e DOCKERHOST "$(DOCKER_DOCS_IMAGE)" hugo server --buildDrafts="true" --port=$(DOCSPORT) --baseUrl=$(HUGO_BASE_URL) --bind=$(HUGO_BIND_IP)

docs-shell: docs-build
	$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 "$(DOCKER_DOCS_IMAGE)" bash

docs-build:
	docker build -t "$(DOCKER_DOCS_IMAGE)" .

test: docs-build
	$(DOCKER_RUN_DOCS) "$(DOCKER_DOCS_IMAGE)"

docs/README.md (new file)

@@ -0,0 +1,16 @@
# The docs have been moved!
The documentation for Registry has been merged into
[the general documentation repo](https://github.com/docker/docker.github.io).
Commit history has been preserved.
The docs for Registry are now here:
https://github.com/docker/docker.github.io/tree/master/registry
> Note: The definitive [./spec](spec/) directory and
[configuration.md](configuration.md) file will be maintained in this repository
and be refreshed periodically in
[the general documentation repo](https://github.com/docker/docker.github.io).
As always, the docs in the general repo remain open-source and we appreciate
your feedback and pull requests!

@@ -1,54 +0,0 @@
<!--[metadata]>
+++
draft = true
+++
<![end-metadata]-->
# Architecture
## Design
**TODO(stevvooe):** Discuss the architecture of the registry, internally and externally, in a few different deployment scenarios.
### Eventual Consistency
> **NOTE:** This section belongs somewhere, perhaps in a design document. We
> are leaving this here so the information is not lost.
Running the registry on eventually consistent backends has been part of the
design from the beginning. This section covers some of the approaches to
dealing with this reality.
There are a few classes of issues that we need to worry about when
implementing something on top of the storage drivers:
1. Read-After-Write consistency (see this [article on
s3](http://shlomoswidler.com/2009/12/read-after-write-consistency-in-amazon.html)).
2. [Write-Write Conflicts](http://en.wikipedia.org/wiki/Write%E2%80%93write_conflict).
In reality, the registry must worry about these kinds of errors when doing the
following:
1. Accepting data into a temporary upload file that may not yet have the latest
data block (read-after-write).
2. Moving uploaded data into its blob location (write-write race).
3. Modifying the "current" manifest for a given tag (write-write race).
4. A whole slew of operations around deletes (read-after-write, delete-write
races, garbage collection, etc.).
The backend path layout employs a few techniques to avoid these problems:
1. Large writes are done to private upload directories. This alleviates most
of the corruption potential under multiple writers by avoiding multiple
writers.
2. Constraints in storage driver implementations, such as support for writing
after the end of a file to extend it.
3. Digest verification to avoid data corruption.
4. Manifest files are stored by digest and cannot change.
5. All other non-content files (links, hashes, etc.) are written as an atomic
unit. Anything that requires additions and deletions is broken out into
separate "files". Last writer still wins.
Unfortunately, one must play this game when trying to build something like
this on top of eventually consistent storage systems. If we run into serious
problems, we can wrap the storage drivers in a shared consistency layer but
that would increase complexity and hinder registry cluster performance.

@@ -1,84 +0,0 @@
<!--[metadata]>
+++
title = "Compatibility"
description = "describes get by digest pitfall"
keywords = ["registry, manifest, images, tags, repository, distribution, digest"]
[menu.main]
parent="smn_registry_ref"
weight=9
+++
<![end-metadata]-->
# Registry Compatibility
## Synopsis
*If a manifest is pulled by _digest_ from a registry 2.3 with Docker Engine 1.9
and older, and the manifest was pushed with Docker Engine 1.10, a security check
will cause the Engine to receive a manifest it cannot use and the pull will fail.*
## Registry Manifest Support
Historically, the registry has supported a [single manifest type](./spec/manifest-v2-1.md)
known as _Schema 1_.
With the move toward multiple architecture images the distribution project
introduced two new manifest types: Schema 2 manifests and manifest lists. The
registry 2.3 supports all three manifest types and in order to be compatible
with older Docker engines will, in certain cases, do an on-the-fly
transformation of a manifest before serving the JSON in the response.
This conversion has some implications for pulling manifests by digest and this
document enumerates these implications.
## Content Addressable Storage (CAS)
Manifests are stored and retrieved in the registry by keying off a digest
representing a hash of the contents. One of the advantages provided by CAS is
security: if the contents are changed, then the digest will no longer match.
This prevents any modification of the manifest by a MITM attack or an untrusted
third party.
When a manifest is stored by the registry, this digest is returned in the HTTP
response headers and, if events are configured, delivered within the event. The
manifest can either be retrieved by the tag, or this digest.
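A sketch of the property described above: the digest is simply a hash of the manifest bytes, so any party can recompute and verify it (illustrative code, not the registry's implementation):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	manifest := []byte(`{"schemaVersion": 2}`) // stand-in manifest body
	digest := fmt.Sprintf("sha256:%x", sha256.Sum256(manifest))
	// Pulling by this digest must return bytes with the same hash; any
	// modification in storage or transit changes the digest and is detected.
	fmt.Println(digest)
}
```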
For registry versions 2.2.1 and below, the registry will always store and
serve _Schema 1_ manifests. The Docker Engine 1.10 will first
attempt to send a _Schema 2_ manifest, falling back to sending a
Schema 1 type manifest when it detects that the registry does not
support the new version.
## Registry v2.3
### Manifest Push with Docker 1.9 and Older
The Docker Engine will construct a _Schema 1_ manifest which the
registry will persist to disk.
When the manifest is pulled by digest or tag with any docker version, a
_Schema 1_ manifest will be returned.
### Manifest Push with Docker 1.10
The docker engine will construct a _Schema 2_ manifest which the
registry will persist to disk.
When the manifest is pulled by digest or tag with Docker Engine 1.10, a
_Schema 2_ manifest will be returned. The Docker Engine 1.10
understands the new manifest format.
When the manifest is pulled by *tag* with Docker Engine 1.9 and older, the
manifest is converted on-the-fly to _Schema 1_ and sent in the
response. The Docker Engine 1.9 is compatible with this older format.
*When the manifest is pulled by _digest_ with Docker Engine 1.9 and older, the
same rewriting process will not happen in the registry. If this were to happen
the digest would no longer match the hash of the manifest and would violate the
constraints of CAS.*
For this reason if a manifest is pulled by _digest_ from a registry 2.3 with Docker
Engine 1.9 and older, and the manifest was pushed with Docker Engine 1.10, a
security check will cause the Engine to receive a manifest it cannot use and the
pull will fail.

File diff suppressed because it is too large.

@@ -1,237 +0,0 @@
<!--[metadata]>
+++
title = "Deploying a registry server"
description = "Explains how to deploy a registry"
keywords = ["registry, on-prem, images, tags, repository, distribution, deployment"]
[menu.main]
parent="smn_registry"
weight=3
+++
<![end-metadata]-->
# Deploying a registry server
You need to [install Docker version 1.6.0 or newer](/engine/installation/index.md).
## Running on localhost
Start your registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
You can now use it with docker.
Get any image from the hub and tag it to point to your registry:
docker pull ubuntu && docker tag ubuntu localhost:5000/ubuntu
... then push it to your registry:
docker push localhost:5000/ubuntu
... then pull it back from your registry:
docker pull localhost:5000/ubuntu
To stop your registry, you would:
docker stop registry && docker rm -v registry
## Storage
By default, your registry data is persisted as a [docker volume](/engine/tutorials/dockervolumes.md) on the host filesystem. Properly understanding volumes is essential if you want to stick with a local filesystem storage.
Specifically, you might want to point your volume location to a specific place in order to more easily access your registry data. To do so you can:
    docker run -d -p 5000:5000 --restart=always --name registry \
      -v `pwd`/data:/var/lib/registry \
      registry:2
### Alternatives
You should usually consider using [another storage backend](./storage-drivers/index.md) instead of the local filesystem. Use the [storage configuration options](./configuration.md#storage) to configure an alternate storage backend.
Using one of these will allow you to more easily scale your registry, and leverage your storage redundancy and availability features.
## Running a domain registry
While running on `localhost` has its uses, most people want their registry to be more widely available. To do so, the Docker engine requires you to secure it using TLS, which is conceptually very similar to configuring your web server with SSL.
### Get a certificate
Assuming that you own the domain `myregistrydomain.com`, and that its DNS record points to the host where you are running your registry, you first need to get a certificate from a CA.
Create a `certs` directory:
mkdir -p certs
Then move and/or rename your crt file to: `certs/domain.crt`, and your key file to: `certs/domain.key`.
Make sure you stopped your registry from the previous steps, then start your registry again with TLS enabled:
    docker run -d -p 5000:5000 --restart=always --name registry \
      -v `pwd`/certs:/certs \
      -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
      -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
      registry:2
You should now be able to access your registry from another docker host:
docker pull ubuntu
docker tag ubuntu myregistrydomain.com:5000/ubuntu
docker push myregistrydomain.com:5000/ubuntu
docker pull myregistrydomain.com:5000/ubuntu
#### Gotcha
A certificate issuer may supply you with an *intermediate* certificate. In this case, you must combine your certificate with the intermediate's to form a *certificate bundle*. You can do this using the `cat` command:
cat domain.crt intermediate-certificates.pem > certs/domain.crt
### Let's Encrypt
The registry supports using Let's Encrypt to automatically obtain a browser-trusted certificate. For more
information on Let's Encrypt, see [https://letsencrypt.org/how-it-works/](https://letsencrypt.org/how-it-works/) and the relevant section of the [registry configuration](configuration.md#letsencrypt).
### Alternatives
While rarely advisable, you may want to use self-signed certificates instead, or use your registry in an insecure fashion. You will find instructions [here](insecure.md).
## Load Balancing Considerations
One may want to use a load balancer to distribute load, terminate TLS or
provide high availability. While a full load balancing setup is outside the
scope of this document, there are a few considerations that can make the process
smoother.
The most important aspect is that a load balanced cluster of registries must
share the same resources. For the current version of the registry, this means
the following must be the same:
- Storage Driver
- HTTP Secret
- Redis Cache (if configured)
If any of these are different, the registry will have trouble serving requests.
As an example, if you're using the filesystem driver, all registry instances
must have access to the same filesystem root, which means they should be in
the same machine. For other drivers, such as s3 or azure, they should be
accessing the same resource, and will likely share an identical configuration.
The _HTTP Secret_ coordinates uploads, so also must be the same across
instances. Configuring different redis instances will work (at the time
of writing), but will not be optimal if the instances are not shared, causing
more requests to be directed to the backend.
#### Important/Required HTTP-Headers
Getting the headers correct is very important. For all responses to any
request under the "/v2/" url space, the `Docker-Distribution-API-Version`
header should be set to the value "registry/2.0", even for a 4xx response.
This header allows the docker engine to quickly resolve authentication realms
and fall back to version 1 registries, if necessary. Confirming this is set up
correctly can help avoid problems with fallback.
In the same train of thought, you must make sure you are properly sending the
`X-Forwarded-Proto`, `X-Forwarded-For` and `Host` headers with their "client-side"
values. Failure to do so usually causes the registry to issue redirects to internal
hostnames or to downgrade from https to http.
A properly secured registry should return 401 when the "/v2/" endpoint is hit
without credentials. The response should include a `WWW-Authenticate`
challenge, providing guidance on how to authenticate, such as with basic auth
or a token service. If the load balancer has health checks, it is recommended
to configure it to consider a 401 response as healthy and any other as down.
This will secure your registry by ensuring that configuration problems with
authentication don't accidentally expose an unprotected registry. If you're
using a less sophisticated load balancer, such as Amazon's Elastic Load
Balancer, that doesn't allow one to change the healthy response code, health
checks can be directed at "/", which will always return a `200 OK` response.
## Restricting access
Except for registries running on secure local networks, registries should always implement access restrictions.
### Native basic auth
The simplest way to achieve access restriction is through basic authentication (this is very similar to other web servers' basic authentication mechanism).
> **Warning**: You **cannot** use authentication with an insecure registry. You have to [configure TLS first](#running-a-domain-registry) for this to work.
First create a password file with one entry for the user "testuser", with password "testpassword":
    mkdir auth
    docker run --entrypoint htpasswd registry:2 -Bbn testuser testpassword > auth/htpasswd
Make sure you stopped your registry from the previous step, then start it again:
    docker run -d -p 5000:5000 --restart=always --name registry \
      -v `pwd`/auth:/auth \
      -e "REGISTRY_AUTH=htpasswd" \
      -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
      -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
      -v `pwd`/certs:/certs \
      -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
      -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
      registry:2
You should now be able to:
docker login myregistrydomain.com:5000
And then push and pull images as an authenticated user.
#### Gotcha
Seeing X509 errors is usually a sign you are trying to use self-signed certificates, and failed to [configure your docker daemon properly](insecure.md).
### Alternatives
1. You may want to leverage more advanced basic auth implementations through a proxy design, in front of the registry. You will find examples of such patterns in the [recipes list](recipes/index.md).
2. Alternatively, the Registry also supports delegated authentication, redirecting users to a specific, trusted token server. That approach requires significantly more investment, and only makes sense if you want to fully configure ACLs and more control over the Registry integration into your global authorization and authentication systems.
You will find [background information here](spec/auth/token.md), and [configuration information here](configuration.md#auth).
Beware that you will have to implement your own authentication service for this to work, or leverage a third-party implementation.
## Managing with Compose
As your registry configuration grows more complex, dealing with it can quickly become tedious.
It's highly recommended to use [Docker Compose](/compose/index.md) to facilitate operating your registry.
Here is a simple `docker-compose.yml` example that condenses everything explained so far:
```
registry:
  restart: always
  image: registry:2
  ports:
    - 5000:5000
  environment:
    REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
    REGISTRY_HTTP_TLS_KEY: /certs/domain.key
    REGISTRY_AUTH: htpasswd
    REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
    REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
  volumes:
    - /path/data:/var/lib/registry
    - /path/certs:/certs
    - /path/auth:/auth
```
> **Warning**: replace `/path` with whatever directory holds your `certs` and `auth` folders from above.
You can then start your registry with a simple
docker-compose up -d
## Next
You will find more specific and advanced information in the following sections:
- [Configuration reference](configuration.md)
- [Working with notifications](notifications.md)
- [Advanced "recipes"](recipes/index.md)
- [Registry API](spec/api.md)
- [Storage driver model](storage-drivers/index.md)
- [Token authentication](spec/auth/token.md)

@@ -1,27 +0,0 @@
<!--[metadata]>
+++
title = "Deprecated Features"
description = "describes deprecated functionality"
keywords = ["registry, manifest, images, signatures, repository, distribution, digest"]
[menu.main]
parent="smn_registry_ref"
weight=8
+++
<![end-metadata]-->
# Docker Registry Deprecation
This document details functionality or components which are deprecated within
the registry.
### v2.5.0
The signature store has been removed from the registry. Since `v2.4.0` it has
been possible to configure the registry to generate manifest signatures rather
than load them from storage. In this version of the registry this becomes
the default behavior. Signatures which are attached to manifests on put are
not stored in the registry. This does not alter the functional behavior of
the registry.
Old signatures blobs can be removed from the registry storage by running the
garbage-collect subcommand.

@@ -1,137 +0,0 @@
<!--[metadata]>
+++
title = "Garbage Collection"
description = "High level discussion of garbage collection"
keywords = ["registry, garbage, images, tags, repository, distribution"]
[menu.main]
parent="smn_registry_ref"
weight=4
+++
<![end-metadata]-->
# Garbage Collection
As of v2.4.0 a garbage collector command is included within the registry binary.
This document describes what this command does and how and why it should be used.
## What is Garbage Collection?
From [wikipedia](https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)):
"In computer science, garbage collection (GC) is a form of automatic memory management. The
garbage collector, or just collector, attempts to reclaim garbage, or memory occupied by
objects that are no longer in use by the program."
In the context of the Docker registry, garbage collection is the process of
removing blobs from the filesystem which are no longer referenced by a
manifest. Blobs can include both layers and manifests.
## Why Garbage Collection?
Registry data can occupy considerable amounts of disk space and freeing up
this disk space is an oft-requested feature. Additionally for reasons of security it
can be desirable to ensure that certain layers no longer exist on the filesystem.
## Garbage Collection in the Registry
Filesystem layers are stored by their content address in the Registry. This
has many advantages, one of which is that data is stored once and referred to by manifests.
See [here](compatibility.md#content-addressable-storage-cas) for more details.
Layers are therefore shared amongst manifests; each manifest maintains a reference
to the layer. As long as a layer is referenced by one manifest, it cannot be garbage
collected.
Manifests and layers can be deleted with the registry API (refer to the API
documentation [here](spec/api.md#deleting-a-layer) and
[here](spec/api.md#deleting-an-image) for details). This API removes references
to the target and makes them eligible for garbage collection. It also makes them
unable to be read via the API.
If a layer is deleted, it will be removed from the filesystem when garbage collection
is run. If a manifest is deleted, the layers to which it refers will be removed from
the filesystem if no other manifest refers to them.
### Example
In this example manifest A references two layers: `a` and `b`. Manifest `B` references
layers `a` and `c`. In this state, nothing is eligible for garbage collection:
```
A -----> a <----- B
    \--> b        |
         c <------/
```
Manifest B is deleted via the API:
```
A -----> a     B
    \--> b
         c
```
In this state layer `c` no longer has a reference and is eligible for garbage
collection. Layer `a` had one reference removed but will not be garbage
collected as it is still referenced by manifest `A`. The blob representing
manifest `B` will also be eligible for garbage collection.
After garbage collection has been run, manifest `A` and its blobs remain.
```
A -----> a
    \--> b
```
## How Garbage Collection works
Garbage collection runs in two phases. First, in the 'mark' phase, the process
scans all the manifests in the registry. From these manifests, it constructs a
set of content address digests. This set is the 'mark set' and denotes the set
of blobs to *not* delete. Secondly, in the 'sweep' phase, the process scans all
the blobs and if a blob's content address digest is not in the mark set, the
process will delete it.
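The two phases amount to a set difference. A toy sketch, with hypothetical in-memory stand-ins for walking the registry's manifests and blobs:

```go
package main

import "fmt"

func main() {
	// Mark phase: every digest referenced by some manifest joins the mark set.
	manifestBlobs := [][]string{
		{"sha256:aaa", "sha256:bbb"}, // blobs referenced by manifest A
	}
	markSet := map[string]bool{}
	for _, blobs := range manifestBlobs {
		for _, d := range blobs {
			markSet[d] = true
		}
	}
	// Sweep phase: any stored blob absent from the mark set is deleted.
	stored := []string{"sha256:aaa", "sha256:bbb", "sha256:ccc"}
	for _, d := range stored {
		if !markSet[d] {
			fmt.Println("deleting", d) // deletes sha256:ccc
		}
	}
}
```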
> **NOTE** You should ensure that the registry is in read-only mode or not running at
> all. If you were to upload an image while garbage collection is running, there is the
> risk that the image's layers will be mistakenly deleted, leading to a corrupted image.
This type of garbage collection is known as stop-the-world garbage collection. In future
registry versions the intention is that garbage collection will be an automated background
action and this manual process will no longer apply.
## Running garbage collection

Garbage collection can be run as follows:
`bin/registry garbage-collect [--dry-run] /path/to/config.yml`
The garbage-collect command accepts a `--dry-run` parameter, which will print the progress
of the mark and sweep phases without removing any data. Running with a log level of `info`
will give a clear indication of what will and will not be deleted.
_Sample output from a dry run garbage collection with registry log level set to `info`_
```
hello-world
hello-world: marking manifest sha256:fea8895f450959fa676bcc1df0611ea93823a735a01205fd8622846041d0c7cf
hello-world: marking blob sha256:03f4658f8b782e12230c1783426bd3bacce651ce582a4ffb6fbbfa2079428ecb
hello-world: marking blob sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
hello-world: marking configuration sha256:690ed74de00f99a7d00a98a5ad855ac4febd66412be132438f9b8dbd300a937d
ubuntu
4 blobs marked, 5 blobs eligible for deletion
blob eligible for deletion: sha256:28e09fddaacbfc8a13f82871d9d66141a6ed9ca526cb9ed295ef545ab4559b81
blob eligible for deletion: sha256:7e15ce58ccb2181a8fced7709e9893206f0937cc9543bc0c8178ea1cf4d7e7b5
blob eligible for deletion: sha256:87192bdbe00f8f2a62527f36bb4c7c7f4eaf9307e4b87e8334fb6abec1765bcb
blob eligible for deletion: sha256:b549a9959a664038fc35c155a95742cf12297672ca0ae35735ec027d55bf4e97
blob eligible for deletion: sha256:f251d679a7c61455f06d793e43c06786d7766c88b8c24edf242b2c08e3c3f599
```

@@ -1,70 +0,0 @@
<!--[metadata]>
+++
draft = true
+++
<![end-metadata]-->
# Glossary
This page contains definitions for distribution related terms.
<dl>
<dt id="blob"><h4>Blob</h4></dt>
<dd>
<blockquote>A blob is any kind of content that is stored by a Registry under a content-addressable identifier (a "digest").</blockquote>
<p>
<a href="#layer">Layers</a> are a good example of "blobs".
</p>
</dd>
<dt id="image"><h4>Image</h4></dt>
<dd>
<blockquote>An image is a named set of immutable data from which a Docker container can be created.</blockquote>
<p>
An image is represented by a json file called a <a href="#manifest">manifest</a>, and is conceptually a set of <a href="#layer">layers</a>.
Image names indicate the location where they can be pulled from and pushed to, as they usually start with a <a href="#registry">registry</a> domain name and port.
</p>
</dd>
<dt id="layer"><h4>Layer</h4></dt>
<dd>
<blockquote>A layer is a tar archive bundling partial content from a filesystem.</blockquote>
<p>
Layers from an <a href="#image">image</a> are usually extracted in order on top of each other to make up a root filesystem from which containers run.
</p>
</dd>
<dt id="manifest"><h4>Manifest</h4></dt>
<dd><blockquote>A manifest is the JSON representation of an image.</blockquote></dd>
<dt id="namespace"><h4>Namespace</h4></dt>
<dd><blockquote>A namespace is a collection of repositories with a common name prefix.</blockquote>
<p>
The namespace with an empty prefix is considered the Global Namespace.
</p>
</dd>
<dt id="registry"><h4>Registry</h4></dt>
<dd><blockquote>A registry is a service that lets you store and deliver <a href="#image">images</a>.</blockquote>
</dd>
<dt id="registry"><h4>Repository</h4></dt>
<dd>
<blockquote>A repository is a set of data containing all versions of a given image.</blockquote>
</dd>
<dt id="scope"><h4>Scope</h4></dt>
<dd><blockquote>A scope is the portion of a namespace onto which a given authorization token is granted.</blockquote></dd>
<dt id="tag"><h4>Tag</h4></dt>
<dd><blockquote>A tag is conceptually a "version" of a <a href="#image">named image</a>.</blockquote>
<p>
Example: `docker pull myimage:latest` instructs docker to pull the image "myimage" in version "latest".
</p>
</dd>
</dl>

@@ -1,24 +0,0 @@
<!--[metadata]>
+++
title = "Getting help"
description = "Getting help with the Registry"
keywords = ["registry, on-prem, images, tags, repository, distribution, help, 101, TL;DR"]
[menu.main]
parent="smn_registry"
weight=9
+++
<![end-metadata]-->
# Getting help
If you need help, or just want to chat, you can reach us:
- on irc: `#docker-distribution` on freenode
- on the [mailing list](https://groups.google.com/a/dockerproject.org/forum/#!forum/distribution) (mail at <distribution@dockerproject.org>)
If you want to report a bug:
- be sure to first read about [how to contribute](https://github.com/docker/distribution/blob/master/CONTRIBUTING.md)
- you can then do so on the [GitHub project bugtracker](https://github.com/docker/distribution/issues)
You can also find out more about the Docker project's [Getting Help resources](/opensource/get-help.md).

File diff suppressed because one or more lines are too long

Binary file not shown.

Before: 37 KiB

File diff suppressed because one or more lines are too long

Before: 31 KiB

@@ -1,67 +0,0 @@
<!--[metadata]>
+++
title = "Registry Overview"
description = "High-level overview of the Registry"
keywords = ["registry, on-prem, images, tags, repository, distribution"]
aliases = ["/registry/overview/"]
[menu.main]
parent="smn_registry"
weight=1
+++
<![end-metadata]-->
# Docker Registry
## What it is
The Registry is a stateless, highly scalable server side application that stores and lets you distribute Docker images.
The Registry is open-source, under the permissive [Apache license](http://en.wikipedia.org/wiki/Apache_License).
## Why use it
You should use the Registry if you want to:
* tightly control where your images are being stored
* fully own your images distribution pipeline
* integrate image storage and distribution tightly into your in-house development workflow
## Alternatives
Users looking for a zero-maintenance, ready-to-go solution are encouraged to head over to the [Docker Hub](https://hub.docker.com), which provides a free-to-use, hosted Registry, plus additional features (organization accounts, automated builds, and more).
Users looking for a commercially supported version of the Registry should look into [Docker Trusted Registry](https://docs.docker.com/docker-trusted-registry/overview/).
## Requirements
The Registry is compatible with Docker engine **version 1.6.0 or higher**.
If you really need to work with older Docker versions, you should look into the [old python registry](https://github.com/docker/docker-registry).
## TL;DR
Start your registry
docker run -d -p 5000:5000 --name registry registry:2
Pull (or build) some image from the hub
docker pull ubuntu
Tag the image so that it points to your registry
docker tag ubuntu localhost:5000/myfirstimage
Push it
docker push localhost:5000/myfirstimage
Pull it back
docker pull localhost:5000/myfirstimage
Now stop your registry and remove all data
docker stop registry && docker rm -v registry
## Next
You should now read the [detailed introduction about the registry](introduction.md), or jump directly to [deployment instructions](deploying.md).

@@ -1,114 +0,0 @@
<!--[metadata]>
+++
title = "Testing an insecure registry"
description = "Deploying a Registry in an insecure fashion"
keywords = ["registry, on-prem, images, tags, repository, distribution, insecure"]
[menu.main]
parent="smn_registry_ref"
weight=5
+++
<![end-metadata]-->
# Insecure Registry
While it's highly recommended to secure your registry using a TLS certificate
issued by a known CA, you may alternatively decide to use self-signed
certificates, or even use your registry over plain http.
You have to understand the downsides of doing so, and the extra burden of
configuration.
## Deploying a plain HTTP registry
> **Warning**: it's not possible to use an insecure registry with basic authentication.
This basically tells Docker to entirely disregard security for your registry.
While it is relatively easy to configure the daemon this way, it is
**very** insecure: it exposes your registry to trivial MITM attacks. Only use this
solution for isolated testing or in a tightly controlled, air-gapped
environment.
1. Open the `/etc/default/docker` file or `/etc/sysconfig/docker` for editing.
Depending on your operating system, this file holds your Engine daemon start options.
2. Edit (or add) the `DOCKER_OPTS` line and add the `--insecure-registry` flag.
This flag takes the URL of your registry, for example:
`DOCKER_OPTS="--insecure-registry myregistrydomain.com:5000"`
3. Close and save the configuration file.
4. Restart your Docker daemon.
The command you use to restart the daemon depends on your operating system.
For example, on Ubuntu, this is usually the `service docker stop` and `service
docker start` commands.
5. Repeat this configuration on every Engine host that wants to access your registry.
## Using self-signed certificates
> **Warning**: using this along with basic authentication requires you to **also** trust the certificate into the OS cert store for some versions of docker (see below).
This is more secure than the insecure registry solution. You must configure every docker daemon that wants to access your registry.
1. Generate your own certificate:
    mkdir -p certs && openssl req \
      -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
      -x509 -days 365 -out certs/domain.crt
2. Be sure to use the name `myregistrydomain.com` as a CN.
3. Use the result to [start your registry with TLS enabled](./deploying.md#get-a-certificate)
4. Instruct every docker daemon to trust that certificate.
This is done by copying the `domain.crt` file to `/etc/docker/certs.d/myregistrydomain.com:5000/ca.crt`.
5. Don't forget to restart the Engine daemon.
## Troubleshooting insecure registry
This sections lists some common failures and how to recover from them.
### Failing...
Failing to configure the Engine daemon and trying to pull from a registry that is not using
TLS will result in the following message:
```
FATA[0000] Error response from daemon: v1 ping attempt failed with error:
Get https://myregistrydomain.com:5000/v1/_ping: tls: oversized record received with length 20527.
If this private registry supports only HTTP or HTTPS with an unknown CA certificate,please add
`--insecure-registry myregistrydomain.com:5000` to the daemon's arguments.
In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag;
simply place the CA certificate at /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt
```
### Docker still complains about the certificate when using authentication?
When using authentication, some versions of docker also require you to trust the certificate at the OS level. Usually, on Ubuntu this is done with:
```bash
$ cp certs/domain.crt /usr/local/share/ca-certificates/myregistrydomain.com.crt
$ update-ca-certificates
```
... and on Red Hat (and its derivatives) with:
```bash
$ cp certs/domain.crt /etc/pki/ca-trust/source/anchors/myregistrydomain.com.crt
$ update-ca-trust
```
... On some distributions, e.g. Oracle Linux 6, the Shared System Certificates feature needs to be manually enabled:
```bash
$ update-ca-trust enable
```
Now restart docker (`service docker stop && service docker start`, or any other way you use to restart docker).

@@ -1,55 +0,0 @@
<!--[metadata]>
+++
title = "Understanding the Registry"
description = "Explains what the Registry is, basic use cases and requirements"
keywords = ["registry, on-prem, images, tags, repository, distribution, use cases, requirements"]
[menu.main]
parent="smn_registry"
weight=2
+++
<![end-metadata]-->
# Understanding the Registry
A registry is a storage and content delivery system, holding named Docker images, available in different tagged versions.
> Example: the image `distribution/registry`, with tags `2.0` and `2.1`.
Users interact with a registry by using docker push and pull commands.
> Example: `docker pull registry-1.docker.io/distribution/registry:2.1`.
Storage itself is delegated to drivers. The default storage driver is the local posix filesystem, which is suitable for development or small deployments. Additional cloud-based storage drivers like S3, Microsoft Azure, OpenStack Swift and Aliyun OSS are also supported. People looking into using other storage backends may do so by writing their own driver implementing the [Storage API](storage-drivers/index.md).
Since securing access to your hosted images is paramount, the Registry natively supports TLS and basic authentication.
The Registry GitHub repository includes additional information about advanced authentication and authorization methods. Only very large or public deployments are expected to extend the Registry in this way.
Finally, the Registry ships with a robust [notification system](notifications.md), calling webhooks in response to activity, and both extensive logging and reporting, mostly useful for large installations that want to collect metrics.
## Understanding image naming
Image names as used in typical docker commands reflect their origin:
* `docker pull ubuntu` instructs docker to pull an image named `ubuntu` from the official Docker Hub. This is simply a shortcut for the longer `docker pull docker.io/library/ubuntu` command
* `docker pull myregistrydomain:port/foo/bar` instructs docker to contact the registry located at `myregistrydomain:port` to find the image `foo/bar`
You can find out more about the various Docker commands dealing with images in the [official Docker engine documentation](/engine/reference/commandline/cli.md).
## Use cases
Running your own Registry is a great solution to integrate with and complement your CI/CD system. In a typical workflow, a commit to your source revision control system would trigger a build on your CI system, which would then push a new image to your Registry if the build is successful. A notification from the Registry would then trigger a deployment on a staging environment, or notify other systems that a new image is available.
It's also an essential component if you want to quickly deploy a new image over a large cluster of machines.
Finally, it's the best way to distribute images inside an isolated network.
## Requirements
You absolutely need to be familiar with Docker, specifically with regard to pushing and pulling images. You must understand the difference between the daemon and the cli, and at least grasp basic concepts about networking.
Also, while just starting a registry is fairly easy, operating it in a production environment requires operational skills, just like any other service. You are expected to be familiar with systems availability and scalability, logging and log processing, systems monitoring, and security 101. Strong understanding of http and overall network communications, plus familiarity with golang are certainly useful as well for advanced operations or hacking.
## Next
Dive into [deploying your registry](deploying.md)

@@ -1,23 +0,0 @@
<!--[metadata]>
+++
title = "Docker Registry"
description = "High-level overview of the Registry"
keywords = ["registry, on-prem, images, tags, repository, distribution"]
type = "menu"
[menu.main]
identifier="smn_registry"
parent="mn_components"
+++
<![end-metadata]-->
# Overview of Docker Registry Documentation
The Docker Registry documentation includes the following topics:
* [Docker Registry Introduction](index.md)
* [Understanding the Registry](introduction.md)
* [Deploying a registry server](deploying.md)
* [Registry Configuration Reference](configuration.md)
* [Notifications](notifications.md)
* [Recipes](recipes/index.md)
* [Getting help](help.md)

@@ -1,30 +0,0 @@
<!--[metadata]>
+++
draft = true
+++
<![end-metadata]-->
# Migrating a 1.0 registry to 2.0
TODO: This needs to be revised in light of Olivier's work
A few thoughts here:
There was no "1.0". There was an implementation of the Registry API V1 but only a version 0.9 of the service was released.
The image formats are not compatible in any way. One must convert v1 images to v2 images using a docker client or other tool.
One can migrate images from one version to the other by pulling images from the old registry and pushing them to the v2 registry.
-----
The Docker Registry 2.0 is backward compatible with images created by the earlier specification. If you are migrating a private registry to version 2.0, you should use the following process:
1. Configure and test a 2.0 registry image in a sandbox environment.
2. Back up your production image storage.
Your production image storage should reside on a volume or storage backend.
Make sure you have a backup of its contents.
3. Stop your existing registry service.
4. Restart your registry with your tested 2.0 image.


@ -1,350 +0,0 @@
<!--[metadata]>
+++
title = "Working with notifications"
description = "Explains how to work with registry notifications"
keywords = ["registry, on-prem, images, tags, repository, distribution, notifications, advanced"]
[menu.main]
parent="smn_registry"
weight=5
+++
<![end-metadata]-->
# Notifications
The Registry supports sending webhook notifications in response to events
happening within the registry. Notifications are sent in response to manifest
pushes and pulls and layer pushes and pulls. These actions are serialized into
events. The events are queued into a registry-internal broadcast system which
queues and dispatches events to [_Endpoints_](#endpoints).
![](images/notifications.png)
## Endpoints
Notifications are sent to _endpoints_ via HTTP requests. Each configured
endpoint has isolated queues, retry configuration and HTTP targets within each
instance of a registry. When an action happens within the registry, it is
converted into an event which is dropped into an in-memory queue. When the
event reaches the end of the queue, an HTTP request is made to the endpoint
until the request succeeds. The events are sent serially to each endpoint but
order is not guaranteed.
## Configuration
To set up a registry instance to send notifications to endpoints, you must add
them to the configuration. A simple example follows:
```yaml
notifications:
  endpoints:
    - name: alistener
      url: https://mylistener.example.com/event
      headers:
        Authorization: [Bearer <your token, if needed>]
      timeout: 500ms
      threshold: 5
      backoff: 1s
```
The above configures the registry with an endpoint that sends events to
`https://mylistener.example.com/event`, with the header "Authorization: Bearer
<your token, if needed>". The request times out after 500 milliseconds. If
5 failures happen consecutively, the registry backs off for 1 second before
trying again.
For details on the fields, please see the [configuration documentation](configuration.md#notifications).
A properly configured endpoint should lead to a log message from the registry
upon startup:
```
INFO[0000] configuring endpoint alistener (https://mylistener.example.com/event), timeout=500ms, headers=map[Authorization:[Bearer <your token if needed>]] app.id=812bfeb2-62d6-43cf-b0c6-152f541618a3 environment=development service=registry
```
## Events
Events have a well-defined JSON structure and are sent as the body of
notification requests. One or more events are sent in a structure called an
envelope. Each event has a unique ID that can be used to identify incoming
requests, if required. Along with that, an _action_ is provided with a
_target_, identifying the object mutated during the event.
The fields available in an `event` are described below.
Field | Type | Description
----- | ----- | -------------
id | string | ID provides a unique identifier for the event.
timestamp | Time | Timestamp is the time at which the event occurred.
action | string | Action indicates what action encompasses the provided event.
target | distribution.Descriptor | Target uniquely describes the target of the event.
length | int | Length in bytes of content. Same as Size field in Descriptor.
repository | string | Repository identifies the named repository.
fromRepository | string | FromRepository identifies the named repository which a blob was mounted from, if appropriate.
url | string | URL provides a direct link to the content.
tag | string | Tag identifies a tag name in tag events.
request | [RequestRecord](https://godoc.org/github.com/docker/distribution/notifications#RequestRecord) | Request covers the request that generated the event.
actor | [ActorRecord](https://godoc.org/github.com/docker/distribution/notifications#ActorRecord) | Actor specifies the agent that initiated the event. For most situations, this could be from the authorization context of the request.
source | [SourceRecord](https://godoc.org/github.com/docker/distribution/notifications#SourceRecord) | Source identifies the registry node that generated the event. Put differently, while the actor "initiates" the event, the source "generates" it.
The following is an example of a JSON event, sent in response to the push of a
manifest:
```json
{
"events": [
{
"id": "320678d8-ca14-430f-8bb6-4ca139cd83f7",
"timestamp": "2016-03-09T14:44:26.402973972-08:00",
"action": "pull",
"target": {
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 708,
"digest": "sha256:fea8895f450959fa676bcc1df0611ea93823a735a01205fd8622846041d0c7cf",
"length": 708,
"repository": "hello-world",
"url": "http://192.168.100.227:5000/v2/hello-world/manifests/sha256:fea8895f450959fa676bcc1df0611ea93823a735a01205fd8622846041d0c7cf",
"tag": "latest"
},
"request": {
"id": "6df24a34-0959-4923-81ca-14f09767db19",
"addr": "192.168.64.11:42961",
"host": "192.168.100.227:5000",
"method": "GET",
"useragent": "curl/7.38.0"
},
"actor": {},
"source": {
"addr": "xtal.local:5000",
"instanceID": "a53db899-3b4b-4a62-a067-8dd013beaca4"
}
}
]
}
```
The target struct of events sent when manifests and blobs are deleted contains
a subset of the data contained in Get and Put events. Specifically, only the
digest and repository are sent.
```json
"target": {
"digest": "sha256:d89e1bee20d9cb344674e213b581f14fbd8e70274ecf9d10c514bab78a307845",
"repository": "library/test"
},
```
> __NOTE:__ As of version 2.1, the `length` field for event targets
> is being deprecated for the `size` field, bringing the target in line with
> common nomenclature. Both will continue to be set for the foreseeable
> future. Newer code should favor `size` but accept either.
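As a small illustration of that guidance, a consumer might normalize the two fields like this (a Go sketch; the field values are assumed to come from a decoded event target):
```go
// targetSize returns the content size of an event target, preferring the
// newer `size` field and falling back to the deprecated `length` field.
// A minimal sketch; the struct shape is assumed from the JSON examples above.
func targetSize(size, length int64) int64 {
	if size > 0 {
		return size
	}
	return length
}
```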
## Envelope
The envelope contains one or more events, with the following JSON structure:
```json
{
  "events": [ ... ]
}
```
While events may be sent in the same envelope, the set of events within that
envelope have no implied relationship. For example, the registry may choose to
group unrelated events and send them in the same envelope to reduce the total
number of requests.
The full package has the media type
"application/vnd.docker.distribution.events.v1+json", which will be set on the
request coming to an endpoint.
An example of a full event may look as follows:
```
POST /callback HTTP/1.1
Host: mylistener.example.com
Authorization: Bearer <your token, if needed>
Content-Type: application/vnd.docker.distribution.events.v1+json

{
"events": [
{
"id": "asdf-asdf-asdf-asdf-0",
"timestamp": "2006-01-02T15:04:05Z",
"action": "push",
"target": {
"mediaType": "application/vnd.docker.distribution.manifest.v1+json",
"length": 1,
"digest": "sha256:fea8895f450959fa676bcc1df0611ea93823a735a01205fd8622846041d0c7cf",
"repository": "library/test",
"url": "http://example.com/v2/library/test/manifests/sha256:c3b3692957d439ac1928219a83fac91e7bf96c153725526874673ae1f2023f8d5"
},
"request": {
"id": "asdfasdf",
"addr": "client.local",
"host": "registrycluster.local",
"method": "PUT",
"useragent": "test/0.1"
},
"actor": {
"name": "test-actor"
},
"source": {
"addr": "hostname.local:port"
}
},
{
"id": "asdf-asdf-asdf-asdf-1",
"timestamp": "2006-01-02T15:04:05Z",
"action": "push",
"target": {
"mediaType": "application/vnd.docker.container.image.rootfs.diff+x-gtar",
"length": 2,
"digest": "sha256:c3b3692957d439ac1928219a83fac91e7bf96c153725526874673ae1f2023f8d5",
"repository": "library/test",
"url": "http://example.com/v2/library/test/blobs/sha256:c3b3692957d439ac1928219a83fac91e7bf96c153725526874673ae1f2023f8d5"
},
"request": {
"id": "asdfasdf",
"addr": "client.local",
"host": "registrycluster.local",
"method": "PUT",
"useragent": "test/0.1"
},
"actor": {
"name": "test-actor"
},
"source": {
"addr": "hostname.local:port"
}
},
{
"id": "asdf-asdf-asdf-asdf-2",
"timestamp": "2006-01-02T15:04:05Z",
"action": "push",
"target": {
"mediaType": "application/vnd.docker.container.image.rootfs.diff+x-gtar",
"length": 3,
"digest": "sha256:c3b3692957d439ac1928219a83fac91e7bf96c153725526874673ae1f2023f8d5",
"repository": "library/test",
"url": "http://example.com/v2/library/test/blobs/sha256:c3b3692957d439ac1928219a83fac91e7bf96c153725526874673ae1f2023f8d5"
},
"request": {
"id": "asdfasdf",
"addr": "client.local",
"host": "registrycluster.local",
"method": "PUT",
"useragent": "test/0.1"
},
"actor": {
"name": "test-actor"
},
"source": {
"addr": "hostname.local:port"
}
}
]
}
```
## Responses
The registry is fairly accepting of the response codes from endpoints. If an
endpoint responds with any 2xx or 3xx response code (after following
redirects), the message will be considered delivered and discarded.
In turn, it is recommended that endpoints are accepting of incoming requests,
as well. While the format of event envelopes is standardized by media type,
any "pickiness" about validation may cause the queue to back up on the
registry.
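As an illustration, a minimal, tolerant listener might look like the following Go sketch. The struct fields are assumed from the JSON examples in this document, unknown fields are simply ignored by the decoder, and the `/callback` path and port `5003` are placeholders:
```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// event mirrors only the fields this listener cares about; any other
// fields in the envelope are ignored by the JSON decoder.
type event struct {
	ID     string `json:"id"`
	Action string `json:"action"`
	Target struct {
		Digest     string `json:"digest"`
		Repository string `json:"repository"`
		Tag        string `json:"tag"`
	} `json:"target"`
}

type envelope struct {
	Events []event `json:"events"`
}

func main() {
	http.HandleFunc("/callback", func(w http.ResponseWriter, r *http.Request) {
		var env envelope
		if err := json.NewDecoder(r.Body).Decode(&env); err != nil {
			// Log and still acknowledge: a strict 4xx here would make the
			// registry retry and back up its queue.
			log.Printf("decode error: %v", err)
		}
		for _, e := range env.Events {
			log.Printf("event %s: %s %s:%s (%s)",
				e.ID, e.Action, e.Target.Repository, e.Target.Tag, e.Target.Digest)
		}
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":5003", nil))
}
```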
## Monitoring
The state of the endpoints is reported via the debug/vars HTTP interface,
usually available at `http://localhost:5001/debug/vars`. Information such as
configuration and metrics is available by endpoint.
The following provides an example of a few endpoints that have experienced
several failures and have since recovered:
```json
"notifications":{
"endpoints":[
{
"name":"local-5003",
"url":"http://localhost:5003/callback",
"Headers":{
"Authorization":[
"Bearer \u003can example token\u003e"
]
},
"Timeout":1000000000,
"Threshold":10,
"Backoff":1000000000,
"Metrics":{
"Pending":76,
"Events":76,
"Successes":0,
"Failures":0,
"Errors":46,
"Statuses":{
}
}
},
{
"name":"local-8083",
"url":"http://localhost:8083/callback",
"Headers":null,
"Timeout":1000000000,
"Threshold":10,
"Backoff":1000000000,
"Metrics":{
"Pending":0,
"Events":76,
"Successes":76,
"Failures":0,
"Errors":28,
"Statuses":{
"202 Accepted":76
}
}
}
]
}
```
If using notifications as part of a larger application, it is _critical_ to
monitor the size ("Pending" above) of the endpoint queues. If failures or
queue sizes are increasing, it can indicate a larger problem.
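As a sketch of such monitoring, assuming the debug server listens on `localhost:5001` and returns the JSON layout shown above, a small poller might look like this (the threshold of 50 is an arbitrary placeholder):
```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// debugVars models just the notifications slice of the debug/vars output.
type debugVars struct {
	Notifications struct {
		Endpoints []struct {
			Name    string `json:"name"`
			Metrics struct {
				Pending int
				Events  int
			}
		} `json:"endpoints"`
	} `json:"notifications"`
}

func main() {
	resp, err := http.Get("http://localhost:5001/debug/vars")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var v debugVars
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		log.Fatal(err)
	}
	for _, ep := range v.Notifications.Endpoints {
		// Flag any endpoint whose queue is growing; 50 is a placeholder.
		if ep.Metrics.Pending > 50 {
			fmt.Printf("endpoint %s: %d pending of %d events\n",
				ep.Name, ep.Metrics.Pending, ep.Metrics.Events)
		}
	}
}
```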
The logs are also a valuable resource for monitoring problems. A failing
endpoint will lead to messages similar to the following:
```
ERRO[0340] retryingsink: error writing events: httpSink{http://localhost:5003/callback}: error posting: Post http://localhost:5003/callback: dial tcp 127.0.0.1:5003: connection refused, retrying
WARN[0340] httpSink{http://localhost:5003/callback} encountered too many errors, backing off
```
The above indicates that several errors have led to a backoff and the registry
will wait before retrying.
## Considerations
Currently, the queues are in-memory, so endpoints should be _reasonably
reliable_. They are designed to make a best effort to send the messages but if
an instance is lost, messages may be dropped. If an endpoint goes down, care
should be taken to ensure that the registry instance is not terminated before
the endpoint comes back up, or messages will be lost.
This can be mitigated by running endpoints in close proximity to the registry
instances. One could run an endpoint that pages events to disk and forwards
them later, to provide better durability; see the sketch below.
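A rough sketch of that page-to-disk idea follows; the spool directory, port, and the idea of a separate forwarder are illustrative assumptions, not a production design:
```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"path/filepath"
	"time"
)

const spoolDir = "/var/spool/registry-events" // illustrative location

func main() {
	if err := os.MkdirAll(spoolDir, 0700); err != nil {
		log.Fatal(err)
	}
	http.HandleFunc("/callback", func(w http.ResponseWriter, r *http.Request) {
		// Persist the envelope before acknowledging, so a crash on this
		// side does not lose events the registry considers delivered.
		name := filepath.Join(spoolDir, fmt.Sprintf("%d.json", time.Now().UnixNano()))
		f, err := os.Create(name)
		if err != nil {
			// A 5xx makes the registry retry rather than drop the event.
			http.Error(w, "spool failure", http.StatusInternalServerError)
			return
		}
		defer f.Close()
		if _, err := io.Copy(f, r.Body); err != nil {
			http.Error(w, "spool failure", http.StatusInternalServerError)
			return
		}
		// A separate forwarder (not shown) would replay spooled envelopes
		// to the real destination and delete them on success.
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":5003", nil))
}
```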
The notification system is designed around a series of interchangeable _sinks_
which can be wired up to achieve interesting behavior. If this system doesn't
provide acceptable guarantees, adding a transactional `Sink` to the registry
is a possibility, although it may have an effect on request service time.
Please see the
[godoc](http://godoc.org/github.com/docker/distribution/notifications#Sink)
for more information.


@ -1,215 +0,0 @@
<!--[metadata]>
+++
title = "Authenticating proxy with apache"
description = "Restricting access to your registry using an apache proxy"
keywords = ["registry, on-prem, images, tags, repository, distribution, authentication, proxy, apache, httpd, TLS, recipe, advanced"]
[menu.main]
parent="smn_recipes"
+++
<![end-metadata]-->
# Authenticating proxy with apache
## Use-case
People already relying on an apache proxy to authenticate their users to other services might want to leverage it and have Registry communications tunneled through the same pipeline.
Usually, that includes enterprise setups using LDAP/AD on the backend and an SSO mechanism fronting their internal HTTP portal.
### Alternatives
If you just want authentication for your registry, and are happy to maintain user access separately, you should really consider sticking with the native [basic auth registry feature](../deploying.md#native-basic-auth).
### Solution
With the method presented here, you implement basic authentication for docker engines in a reverse proxy that sits in front of your registry.
While we use a simple htpasswd file as an example, any other apache authentication backend should be fairly easy to implement once you are done with the example.
We also implement push restriction (to a limited user group) for the sake of the example. Again, you should modify this to fit your needs.
### Gotchas
While this model gives you the ability to use whatever authentication backend you want through the secondary authentication mechanism implemented inside your proxy, it also requires that you move TLS termination from the Registry to the proxy itself.
Furthermore, introducing an extra HTTP layer in your communication pipeline makes it more complex to deploy, maintain, and debug, and may create issues.
## Setting things up
Read again [the requirements](index.md#requirements).
Ready?
Run the following script:
```
mkdir -p auth
mkdir -p data
# This is the main apache configuration you will use
cat <<EOF > auth/httpd.conf
LoadModule headers_module modules/mod_headers.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authn_core_module modules/mod_authn_core.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule authz_core_module modules/mod_authz_core.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule access_compat_module modules/mod_access_compat.so
LoadModule log_config_module modules/mod_log_config.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule unixd_module modules/mod_unixd.so
<IfModule ssl_module>
SSLRandomSeed startup builtin
SSLRandomSeed connect builtin
</IfModule>
<IfModule unixd_module>
User daemon
Group daemon
</IfModule>
ServerAdmin you@example.com
ErrorLog /proc/self/fd/2
LogLevel warn
<IfModule log_config_module>
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
<IfModule logio_module>
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
</IfModule>
CustomLog /proc/self/fd/1 common
</IfModule>
ServerRoot "/usr/local/apache2"
Listen 5043
<Directory />
AllowOverride none
Require all denied
</Directory>
<VirtualHost *:5043>
ServerName myregistrydomain.com
SSLEngine on
SSLCertificateFile /usr/local/apache2/conf/domain.crt
SSLCertificateKeyFile /usr/local/apache2/conf/domain.key
## SSL settings recommendations from: https://raymii.org/s/tutorials/Strong_SSL_Security_On_Apache2.html
# Anti CRIME
SSLCompression off
# POODLE and other stuff
SSLProtocol all -SSLv2 -SSLv3 -TLSv1
# Secure cipher suites
SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
SSLHonorCipherOrder on
Header always set "Docker-Distribution-Api-Version" "registry/2.0"
Header onsuccess set "Docker-Distribution-Api-Version" "registry/2.0"
RequestHeader set X-Forwarded-Proto "https"
ProxyRequests off
ProxyPreserveHost on
# no proxy for /error/ (Apache HTTPd errors messages)
ProxyPass /error/ !
ProxyPass /v2 http://registry:5000/v2
ProxyPassReverse /v2 http://registry:5000/v2
<Location /v2>
Order deny,allow
Allow from all
AuthName "Registry Authentication"
AuthType basic
AuthUserFile "/usr/local/apache2/conf/httpd.htpasswd"
AuthGroupFile "/usr/local/apache2/conf/httpd.groups"
# Read access for authenticated users
<Limit GET HEAD>
Require valid-user
</Limit>
# Write access to docker-deployer only
<Limit POST PUT DELETE PATCH>
Require group pusher
</Limit>
</Location>
</VirtualHost>
EOF
# Now, create a password file for "testuser" and "testpassword"
docker run --entrypoint htpasswd httpd:2.4 -Bbn testuser testpassword > auth/httpd.htpasswd
# Create another one for "testuserpush" and "testpasswordpush"
docker run --entrypoint htpasswd httpd:2.4 -Bbn testuserpush testpasswordpush >> auth/httpd.htpasswd
# Create your group file
echo "pusher: testuserpush" > auth/httpd.groups
# Copy over your certificate files
cp domain.crt auth
cp domain.key auth
# Now create your compose file
cat <<EOF > docker-compose.yml
apache:
image: "httpd:2.4"
hostname: myregistrydomain.com
ports:
- 5043:5043
links:
- registry:registry
volumes:
- `pwd`/auth:/usr/local/apache2/conf
registry:
image: registry:2
ports:
- 127.0.0.1:5000:5000
volumes:
- `pwd`/data:/var/lib/registry
EOF
```
## Starting and stopping
Now, start your stack:
```
docker-compose up -d
```
Log in with a push-authorized user (using `testuserpush` and `testpasswordpush`), then tag and push your first image:
```
docker login myregistrydomain.com:5043
docker tag ubuntu myregistrydomain.com:5043/test
docker push myregistrydomain.com:5043/test
```
Now, log in with a pull-only user (using `testuser` and `testpassword`), then pull the image back:
```
docker login myregistrydomain.com:5043
docker pull myregistrydomain.com:5043/test
```
Verify that the pull-only user can NOT push:
```
docker push myregistrydomain.com:5043/test
```


@ -1,37 +0,0 @@
<!--[metadata]>
+++
title = "Recipes Overview"
description = "Fun stuff to do with your registry"
keywords = ["registry, on-prem, images, tags, repository, distribution, recipes, advanced"]
[menu.main]
parent="smn_recipes"
weight=-10
+++
<![end-metadata]-->
# Recipes
Here you will find a list of "recipes": end-to-end scenarios for exotic or otherwise advanced use cases.
Most users are not expected to need these.
## Requirements
You should have followed entirely the basic [deployment guide](../deploying.md).
If you have not, please take the time to do so.
At this point, it's assumed that:
* you understand Docker security requirements, and how to configure your docker engines properly
* you have installed Docker Compose
* you have a certificate from a known CA (HIGHLY recommended over self-signed certificates)
* inside the current directory, you have an X509 `domain.crt` and `domain.key`, for the CN `myregistrydomain.com`
* you have stopped and removed any previously running registry (typically `docker stop registry && docker rm -v registry`)
## The List
* [using Apache as an authenticating proxy](apache.md)
* [using Nginx as an authenticating proxy](nginx.md)
* [running a Registry on OS X](osx-setup-guide.md)
* [mirror the Docker Hub](mirror.md)


@ -1,21 +0,0 @@
<!--[metadata]>
+++
title = "Recipes"
description = "Registry Recipes"
keywords = ["registry, on-prem, images, tags, repository, distribution"]
type = "menu"
[menu.main]
identifier="smn_recipes"
parent="smn_registry"
weight=6
+++
<![end-metadata]-->
# Recipes
## The List
* [using Apache as an authenticating proxy](apache.md)
* [using Nginx as an authenticating proxy](nginx.md)
* [running a Registry on OS X](osx-setup-guide.md)
* [mirror the Docker Hub](mirror.md)


@ -1,74 +0,0 @@
<!--[metadata]>
+++
title = "Mirroring Docker Hub"
description = "Setting-up a local mirror for Docker Hub images"
keywords = ["registry, on-prem, images, tags, repository, distribution, mirror, Hub, recipe, advanced"]
[menu.main]
parent="smn_recipes"
+++
<![end-metadata]-->
# Registry as a pull through cache
## Use-case
If you have multiple instances of Docker running in your environment (e.g., multiple physical or virtual machines, all running the Docker daemon), each time one of them requires an image that it doesn't have, it will go out to the internet and fetch it from the public Docker registry. By running a local registry mirror, you can keep most of the redundant image fetch traffic on your local network.
### Alternatives
Alternatively, if the set of images you are using is well delimited, you can simply pull them manually and push them to a simple, local, private registry.
Furthermore, if your images are all built in-house, not using the Hub at all and relying entirely on your local registry is the simplest scenario.
### Gotcha
It's currently not possible to mirror another private registry. Only the central Hub can be mirrored.
### Solution
The Registry can be configured as a pull through cache. In this mode a Registry responds to all normal docker pull requests but stores all content locally.
## How does it work?
The first time you request an image from your local registry mirror, it pulls the image from the public Docker registry and stores it locally before handing it back to you. On subsequent requests, the local registry mirror is able to serve the image from its own storage.
### What if the content changes on the Hub?
When a pull is attempted with a tag, the Registry checks the remote to determine whether it has the latest version of the requested content. If it doesn't, it fetches the latest content and caches it.
### What about my disk?
In environments with high churn rates, stale data can build up in the cache. When running as a pull through cache the Registry will periodically remove old content to save disk space. Subsequent requests for removed content will cause a remote fetch and local re-caching.
To ensure best performance and guarantee correctness the Registry cache should be configured to use the `filesystem` driver for storage.
## Running a Registry as a pull through cache
The easiest way to run a registry as a pull through cache is to run the official Registry image.
Multiple registry caches can be deployed over the same back-end. A single registry cache will ensure that concurrent requests do not pull duplicate data, but this property will not hold true for a registry cache cluster.
### Configuring the cache
To configure a Registry to run as a pull through cache, you must add a `proxy` section to the config file.
To access private images on the Docker Hub, a username and password can be supplied.
```yaml
proxy:
  remoteurl: https://registry-1.docker.io
  username: [username]
  password: [password]
```
> **Warning**: if you specify a username and password, it's very important to understand that private resources that this user has access to on the Hub will be made available on your mirror. It is thus paramount that you secure your mirror by implementing authentication if you expect these resources to stay private!
### Configuring the Docker daemon
You will need to pass the `--registry-mirror` option to your Docker daemon on startup:
```
docker daemon --registry-mirror=https://<my-docker-mirror-host>
```
For example, if your mirror is serving on `https://10.0.0.2:5000`, you would run:
```
docker daemon --registry-mirror=https://10.0.0.2:5000
```
NOTE: Depending on your local host setup, you may be able to add the `--registry-mirror` option to the `DOCKER_OPTS` variable in `/etc/default/docker`.


@ -1,190 +0,0 @@
<!--[metadata]>
+++
title = "Authenticating proxy with nginx"
description = "Restricting access to your registry using a nginx proxy"
keywords = ["registry, on-prem, images, tags, repository, distribution, nginx, proxy, authentication, TLS, recipe, advanced"]
[menu.main]
parent="smn_recipes"
+++
<![end-metadata]-->
# Authenticating proxy with nginx
## Use-case
People already relying on an nginx proxy to authenticate their users to other services might want to leverage it and have Registry communications tunneled through the same pipeline.
Usually, that includes enterprise setups using LDAP/AD on the backend and an SSO mechanism fronting their internal HTTP portal.
### Alternatives
If you just want authentication for your registry, and are happy to maintain user access separately, you should really consider sticking with the native [basic auth registry feature](../deploying.md#native-basic-auth).
### Solution
With the method presented here, you implement basic authentication for docker engines in a reverse proxy that sits in front of your registry.
While we use a simple htpasswd file as an example, any other nginx authentication backend should be fairly easy to implement once you are done with the example.
We also implement push restriction (to a limited user group) for the sake of the example. Again, you should modify this to fit your needs.
### Gotchas
While this model gives you the ability to use whatever authentication backend you want through the secondary authentication mechanism implemented inside your proxy, it also requires that you move TLS termination from the Registry to the proxy itself.
Furthermore, introducing an extra HTTP layer in your communication pipeline makes it more complex to deploy, maintain, and debug, and may create issues. Make sure the extra complexity is required.
For instance, Amazon's Elastic Load Balancer (ELB) in HTTPS mode already sets the following client headers:
```
X-Real-IP
X-Forwarded-For
X-Forwarded-Proto
```
So if you have an nginx instance sitting behind it, you should remove these lines from the example config below:
```
X-Real-IP $remote_addr; # pass on real client's IP
X-Forwarded-For $proxy_add_x_forwarded_for;
X-Forwarded-Proto $scheme;
```
Otherwise nginx will reset the ELB's values, and the requests will not be routed properly. For more information, see [#970](https://github.com/docker/distribution/issues/970).
## Setting things up
Read again [the requirements](index.md#requirements).
Ready?
Create the required directories
```
mkdir -p auth
mkdir -p data
```
Create the main nginx configuration you will use.
```
cat <<EOF > auth/nginx.conf
events {
worker_connections 1024;
}
http {
upstream docker-registry {
server registry:5000;
}
## Set a variable to help us decide if we need to add the
## 'Docker-Distribution-Api-Version' header.
## The registry always sets this header.
## In the case of nginx performing auth, the header will be unset
## since nginx is auth-ing before proxying.
map \$upstream_http_docker_distribution_api_version \$docker_distribution_api_version {
'registry/2.0' '';
default registry/2.0;
}
server {
listen 443 ssl;
server_name myregistrydomain.com;
# SSL
ssl_certificate /etc/nginx/conf.d/domain.crt;
ssl_certificate_key /etc/nginx/conf.d/domain.key;
# Recommendations from https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
# disable any limits to avoid HTTP 413 for large image uploads
client_max_body_size 0;
# required to avoid HTTP 411: see Issue #1486 (https://github.com/docker/docker/issues/1486)
chunked_transfer_encoding on;
location /v2/ {
# Do not allow connections from docker 1.5 and earlier
# docker pre-1.6.0 did not properly set the user agent on ping, catch "Go *" user agents
if (\$http_user_agent ~ "^(docker\/1\.(3|4|5(?!\.[0-9]-dev))|Go ).*\$" ) {
return 404;
}
# To add basic authentication to v2 use auth_basic setting.
auth_basic "Registry realm";
auth_basic_user_file /etc/nginx/conf.d/nginx.htpasswd;
## If $docker_distribution_api_version is empty, the header will not be added.
## See the map directive above where this variable is defined.
add_header 'Docker-Distribution-Api-Version' \$docker_distribution_api_version always;
proxy_pass http://docker-registry;
proxy_set_header Host \$http_host; # required for docker client's sake
proxy_set_header X-Real-IP \$remote_addr; # pass on real client's IP
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
proxy_read_timeout 900;
}
}
}
EOF
```
Now create a password file for "testuser" and "testpassword"
```
docker run --rm --entrypoint htpasswd registry:2 -bn testuser testpassword > auth/nginx.htpasswd
```
Copy over your certificate files
```
cp domain.crt auth
cp domain.key auth
```
Now create your compose file
```
cat <<EOF > docker-compose.yml
nginx:
image: "nginx:1.9"
ports:
- 5043:443
links:
- registry:registry
volumes:
- ./auth:/etc/nginx/conf.d
- ./auth/nginx.conf:/etc/nginx/nginx.conf:ro
registry:
image: registry:2
ports:
- 127.0.0.1:5000:5000
volumes:
- `pwd`/data:/var/lib/registry
EOF
```
## Starting and stopping
Now, start your stack:
```
docker-compose up -d
```
Log in with the "push" authorized user (using `testuser` and `testpassword`), then tag, push, and pull your first image:
```
docker login -u=testuser -p=testpassword -e=root@example.ch myregistrydomain.com:5043
docker tag ubuntu myregistrydomain.com:5043/test
docker push myregistrydomain.com:5043/test
docker pull myregistrydomain.com:5043/test
```


@ -1,81 +0,0 @@
<!--[metadata]>
+++
title = "Running on OS X"
description = "Explains how to run a registry on OS X"
keywords = ["registry, on-prem, images, tags, repository, distribution, OS X, recipe, advanced"]
[menu.main]
parent="smn_recipes"
+++
<![end-metadata]-->
# OS X Setup Guide
## Use-case
This is useful if you intend to run a registry server natively on OS X.
### Alternatives
You can start a VM on OS X, and deploy your registry normally as a container using Docker inside that VM.
The simplest way to get there is traditionally to use the [docker Toolbox](https://www.docker.com/toolbox), or [docker-machine](/machine/index.md), which usually relies on the [boot2docker](http://boot2docker.io/) ISO inside a VirtualBox VM.
### Solution
Using the method described here, you compile and install your own registry from the git repository and run it as an OS X agent.
### Gotchas
Running production services on OS X is out of scope for this document. Be sure you understand these aspects well before going to production with such a setup.
## Set up golang on your machine
If you already have a working golang environment, safely skip to the next section.
If you don't, the TL;DR is:
```
bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)
source ~/.gvm/scripts/gvm
gvm install go1.4.2
gvm use go1.4.2
```
If you want to understand, you should read [How to Write Go Code](https://golang.org/doc/code.html).
## Check out the Docker Distribution source tree
```
mkdir -p $GOPATH/src/github.com/docker
git clone https://github.com/docker/distribution.git $GOPATH/src/github.com/docker/distribution
cd $GOPATH/src/github.com/docker/distribution
```
## Build the binary
```
GOPATH=$(pwd)/Godeps/_workspace:$GOPATH make binaries
sudo cp bin/registry /usr/local/libexec/registry
```
## Setup
Copy the registry configuration file into place:
```
mkdir /Users/Shared/Registry
cp docs/osx/config.yml /Users/Shared/Registry/config.yml
```
## Running the Docker Registry under launchd
Copy the Docker registry plist into place:
```
plutil -lint docs/osx/com.docker.registry.plist
cp docs/osx/com.docker.registry.plist ~/Library/LaunchAgents/
chmod 644 ~/Library/LaunchAgents/com.docker.registry.plist
```
Start the Docker registry:
```
launchctl load ~/Library/LaunchAgents/com.docker.registry.plist
```
### Restarting the docker registry service
```
launchctl stop com.docker.registry
launchctl start com.docker.registry
```
### Unloading the docker registry service
```
launchctl unload ~/Library/LaunchAgents/com.docker.registry.plist
```


@ -1,42 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.docker.registry</string>
<key>KeepAlive</key>
<true/>
<key>StandardErrorPath</key>
<string>/Users/Shared/Registry/registry.log</string>
<key>StandardOutPath</key>
<string>/Users/Shared/Registry/registry.log</string>
<key>Program</key>
<string>/usr/local/libexec/registry</string>
<key>ProgramArguments</key>
<array>
<string>/usr/local/libexec/registry</string>
<string>/Users/Shared/Registry/config.yml</string>
</array>
<key>Sockets</key>
<dict>
<key>http-listen-address</key>
<dict>
<key>SockServiceName</key>
<string>5000</string>
<key>SockType</key>
<string>dgram</string>
<key>SockFamily</key>
<string>IPv4</string>
</dict>
<key>http-debug-address</key>
<dict>
<key>SockServiceName</key>
<string>5001</string>
<key>SockType</key>
<string>dgram</string>
<key>SockFamily</key>
<string>IPv4</string>
</dict>
</dict>
</dict>
</plist>


@ -1,16 +0,0 @@
version: 0.1
log:
  level: info
  fields:
    service: registry
    environment: macbook-air
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /Users/Shared/Registry
http:
  addr: 0.0.0.0:5000
  secret: mytokensecret
  debug:
    addr: localhost:5001


@ -1,14 +1,8 @@
-<!--[metadata]>
-+++
-title = "HTTP API V2"
-description = "Specification for the Registry API."
-keywords = ["registry, on-prem, images, tags, repository, distribution, api, advanced"]
-[menu.main]
-parent="smn_registry_ref"
-+++
-<![end-metadata]-->
+---
+title: "Docker Registry HTTP API V2"
+description: "Specification for the Registry API."
+keywords: "registry, on-prem, images, tags, repository, distribution, api, advanced"
+---
 # Docker Registry HTTP API V2
 ## Introduction
@ -4846,8 +4840,3 @@ The following headers will be returned with the response:
 |----|-----------|
 |`Content-Length`|Length of the JSON response body.|
 |`Link`|RFC5988 compliant rel='next' with URL to next result set, if available|


@ -1,14 +1,8 @@
-<!--[metadata]>
-+++
-title = "HTTP API V2"
-description = "Specification for the Registry API."
-keywords = ["registry, on-prem, images, tags, repository, distribution, api, advanced"]
-[menu.main]
-parent="smn_registry_ref"
-+++
-<![end-metadata]-->
+---
+title: "Docker Registry HTTP API V2"
+description: "Specification for the Registry API."
+keywords: "registry, on-prem, images, tags, repository, distribution, api, advanced"
+---
 # Docker Registry HTTP API V2
 ## Introduction


@ -1,15 +1,8 @@
-<!--[metadata]>
-+++
-title = "Docker Registry Token Authentication"
-description = "Docker Registry v2 authentication schema"
-keywords = ["registry, on-prem, images, tags, repository, distribution, authentication, advanced"]
-[menu.main]
-parent="smn_registry_ref"
-weight=100
-+++
-<![end-metadata]-->
+---
+title: "Docker Registry v2 authentication"
+description: "Docker Registry v2 authentication schema"
+keywords: "registry, on-prem, images, tags, repository, distribution, authentication, advanced"
+---
 # Docker Registry v2 authentication
 See the [Token Authentication Specification](token.md),
 [Token Authentication Implementation](jwt.md),


@ -1,15 +1,8 @@
-<!--[metadata]>
-+++
-title = "Token Authentication Implementation"
-description = "Describe the reference implementation of the Docker Registry v2 authentication schema"
-keywords = ["registry, on-prem, images, tags, repository, distribution, JWT authentication, advanced"]
-[menu.main]
-parent="smn_registry_ref"
-weight=101
-+++
-<![end-metadata]-->
+---
+title: "Docker Registry v2 Bearer token specification"
+description: "Describe the reference implementation of the Docker Registry v2 authentication schema"
+keywords: "registry, on-prem, images, tags, repository, distribution, JWT authentication, advanced"
+---
 # Docker Registry v2 Bearer token specification
 This specification covers the `docker/distribution` implementation of the
 v2 Registry's authentication schema. Specifically, it describes the JSON


@ -1,15 +1,8 @@
-<!--[metadata]>
-+++
-title = "Oauth2 Token Authentication"
-description = "Specifies the Docker Registry v2 authentication"
-keywords = ["registry, on-prem, images, tags, repository, distribution, oauth2, advanced"]
-[menu.main]
-parent="smn_registry_ref"
-weight=102
-+++
-<![end-metadata]-->
+---
+title: "Docker Registry v2 authentication using OAuth2"
+description: "Specifies the Docker Registry v2 authentication"
+keywords: "registry, on-prem, images, tags, repository, distribution, oauth2, advanced"
+---
 # Docker Registry v2 authentication using OAuth2
 This document describes support for the OAuth2 protocol within the authorization
 server. [RFC6749](https://tools.ietf.org/html/rfc6749) should be used as a
@ -188,4 +181,3 @@ Content-Type: application/json
 {"refresh_token":"kas9Da81Dfa8","access_token":"eyJhbGciOiJFUzI1NiIsInR5":"expires_in":900,"scope":"repository:samalba/my-app:pull,repository:samalba/my-app:push"}
 ```


@ -1,15 +1,8 @@
-<!--[metadata]>
-+++
-title = "Token Scope Documentation"
-description = "Describes the scope and access fields used for registry authorization tokens"
-keywords = ["registry, on-prem, images, tags, repository, distribution, advanced, access, scope"]
-[menu.main]
-parent="smn_registry_ref"
-weight=103
-+++
-<![end-metadata]-->
+---
+title: "Docker Registry Token Scope and Access"
+description: "Describes the scope and access fields used for registry authorization tokens"
+keywords: "registry, on-prem, images, tags, repository, distribution, advanced, access, scope"
+---
 # Docker Registry Token Scope and Access
 Tokens used by the registry are always restricted in what resources they may
 be used to access, where those resources may be accessed, and what actions
@ -140,4 +133,3 @@ done by fetching an access token using the refresh token. Since the refresh
 token is not scoped to specific resources for an audience, extra care should
 be taken to only use the refresh token to negotiate new access tokens directly
 with the authorization server, and never with a resource provider.


@ -1,15 +1,8 @@
-<!--[metadata]>
-+++
-title = "Token Authentication Specification"
-description = "Specifies the Docker Registry v2 authentication"
-keywords = ["registry, on-prem, images, tags, repository, distribution, Bearer authentication, advanced"]
-[menu.main]
-parent="smn_registry_ref"
-weight=104
-+++
-<![end-metadata]-->
+---
+title: "Docker Registry v2 authentication via central service"
+description: "Specifies the Docker Registry v2 authentication"
+keywords: "registry, on-prem, images, tags, repository, distribution, Bearer authentication, advanced"
+---
 # Docker Registry v2 authentication via central service
 This document outlines the v2 Docker registry authentication scheme:


@ -1,10 +1,7 @@
-<!--[metadata]>
-+++
-draft = true
-+++
-<![end-metadata]-->
+---
+published: false
+title: Distribution API Implementations
+---
 # Distribution API Implementations
 This is a list of known implementations of the Distribution API spec.
@ -29,4 +26,3 @@ _Known Issues_
 - No resumable push support
 - No PATCH implementation for blob upload
 - Content ranges ignored


@ -1,15 +1,8 @@
-<!--[metadata]>
-+++
-title = "Reference Overview"
-description = "Explains registry JSON objects"
-keywords = ["registry, service, images, repository, json"]
-[menu.main]
-parent="smn_registry_ref"
-weight=-1
-+++
-<![end-metadata]-->
+---
+title: "Docker Registry Reference"
+description: "Explains registry JSON objects"
+keywords: "registry, service, images, repository, json"
+---
 # Docker Registry Reference
 * [HTTP API V2](api.md)
 * [Storage Driver](../storage-drivers/index.md)


@ -1,17 +1,9 @@
-<!--[metadata]>
-+++
-draft=true
-title = "Docker Distribution JSON Canonicalization"
-description = "Explains registry JSON objects"
-keywords = ["registry, service, images, repository, json"]
-[menu.main]
-parent="smn_registry_ref"
-+++
-<![end-metadata]-->
+---
+published: false
+title: "Docker Distribution JSON Canonicalization"
+description: "Explains registry JSON objects"
+keywords: "registry, service, images, repository, json"
+---
 # Docker Distribution JSON Canonicalization
 To provide consistent content hashing of JSON objects throughout Docker
 Distribution APIs, the specification defines a canonical JSON format. Adopting


@ -1,14 +1,8 @@
-<!--[metadata]>
-+++
-title = "Image Manifest V 2, Schema 1 "
-description = "image manifest for the Registry."
-keywords = ["registry, on-prem, images, tags, repository, distribution, api, advanced, manifest"]
-[menu.main]
-parent="smn_registry_ref"
-+++
-<![end-metadata]-->
+---
+title: "Image Manifest V 2, Schema 1 "
+description: "image manifest for the Registry."
+keywords: "registry, on-prem, images, tags, repository, distribution, api, advanced, manifest"
+---
 # Image Manifest Version 2, Schema 1
 This document outlines the format of the V2 image manifest. The image
 manifest described herein was introduced in the Docker daemon in the [v1.3.0


@ -1,14 +1,8 @@
-<!--[metadata]>
-+++
-title = "Image Manifest V 2, Schema 2 "
-description = "image manifest for the Registry."
-keywords = ["registry, on-prem, images, tags, repository, distribution, api, advanced, manifest"]
-[menu.main]
-parent="smn_registry_ref"
-+++
-<![end-metadata]-->
+---
+title: "Image Manifest V 2, Schema 2 "
+description: "image manifest for the Registry."
+keywords: "registry, on-prem, images, tags, repository, distribution, api, advanced, manifest"
+---
 # Image Manifest Version 2, Schema 2
 This document outlines the format of the V2 image manifest, schema version 2.
 The original (and provisional) image manifest for V2 (schema 1), was introduced


@ -1,13 +0,0 @@
<!--[metadata]>
+++
title = "Reference"
description = "Explains registry JSON objects"
keywords = ["registry, service, images, repository, json"]
type = "menu"
[menu.main]
identifier="smn_registry_ref"
parent="smn_registry"
weight=7
+++
<![end-metadata]-->


@ -1,78 +0,0 @@
<!--[metadata]>
+++
title = "Microsoft Azure storage driver"
description = "Explains how to use the Azure storage drivers"
keywords = ["registry, service, driver, images, storage, azure"]
[menu.main]
parent = "smn_storagedrivers"
+++
<![end-metadata]-->
# Microsoft Azure storage driver
An implementation of the `storagedriver.StorageDriver` interface which uses [Microsoft Azure Blob Storage](http://azure.microsoft.com/en-us/services/storage/) for object storage.
## Parameters
<table>
<tr>
<th>Parameter</th>
<th>Required</th>
<th>Description</th>
</tr>
<tr>
<td>
<code>accountname</code>
</td>
<td>
yes
</td>
<td>
Name of the Azure Storage Account.
</td>
</tr>
<tr>
<td>
<code>accountkey</code>
</td>
<td>
yes
</td>
<td>
Primary or Secondary Key for the Storage Account.
</td>
</tr>
<tr>
<td>
<code>container</code>
</td>
<td>
yes
</td>
<td>
Name of the Azure root storage container in which all registry data will be stored. Must comply with the storage container name [requirements][create-container-api].
</td>
</tr>
<tr>
<td>
<code>realm</code>
</td>
<td>
no
</td>
<td>
Domain name suffix for the Storage Service API endpoint. For example, the realm for "Azure in China" would be `core.chinacloudapi.cn` and the realm for "Azure Government" would be `core.usgovcloudapi.net`. By default, this
is <code>core.windows.net</code>.
</td>
</tr>
</table>
## Related Information
* To get information about
[azure-blob-storage](http://azure.microsoft.com/en-us/services/storage/) visit
the Microsoft website.
* You can use Microsoft's [Blob Service REST API](https://msdn.microsoft.com/en-us/library/azure/dd135733.aspx) to [create a container](https://msdn.microsoft.com/en-us/library/azure/dd179468.aspx).


@ -1,24 +0,0 @@
<!--[metadata]>
+++
title = "Filesystem storage driver"
description = "Explains how to use the filesystem storage drivers"
keywords = ["registry, service, driver, images, storage, filesystem"]
[menu.main]
parent="smn_storagedrivers"
+++
<![end-metadata]-->
# Filesystem storage driver
An implementation of the `storagedriver.StorageDriver` interface which uses the local filesystem.
## Parameters
`rootdirectory`: (optional) The absolute path to a root directory tree in which
to store all registry files. The registry stores all its data here so make sure
there is adequate space available. Defaults to `/var/lib/registry`.
`maxthreads`: (optional) The maximum number of simultaneous blocking filesystem
operations permitted within the registry. Each operation spawns a new thread and
may cause thread exhaustion issues if many are done in parallel. Defaults to
`100`, and can be no lower than `25`.


@ -1,78 +0,0 @@
<!--[metadata]>
+++
title = "GCS storage driver"
description = "Explains how to use the Google Cloud Storage drivers"
keywords = ["registry, service, driver, images, storage, gcs, google, cloud"]
[menu.main]
parent="smn_storagedrivers"
+++
<![end-metadata]-->
# Google Cloud Storage driver
An implementation of the `storagedriver.StorageDriver` interface which uses Google Cloud for object storage.
## Parameters
<table>
<tr>
<th>Parameter</th>
<th>Required</th>
<th>Description</th>
</tr>
<tr>
<td>
<code>bucket</code>
</td>
<td>
yes
</td>
<td>
Storage bucket name.
</td>
</tr>
<tr>
<td>
<code>keyfile</code>
</td>
<td>
no
</td>
<td>
A private service account key file in JSON format. Instead of a key file <a href="https://developers.google.com/identity/protocols/application-default-credentials">Google Application Default Credentials</a> can be used.
</td>
</tr>
<tr>
<td>
<code>rootdirectory</code>
</td>
<td>
no
</td>
<td>
This is a prefix that will be applied to all Google Cloud Storage keys to allow you to segment data in your bucket if necessary.
</td>
</tr>
<tr>
<td>
<code>chunksize</code>
</td>
<td>
no (default 5242880)
</td>
<td>
This is the chunk size used for uploading large blobs; must be a multiple of 256*1024.
</td>
</tr>
</table>
`bucket`: The name of your Google Cloud Storage bucket where you wish to store objects (needs to already be created prior to driver initialization).
`keyfile`: (optional) A private key file in JSON format, used for [Service Account Authentication](https://cloud.google.com/storage/docs/authentication#service_accounts).
**Note** Instead of a key file you can use [Google Application Default Credentials](https://developers.google.com/identity/protocols/application-default-credentials).
`rootdirectory`: (optional) The root directory tree in which all registry files will be stored. Defaults to the empty string (bucket root).


@ -1,66 +0,0 @@
<!--[metadata]>
+++
title = "Storage Driver overview"
description = "Explains how to use storage drivers"
keywords = ["registry, on-prem, images, tags, repository, distribution, storage drivers, advanced"]
aliases = ["/registry/storagedrivers/"]
[menu.main]
parent="smn_storagedrivers"
identifier="storage_index"
weight=-1
+++
<![end-metadata]-->
# Docker Registry Storage Driver
This document describes the registry storage driver model, implementation, and explains how to contribute new storage drivers.
## Provided Drivers
This storage driver package comes bundled with several drivers:
- [inmemory](inmemory.md): A temporary storage driver using a local inmemory map. This exists solely for reference and testing.
- [filesystem](filesystem.md): A local storage driver configured to use a directory tree in the local filesystem.
- [s3](s3.md): A driver storing objects in an Amazon Simple Storage Service (S3) bucket.
- [azure](azure.md): A driver storing objects in [Microsoft Azure Blob Storage](http://azure.microsoft.com/en-us/services/storage/).
- [swift](swift.md): A driver storing objects in [Openstack Swift](http://docs.openstack.org/developer/swift/).
- [oss](oss.md): A driver storing objects in [Aliyun OSS](http://www.aliyun.com/product/oss).
- [gcs](gcs.md): A driver storing objects in a [Google Cloud Storage](https://cloud.google.com/storage/) bucket.
## Storage Driver API
The storage driver API is designed to model a filesystem-like key/value storage in a manner abstract enough to support a range of drivers from the local filesystem to Amazon S3 or other distributed object storage systems.
Storage drivers are required to implement the `storagedriver.StorageDriver` interface provided in `storagedriver.go`, which includes methods for reading, writing, and deleting content, as well as listing child objects of a specified prefix key.
Storage drivers are intended to be written in Go, providing compile-time
validation of the `storagedriver.StorageDriver` interface.
## Driver Selection and Configuration
The preferred method of selecting a storage driver is using the `StorageDriverFactory` interface in the `storagedriver/factory` package. These factories provide a common interface for constructing storage drivers with a parameters map. The factory model is based off of the [Register](http://golang.org/pkg/database/sql/#Register) and [Open](http://golang.org/pkg/database/sql/#Open) methods in the builtin [database/sql](http://golang.org/pkg/database/sql) package.
Storage driver factories may be registered by name using the
`factory.Register` method, and then later invoked by calling `factory.Create`
with a driver name and parameters map. If no such storage driver can be found,
`factory.Create` will return an `InvalidStorageDriverError`.
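As an illustration, constructing a driver through the factory might look like the following sketch. The import paths follow the `docker/distribution` source tree, and the `rootdirectory` parameter mirrors the filesystem driver's documented option:
```go
package main

import (
	"fmt"
	"log"

	"github.com/docker/distribution/registry/storage/driver/factory"
	// Importing a driver package registers it with the factory in init().
	_ "github.com/docker/distribution/registry/storage/driver/filesystem"
)

func main() {
	driver, err := factory.Create("filesystem", map[string]interface{}{
		"rootdirectory": "/var/lib/registry",
	})
	if err != nil {
		// An unrecognized name yields an InvalidStorageDriverError.
		log.Fatal(err)
	}
	fmt.Println("constructed driver:", driver.Name())
}
```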
## Driver Contribution
### Writing new storage drivers
To create a valid storage driver, one must implement the
`storagedriver.StorageDriver` interface and make sure to expose this driver
via the factory system.
#### Registering
Storage drivers should call `factory.Register` with their driver name in an `init` method, allowing callers of `factory.Create` to construct instances of this driver without requiring modification of imports throughout the codebase.
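A sketch of such a registration follows; the `mydriver` package, its name, and its stub constructor are hypothetical stand-ins for a real driver:
```go
// Package mydriver is a hypothetical example; only the registration
// pattern is real.
package mydriver

import (
	"fmt"

	storagedriver "github.com/docker/distribution/registry/storage/driver"
	"github.com/docker/distribution/registry/storage/driver/factory"
)

const driverName = "mydriver"

func init() {
	// Register at import time, so factory.Create("mydriver", ...) works
	// without further changes to calling code.
	factory.Register(driverName, &mydriverFactory{})
}

type mydriverFactory struct{}

// Create builds a driver instance from the configuration parameters map.
func (f *mydriverFactory) Create(parameters map[string]interface{}) (storagedriver.StorageDriver, error) {
	// A real driver would validate parameters and return its StorageDriver.
	return nil, fmt.Errorf("mydriver: not implemented (parameters: %v)", parameters)
}
```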
## Testing
Storage driver test suites are provided in
`storagedriver/testsuites/testsuites.go` and may be used for any storage
driver written in Go. Tests can be registered using the `RegisterSuite`
function, which runs the same set of tests for any registered drivers.


@ -1,23 +0,0 @@
<!--[metadata]>
+++
title = "In-memory storage driver"
description = "Explains how to use the in-memory storage drivers"
keywords = ["registry, service, driver, images, storage, in-memory"]
[menu.main]
parent="smn_storagedrivers"
+++
<![end-metadata]-->
# In-memory storage driver (Testing Only)
For testing purposes only, you can use the `inmemory` storage driver. This
driver is an implementation of the `storagedriver.StorageDriver` interface which
uses local memory for object storage. If you would like to run a registry from
volatile memory, use the [`filesystem` driver](filesystem.md) on a ramdisk.
**IMPORTANT**: This storage driver *does not* persist data across runs. This is why it is only suitable for testing. *Never* use this driver in production.
## Parameters
None


@ -1,13 +0,0 @@
<!--[metadata]>
+++
title = "Storage Drivers"
description = "Storage Drivers"
keywords = ["registry, on-prem, images, tags, repository, distribution"]
type = "menu"
[menu.main]
identifier="smn_storagedrivers"
parent="smn_registry"
weight=7
+++
<![end-metadata]-->


@ -1,126 +0,0 @@
<!--[metadata]>
+++
title = "Aliyun OSS storage driver"
description = "Explains how to use the Aliyun OSS storage driver"
keywords = ["registry, service, driver, images, storage, OSS, aliyun"]
[menu.main]
parent="smn_storagedrivers"
+++
<![end-metadata]-->
# Aliyun OSS storage driver
An implementation of the `storagedriver.StorageDriver` interface which uses [Aliyun OSS](http://www.aliyun.com/product/oss) for object storage.
## Parameters
<table>
<tr>
<th>Parameter</th>
<th>Required</th>
<th>Description</th>
</tr>
<tr>
<td>
<code>accesskeyid</code>
</td>
<td>
yes
</td>
<td>
Your access key ID.
</td>
</tr>
<tr>
<td>
<code>accesskeysecret</code>
</td>
<td>
yes
</td>
<td>
Your access key secret.
</td>
</tr>
<tr>
<td>
<code>region</code>
</td>
<td>
yes
</td>
<td> The name of the OSS region in which you would like to store objects (for example `oss-cn-beijing`). For a list of regions, you can look at <http://docs.aliyun.com/#/oss/product-documentation/domain-region>
</td>
</tr>
<tr>
<td>
<code>endpoint</code>
</td>
<td>
no
</td>
<td>
An endpoint which defaults to `<bucket>.<region>.aliyuncs.com` or `<bucket>.<region>-internal.aliyuncs.com` (when `internal=true`). You can change the default endpoint by changing this value.
</td>
</tr>
<tr>
<td>
<code>internal</code>
</td>
<td>
no
</td>
<td> Specifies whether to use the internal endpoint instead of the public endpoint for OSS access. The default is false. For a list of regions, you can look at <http://docs.aliyun.com/#/oss/product-documentation/domain-region>
</td>
</tr>
<tr>
<td>
<code>bucket</code>
</td>
<td>
yes
</td>
<td> The name of your OSS bucket where you wish to store objects (needs to already be created prior to driver initialization).
</td>
</tr>
<tr>
<td>
<code>encrypt</code>
</td>
<td>
no
</td>
<td> Specifies whether you would like your data encrypted on the server side. Defaults to false if not specified.
</td>
</tr>
<tr>
<td>
<code>secure</code>
</td>
<td>
no
</td>
<td> Specifies whether to transfer data to the bucket over ssl or not. If you omit this value, `true` is used.
</td>
</tr>
<tr>
<td>
<code>chunksize</code>
</td>
<td>
no
</td>
<td> The default part size for multipart uploads (performed by WriteStream) to OSS. The default is 10 MB. Keep in mind that the minimum part size for OSS is 5MB. You might experience better performance for larger chunk sizes depending on the speed of your connection to OSS.
</td>
</tr>
<tr>
<td>
<code>rootdirectory</code>
</td>
<td>
no
</td>
<td> The root directory tree in which to store all registry files. Defaults to an empty string (bucket root).
</td>
</tr>
</table>


@ -1,268 +0,0 @@
<!--[metadata]>
+++
title = "S3 storage driver"
description = "Explains how to use the S3 storage drivers"
keywords = ["registry, service, driver, images, storage, S3"]
[menu.main]
parent="smn_storagedrivers"
+++
<![end-metadata]-->
# S3 storage driver
An implementation of the `storagedriver.StorageDriver` interface which uses Amazon S3 or S3 compatible services for object storage.
## Parameters
<table>
<tr>
<th>Parameter</th>
<th>Required</th>
<th>Description</th>
</tr>
<tr>
<td>
<code>accesskey</code>
</td>
<td>
yes
</td>
<td>
Your AWS Access Key.
</td>
</tr>
<tr>
<td>
<code>secretkey</code>
</td>
<td>
yes
</td>
<td>
Your AWS Secret Key.
</td>
</tr>
<tr>
<td>
<code>region</code>
</td>
<td>
yes
</td>
<td>
The AWS region in which your bucket exists. For the moment, the Go AWS
library in use does not use the newer DNS based bucket routing.
</td>
</tr>
<tr>
<td>
<code>regionendpoint</code>
</td>
<td>
no
</td>
<td>
Endpoint for S3 compatible storage services (Minio, etc)
</td>
</tr>
<tr>
<td>
<code>bucket</code>
</td>
<td>
yes
</td>
<td>
The bucket name in which you want to store the registry's data.
</td>
</tr>
<tr>
<td>
<code>encrypt</code>
</td>
<td>
no
</td>
<td>
Specifies whether the registry stores the image in encrypted format or
not. A boolean value. The default is false.
</td>
</tr>
<tr>
<td>
<code>keyid</code>
</td>
<td>
no
</td>
<td>
Optional KMS key ID to use for encryption (encrypt must be true, or this
parameter will be ignored). The default is none.
</td>
</tr>
<tr>
<td>
<code>secure</code>
</td>
<td>
no
</td>
<td>
Indicates whether to use HTTPS instead of HTTP. A boolean value. The
default is <code>true</code>.
</td>
</tr>
<tr>
<td>
<code>v4auth</code>
</td>
<td>
no
</td>
<td>
Indicates whether the registry uses Version 4 of AWS's authentication.
Generally, you should set this to <code>true</code>. By default, this is
<code>false</code>.
</td>
</tr>
<tr>
<td>
<code>chunksize</code>
</td>
<td>
no
</td>
<td>
The S3 API requires multipart upload chunks to be at least 5MB. This value
should be a number that is larger than 5*1024*1024.
</td>
</tr>
<tr>
<td>
<code>rootdirectory</code>
</td>
<td>
no
</td>
<td>
This is a prefix that will be applied to all S3 keys to allow you to segment data in your bucket if necessary.
</td>
</tr>
<tr>
<td>
<code>storageclass</code>
</td>
<td>
no
</td>
<td>
The S3 storage class applied to each registry file. The default value is STANDARD.
</td>
</tr>
</table>
`accesskey`: Your AWS access key.
`secretkey`: Your AWS secret key.
**Note**: You can provide empty strings for your access and secret keys if you plan to run the driver on an EC2 instance and will handle authentication with the instance's credentials.
`region`: The name of the AWS region in which you would like to store objects (for example `us-east-1`). For a list of regions, see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
`regionendpoint`: (optional) Endpoint URL for S3 compatible APIs. This should not be provided when using Amazon S3.
`bucket`: The name of your S3 bucket where you wish to store objects. The bucket must exist prior to the driver initialization.
`encrypt`: (optional) Whether you would like your data encrypted on the server side (defaults to false if not specified).
`keyid`: (optional) The KMS key ID with which to encrypt your data (defaults to none if not specified; ignored unless `encrypt` is true).
`secure`: (optional) Whether you would like to transfer data to the bucket over SSL. Defaults to true (meaning transferring over SSL) if not specified. While setting this to false may improve performance, it is not recommended due to security concerns.
`v4auth`: (optional) Whether you would like to use AWS Signature Version 4 with your requests. This defaults to false if not specified. Note that the eu-central-1 region does not work with Version 2 signatures, so the driver will error out if initialized with this region and `v4auth` set to false.
`chunksize`: (optional) The default part size for multipart uploads (performed by WriteStream) to S3. The default is 10 MB. Keep in mind that the minimum part size for S3 is 5 MB. Depending on the speed of your connection to S3, a larger chunk size may result in better performance.
`rootdirectory`: (optional) The root directory tree in which all registry files will be stored. Defaults to the empty string (bucket root).
`storageclass`: (optional) The storage class applied to each registry file. Defaults to STANDARD. Valid options are STANDARD and REDUCED_REDUNDANCY.
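
Putting the required parameters together, a minimal `storage` section for this driver might look like the sketch below. The bucket and credential values are placeholders; as noted above, the keys may be empty strings on an EC2 instance that authenticates with instance credentials:

```
storage:
  s3:
    accesskey: <your-access-key>
    secretkey: <your-secret-key>
    region: us-east-1
    bucket: my-registry-bucket
    v4auth: true
```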
## S3 permission scopes
The following IAM permissions are required by the registry for push and pull. See [the S3 policy documentation](http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuAndPermissions.html) for more details.
```
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation",
"s3:ListBucketMultipartUploads"
],
"Resource": "arn:aws:s3:::mybucket"
},
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:ListMultipartUploadParts",
"s3:AbortMultipartUpload"
],
"Resource": "arn:aws:s3:::mybucket/*"
}
]
```
# CloudFront as Middleware with S3 backend
## Use Case
Adding CloudFront as a middleware for your S3 backed registry can dramatically improve pull times. Your registry will have the ability to retrieve your images from edge servers, rather than the geographically limited location of your S3 bucket. The farther your registry is from your bucket, the more improvements you will see. See [Amazon CloudFront](https://aws.amazon.com/cloudfront/details/).
## Configuring CloudFront for Distribution
If you are unfamiliar with creating a CloudFront distribution, see [Getting Started with Cloudfront](http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html).
Defaults can be kept in most areas except:
### Origin:
The CloudFront distribution must be created such that the `Origin Path` is set to the directory level of the root "docker" key in S3. If your registry exists on the root of the bucket, this path should be left blank.
### Behaviors:
- Viewer Protocol Policy: HTTPS Only
- Allowed HTTP Methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
- Cached HTTP Methods: OPTIONS (checked)
- Restrict Viewer Access (Use Signed URLs or Signed Cookies): Yes
- Trusted Signers: Self (Can add other accounts as long as you have access to CloudFront Key Pairs for those additional accounts)
## Registry configuration
Here the `middleware` option is used. It is still important to keep the `storage` option as CloudFront will only handle `pull` actions; `push` actions are still directly written to S3.
The following example shows what you will need at minimum:
```
...
storage:
s3:
region: us-east-1
bucket: docker.myregistry.com
middleware:
storage:
- name: cloudfront
options:
baseurl: https://abcdefghijklmn.cloudfront.net/
privatekey: /etc/docker/cloudfront/pk-ABCEDFGHIJKLMNOPQRST.pem
keypairid: ABCEDFGHIJKLMNOPQRST
...
```
## CloudFront Key-Pair
A CloudFront key-pair is required for all AWS accounts needing access to your CloudFront distribution. For information, please see [Creating CloudFront Key Pairs](http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-trusted-signers.html#private-content-creating-cloudfront-key-pairs).


@@ -1,246 +0,0 @@
<!--[metadata]>
+++
title = "Swift storage driver"
description = "Explains how to use the OpenStack swift storage driver"
keywords = ["registry, service, driver, images, storage, swift"]
[menu.main]
parent="smn_storagedrivers"
+++
<![end-metadata]-->
# OpenStack Swift storage driver
An implementation of the `storagedriver.StorageDriver` interface that uses [OpenStack Swift](http://docs.openstack.org/developer/swift/) for object storage.
## Parameters
<table>
<tr>
<th>Parameter</th>
<th>Required</th>
<th>Description</th>
</tr>
<tr>
<td>
<code>authurl</code>
</td>
<td>
yes
</td>
<td>
URL for obtaining an auth token, for example https://storage.myprovider.com/v2.0 or https://storage.myprovider.com/v3/auth.
</td>
</tr>
<tr>
<td>
<code>username</code>
</td>
<td>
yes
</td>
<td>
Your OpenStack user name.
</td>
</tr>
<tr>
<td>
<code>password</code>
</td>
<td>
yes
</td>
<td>
Your OpenStack password.
</td>
</tr>
<tr>
<td>
<code>region</code>
</td>
<td>
no
</td>
<td>
The OpenStack region in which your container exists.
</td>
</tr>
<tr>
<td>
<code>container</code>
</td>
<td>
yes
</td>
<td>
The name of your Swift container where you wish to store the registry's data. The driver creates the named container during its initialization.
</td>
</tr>
<tr>
<td>
<code>tenant</code>
</td>
<td>
no
</td>
<td>
Your OpenStack tenant name. You can either use <code>tenant</code> or <code>tenantid</code>.
</td>
</tr>
<tr>
<td>
<code>tenantid</code>
</td>
<td>
no
</td>
<td>
Your OpenStack tenant ID. You can either use <code>tenant</code> or <code>tenantid</code>.
</td>
</tr>
<tr>
<td>
<code>domain</code>
</td>
<td>
no
</td>
<td>
Your OpenStack domain name for the Identity v3 API. You can either use <code>domain</code> or <code>domainid</code>.
</td>
</tr>
<tr>
<td>
<code>domainid</code>
</td>
<td>
no
</td>
<td>
Your OpenStack domain ID for the Identity v3 API. You can either use <code>domain</code> or <code>domainid</code>.
</td>
</tr>
<tr>
<td>
<code>trustid</code>
</td>
<td>
no
</td>
<td>
Your OpenStack trust ID for the Identity v3 API.
</td>
</tr>
<tr>
<td>
<code>insecureskipverify</code>
</td>
<td>
no
</td>
<td>
Set to <code>true</code> to skip TLS verification; the default is <code>false</code>.
</td>
</tr>
<tr>
<td>
<code>chunksize</code>
</td>
<td>
no
</td>
<td>
Size of the data segments for the Swift Dynamic Large Objects. This value should be a number (defaults to 5 MB).
</td>
</tr>
<tr>
<td>
<code>prefix</code>
</td>
<td>
no
</td>
<td>
This is a prefix that will be applied to all Swift keys to allow you to segment data in your container if necessary. Defaults to the empty string which is the container's root.
</td>
</tr>
<tr>
<td>
<code>secretkey</code>
</td>
<td>
no
</td>
<td>
The secret key used to generate temporary URLs.
</td>
</tr>
<tr>
<td>
<code>accesskey</code>
</td>
<td>
no
</td>
<td>
The access key to generate temporary URLs. It is used by HP Cloud Object Storage in addition to the `secretkey` parameter.
</td>
</tr>
<tr>
<td>
<code>authversion</code>
</td>
<td>
no
</td>
<td>
Specify the OpenStack authentication version, for example <code>3</code>. By default the driver autodetects the version from the AuthURL.
</td>
</tr>
<tr>
<td>
<code>endpointtype</code>
</td>
<td>
no
</td>
<td>
The endpoint type used when connecting to swift. Possible values are `public`, `internal` and `admin`. Default is `public`.
</td>
</tr>
</table>
The features supported by the Swift server are queried by requesting the `/info` URL on the server. If the administrator has disabled that endpoint, the configuration file can specify the following optional parameters:
<table>
<tr>
<td>
<code>tempurlcontainerkey</code>
</td>
<td>
<p>
Specify whether to use the container secret key to generate temporary URLs when set to true, or the account secret key otherwise.
</p>
</td>
</tr>
<tr>
<td>
<code>tempurlmethods</code>
</td>
<td>
<p>
Array of HTTP methods that are supported by the TempURL middleware of the Swift server. Example:
</p>
<code>
- tempurlmethods:
   - GET
   - PUT
   - HEAD
   - POST
   - DELETE
</code>
</td>
</tr>
</table>
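
As a sketch, the required parameters above combine into a `storage` section like the following; the auth URL, credentials, and container name are placeholders:

```
storage:
  swift:
    authurl: https://storage.myprovider.com/v2.0
    username: <your-username>
    password: <your-password>
    container: my-registry
```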


@@ -16,7 +16,7 @@ func TestSillyAccessController(t *testing.T) {
 	}
 
 	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-		ctx := context.WithValue(nil, "http.request", r)
+		ctx := context.WithRequest(context.Background(), r)
 		authCtx, err := ac.Authorized(ctx)
 		if err != nil {
 			switch err := err.(type) {


@@ -284,7 +284,7 @@ func TestAccessController(t *testing.T) {
 		Action: "baz",
 	}
 
-	ctx := context.WithValue(nil, "http.request", req)
+	ctx := context.WithRequest(context.Background(), req)
 	authCtx, err := accessController.Authorized(ctx, testAccess)
 	challenge, ok := err.(auth.Challenge)
 	if !ok {


@@ -425,6 +425,8 @@ func (app *App) configureEvents(configuration *configuration.Configuration) {
 	}
 }
 
+type redisStartAtKey struct{}
+
 func (app *App) configureRedis(configuration *configuration.Configuration) {
 	if configuration.Redis.Addr == "" {
 		ctxu.GetLogger(app).Infof("redis not configured")
@@ -434,11 +436,11 @@ func (app *App) configureRedis(configuration *configuration.Configuration) {
 	pool := &redis.Pool{
 		Dial: func() (redis.Conn, error) {
 			// TODO(stevvooe): Yet another use case for contextual timing.
-			ctx := context.WithValue(app, "redis.connect.startedat", time.Now())
+			ctx := context.WithValue(app, redisStartAtKey{}, time.Now())
 
 			done := func(err error) {
 				logger := ctxu.GetLoggerWithField(ctx, "redis.connect.duration",
-					ctxu.Since(ctx, "redis.connect.startedat"))
+					ctxu.Since(ctx, redisStartAtKey{}))
 				if err != nil {
 					logger.Errorf("redis: error connecting: %v", err)
 				} else {
@@ -671,6 +673,18 @@ func (app *App) dispatcher(dispatch dispatchFunc) http.Handler {
 	})
 }
 
+type errCodeKey struct{}
+
+func (errCodeKey) String() string { return "err.code" }
+
+type errMessageKey struct{}
+
+func (errMessageKey) String() string { return "err.message" }
+
+type errDetailKey struct{}
+
+func (errDetailKey) String() string { return "err.detail" }
+
 func (app *App) logError(context context.Context, errors errcode.Errors) {
 	for _, e1 := range errors {
 		var c ctxu.Context
@@ -678,23 +692,23 @@ func (app *App) logError(context context.Context, errors errcode.Errors) {
 		switch e1.(type) {
 		case errcode.Error:
 			e, _ := e1.(errcode.Error)
-			c = ctxu.WithValue(context, "err.code", e.Code)
-			c = ctxu.WithValue(c, "err.message", e.Code.Message())
-			c = ctxu.WithValue(c, "err.detail", e.Detail)
+			c = ctxu.WithValue(context, errCodeKey{}, e.Code)
+			c = ctxu.WithValue(c, errMessageKey{}, e.Code.Message())
+			c = ctxu.WithValue(c, errDetailKey{}, e.Detail)
 		case errcode.ErrorCode:
 			e, _ := e1.(errcode.ErrorCode)
-			c = ctxu.WithValue(context, "err.code", e)
-			c = ctxu.WithValue(c, "err.message", e.Message())
+			c = ctxu.WithValue(context, errCodeKey{}, e)
+			c = ctxu.WithValue(c, errMessageKey{}, e.Message())
 		default:
 			// just normal go 'error'
-			c = ctxu.WithValue(context, "err.code", errcode.ErrorCodeUnknown)
-			c = ctxu.WithValue(c, "err.message", e1.Error())
+			c = ctxu.WithValue(context, errCodeKey{}, errcode.ErrorCodeUnknown)
+			c = ctxu.WithValue(c, errMessageKey{}, e1.Error())
 		}
 
 		c = ctxu.WithLogger(c, ctxu.GetLogger(c,
-			"err.code",
-			"err.message",
-			"err.detail"))
+			errCodeKey{},
+			errMessageKey{},
+			errDetailKey{}))
 		ctxu.GetResponseLogger(c).Errorf("response completed with error")
 	}
 }
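
The hunks above replace string-valued context keys such as "err.code" with unexported struct types. As a minimal sketch of why that pattern helps, using the standard library's `context` package for illustration (the key type and value are hypothetical):

```
package main

import (
	"context"
	"fmt"
)

// requestIDKey is a hypothetical key type. Because it is an unexported struct
// type, no other package can construct a colliding key, unlike a plain string
// such as "request.id".
type requestIDKey struct{}

func main() {
	ctx := context.WithValue(context.Background(), requestIDKey{}, "abc123")

	// Lookups must use the same key type; the bare string "request.id"
	// would not match.
	if v, ok := ctx.Value(requestIDKey{}).(string); ok {
		fmt.Println("request id:", v)
	}
}
```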


@@ -179,8 +179,8 @@ func (buh *blobUploadHandler) PatchBlobData(w http.ResponseWriter, r *http.Request) {
 	// TODO(dmcgowan): support Content-Range header to seek and write range
 
-	if err := copyFullPayload(w, r, buh.Upload, buh, "blob PATCH", &buh.Errors); err != nil {
-		// copyFullPayload reports the error if necessary
+	if err := copyFullPayload(w, r, buh.Upload, -1, buh, "blob PATCH"); err != nil {
+		buh.Errors = append(buh.Errors, errcode.ErrorCodeUnknown.WithDetail(err.Error()))
 		return
 	}
@@ -218,8 +218,8 @@ func (buh *blobUploadHandler) PutBlobUploadComplete(w http.ResponseWriter, r *http.Request) {
 		return
 	}
 
-	if err := copyFullPayload(w, r, buh.Upload, buh, "blob PUT", &buh.Errors); err != nil {
-		// copyFullPayload reports the error if necessary
+	if err := copyFullPayload(w, r, buh.Upload, -1, buh, "blob PUT"); err != nil {
+		buh.Errors = append(buh.Errors, errcode.ErrorCodeUnknown.WithDetail(err.Error()))
 		return
 	}


@@ -9,6 +9,7 @@ import (
 	"strconv"
 
 	"github.com/docker/distribution/registry/api/errcode"
+	"github.com/docker/distribution/registry/storage/driver"
 	"github.com/gorilla/handlers"
 )
 
@@ -45,7 +46,9 @@ func (ch *catalogHandler) GetCatalog(w http.ResponseWriter, r *http.Request) {
 	repos := make([]string, maxEntries)
 
 	filled, err := ch.App.registry.Repositories(ch.Context, repos, lastEntry)
-	if err == io.EOF {
+	_, pathNotFound := err.(driver.PathNotFoundError)
+	if err == io.EOF || pathNotFound {
 		moreEntries = false
 	} else if err != nil {
 		ch.Errors = append(ch.Errors, errcode.ErrorCodeUnknown.WithDetail(err))
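
As a sketch of the error classification the hunk above introduces, treating a missing repositories directory like an exhausted listing rather than a failure. A stand-in error type is used here, since `driver.PathNotFoundError` belongs to the registry codebase:

```
package main

import (
	"errors"
	"fmt"
	"io"
)

// pathNotFoundError stands in for the storage driver's PathNotFoundError.
type pathNotFoundError struct{ path string }

func (e pathNotFoundError) Error() string { return "path not found: " + e.path }

func classify(err error) string {
	if _, ok := err.(pathNotFoundError); ok || err == io.EOF {
		// A missing root directory and an exhausted listing both mean
		// "no more entries", not a server error.
		return "no more entries"
	}
	if err != nil {
		return "server error"
	}
	return "more entries available"
}

func main() {
	fmt.Println(classify(pathNotFoundError{path: "/docker/registry/v2/repositories"}))
	fmt.Println(classify(io.EOF))
	fmt.Println(classify(errors.New("boom")))
	fmt.Println(classify(nil))
}
```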


@@ -6,7 +6,6 @@ import (
 	"net/http"
 
 	ctxu "github.com/docker/distribution/context"
-	"github.com/docker/distribution/registry/api/errcode"
 )
 
 // closeResources closes all the provided resources after running the target
@@ -23,7 +22,9 @@ func closeResources(handler http.Handler, closers ...io.Closer) http.Handler {
 // copyFullPayload copies the payload of an HTTP request to destWriter. If it
 // receives less content than expected, and the client disconnected during the
 // upload, it avoids sending a 400 error to keep the logs cleaner.
-func copyFullPayload(responseWriter http.ResponseWriter, r *http.Request, destWriter io.Writer, context ctxu.Context, action string, errSlice *errcode.Errors) error {
+//
+// The copy will be limited to `limit` bytes, if limit is greater than zero.
+func copyFullPayload(responseWriter http.ResponseWriter, r *http.Request, destWriter io.Writer, limit int64, context ctxu.Context, action string) error {
 	// Get a channel that tells us if the client disconnects
 	var clientClosed <-chan bool
 	if notifier, ok := responseWriter.(http.CloseNotifier); ok {
@@ -32,8 +33,13 @@ func copyFullPayload(responseWriter http.ResponseWriter, r *http.Request, destWr
 		ctxu.GetLogger(context).Warnf("the ResponseWriter does not implement CloseNotifier (type: %T)", responseWriter)
 	}
 
+	var body = r.Body
+	if limit > 0 {
+		body = http.MaxBytesReader(responseWriter, body, limit)
+	}
+
 	// Read in the data, if any.
-	copied, err := io.Copy(destWriter, r.Body)
+	copied, err := io.Copy(destWriter, body)
 	if clientClosed != nil && (err != nil || (r.ContentLength > 0 && copied < r.ContentLength)) {
 		// Didn't receive as much content as expected. Did the client
 		// disconnect during the request? If so, avoid returning a 400
@@ -58,7 +64,6 @@ func copyFullPayload(responseWriter http.ResponseWriter, r *http.Request, destWr
 	if err != nil {
 		ctxu.GetLogger(context).Errorf("unknown error reading request payload: %v", err)
-		*errSlice = append(*errSlice, errcode.ErrorCodeUnknown.WithDetail(err))
 		return err
 	}
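
For context on the new `limit` parameter: `http.MaxBytesReader` from the standard library wraps a request body so that reads past the limit fail instead of buffering without bound. A self-contained sketch follows; the route, limit, and handler are illustrative, with 4 MiB mirroring the manifest cap used in the next file:

```
package main

import (
	"io"
	"io/ioutil"
	"net/http"
)

func main() {
	http.HandleFunc("/upload", func(w http.ResponseWriter, r *http.Request) {
		// Cap the body at 4 MiB (4 << 20 bytes); reading past the cap
		// yields an error instead of an unbounded allocation.
		body := http.MaxBytesReader(w, r.Body, 4<<20)
		if _, err := io.Copy(ioutil.Discard, body); err != nil {
			http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
			return
		}
		w.WriteHeader(http.StatusAccepted)
	})
	http.ListenAndServe(":8080", nil)
}
```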


@@ -23,6 +23,7 @@ import (
 const (
 	defaultArch         = "amd64"
 	defaultOS           = "linux"
+	maxManifestBodySize = 4 << 20
 )
 
 // imageManifestDispatcher takes the request context and builds the
@@ -240,8 +241,9 @@ func (imh *imageManifestHandler) PutImageManifest(w http.ResponseWriter, r *http
 	}
 
 	var jsonBuf bytes.Buffer
-	if err := copyFullPayload(w, r, &jsonBuf, imh, "image manifest PUT", &imh.Errors); err != nil {
+	if err := copyFullPayload(w, r, &jsonBuf, maxManifestBodySize, imh, "image manifest PUT"); err != nil {
 		// copyFullPayload reports the error if necessary
+		imh.Errors = append(imh.Errors, v2.ErrorCodeManifestInvalid.WithDetail(err.Error()))
 		return
 	}


@@ -111,7 +111,7 @@ func newManifestStoreTestEnv(t *testing.T, name, tag string) *manifestStoreTestE
 		stats: make(map[string]int),
 	}
 
-	manifestDigest, err := populateRepo(t, ctx, truthRepo, name, tag)
+	manifestDigest, err := populateRepo(ctx, t, truthRepo, name, tag)
 	if err != nil {
 		t.Fatalf(err.Error())
 	}
@@ -148,7 +148,7 @@ func newManifestStoreTestEnv(t *testing.T, name, tag string) *manifestStoreTestE
 	}
 }
 
-func populateRepo(t *testing.T, ctx context.Context, repository distribution.Repository, name, tag string) (digest.Digest, error) {
+func populateRepo(ctx context.Context, t *testing.T, repository distribution.Repository, name, tag string) (digest.Digest, error) {
 	m := schema1.Manifest{
 		Versioned: manifest.Versioned{
 			SchemaVersion: 1,


@@ -27,7 +27,7 @@ func (bs *blobStore) Get(ctx context.Context, dgst digest.Digest) ([]byte, error
 		return nil, err
 	}
 
-	p, err := bs.driver.GetContent(ctx, bp)
+	p, err := getContent(ctx, bs.driver, bp)
 	if err != nil {
 		switch err.(type) {
 		case driver.PathNotFoundError:
@@ -37,7 +37,7 @@ func (bs *blobStore) Get(ctx context.Context, dgst digest.Digest) ([]byte, error
 		return nil, err
 	}
 
-	return p, err
+	return p, nil
 }
 
 func (bs *blobStore) Open(ctx context.Context, dgst digest.Digest) (distribution.ReadSeekCloser, error) {


@@ -16,12 +16,12 @@ import (
 func CheckBlobDescriptorCache(t *testing.T, provider cache.BlobDescriptorCacheProvider) {
 	ctx := context.Background()
 
-	checkBlobDescriptorCacheEmptyRepository(t, ctx, provider)
-	checkBlobDescriptorCacheSetAndRead(t, ctx, provider)
-	checkBlobDescriptorCacheClear(t, ctx, provider)
+	checkBlobDescriptorCacheEmptyRepository(ctx, t, provider)
+	checkBlobDescriptorCacheSetAndRead(ctx, t, provider)
+	checkBlobDescriptorCacheClear(ctx, t, provider)
 }
 
-func checkBlobDescriptorCacheEmptyRepository(t *testing.T, ctx context.Context, provider cache.BlobDescriptorCacheProvider) {
+func checkBlobDescriptorCacheEmptyRepository(ctx context.Context, t *testing.T, provider cache.BlobDescriptorCacheProvider) {
 	if _, err := provider.Stat(ctx, "sha384:abc111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111"); err != distribution.ErrBlobUnknown {
 		t.Fatalf("expected unknown blob error with empty store: %v", err)
 	}
@@ -59,7 +59,7 @@ func checkBlobDescriptorCacheEmptyRepository(t *testing.T, ctx context.Context,
 	}
 }
 
-func checkBlobDescriptorCacheSetAndRead(t *testing.T, ctx context.Context, provider cache.BlobDescriptorCacheProvider) {
+func checkBlobDescriptorCacheSetAndRead(ctx context.Context, t *testing.T, provider cache.BlobDescriptorCacheProvider) {
 	localDigest := digest.Digest("sha384:abc111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111")
 	expected := distribution.Descriptor{
 		Digest: "sha256:abc1111111111111111111111111111111111111111111111111111111111111",
@@ -143,7 +143,7 @@ func checkBlobDescriptorCacheSetAndRead(t *testing.T, ctx context.Context, provi
 	}
 }
 
-func checkBlobDescriptorCacheClear(t *testing.T, ctx context.Context, provider cache.BlobDescriptorCacheProvider) {
+func checkBlobDescriptorCacheClear(ctx context.Context, t *testing.T, provider cache.BlobDescriptorCacheProvider) {
 	localDigest := digest.Digest("sha384:def111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111")
 	expected := distribution.Descriptor{
 		Digest: "sha256:def1111111111111111111111111111111111111111111111111111111111111",


@@ -10,15 +10,14 @@ import (
 	"github.com/docker/distribution/registry/storage/driver"
 )
 
-// ErrFinishedWalk is used when the called walk function no longer wants
-// to accept any more values. This is used for pagination when the
-// required number of repos have been found.
-var ErrFinishedWalk = errors.New("finished walk")
+// errFinishedWalk signals an early exit to the walk when the current query
+// is satisfied.
+var errFinishedWalk = errors.New("finished walk")
 
 // Returns a list, or partial list, of repositories in the registry.
 // Because it's a quite expensive operation, it should only be used when building up
 // an initial set of repositories.
-func (reg *registry) Repositories(ctx context.Context, repos []string, last string) (n int, errVal error) {
+func (reg *registry) Repositories(ctx context.Context, repos []string, last string) (n int, err error) {
 	var foundRepos []string
 
 	if len(repos) == 0 {
@@ -30,26 +29,22 @@ func (reg *registry) Repositories(ctx context.Context, repos []string, last stri
 		return 0, err
 	}
 
-	// errFinishedWalk signals an early exit to the walk when the current query
-	// is satisfied.
-	errFinishedWalk := errors.New("finished walk")
-
 	err = Walk(ctx, reg.blobStore.driver, root, func(fileInfo driver.FileInfo) error {
-		filePath := fileInfo.Path()
-
-		// lop the base path off
-		repoPath := filePath[len(root)+1:]
-
-		_, file := path.Split(repoPath)
-		if file == "_layers" {
-			repoPath = strings.TrimSuffix(repoPath, "/_layers")
-			if repoPath > last {
-				foundRepos = append(foundRepos, repoPath)
-			}
-			return ErrSkipDir
-		} else if strings.HasPrefix(file, "_") {
-			return ErrSkipDir
+		err := handleRepository(fileInfo, root, last, func(repoPath string) error {
+			foundRepos = append(foundRepos, repoPath)
+			return nil
+		})
+		if err != nil {
+			return err
 		}
 
 		// if we've filled our array, no need to walk any further
 		if len(foundRepos) == len(repos) {
-			return ErrFinishedWalk
+			return errFinishedWalk
 		}
 
 		return nil
@@ -57,41 +52,102 @@ func (reg *registry) Repositories(ctx context.Context, repos []string, last stri
 	n = copy(repos, foundRepos)
 
-	// Signal that we have no more entries by setting EOF
-	if len(foundRepos) <= len(repos) && err != ErrFinishedWalk {
-		errVal = io.EOF
+	switch err {
+	case nil:
+		// nil means that we completed walk and didn't fill buffer. No more
+		// records are available.
+		err = io.EOF
+	case errFinishedWalk:
+		// more records are available.
+		err = nil
 	}
 
-	return n, errVal
+	return n, err
 }
 
 // Enumerate applies ingester to each repository
 func (reg *registry) Enumerate(ctx context.Context, ingester func(string) error) error {
-	repoNameBuffer := make([]string, 100)
-	var last string
-	for {
-		n, err := reg.Repositories(ctx, repoNameBuffer, last)
-		if err != nil && err != io.EOF {
-			return err
-		}
-
-		if n == 0 {
-			break
-		}
-
-		last = repoNameBuffer[n-1]
-		for i := 0; i < n; i++ {
-			repoName := repoNameBuffer[i]
-			err = ingester(repoName)
-			if err != nil {
-				return err
-			}
-		}
-
-		if err == io.EOF {
-			break
-		}
-	}
-	return nil
+	root, err := pathFor(repositoriesRootPathSpec{})
+	if err != nil {
+		return err
+	}
+
+	err = Walk(ctx, reg.blobStore.driver, root, func(fileInfo driver.FileInfo) error {
+		return handleRepository(fileInfo, root, "", ingester)
+	})
+
+	return err
 }
+
+// lessPath returns true if one path a is less than path b.
+//
+// A component-wise comparison is done, rather than the lexical comparison of
+// strings.
+func lessPath(a, b string) bool {
+	// we provide this behavior by making separator always sort first.
+	return compareReplaceInline(a, b, '/', '\x00') < 0
+}
+
+// compareReplaceInline modifies runtime.cmpstring to replace old with new
+// during a byte-wise comparison.
+func compareReplaceInline(s1, s2 string, old, new byte) int {
+	l := len(s1)
+	if len(s2) < l {
+		l = len(s2)
+	}
+
+	for i := 0; i < l; i++ {
+		c1, c2 := s1[i], s2[i]
+		if c1 == old {
+			c1 = new
+		}
+
+		if c2 == old {
+			c2 = new
+		}
+
+		if c1 < c2 {
+			return -1
+		}
+
+		if c1 > c2 {
+			return +1
+		}
+	}
+
+	if len(s1) < len(s2) {
+		return -1
+	}
+
+	if len(s1) > len(s2) {
+		return +1
+	}
+
+	return 0
+}
+
+// handleRepository calls function fn with a repository path if fileInfo
+// has a path of a repository under root and that it is lexographically
+// after last. Otherwise, it will return ErrSkipDir. This should be used
+// with Walk to do handling with repositories in a storage.
+func handleRepository(fileInfo driver.FileInfo, root, last string, fn func(repoPath string) error) error {
+	filePath := fileInfo.Path()
+
+	// lop the base path off
+	repo := filePath[len(root)+1:]
+
+	_, file := path.Split(repo)
+	if file == "_layers" {
+		repo = strings.TrimSuffix(repo, "/_layers")
+		if lessPath(last, repo) {
+			if err := fn(repo); err != nil {
+				return err
+			}
+		}
+
+		return ErrSkipDir
+	} else if strings.HasPrefix(file, "_") {
+		return ErrSkipDir
+	}
+
+	return nil
+}
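
To make the component-wise ordering above concrete, here is a small self-contained sketch that reuses the two comparison functions verbatim: byte-wise, `+` (0x2b) sorts before `/` (0x2f), but once the separator is remapped to `\x00`, `foo/bar` orders before `foo+bar`:

```
package main

import "fmt"

// lessPath and compareReplaceInline are copied from the diff above.
func lessPath(a, b string) bool {
	return compareReplaceInline(a, b, '/', '\x00') < 0
}

func compareReplaceInline(s1, s2 string, old, new byte) int {
	l := len(s1)
	if len(s2) < l {
		l = len(s2)
	}
	for i := 0; i < l; i++ {
		c1, c2 := s1[i], s2[i]
		if c1 == old {
			c1 = new
		}
		if c2 == old {
			c2 = new
		}
		if c1 < c2 {
			return -1
		}
		if c1 > c2 {
			return +1
		}
	}
	if len(s1) < len(s2) {
		return -1
	}
	if len(s1) > len(s2) {
		return +1
	}
	return 0
}

func main() {
	// Byte-wise string comparison puts '+' before '/'...
	fmt.Println("foo+bar" < "foo/bar") // true
	// ...but component-wise, "foo/bar" must come first, because the
	// separator is treated as sorting before every other byte.
	fmt.Println(lessPath("foo/bar", "foo+bar")) // true
}
```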


@@ -1,6 +1,7 @@
 package storage
 
 import (
+	"fmt"
 	"io"
 	"testing"
@@ -112,7 +113,28 @@ func TestCatalogInParts(t *testing.T) {
 	if !testEq(p, env.expected[chunkLen*2:chunkLen*3-1], numFilled) {
 		t.Errorf("Expected catalog third chunk err")
 	}
+}
+
+func TestCatalogEnumerate(t *testing.T) {
+	env := setupFS(t)
+
+	var repos []string
+	repositoryEnumerator := env.registry.(distribution.RepositoryEnumerator)
+	err := repositoryEnumerator.Enumerate(env.ctx, func(repoName string) error {
+		repos = append(repos, repoName)
+		return nil
+	})
+	if err != nil {
+		t.Errorf("Expected catalog enumerate err")
+	}
+
+	if len(repos) != len(env.expected) {
+		t.Errorf("Expected catalog enumerate doesn't have correct number of values")
+	}
+
+	if !testEq(repos, env.expected, len(env.expected)) {
+		t.Errorf("Expected catalog enumerate not over all values")
+	}
 }
 
 func testEq(a, b []string, size int) bool {
@@ -123,3 +145,42 @@ func testEq(a, b []string, size int) bool {
 	}
 	return true
 }
+
+func setupBadWalkEnv(t *testing.T) *setupEnv {
+	d := newBadListDriver()
+	ctx := context.Background()
+	registry, err := NewRegistry(ctx, d, BlobDescriptorCacheProvider(memory.NewInMemoryBlobDescriptorCacheProvider()), EnableRedirect)
+	if err != nil {
+		t.Fatalf("error creating registry: %v", err)
+	}
+
+	return &setupEnv{
+		ctx:      ctx,
+		driver:   d,
+		registry: registry,
+	}
+}
+
+type badListDriver struct {
+	driver.StorageDriver
+}
+
+var _ driver.StorageDriver = &badListDriver{}
+
+func newBadListDriver() *badListDriver {
+	return &badListDriver{StorageDriver: inmemory.New()}
+}
+
+func (d *badListDriver) List(ctx context.Context, path string) ([]string, error) {
+	return nil, fmt.Errorf("List error")
+}
+
+func TestCatalogWalkError(t *testing.T) {
+	env := setupBadWalkEnv(t)
+	p := make([]string, 1)
+
+	_, err := env.registry.Repositories(env.ctx, p, "")
+	if err == io.EOF {
+		t.Errorf("Expected catalog driver list error")
+	}
+}


@@ -97,7 +97,7 @@ func MarkAndSweep(ctx context.Context, storageDriver driver.StorageDriver, regis
 	})
 
 	if err != nil {
-		return fmt.Errorf("failed to mark: %v\n", err)
+		return fmt.Errorf("failed to mark: %v", err)
 	}
 
 	// sweep
@@ -125,7 +125,7 @@
 	}
 	err = vacuum.RemoveBlob(string(dgst))
 	if err != nil {
-		return fmt.Errorf("failed to delete blob %s: %v\n", dgst, err)
+		return fmt.Errorf("failed to delete blob %s: %v", dgst, err)
 	}
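
The two changes above satisfy the golint rule that Go error strings should not end with punctuation or a newline, since callers usually wrap or join them. A tiny sketch of the convention (the error texts are illustrative):

```
package main

import (
	"errors"
	"fmt"
)

func main() {
	// Error strings stay lower-case with no trailing "\n", so wrapping
	// them does not embed stray line breaks in the combined message.
	err := errors.New("failed to mark")
	fmt.Println(fmt.Errorf("garbage collection: %v", err))
}
```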
} }

registry/storage/io.go (new file)

@@ -0,0 +1,71 @@
package storage

import (
	"errors"
	"io"
	"io/ioutil"

	"github.com/docker/distribution/context"
	"github.com/docker/distribution/registry/storage/driver"
)

const (
	maxBlobGetSize = 4 << 20
)

func getContent(ctx context.Context, driver driver.StorageDriver, p string) ([]byte, error) {
	r, err := driver.Reader(ctx, p, 0)
	if err != nil {
		return nil, err
	}

	return readAllLimited(r, maxBlobGetSize)
}

func readAllLimited(r io.Reader, limit int64) ([]byte, error) {
	r = limitReader(r, limit)
	return ioutil.ReadAll(r)
}

// limitReader returns a new reader limited to n bytes. Unlike io.LimitReader,
// this returns an error when the limit reached.
func limitReader(r io.Reader, n int64) io.Reader {
	return &limitedReader{r: r, n: n}
}

// limitedReader implements a reader that errors when the limit is reached.
//
// Partially cribbed from net/http.MaxBytesReader.
type limitedReader struct {
	r   io.Reader // underlying reader
	n   int64     // max bytes remaining
	err error     // sticky error
}

func (l *limitedReader) Read(p []byte) (n int, err error) {
	if l.err != nil {
		return 0, l.err
	}
	if len(p) == 0 {
		return 0, nil
	}
	// If they asked for a 32KB byte read but only 5 bytes are
	// remaining, no need to read 32KB. 6 bytes will answer the
	// question of the whether we hit the limit or go past it.
	if int64(len(p)) > l.n+1 {
		p = p[:l.n+1]
	}
	n, err = l.r.Read(p)

	if int64(n) <= l.n {
		l.n -= int64(n)
		l.err = err
		return n, err
	}

	n = int(l.n)
	l.n = 0

	l.err = errors.New("storage: read exceeds limit")
	return n, l.err
}
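
A quick sketch of how this reader differs from `io.LimitReader`: the standard library reader silently truncates at the limit, while the reader above surfaces an error once the limit is exceeded. The two functions are copied verbatim from the new file; the inputs are illustrative:

```
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"io/ioutil"
)

// limitReader and limitedReader are copied from registry/storage/io.go above.
func limitReader(r io.Reader, n int64) io.Reader {
	return &limitedReader{r: r, n: n}
}

type limitedReader struct {
	r   io.Reader
	n   int64
	err error
}

func (l *limitedReader) Read(p []byte) (n int, err error) {
	if l.err != nil {
		return 0, l.err
	}
	if len(p) == 0 {
		return 0, nil
	}
	if int64(len(p)) > l.n+1 {
		p = p[:l.n+1]
	}
	n, err = l.r.Read(p)
	if int64(n) <= l.n {
		l.n -= int64(n)
		l.err = err
		return n, err
	}
	n = int(l.n)
	l.n = 0
	l.err = errors.New("storage: read exceeds limit")
	return n, l.err
}

func main() {
	// io.LimitReader silently truncates a 10-byte payload at 4 bytes...
	b, _ := ioutil.ReadAll(io.LimitReader(bytes.NewReader(make([]byte, 10)), 4))
	fmt.Println(len(b)) // 4, with no error

	// ...whereas limitReader reports that the limit was exceeded.
	_, err := ioutil.ReadAll(limitReader(bytes.NewReader(make([]byte, 10)), 4))
	fmt.Println(err) // storage: read exceeds limit
}
```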


@@ -8,4 +8,4 @@ var Package = "github.com/docker/distribution"
 // the latest release tag by hand, always suffixed by "+unknown". During
 // build, it will be replaced by the actual version. The value here will be
 // used if the registry is run after a go get based install.
-var Version = "v2.4.1+unknown"
+var Version = "v2.5.2+unknown"