Push, pull, and delete of manifest files in the registry have been implemented
on top of the storage services. Basic workflows, including reporting of missing
manifests, are tested, covering the various proposed response codes. Common
testing functionality has been collected into shared methods. A test suite may
be emerging, but it might be better to capture more edge cases (such as
resumable upload, range requests, etc.) before we commit to a full approach.
To support clearer test cases and simpler handler methods, an application-aware
urlBuilder has been added. We may want to export this functionality for use in
the client, which could allow us to abstract away from gorilla/mux.
A few error codes have been added to fill in error conditions missing from the
proposal. Some use cases have exposed problems with the approach to error
reporting that require more work to reconcile. To resolve this, the mapping of
Go errors into error types needs to be pulled out of the handlers and into the
application. We also need to move to type-based errors, with rich information,
rather than value-based errors. ErrorHandlers will probably replace the
http.Handlers to make this work correctly.
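A minimal sketch of that direction, assuming hypothetical Error and
ErrorHandler definitions (none of these names or fields are final):

```go
package registry

import (
	"encoding/json"
	"net/http"
)

// Error is a hypothetical rich, type-based error carrying a machine-readable
// code plus structured detail for the response body.
type Error struct {
	Code    string      `json:"code"`
	Message string      `json:"message"`
	Detail  interface{} `json:"detail,omitempty"`
}

func (e Error) Error() string { return e.Code + ": " + e.Message }

// ErrorHandler is a handler that can return an error, letting the application
// (rather than each handler) decide how Go errors map to HTTP responses.
type ErrorHandler func(w http.ResponseWriter, r *http.Request) error

// appHandler adapts an ErrorHandler to http.Handler, centralizing the
// error-to-response mapping in one place.
func appHandler(h ErrorHandler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if err := h(w, r); err != nil {
			w.WriteHeader(http.StatusBadRequest) // a real mapping would inspect the error type
			json.NewEncoder(w).Encode(err)
		}
	})
}
```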
Unrelated to the above, the "length" parameter has been migrated to "size" for
completing layer uploads. This change should have gone out before, but these
diffs ended up being coupled with the parameter name change due to updates to
the layer unit tests.
To provide rich error reporting during manifest pushes, the storage layer's
verifyManifest stage has been modified to provide the necessary granularity.
Along with this comes a partial shift to explicit error types, which represents
a small move in a larger refactoring of error handling. Signature methods from
libtrust have been added to the various Manifest types to clean up the
verification code.
A primitive deletion implementation for manifests has been added. It only
deletes the manifest file and doesn't attempt to add some of the richer
features requested, such as layer cleanup.
Previously, discussions were still ongoing about different storage layouts that
could support various access models. This changeset removes a layer of
indirection that was in place due to earlier designs. Effectively, this both
associates a layer with a named repository and ensures that content cannot be
accessed across repositories. It also moves to rely on tarsum as a true
content-addressable identifier, removing a layer of indirection during blob
resolution.
This change implements the first pass at image manifest storage on top of the
storagedriver. Very similar to LayerService, it is much simpler due to the
lower complexity of pushing and pulling images.
Various components are still missing, such as detailed error reporting on
missing layers during verification, but the base functionality is present.
This changeset moves the Manifest type into the storage package to make the
type accessible to client and registry without import cycles. The structure of
the manifest was also changed to accurately reflect the stages of the signing
process. A straw-man Manifest.Sign method has been added to start testing this
concept out, but it will probably be accompanied by the more important
SignedManifest.Verify method as the security model develops.
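For illustration only, the types might take roughly the following shape; the
fields and method signatures shown here are assumptions, not the actual
definitions:

```go
// Hypothetical shapes for the manifest types discussed above.
type FSLayer struct {
	BlobSum string `json:"blobSum"` // tarsum of the referenced layer blob
}

type Manifest struct {
	Name     string    `json:"name"` // repository name
	Tag      string    `json:"tag"`
	FSLayers []FSLayer `json:"fsLayers"`
}

// SignedManifest wraps a Manifest with the raw payload that was signed so
// that signatures can be verified against the exact signed bytes.
type SignedManifest struct {
	Manifest
	Raw []byte `json:"-"`
}

// The signing entry points would produce and check a JWS over the serialized
// manifest; these signatures are purely illustrative:
//
//	func (m *Manifest) Sign(key libtrust.PrivateKey) (*SignedManifest, error)
//	func (sm *SignedManifest) Verify() ([]libtrust.PublicKey, error)
```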
This is probably the start of a concerted effort to consolidate types across
the client and server portions of the code base, but we may want to see how
such handy types as Manifest and SignedManifest would work in docker core.
The http API has its first set of endpoints to implement the core aspects of
fetching and uploading layers. Uploads can be started and completed in a single
chunk and the content can be fetched via tarsum. Most proposed error conditions
should be represented but edge cases likely remain.
In this version, note that the layers are still called layers, even though the
routes are pointing to blobs. This will change with backend refactoring over
the next few weeks.
The unit tests are a bit of a shambles, but these need to be carefully written
along with the core specification process. As the client-server interaction
solidifies, we can port this into a verification suite for registry providers.
This updates API error codes to coincide with changes to the proposal. Mostly,
redundant error codes were merged and missing ones were added. The set in the
main errors.go file will flow back into the specification.
A test case has been added to ensure ErrorCodeUnknown is included in marshaled
json.
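A sketch of what such a test can look like, using stand-in Error and ErrorCode
types rather than the package's actual definitions:

```go
package v2_test

import (
	"encoding/json"
	"strings"
	"testing"
)

// ErrorCode and Error are illustrative stand-ins used only to show the shape
// of the test.
type ErrorCode int

const ErrorCodeUnknown ErrorCode = iota

func (ec ErrorCode) MarshalText() ([]byte, error) { return []byte("UNKNOWN"), nil }

type Error struct {
	Code    ErrorCode `json:"code"`
	Message string    `json:"message,omitempty"`
}

func TestErrorCodeUnknownMarshaled(t *testing.T) {
	p, err := json.Marshal(Error{Code: ErrorCodeUnknown, Message: "unexpected condition"})
	if err != nil {
		t.Fatalf("unexpected error marshaling: %v", err)
	}
	if !strings.Contains(string(p), "UNKNOWN") {
		t.Fatalf("expected UNKNOWN in marshaled error: %s", string(p))
	}
}
```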
This change separates out the remote file reader functionality from the layer
representation data. More importantly, issues with seeking have been fixed and
thoroughly tested.
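The core of that seeking behavior can be sketched as an io.ReadSeeker that
tracks an offset and reopens the remote stream when the offset moves; the
rangedDriver interface here is an assumption standing in for the storage
driver:

```go
package storage

import (
	"fmt"
	"io"
)

// rangedDriver stands in for the storage driver; the method shown is an
// assumption for this sketch, not the real storagedriver interface.
type rangedDriver interface {
	ReadAt(path string, offset int64) (io.ReadCloser, error)
}

// fileReader implements io.ReadSeeker over remote content by tracking an
// offset and lazily (re)opening the underlying stream.
type fileReader struct {
	driver rangedDriver
	path   string
	size   int64 // total size of the remote file, known up front

	offset int64
	rc     io.ReadCloser // current stream; nil until the next Read
}

func (fr *fileReader) Read(p []byte) (int, error) {
	if fr.rc == nil {
		rc, err := fr.driver.ReadAt(fr.path, fr.offset)
		if err != nil {
			return 0, err
		}
		fr.rc = rc
	}
	n, err := fr.rc.Read(p)
	fr.offset += int64(n)
	return n, err
}

func (fr *fileReader) Seek(offset int64, whence int) (int64, error) {
	newOffset := fr.offset
	switch whence {
	case io.SeekStart:
		newOffset = offset
	case io.SeekCurrent:
		newOffset += offset
	case io.SeekEnd:
		newOffset = fr.size + offset
	default:
		return 0, fmt.Errorf("invalid whence: %d", whence)
	}
	if newOffset < 0 {
		return 0, fmt.Errorf("cannot seek before start of file")
	}
	if newOffset != fr.offset && fr.rc != nil {
		// Invalidate the open stream; the next Read reopens at the new offset.
		fr.rc.Close()
		fr.rc = nil
	}
	fr.offset = newOffset
	return fr.offset, nil
}
```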
Goroutines running pullLayer block while sending the error of a pull
operation. If we abort the pull without notifying them about the cancelation,
they will get stuck forever. To avoid this possible leak, cancelCh was
introduced. In case of an abort, we close that channel to notify the other
goroutines about the cancelation.
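The pattern, with illustrative names, is for each goroutine to select between
delivering its result and observing the closed cancelation channel:

```go
package pull

type layer struct{ id string } // stand-in for the real layer type

// pullAll runs one pullLayer goroutine per layer and returns the first error.
// On failure it closes cancelCh so goroutines still trying to report their
// result are released instead of blocking forever.
func pullAll(layers []layer, pullLayer func(layer) error) error {
	errCh := make(chan error)
	cancelCh := make(chan struct{})

	for _, l := range layers {
		go func(l layer) {
			err := pullLayer(l)
			select {
			case errCh <- err:
				// result delivered to the coordinator
			case <-cancelCh:
				// pull aborted: drop the result rather than block forever
			}
		}(l)
	}

	for range layers {
		if err := <-errCh; err != nil {
			close(cancelCh) // notify every remaining goroutine of the abort
			return err
		}
	}
	return nil
}
```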
Fixes/tests listing for keys beginning with "/"
No longer extraneously wraps Closers in ioutil.NopClosers
Uses omitempty for all ipc struct type fields
Mostly, we've made superficial changes to the storage package to start using
the Digest type. Many of the exported interface methods have been changed to
reflect this in addition to changes in the way layer uploads will be initiated.
Further work here is necessary but will come with a separate PR.
The Digest type will be fairly central for blob and layer management. The type
presented in this package provides a number of core features that should enable
reliable use within the registry. This commit will be followed by others that
convert the storage layer and webapp to use this type as the primary layer/blob
CAS identifier.
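As an illustration of the idea only (the real type's API may differ), a digest
can be modeled as an algorithm-prefixed hex string:

```go
package digest

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// Digest is a content identifier of the form "<algorithm>:<hex>", e.g.
// "sha256:<hex>". This sketch is illustrative, not the package's API.
type Digest string

// FromBytes computes a sha256-based digest over p.
func FromBytes(p []byte) Digest {
	sum := sha256.Sum256(p)
	return Digest("sha256:" + hex.EncodeToString(sum[:]))
}

// Parse validates the basic "<algorithm>:<hex>" structure.
func Parse(s string) (Digest, error) {
	i := strings.Index(s, ":")
	if i <= 0 || i == len(s)-1 {
		return "", fmt.Errorf("invalid digest format: %q", s)
	}
	return Digest(s), nil
}

// Algorithm returns the portion before the colon, e.g. "sha256".
func (d Digest) Algorithm() string { return strings.SplitN(string(d), ":", 2)[0] }

// Hex returns the encoded portion after the colon (assumes a well-formed digest).
func (d Digest) Hex() string { return strings.SplitN(string(d), ":", 2)[1] }
```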
To bring the implementation inline with the specification, the names and
structure of the API routes have been updated.
The overloaded term "image" has been replaced with the term "manifest", which
may also be known as "image manifest". The desire for the layer storage to be
more of a general blob storage is reflected in moving from "layer" API prefixes
to "blob". The "tarsum" path parameter has been replaced by a more general
"digest" parameter and is no longer required to start uploads. Another set of
changes will come along to support this change at the storage service layer.
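Concretely, the renamed endpoints look roughly like the following; the exact
templates may differ from the final specification:

```go
// Illustrative route templates after the rename (mux-style path parameters).
const (
	manifestRoute   = "/v2/{name}/manifests/{tag}" // formerly the "image" routes
	blobRoute       = "/v2/{name}/blobs/{digest}"  // formerly "layer", keyed by tarsum
	blobUploadRoute = "/v2/{name}/blobs/uploads/"  // a digest is no longer required to start
)
```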
This only works for a specific whitelist of error types, which is
currently all errors in the storagedriver package.
Also improves the storagedriver tests to enforce that proper error types are
returned.
This change contains the initial implementation of the LayerService to power
layer push and pulls on the storagedriver. The interfaces presented in this
package will be used by the http application to drive most features around
efficient pulls and resumable pushes.
The file storage/layer.go defines the interface interactions. LayerService is
the root type and supports methods to access Layer and LayerUpload objects.
Pull operations are supported with LayerService.Fetch and push operations are
supported with LayerService.Upload and LayerService.Resume. Reads and writes of
layers are split between Layer and LayerUpload, respectively.
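A rough sketch of the shapes described here; the parameter lists and return
types are assumptions, with the actual definitions living in storage/layer.go:

```go
package storage

import "io"

// LayerService is the root type for layer push and pull operations.
type LayerService interface {
	// Fetch returns a readable Layer for the given tarsum (pull path).
	Fetch(name, tarsum string) (Layer, error)

	// Upload begins a new layer upload (push path).
	Upload(name string) (LayerUpload, error)

	// Resume continues a previously started upload.
	Resume(name, uploadUUID string) (LayerUpload, error)
}

// Layer is the read side: a seekable stream of verified layer content.
type Layer interface {
	io.ReadSeeker
	io.Closer
}

// LayerUpload is the write side: content is written, then Finish verifies it
// against the expected size and digest before committing.
type LayerUpload interface {
	io.WriteCloser
	Finish(size int64, digest string) (Layer, error)
}
```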
LayerService is implemented internally with the layerStore object, which takes
a storagedriver.StorageDriver and a pathMapper instance.
LayerUploadState is currently exported and will likely continue to be as the
interaction between it and layerUploadStore is better understood. Likely, the
layerUploadStore lifecycle and implementation will be deferred to the
application.
Image pushes and pulls will be implemented in a similar manner without the
discrete, persistent upload.
Much of this change is in place to get something running and working. Caveats
of this change include the following:
1. Layer upload state storage is implemented on the local filesystem, separate
from the storage driver. This must be replaced with the proper backend
and other state storage. This can be removed when we implement resumable
hashing and tarsum calculations to avoid backend roundtrips.
2. Error handling is rather bespoke at this time. The http API implementation
should really dictate the error return structure for the future, so we
intend to refactor this heavily to support these errors. We'd also like to
collect production data to understand how failures happen in the system as
a whole before moving to a particular edict around error handling.
3. The layerUploadStore, which manages layer upload storage and state, is not
currently exported. This will likely end up being split, with the file
management portion being pointed at the storagedriver and the state storage
elsewhere.
4. Access Control provisions are nearly completely missing from this change.
There are details around how layerindex lookup works that are related to
access controls. As the auth portions of the new API take shape, these
provisions will become more clear.
Please see TODOs for details and individual recommendations.
A layer can only be pushed/pulled if the layer preceding it by the
length of the push/pull window has been successfully pushed.
An error returned from pushing or pulling any layer will cause the full
operation to be aborted.
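A sketch of how such a window could be coordinated, with hypothetical names
and a simple completion-signal chain:

```go
package xfer

// transferWindowed pushes (or pulls) layers so that layer i starts only after
// layer i-window has completed successfully; a failure propagates down the
// chain and aborts the layers that have not yet started. Names are
// illustrative, and window must be >= 1.
func transferWindowed(layers []string, window int, do func(string) error) error {
	done := make([]chan error, len(layers))
	for i := range done {
		done[i] = make(chan error, 1)
	}

	for i := range layers {
		go func(i int) {
			// Wait for the layer `window` positions earlier to finish.
			if i >= window {
				if err := <-done[i-window]; err != nil {
					done[i] <- err // propagate the abort without transferring
					return
				}
			}
			done[i] <- do(layers[i])
		}(i)
	}

	// The last `window` channels transitively cover every earlier layer.
	start := len(layers) - window
	if start < 0 {
		start = 0
	}
	var firstErr error
	for i := start; i < len(layers); i++ {
		if err := <-done[i]; err != nil && firstErr == nil {
			firstErr = err
		}
	}
	return firstErr
}
```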