This change provides a toolkit for intercepting registry calls, such as
`ManifestService.Get` and `LayerUpload.Finish`, with the goal of easily
supporting callbacks and listeners. The package proxies returned objects
through a decorate function before handing them back to the caller, allowing
injection points to be chosen carefully.
Use cases range from notification systems all the way to cache integration.
While such a tool isn't strictly necessary, it reduces the amount of code
required to accomplish such tasks, deferring the tricky aspects to the
decorator package.
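As a sketch of the idea, with illustrative names rather than the package's
actual decorator API, wrapping a service looks something like this:

```go
package main

import "fmt"

// Manifest and ManifestService are simplified stand-ins for the real types.
type Manifest struct{ Name, Tag string }

type ManifestService interface {
	Get(name, tag string) (*Manifest, error)
}

// listenerManifestService proxies Get through the underlying service and
// fires a callback on success: an injection point chosen via decoration.
type listenerManifestService struct {
	ManifestService
	onGet func(name, tag string)
}

func (ls listenerManifestService) Get(name, tag string) (*Manifest, error) {
	m, err := ls.ManifestService.Get(name, tag)
	if err == nil {
		ls.onGet(name, tag)
	}
	return m, err
}

// memoryManifests is a trivial backing implementation for the example.
type memoryManifests struct{}

func (memoryManifests) Get(name, tag string) (*Manifest, error) {
	return &Manifest{Name: name, Tag: tag}, nil
}

func main() {
	var ms ManifestService = listenerManifestService{
		ManifestService: memoryManifests{},
		onGet:           func(name, tag string) { fmt.Println("manifest fetched:", name, tag) },
	}
	ms.Get("library/ubuntu", "latest")
}
```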
Signed-off-by: Stephen J Day <stephen.day@docker.com>
In support of making the storage API ready for supporting notifications and
mirroring, we've begun the process of paring down the storage model. The
process started by creating a central Registry interface. From there, the
common name argument on the LayerService and ManifestService was factored into
a Repository interface. The rest of the changes directly follow from this.
An interface wishlist was added, suggesting a direction to take the registry
package that should support the distribution project's future goals. As these
objects move out of the storage package and we implement a Registry backed by
the HTTP client, these design choices will start to be validated.
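In rough sketch form, the factoring might look like this (simplified; the
actual interface definitions in the storage package may differ):

```go
package registry

// ManifestService and LayerService elide their methods here; the point is
// that the repository name is no longer an argument on each call.
type ManifestService interface{}
type LayerService interface{}

// Registry is the central entry point; repositories hang off of it.
type Registry interface {
	Repository(name string) Repository
}

// Repository scopes manifest and layer access to a single named repository,
// absorbing the common name argument the services used to take.
type Repository interface {
	Name() string
	Manifests() ManifestService
	Layers() LayerService
}
```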
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This change refactors the storage backend to use the new path layout. To
facilitate this, manifest storage has been separated into a revision store and
tag store, supported by a more general blob store. The blob store is a hybrid
object, effectively providing both small object access, keyed by content
address, as well as methods that can be used to manage and traverse links to
underlying blobs. This covers common operations used in the revision store and
tag store, such as linking and traversal. The blob store can also be updated to
better support layer reading but this refactoring has been left for another
day.
The revision store and tag store support the manifest store's compound view of
data. These underlying stores provide facilities for richer access models, such
as content-addressable access and a richer tagging model. The highlight of this
change is the ability to sign a manifest from different hosts and have the
registry merge and serve those signatures as part of the manifest package.
Various other items, such as the delegate layer handler, were updated to use
the blob store or other mechanisms more directly, in keeping with these changes.
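A sketch of the hybrid blob store's shape, with hypothetical method names
standing in for the real ones:

```go
package storage

// blobStore sketches the hybrid described above: small-object access keyed
// by content address, plus link management rooted at named paths. Method
// names here are illustrative, not the actual implementation.
type blobStore interface {
	// Content-addressed small-object access.
	Get(dgst string) ([]byte, error)
	Put(p []byte) (dgst string, err error)

	// Link management: the revision and tag stores record pointers to
	// blobs at named paths rather than duplicating blob content.
	Link(path, dgst string) error
	Resolve(path string) (dgst string, err error)
}
```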
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Several requirements for storing registry data have been compiled and the
backend layout has been refactored to comply. Specifically, we now store most
data as blobs that are linked from repositories. All data access goes through
repositories. Manifest updates are no longer destructive and support
references by digest or tag. Signatures for manifests are now stored externally
to the manifest payload, allowing signatures posted at different times to be
merged.
The design is detailed in the documentation for pathMapper.
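As a hedged illustration of the kind of mapping pathMapper performs, with an
indicative rather than exact layout:

```go
package storage

import "fmt"

// manifestRevisionLinkPath sketches the mapping pathMapper performs: a typed
// request (repository name plus revision digest) becomes a backend path that
// links to a blob, rather than storing the manifest payload in place. The
// exact layout is documented on pathMapper; this shape is illustrative only.
func manifestRevisionLinkPath(root, name, dgst string) string {
	return fmt.Sprintf("%s/repositories/%s/manifests/revisions/%s/link", root, name, dgst)
}
```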
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Detecting tar files and then falling back when calculating digests turned out
to be fairly unreliable. The implementation was likely broken for content that
was not a tarfile. Also, for the registry's use case, it is really not needed.
This functionality has been removed in FromReader and FromBytes. FromTarArchive
has been added for convenience.
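To illustrate the split, here are simplified stand-ins for the resulting
functions (not the digest package's actual code):

```go
package digest

import (
	"crypto/sha256"
	"fmt"
	"io"
)

// FromReader digests arbitrary content as-is; there is no tar sniffing.
func FromReader(rd io.Reader) (string, error) {
	h := sha256.New()
	if _, err := io.Copy(h, rd); err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", h.Sum(nil)), nil
}

// FromBytes is the convenience form over in-memory content.
func FromBytes(p []byte) (string, error) {
	sum := sha256.Sum256(p)
	return fmt.Sprintf("sha256:%x", sum[:]), nil
}

// FromTarArchive would sit alongside these as the explicit, opt-in entry
// point for tar content, replacing the removed detection logic.
```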
Signed-off-by: Stephen J Day <stephen.day@docker.com>
As we get closer to release, we need to ensure that builds are repeatable.
Godep provides a workable solution to managing dependencies in Go to support
this requirement. This commit should be bolstered by updates to documentation
and build configuration.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Because the error check was guarded, a nil Upload on the handler was getting
through to unexpected branches. This change handles the missing upload
directly, ensuring it is set as expected.
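In sketch form, with hypothetical handler names, the fix amounts to:

```go
package main

import "fmt"

// layerUploadHandler is a cut-down stand-in; field and method names are
// hypothetical, not the handler's actual code.
type layerUploadHandler struct {
	Upload interface{}
}

func (h *layerUploadHandler) respondError(msg string) { fmt.Println("error:", msg) }

// serve handles the missing upload directly, so no later branch runs with
// a nil Upload.
func (h *layerUploadHandler) serve() {
	if h.Upload == nil {
		h.respondError("upload unknown")
		return
	}
	// ... branches that may assume h.Upload is set ...
}

func main() {
	(&layerUploadHandler{}).serve()
}
```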
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Most of this change follows from the modifications to the storage api. The
driving factor is the separation of layerUploadState from the storage backend,
leaving it to the web application to store and update it. As part of the
updates to meet changes in the storage api, support for the size parameter has
been completely removed.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This refactors the hmac state token to take control of the layerUploadState
json message, which has been removed from the storage backend. It also moves
away from the concept of a LayerUploadStateStore callback object, which was
short-lived. This allows the upload offset to be managed by the web application
logic in the face of an inconsistent backend. By controlling the upload offset
externally, we reduce the possibility of misreporting upload state to a client.
We may still want to modify the way this works after getting production
experience.
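A sketch of the token mechanics, with illustrative names and encoding choices,
shows why tampering with the offset is detectable:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"errors"
	"fmt"
	"time"
)

// layerUploadState is now owned by the web application, not the backend.
type layerUploadState struct {
	Name      string
	UUID      string
	Offset    int64
	StartedAt time.Time
}

type hmacKey []byte

// packUploadState serializes the state and prepends an HMAC so the client
// cannot tamper with the offset it echoes back to us.
func (k hmacKey) packUploadState(s layerUploadState) (string, error) {
	p, err := json.Marshal(s)
	if err != nil {
		return "", err
	}
	mac := hmac.New(sha256.New, k)
	mac.Write(p)
	return base64.URLEncoding.EncodeToString(append(mac.Sum(nil), p...)), nil
}

// unpackUploadState verifies the HMAC before trusting the embedded offset.
func (k hmacKey) unpackUploadState(token string) (layerUploadState, error) {
	var s layerUploadState
	raw, err := base64.URLEncoding.DecodeString(token)
	if err != nil || len(raw) < sha256.Size {
		return s, errors.New("invalid upload state token")
	}
	mac := hmac.New(sha256.New, k)
	mac.Write(raw[sha256.Size:])
	if !hmac.Equal(mac.Sum(nil), raw[:sha256.Size]) {
		return s, errors.New("invalid upload state token")
	}
	return s, json.Unmarshal(raw[sha256.Size:], &s)
}

func main() {
	key := hmacKey("secret")
	tok, _ := key.packUploadState(layerUploadState{Name: "library/ubuntu", UUID: "abc", Offset: 1024, StartedAt: time.Now()})
	s, err := key.unpackUploadState(tok)
	fmt.Println(s.Offset, err)
}
```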
Signed-off-by: Stephen J Day <stephen.day@docker.com>
To smooth initial implementation, uploads were spooled to local file storage,
validated, then pushed to remote storage. That approach was flawed in that it
prevented easy clustering of registry services that share a remote storage
backend. The original plan was to implement resumable hashes then implement
remote upload storage. After some thought, it was found to be better to get
remote spooling working, then optimize with resumable hashes.
Moving to this approach has tradeoffs: after storing the complete upload
remotely, the node must fetch the content and validate it before moving it to
the final location. This can double bandwidth usage to the remote backend.
Modifying the verification and upload code to store intermediate hashes should
be trivial once the layer digest format has settled.
The largest changes for users of the storage package (mostly the registry app)
are the LayerService interface and the LayerUpload interface. The LayerService
now takes qualified repository names to start and resume uploads. As a
corollary, the concept of LayerUploadState has been completely removed,
exposing all aspects
of that state as part of the LayerUpload object. The LayerUpload object has
been modified to work as an io.WriteSeeker and includes a StartedAt time, to
allow for upload timeout policies. Finish now only requires a digest, eliding
the requirement for a size parameter.
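In sketch form, the resulting interfaces look roughly like this (simplified,
with a stand-in digest type):

```go
package storage

import (
	"io"
	"time"
)

// Digest is a stand-in for the digest type used by the storage package.
type Digest string

// Layer elides its methods here.
type Layer interface{}

// LayerService takes qualified repository names to start and resume uploads;
// upload state is no longer a separate, externally stored object.
type LayerService interface {
	Upload(name string) (LayerUpload, error)
	Resume(name, uuid string) (LayerUpload, error)
}

// LayerUpload exposes all upload state directly and behaves as an
// io.WriteSeeker. Finish takes only a digest; the size parameter is gone.
type LayerUpload interface {
	io.WriteSeeker
	io.Closer

	Name() string
	UUID() string
	StartedAt() time.Time // enables upload timeout policies

	Finish(digest Digest) (Layer, error)
	Cancel() error
}
```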
Resource cleanup has taken a turn for the better. Resources are cleaned up
after successful uploads and during a cancel call. Admittedly, this is probably
not completely where we want to be. It's recommended that we bolster this with a
periodic driver utility script that scans for partial uploads and deletes the
underlying data. As a small benefit, we can leave these around to better
understand how and why these uploads are failing, at the cost of some extra
disk space.
Many other changes follow from the changes above. The webapp needs to be
updated to meet the new interface requirements.
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This change updates the path mapper to be able to specify upload management
locations. These include a startedat file, which contains the RFC3339-formatted
start time of the upload, and the actual data file.
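For illustration, recording the start time might look like the following,
where the path and helper are stand-ins rather than the real code:

```go
package main

import (
	"fmt"
	"time"
)

// putContent stands in for the storage driver's content-writing method.
func putContent(path string, content []byte) error {
	fmt.Printf("wrote %q to %s\n", content, path)
	return nil
}

func main() {
	// The startedat file records when the upload began, RFC3339-formatted,
	// so timeout policies can be enforced later. The path shown is
	// illustrative; the real one comes from the path mapper.
	startedAtPath := "/v2/repositories/library/ubuntu/uploads/<uuid>/startedat"
	if err := putContent(startedAtPath, []byte(time.Now().UTC().Format(time.RFC3339))); err != nil {
		panic(err)
	}
}
```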
Signed-off-by: Stephen J Day <stephen.day@docker.com>
While reading from the input in WriteStream, the inmemory driver can deadlock
if the reader is backed by the same driver instance. To fix this, the write
lock is
released before reading into a local buffer. The lock is re-acquired to
finish the actual write.
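The shape of the fix, sketched against a cut-down stand-in for the driver:

```go
package main

import (
	"bytes"
	"errors"
	"io"
	"io/ioutil"
	"sync"
)

// driver is a cut-down stand-in for the inmemory storage driver.
type driver struct {
	mutex sync.RWMutex
	files map[string][]byte
}

// WriteStream releases the write lock while the input is drained into a
// local buffer, so a reader backed by this same driver cannot deadlock
// against the write, then re-acquires the lock to finish the actual write.
func (d *driver) WriteStream(path string, offset int64, reader io.Reader) (int64, error) {
	d.mutex.Lock()
	if offset < 0 {
		d.mutex.Unlock()
		return 0, errors.New("invalid offset")
	}
	d.mutex.Unlock() // released before reading; the reader may need the lock

	buf, err := ioutil.ReadAll(reader)
	if err != nil {
		return 0, err
	}

	d.mutex.Lock() // re-acquired to finish the write
	defer d.mutex.Unlock()

	f := d.files[path]
	if int64(len(f)) < offset {
		f = append(f, make([]byte, offset-int64(len(f)))...)
	}
	d.files[path] = append(f[:offset], buf...)
	return int64(len(buf)), nil
}

func main() {
	d := &driver{files: map[string][]byte{}}
	d.WriteStream("/a", 0, bytes.NewReader([]byte("hello")))
}
```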
Signed-off-by: Stephen J Day <stephen.day@docker.com>
This changeset implements a fileWriter type that can be used to manage writes
to remote files in a StorageDriver. Basically, it manages a local seek position
for a remote path. An efficient use of this implementation will write data in
large blocks.
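In sketch form, such a type might look like this (simplified, with
illustrative names):

```go
package storage

import (
	"bytes"
	"fmt"
	"io"
)

// StorageDriver is reduced here to the single method fileWriter needs.
type StorageDriver interface {
	WriteStream(path string, offset int64, reader io.Reader) (int64, error)
}

// fileWriter manages a local seek position for a remote path, translating
// Write calls into WriteStream calls at the current offset. Callers get the
// best throughput by writing in large blocks, amortizing per-call overhead.
type fileWriter struct {
	driver StorageDriver
	path   string
	offset int64
	size   int64
}

var _ io.WriteSeeker = (*fileWriter)(nil)

func (fw *fileWriter) Write(p []byte) (int, error) {
	nn, err := fw.driver.WriteStream(fw.path, fw.offset, bytes.NewReader(p))
	fw.offset += nn
	if fw.offset > fw.size {
		fw.size = fw.offset
	}
	return int(nn), err
}

func (fw *fileWriter) Seek(offset int64, whence int) (int64, error) {
	newOffset := fw.offset
	switch whence {
	case io.SeekStart:
		newOffset = offset
	case io.SeekCurrent:
		newOffset += offset
	case io.SeekEnd:
		newOffset = fw.size + offset
	}
	if newOffset < 0 {
		return fw.offset, fmt.Errorf("cannot seek to negative offset %d", newOffset)
	}
	fw.offset = newOffset
	return fw.offset, nil
}
```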
Signed-off-by: Stephen J Day <stephen.day@docker.com>
We now also have a storagedriver error variable for identifying
API calls that are not implemented by drivers (the URLFor method
is not implemented by either the filesystem or inmemory drivers).
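For example, assuming an error variable along these lines (the name here
follows the description, not verbatim code), an unimplemented method returns
the shared sentinel:

```go
package storagedriver

import "errors"

// ErrUnsupportedMethod flags driver API calls that a particular driver does
// not implement, letting callers fall back instead of failing opaquely.
// The exact name and signature here are assumptions based on the description.
var ErrUnsupportedMethod = errors.New("storagedriver: unsupported method")

type exampleDriver struct{}

// URLFor is unimplemented for this driver (as for the filesystem and
// inmemory drivers), so it returns the shared sentinel error.
func (d *exampleDriver) URLFor(path string, options map[string]interface{}) (string, error) {
	return "", ErrUnsupportedMethod
}
```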