distribution/storage/layerreader.go
Stephen J Day 2637e29e18 Initial implementation of registry LayerService
This change contains the initial implementation of the LayerService to power
layer pushes and pulls on the storagedriver. The interfaces presented in this
package will be used by the http application to drive most features around
efficient pulls and resumable pushes.

The file storage/layer.go defines the interface interactions. LayerService is
the root type and supports methods to access Layer and LayerUpload objects.
Pull operations are supported with LayerService.Fetch and push operations are
supported with LayerService.Upload and LayerService.Resume. Reads and writes of
layers are split between Layer and LayerUpload, respectively.
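
As a rough sketch of that surface (the method names come from the
description above; the exact signatures are assumptions, not the committed
interface):

    type LayerService interface {
        Fetch(name, tarSum string) (Layer, error)
        Upload(name string) (LayerUpload, error)
        Resume(name, uuid string) (LayerUpload, error)
    }

    type Layer interface {
        io.ReadSeeker
        io.Closer

        Name() string
        TarSum() string
        CreatedAt() time.Time
    }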

LayerService is implemented internally with the layerStore object, which takes
a storagedriver.StorageDriver and a pathMapper instance.
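
A minimal sketch of that wiring (field names are inferred from the
description; the real struct may differ):

    type layerStore struct {
        driver     storagedriver.StorageDriver
        pathMapper *pathMapper
    }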

LayerUploadState is currently exported and will likely continue to be until
the interaction between it and layerUploadStore is better understood. Likely,
the layerUploadStore lifecycle and implementation will be deferred to the
application.

Image pushes and pulls will be implemented in a similar manner, without the
discrete, persistent upload.

Much of this change is in place to get something running and working. Caveats
of this change include the following:

1. Layer upload state storage is implemented on the local filesystem, separate
   from the storage driver. This must be replaced with the proper backend and
   other state storage. It can be removed entirely once we implement resumable
   hashing and tarsum calculations, avoiding backend roundtrips.
2. Error handling is rather bespoke at this time. The http API implementation
   should really dictate the error return structure going forward, so we
   intend to refactor this heavily to support those errors. We'd also like to
   collect production data to understand how failures happen in the system as
   a whole before settling on a particular edict around error handling.
3. The layerUploadStore, which manages layer upload storage and state, is not
   currently exported. This will likely end up being split, with the file
   management portion pointed at the storagedriver and the state storage
   handled elsewhere.
4. Access Control provisions are almost completely missing from this change.
   There are details around how layerindex lookup works that are related to
   access controls. As the auth portions of the new API take shape, these
   provisions will become clearer.

Please see TODOs for details and individual recommendations.
2014-11-17 17:54:07 -08:00


package storage

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"time"
)
// layerReader implements Layer and provides facilities for reading and
// seeking.
type layerReader struct {
	layerStore *layerStore
	rc         io.ReadCloser
	brd        *bufio.Reader

	name      string // repo name of this layer
	tarSum    string
	path      string
	createdAt time.Time

	// offset is the current read offset
	offset int64

	// size is the total layer size, if available.
	size int64

	closedErr error // terminal error, if set, reader is closed
}
var _ Layer = &layerReader{}

func (lrs *layerReader) Name() string {
	return lrs.name
}

func (lrs *layerReader) TarSum() string {
	return lrs.tarSum
}

func (lrs *layerReader) CreatedAt() time.Time {
	return lrs.createdAt
}
func (lrs *layerReader) Read(p []byte) (n int, err error) {
	if err := lrs.closed(); err != nil {
		return 0, err
	}

	rd, err := lrs.reader()
	if err != nil {
		return 0, err
	}

	n, err = rd.Read(p)
	lrs.offset += int64(n)

	// Simulate an io.EOF error if we reach the file size.
	if err == nil && lrs.offset >= lrs.size {
		err = io.EOF
	}

	// TODO(stevvooe): More error checking is required here. If the reader
	// times out for some reason, we should reset the reader so we re-open the
	// connection.

	return n, err
}
func (lrs *layerReader) Seek(offset int64, whence int) (int64, error) {
	if err := lrs.closed(); err != nil {
		return 0, err
	}

	var err error
	newOffset := lrs.offset

	switch whence {
	case os.SEEK_CUR:
		newOffset += offset
	case os.SEEK_END:
		newOffset = lrs.size + offset
	case os.SEEK_SET:
		newOffset = offset
	}

	if newOffset < 0 {
		err = fmt.Errorf("cannot seek to negative position")
	} else if newOffset > lrs.size {
		err = fmt.Errorf("cannot seek past end of layer")
	} else {
		if lrs.offset != newOffset {
			// The underlying reader is now out of sync with the requested
			// offset; drop it so the next read re-opens at the new position.
			lrs.resetReader()
		}

		// No problems, set the offset.
		lrs.offset = newOffset
	}

	return lrs.offset, err
}
// Close the layer. Should be called when the resource is no longer needed.
func (lrs *layerReader) Close() error {
	if lrs.closedErr != nil {
		return lrs.closedErr
	}
	// TODO(sday): Must export this error.
	lrs.closedErr = fmt.Errorf("layer closed")

	// close and release reader chain
	if lrs.rc != nil {
		lrs.rc.Close()
		lrs.rc = nil
	}
	lrs.brd = nil

	return lrs.closedErr
}
// reader prepares the current reader at the lrs offset, ensuring it's
// buffered and ready to go.
func (lrs *layerReader) reader() (io.Reader, error) {
	if err := lrs.closed(); err != nil {
		return nil, err
	}

	if lrs.rc != nil {
		return lrs.brd, nil
	}

	// If we don't have a reader, open one up.
	rc, err := lrs.layerStore.driver.ReadStream(lrs.path, uint64(lrs.offset))
	if err != nil {
		return nil, err
	}

	lrs.rc = rc

	if lrs.brd == nil {
		// TODO(stevvooe): Set an optimal buffer size here. We'll have to
		// understand the latency characteristics of the underlying network to
		// set this correctly, so we may want to leave it to the driver. For
		// out of process drivers, we'll have to optimize this buffer size for
		// local communication.
		lrs.brd = bufio.NewReader(lrs.rc)
	} else {
		lrs.brd.Reset(lrs.rc)
	}

	return lrs.brd, nil
}
// resetReader resets the reader, forcing the read method to open up a new
// connection and rebuild the buffered reader. This should be called when the
// offset and the reader will become out of sync, such as during a seek
// operation.
func (lrs *layerReader) resetReader() {
	if err := lrs.closed(); err != nil {
		return
	}

	if lrs.rc != nil {
		lrs.rc.Close()
		lrs.rc = nil
	}
}
// closed returns the terminal error, if any, indicating that the reader has
// been closed.
func (lrs *layerReader) closed() error {
	return lrs.closedErr
}
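
// Example (illustrative only, not part of the original file): a layerReader
// is consumed through the Layer interface as an ordinary io.ReadSeeker. The
// Fetch call below assumes the LayerService described in the commit message;
// its exact signature is an assumption.
//
//	layer, err := layerService.Fetch(name, tarSum)
//	if err != nil {
//		return err
//	}
//	defer layer.Close()
//
//	// Resume a download from a saved offset, then stream the remainder.
//	if _, err := layer.Seek(resumeOffset, os.SEEK_SET); err != nil {
//		return err
//	}
//	_, err = io.Copy(dst, layer)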