Break the fs package up into smaller parts.
The purpose of this is to make it easier to maintain and eventually to allow the rclone backends to be re-used in other projects without having to use the rclone configuration system. The new code layout is documented in CONTRIBUTING.
Parent: 92624bbbf1
Commit: 11da2a6c9b
183 changed files with 5749 additions and 5063 deletions
CONTRIBUTING.md

@@ -116,6 +116,52 @@ then run in that directory

     go run test_all.go

+## Code Organisation ##
+
+Rclone code is organised into a small number of top level directories
+with modules beneath.
+
+* backend - the rclone backends for interfacing to cloud providers
+  * all - import this to load all the cloud providers
+  * ...providers
+* bin - scripts for use while building or maintaining rclone
+* cmd - the rclone commands
+  * all - import this to load all the commands
+  * ...commands
+* docs - the documentation and website
+  * content - adjust these docs only - everything else is autogenerated
+* fs - main rclone definitions - minimal amount of code
+  * accounting - bandwidth limiting and statistics
+  * asyncreader - an io.Reader which reads ahead
+  * config - manage the config file and flags
+  * driveletter - detect if a name is a drive letter
+  * filter - implements include/exclude filtering
+  * fserrors - rclone specific error handling
+  * fshttp - http handling for rclone
+  * fspath - path handling for rclone
+  * hash - defines rclone's hash types and functions
+  * list - list a remote
+  * log - logging facilities
+  * march - iterates directories in lock step
+  * object - in memory Fs objects
+  * operations - primitives for sync, eg Copy, Move
+  * sync - sync directories
+  * walk - walk a directory
+* fstest - provides integration test framework
+  * fstests - integration tests for the backends
+  * mockdir - mocks an fs.Directory
+  * mockobject - mocks an fs.Object
+  * test_all - runs integration tests for everything
+* graphics - the images used in the website etc
+* lib - libraries used by the backend
+  * dircache - directory ID to name caching
+  * oauthutil - helpers for using oauth
+  * pacer - retries with backoff and paces operations
+  * readers - a selection of useful io.Readers
+  * rest - a thin abstraction over net/http for REST
+* vendor - 3rd party code managed by the dep tool
+* vfs - Virtual FileSystem layer for implementing rclone mount and similar
+
 ## Writing Documentation ##

 If you are adding a new feature then please update the documentation.
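To illustrate the new layout, this is roughly what a backend's import block looks like after this commit. It is a sketch assembled from the import hunks in the diffs further down this page, not a complete file; the package name `remote` is a placeholder.

```go
package remote // hypothetical backend name

import (
	"github.com/ncw/rclone/fs"              // core rclone definitions only
	"github.com/ncw/rclone/fs/config"       // config file access (was fs.ConfigFileGet etc.)
	"github.com/ncw/rclone/fs/config/flags" // command line flag helpers (was fs.StringP etc.)
	"github.com/ncw/rclone/fs/fserrors"     // rclone specific error handling
	"github.com/ncw/rclone/fs/fshttp"       // http client construction (was fs.Config.Client())
	"github.com/ncw/rclone/fs/hash"         // hash types and functions (was fs.HashType etc.)
	"github.com/ncw/rclone/fs/walk"         // directory walking helpers
	"github.com/ncw/rclone/lib/pacer"       // retries with backoff
)
```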
@@ -240,10 +286,10 @@ Research

 Getting going

-* Create `remote/remote.go` (copy this from a similar remote)
+* Create `backend/remote/remote.go` (copy this from a similar remote)
   * box is a good one to start from if you have a directory based remote
   * b2 is a good one to start from if you have a bucket based remote
-* Add your remote to the imports in `fs/all/all.go`
+* Add your remote to the imports in `backend/all/all.go`
 * HTTP based remotes are easiest to maintain if they use rclone's rest module, but if there is a really good go SDK then use that instead.
 * Try to implement as many optional methods as possible as it makes the remote more usable.
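To make the first step above concrete, here is a rough skeleton of what `backend/remote/remote.go` starts from. The registration shape (fs.Register, fs.RegInfo, fs.Option) is assumed from the backend diffs below; all names here are placeholders, not a real backend.

```go
// Package remote is a placeholder name for a new backend.
package remote

import (
	"errors"

	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fs/config"
)

func init() {
	// Register this backend with rclone's Fs registry; note the Options
	// entries use constants that now live in fs/config, not fs.
	fs.Register(&fs.RegInfo{
		Name:        "remote",
		Description: "A hypothetical example remote",
		NewFs:       NewFs,
		Options: []fs.Option{{
			Name: config.ConfigClientID,
			Help: "Client Id - leave blank normally.",
		}, {
			Name: config.ConfigClientSecret,
			Help: "Client Secret - leave blank normally.",
		}},
	})
}

// NewFs constructs an Fs from the path, remote:path
func NewFs(name, root string) (fs.Fs, error) {
	// ... read settings with config.FileGet(name, "..."), build and
	// return the concrete Fs here ...
	return nil, errors.New("not implemented - placeholder")
}
```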
@@ -251,14 +297,14 @@ Unit tests

 * Create a config entry called `TestRemote` for the unit tests to use
 * Add your fs to the end of `fstest/fstests/gen_tests.go`
-* generate `remote/remote_test.go` unit tests `cd fstest/fstests; go generate`
+* generate `backend/remote/remote_test.go` unit tests `cd fstest/fstests; go generate`
 * Make sure all tests pass with `go test -v`

 Integration tests

-* Add your fs to `fs/test_all.go`
+* Add your fs to `fstest/test_all/test_all.go`
 * Make sure integration tests pass with
-  * `cd fs`
+  * `cd fs/operations`
   * `go test -v -remote TestRemote:`
 * If you are making a bucket based remote, then check with this also
   * `go test -v -remote TestRemote: -subdir`
Makefile

@@ -32,8 +32,9 @@ version:

 # Full suite of integration tests
 test:	rclone
+	go install github.com/ncw/fstest/test_all
 	-go test $(BUILDTAGS) $(GO_FILES) 2>&1 | tee test.log
-	-cd fs && go run $(BUILDTAGS) test_all.go 2>&1 | tee test_all.log
+	-test_all github.com/ncw/rclone/fs/operations github.com/ncw/rclone/fs/sync 2>&1 | tee fs/test_all.log
 	@echo "Written logs in test.log and fs/test_all.log"

 # Quick test
backend/amazonclouddrive/amazonclouddrive.go

@@ -24,6 +24,11 @@ import (

 	"github.com/ncw/go-acd"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/ncw/rclone/fs/fserrors"
+	"github.com/ncw/rclone/fs/fshttp"
+	"github.com/ncw/rclone/fs/hash"
 	"github.com/ncw/rclone/lib/dircache"
 	"github.com/ncw/rclone/lib/oauthutil"
 	"github.com/ncw/rclone/lib/pacer"

@@ -46,7 +51,7 @@ const (

 var (
 	// Flags
 	tempLinkThreshold = fs.SizeSuffix(9 << 30) // Download files bigger than this via the tempLink
-	uploadWaitPerGB   = fs.DurationP("acd-upload-wait-per-gb", "", 180*time.Second, "Additional time per GB to wait after a failed complete upload to see if it appears.")
+	uploadWaitPerGB   = flags.DurationP("acd-upload-wait-per-gb", "", 180*time.Second, "Additional time per GB to wait after a failed complete upload to see if it appears.")
 	// Description of how to auth for this app
 	acdConfig = &oauth2.Config{
 		Scopes: []string{"clouddrive:read_all", "clouddrive:write"},

@@ -73,20 +78,20 @@ func init() {
 			}
 		},
 		Options: []fs.Option{{
-			Name: fs.ConfigClientID,
+			Name: config.ConfigClientID,
 			Help: "Amazon Application Client Id - required.",
 		}, {
-			Name: fs.ConfigClientSecret,
+			Name: config.ConfigClientSecret,
 			Help: "Amazon Application Client Secret - required.",
 		}, {
-			Name: fs.ConfigAuthURL,
+			Name: config.ConfigAuthURL,
 			Help: "Auth server URL - leave blank to use Amazon's.",
 		}, {
-			Name: fs.ConfigTokenURL,
+			Name: config.ConfigTokenURL,
 			Help: "Token server url - leave blank to use Amazon's.",
 		}},
 	})
-	fs.VarP(&tempLinkThreshold, "acd-templink-threshold", "", "Files >= this size will be downloaded via their tempLink.")
+	flags.VarP(&tempLinkThreshold, "acd-templink-threshold", "", "Files >= this size will be downloaded via their tempLink.")
 }

 // Fs represents a remote acd server

@@ -171,7 +176,7 @@ func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) {
 			return true, err
 		}
 	}
-	return fs.ShouldRetry(err) || fs.ShouldRetryHTTP(resp, retryErrorCodes), err
+	return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
 }

 // If query parameters contain X-Amz-Algorithm remove Authorization header

@@ -193,7 +198,7 @@ func filterRequest(req *http.Request) {
 // NewFs constructs an Fs from the path, container:path
 func NewFs(name, root string) (fs.Fs, error) {
 	root = parsePath(root)
-	baseClient := fs.Config.Client()
+	baseClient := fshttp.NewClient(fs.Config)
 	if do, ok := baseClient.Transport.(interface {
 		SetRequestFilter(f func(req *http.Request))
 	}); ok {

@@ -212,7 +217,7 @@ func NewFs(name, root string) (fs.Fs, error) {
 		root:  root,
 		c:     c,
 		pacer: pacer.New().SetMinSleep(minSleep).SetPacer(pacer.AmazonCloudDrivePacer),
-		noAuthClient: fs.Config.Client(),
+		noAuthClient: fshttp.NewClient(fs.Config),
 	}
 	f.features = (&fs.Features{
 		CaseInsensitive: true,

@@ -472,7 +477,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
 		if iErr != nil {
 			return nil, iErr
 		}
-		if fs.IsRetryError(err) {
+		if fserrors.IsRetryError(err) {
 			fs.Debugf(f, "Directory listing error for %q: %v - low level retry %d/%d", dir, err, tries, maxTries)
 			continue
 		}

@@ -875,8 +880,8 @@ func (f *Fs) Precision() time.Duration {
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
-	return fs.HashSet(fs.HashMD5)
+func (f *Fs) Hashes() hash.Set {
+	return hash.Set(hash.HashMD5)
 }

 // Copy src to this remote using server side copy operations.

@@ -932,9 +937,9 @@ func (o *Object) Remote() string {
 }

 // Hash returns the Md5sum of an object returning a lowercase hex string
-func (o *Object) Hash(t fs.HashType) (string, error) {
-	if t != fs.HashMD5 {
-		return "", fs.ErrHashUnsupported
+func (o *Object) Hash(t hash.Type) (string, error) {
+	if t != hash.HashMD5 {
+		return "", hash.ErrHashUnsupported
 	}
 	if o.info.ContentProperties != nil && o.info.ContentProperties.Md5 != nil {
 		return *o.info.ContentProperties.Md5, nil
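The pattern above repeats across every backend touched by this commit: flag constructors move from the fs package into fs/config/flags, and retry classification moves into fs/fserrors. A minimal sketch of the new style, using calls that appear verbatim in the diff; the flag name, timeout value, and status codes are illustrative, not from a real backend:

```go
package example

import (
	"net/http"
	"time"

	"github.com/ncw/rclone/fs/config/flags"
	"github.com/ncw/rclone/fs/fserrors"
)

// Flags are now declared via the flags package rather than fs.
var exampleTimeout = flags.DurationP("example-timeout", "", 30*time.Second,
	"An illustrative flag - not a real rclone option.")

// retryErrorCodes lists HTTP statuses worth retrying (illustrative values).
var retryErrorCodes = []int{429, 500, 502, 503, 504}

// shouldRetry classifies an error the way the reorganised backends do,
// delegating to fserrors instead of the old fs helpers.
func shouldRetry(resp *http.Response, err error) (bool, error) {
	return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
```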
backend/azureblob/azureblob.go

@@ -11,7 +11,7 @@ import (
 	"encoding/binary"
 	"encoding/hex"
 	"fmt"
-	"hash"
+	gohash "hash"
 	"io"
 	"net/http"
 	"path"

@@ -23,6 +23,12 @@ import (

 	"github.com/Azure/azure-sdk-for-go/storage"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/ncw/rclone/fs/fserrors"
+	"github.com/ncw/rclone/fs/fshttp"
+	"github.com/ncw/rclone/fs/hash"
+	"github.com/ncw/rclone/fs/walk"
 	"github.com/ncw/rclone/lib/pacer"
 	"github.com/pkg/errors"
 )

@@ -66,8 +72,8 @@ func init() {
 			},
 		},
 	})
-	fs.VarP(&uploadCutoff, "azureblob-upload-cutoff", "", "Cutoff for switching to chunked upload")
-	fs.VarP(&chunkSize, "azureblob-chunk-size", "", "Upload chunk size. Must fit in memory.")
+	flags.VarP(&uploadCutoff, "azureblob-upload-cutoff", "", "Cutoff for switching to chunked upload")
+	flags.VarP(&chunkSize, "azureblob-chunk-size", "", "Upload chunk size. Must fit in memory.")
 }

 // Fs represents a remote azure server

@@ -165,7 +171,7 @@ func (f *Fs) shouldRetry(err error) (bool, error) {
 			}
 		}
 	}
-	return fs.ShouldRetry(err), err
+	return fserrors.ShouldRetry(err), err
 }

 // NewFs contstructs an Fs from the path, container:path

@@ -180,11 +186,11 @@ func NewFs(name, root string) (fs.Fs, error) {
 	if err != nil {
 		return nil, err
 	}
-	account := fs.ConfigFileGet(name, "account")
+	account := config.FileGet(name, "account")
 	if account == "" {
 		return nil, errors.New("account not found")
 	}
-	key := fs.ConfigFileGet(name, "key")
+	key := config.FileGet(name, "key")
 	if key == "" {
 		return nil, errors.New("key not found")
 	}

@@ -193,13 +199,13 @@ func NewFs(name, root string) (fs.Fs, error) {
 		return nil, errors.Errorf("malformed storage account key: %v", err)
 	}

-	endpoint := fs.ConfigFileGet(name, "endpoint", storage.DefaultBaseURL)
+	endpoint := config.FileGet(name, "endpoint", storage.DefaultBaseURL)

 	client, err := storage.NewClient(account, key, endpoint, apiVersion, true)
 	if err != nil {
 		return nil, errors.Wrap(err, "failed to make azure storage client")
 	}
-	client.HTTPClient = fs.Config.Client()
+	client.HTTPClient = fshttp.NewClient(fs.Config)
 	bc := client.GetBlobService()

 	f := &Fs{

@@ -473,7 +479,7 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
 	if f.container == "" {
 		return fs.ErrorListBucketRequired
 	}
-	list := fs.NewListRHelper(callback)
+	list := walk.NewListRHelper(callback)
 	err = f.list(dir, true, listChunkSize, func(remote string, object *storage.Blob, isDirectory bool) error {
 		entry, err := f.itemToDirEntry(remote, object, isDirectory)
 		if err != nil {

@@ -622,8 +628,8 @@ func (f *Fs) Precision() time.Duration {
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
-	return fs.HashSet(fs.HashMD5)
+func (f *Fs) Hashes() hash.Set {
+	return hash.Set(hash.HashMD5)
 }

 // Purge deletes all the files and directories including the old versions.

@@ -690,9 +696,9 @@ func (o *Object) Remote() string {
 }

 // Hash returns the MD5 of an object returning a lowercase hex string
-func (o *Object) Hash(t fs.HashType) (string, error) {
-	if t != fs.HashMD5 {
-		return "", fs.ErrHashUnsupported
+func (o *Object) Hash(t hash.Type) (string, error) {
+	if t != hash.HashMD5 {
+		return "", hash.ErrHashUnsupported
 	}
 	// Convert base64 encoded md5 into lower case hex
 	if o.md5 == "" {

@@ -834,7 +840,7 @@ type openFile struct {
 	o     *Object        // Object we are reading for
 	resp  *http.Response // response of the GET
 	body  io.Reader      // reading from here
-	hash  hash.Hash      // currently accumulating MD5
+	hash  gohash.Hash    // currently accumulating MD5
 	bytes int64          // number of bytes read on this connection
 	eof   bool           // whether we have read end of file
 }

@@ -1059,7 +1065,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 	size := src.Size()
 	blob := o.getBlobWithModTime(src.ModTime())
 	blob.Properties.ContentType = fs.MimeType(o)
-	if sourceMD5, _ := src.Hash(fs.HashMD5); sourceMD5 != "" {
+	if sourceMD5, _ := src.Hash(hash.HashMD5); sourceMD5 != "" {
 		sourceMD5bytes, err := hex.DecodeString(sourceMD5)
 		if err == nil {
 			blob.Properties.ContentMD5 = base64.StdEncoding.EncodeToString(sourceMD5bytes)
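Because the new fs/hash package takes the name `hash`, files that also use the standard library's hash package now alias the stdlib one as `gohash`, as the first hunk above shows. A minimal sketch of the pattern; the struct and helper here are illustrative:

```go
package example

import (
	gohash "hash" // stdlib hash, renamed so rclone's fs/hash can keep the name
	"io"
	"net/http"

	"github.com/ncw/rclone/fs/hash" // rclone hash types, e.g. hash.HashMD5
)

// openFile mirrors the pattern used in the diff above: the stdlib
// hash.Hash interface is referred to as gohash.Hash.
type openFile struct {
	resp  *http.Response // response of the GET
	body  io.Reader      // reading from here
	hash  gohash.Hash    // currently accumulating MD5
	bytes int64          // number of bytes read on this connection
}

// supported reports whether t is the one hash this sketch handles.
func supported(t hash.Type) bool { return t == hash.HashMD5 }
```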
backend/b2/api/types.go

@@ -7,7 +7,7 @@ import (
 	"strings"
 	"time"

-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/fserrors"
 )

 // Error describes a B2 error response

@@ -29,7 +29,7 @@ func (e *Error) Fatal() bool {
 	return e.Status == 403 // 403 errors shouldn't be retried
 }

-var _ fs.Fataler = (*Error)(nil)
+var _ fserrors.Fataler = (*Error)(nil)

 // Account describes a B2 account
 type Account struct {
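The `var _ fserrors.Fataler = (*Error)(nil)` line is a compile-time assertion: the build fails if *Error stops implementing the interface, which is exactly the breakage that moving Fataler from fs to fserrors could otherwise hide. A self-contained sketch of the idiom; Fataler's method set here is assumed from the Fatal() bool method shown in the diff above:

```go
package main

import "fmt"

// Fataler is a stand-in for fserrors.Fataler; the single Fatal method
// is an assumption for illustration, matching the B2 Error type above.
type Fataler interface {
	Fatal() bool
}

// Error mimics the B2 API error type.
type Error struct{ Status int }

func (e *Error) Fatal() bool { return e.Status == 403 }

// Compile-time check: if *Error ever loses Fatal(), this line
// refuses to compile instead of failing at runtime.
var _ Fataler = (*Error)(nil)

func main() {
	fmt.Println((&Error{Status: 403}).Fatal()) // true
}
```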
backend/b2/b2.go

@@ -9,7 +9,7 @@ import (
 	"bytes"
 	"crypto/sha1"
 	"fmt"
-	"hash"
+	gohash "hash"
 	"io"
 	"net/http"
 	"path"

@@ -21,6 +21,13 @@ import (

 	"github.com/ncw/rclone/backend/b2/api"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/accounting"
+	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/ncw/rclone/fs/fserrors"
+	"github.com/ncw/rclone/fs/fshttp"
+	"github.com/ncw/rclone/fs/hash"
+	"github.com/ncw/rclone/fs/walk"
 	"github.com/ncw/rclone/lib/pacer"
 	"github.com/ncw/rclone/lib/rest"
 	"github.com/pkg/errors"

@@ -48,9 +55,9 @@ var (
 	minChunkSize = fs.SizeSuffix(5E6)
 	chunkSize    = fs.SizeSuffix(96 * 1024 * 1024)
 	uploadCutoff = fs.SizeSuffix(200E6)
-	b2TestMode   = fs.StringP("b2-test-mode", "", "", "A flag string for X-Bz-Test-Mode header.")
-	b2Versions   = fs.BoolP("b2-versions", "", false, "Include old versions in directory listings.")
-	b2HardDelete = fs.BoolP("b2-hard-delete", "", false, "Permanently delete files on remote removal, otherwise hide files.")
+	b2TestMode   = flags.StringP("b2-test-mode", "", "", "A flag string for X-Bz-Test-Mode header.")
+	b2Versions   = flags.BoolP("b2-versions", "", false, "Include old versions in directory listings.")
+	b2HardDelete = flags.BoolP("b2-hard-delete", "", false, "Permanently delete files on remote removal, otherwise hide files.")
 	errNotWithVersions = errors.New("can't modify or delete files in --b2-versions mode")
 )

@@ -72,8 +79,8 @@ func init() {
 			},
 		},
 	})
-	fs.VarP(&uploadCutoff, "b2-upload-cutoff", "", "Cutoff for switching to chunked upload")
-	fs.VarP(&chunkSize, "b2-chunk-size", "", "Upload chunk size. Must fit in memory.")
+	flags.VarP(&uploadCutoff, "b2-upload-cutoff", "", "Cutoff for switching to chunked upload")
+	flags.VarP(&chunkSize, "b2-chunk-size", "", "Upload chunk size. Must fit in memory.")
 }

 // Fs represents a remote b2 server

@@ -186,7 +193,7 @@ func (f *Fs) shouldRetryNoReauth(resp *http.Response, err error) (bool, error) {
 		}
 		return true, err
 	}
-	return fs.ShouldRetry(err) || fs.ShouldRetryHTTP(resp, retryErrorCodes), err
+	return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
 }

 // shouldRetry returns a boolean as to whether this resp and err

@@ -236,15 +243,15 @@ func NewFs(name, root string) (fs.Fs, error) {
 	if err != nil {
 		return nil, err
 	}
-	account := fs.ConfigFileGet(name, "account")
+	account := config.FileGet(name, "account")
 	if account == "" {
 		return nil, errors.New("account not found")
 	}
-	key := fs.ConfigFileGet(name, "key")
+	key := config.FileGet(name, "key")
 	if key == "" {
 		return nil, errors.New("key not found")
 	}
-	endpoint := fs.ConfigFileGet(name, "endpoint", defaultEndpoint)
+	endpoint := config.FileGet(name, "endpoint", defaultEndpoint)
 	f := &Fs{
 		name:   name,
 		bucket: bucket,

@@ -252,7 +259,7 @@ func NewFs(name, root string) (fs.Fs, error) {
 		account:  account,
 		key:      key,
 		endpoint: endpoint,
-		srv:      rest.NewClient(fs.Config.Client()).SetErrorHandler(errorHandler),
+		srv:      rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
 		pacer:    pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
 		bufferTokens: make(chan []byte, fs.Config.Transfers),
 	}

@@ -615,7 +622,7 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
 	if f.bucket == "" {
 		return fs.ErrorListBucketRequired
 	}
-	list := fs.NewListRHelper(callback)
+	list := walk.NewListRHelper(callback)
 	last := ""
 	err = f.list(dir, true, "", 0, *b2Versions, func(remote string, object *api.File, isDirectory bool) error {
 		entry, err := f.itemToDirEntry(remote, object, isDirectory, &last)

@@ -868,16 +875,16 @@ func (f *Fs) purge(oldOnly bool) error {
 		go func() {
 			defer wg.Done()
 			for object := range toBeDeleted {
-				fs.Stats.Checking(object.Name)
+				accounting.Stats.Checking(object.Name)
 				checkErr(f.deleteByID(object.ID, object.Name))
-				fs.Stats.DoneChecking(object.Name)
+				accounting.Stats.DoneChecking(object.Name)
 			}
 		}()
 	}
 	last := ""
 	checkErr(f.list("", true, "", 0, true, func(remote string, object *api.File, isDirectory bool) error {
 		if !isDirectory {
-			fs.Stats.Checking(remote)
+			accounting.Stats.Checking(remote)
 			if oldOnly && last != remote {
 				if object.Action == "hide" {
 					fs.Debugf(remote, "Deleting current version (id %q) as it is a hide marker", object.ID)

@@ -890,7 +897,7 @@ func (f *Fs) purge(oldOnly bool) error {
 				toBeDeleted <- object
 			}
 			last = remote
-			fs.Stats.DoneChecking(remote)
+			accounting.Stats.DoneChecking(remote)
 		}
 		return nil
 	}))

@@ -914,8 +921,8 @@ func (f *Fs) CleanUp() error {
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
-	return fs.HashSet(fs.HashSHA1)
+func (f *Fs) Hashes() hash.Set {
+	return hash.Set(hash.HashSHA1)
 }

 // ------------------------------------------------------------

@@ -939,9 +946,9 @@ func (o *Object) Remote() string {
 }

 // Hash returns the Sha-1 of an object returning a lowercase hex string
-func (o *Object) Hash(t fs.HashType) (string, error) {
-	if t != fs.HashSHA1 {
-		return "", fs.ErrHashUnsupported
+func (o *Object) Hash(t hash.Type) (string, error) {
+	if t != hash.HashSHA1 {
+		return "", hash.ErrHashUnsupported
 	}
 	if o.sha1 == "" {
 		// Error is logged in readMetaData

@@ -1094,7 +1101,7 @@ type openFile struct {
 	o     *Object        // Object we are reading for
 	resp  *http.Response // response of the GET
 	body  io.Reader      // reading from here
-	hash  hash.Hash      // currently accumulating SHA1
+	hash  gohash.Hash    // currently accumulating SHA1
 	bytes int64          // number of bytes read on this connection
 	eof   bool           // whether we have read end of file
 }

@@ -1279,7 +1286,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio

 	modTime := src.ModTime()

-	calculatedSha1, _ := src.Hash(fs.HashSHA1)
+	calculatedSha1, _ := src.Hash(hash.HashSHA1)
 	if calculatedSha1 == "" {
 		calculatedSha1 = "hex_digits_at_end"
 		har := newHashAppendingReader(in, sha1.New())
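Transfer statistics move with this commit from fs.Stats into the new fs/accounting package. A minimal sketch of the bracketing pattern the b2 purge code above uses; the Checking/DoneChecking calls appear verbatim in the diff, while the wrapper function and its callback are placeholders:

```go
package example

import "github.com/ncw/rclone/fs/accounting"

// checkObject brackets a piece of checking work with the global
// Stats object, as the reorganised b2 backend does in purge().
func checkObject(name string, check func() error) error {
	accounting.Stats.Checking(name)           // mark the object as being checked
	defer accounting.Stats.DoneChecking(name) // and mark it done on the way out
	return check()
}
```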
backend/b2/upload.go

@@ -9,19 +9,21 @@ import (
 	"crypto/sha1"
 	"encoding/hex"
 	"fmt"
-	"hash"
+	gohash "hash"
 	"io"
 	"strings"
 	"sync"

 	"github.com/ncw/rclone/backend/b2/api"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/accounting"
+	"github.com/ncw/rclone/fs/hash"
 	"github.com/ncw/rclone/lib/rest"
 	"github.com/pkg/errors"
 )

 type hashAppendingReader struct {
-	h         hash.Hash
+	h         gohash.Hash
 	in        io.Reader
 	hexSum    string
 	hexReader io.Reader

@@ -58,7 +60,7 @@ func (har *hashAppendingReader) HexSum() string {
 // newHashAppendingReader takes a Reader and a Hash and will append the hex sum
 // after the original reader reaches EOF. The increased size depends on the
 // given hash, which may be queried through AdditionalLength()
-func newHashAppendingReader(in io.Reader, h hash.Hash) *hashAppendingReader {
+func newHashAppendingReader(in io.Reader, h gohash.Hash) *hashAppendingReader {
 	withHash := io.TeeReader(in, h)
 	return &hashAppendingReader{h: h, in: withHash}
 }

@@ -113,7 +115,7 @@ func (f *Fs) newLargeUpload(o *Object, in io.Reader, src fs.ObjectInfo) (up *lar
 		},
 	}
 	// Set the SHA1 if known
-	if calculatedSha1, err := src.Hash(fs.HashSHA1); err == nil && calculatedSha1 != "" {
+	if calculatedSha1, err := src.Hash(hash.HashSHA1); err == nil && calculatedSha1 != "" {
 		request.Info[sha1Key] = calculatedSha1
 	}
 	var response api.StartLargeFileResponse

@@ -219,7 +221,7 @@ func (up *largeUpload) transferChunk(part int64, body []byte) error {
 	opts := rest.Opts{
 		Method:  "POST",
 		RootURL: upload.UploadURL,
-		Body:    fs.AccountPart(up.o, in),
+		Body:    accounting.AccountPart(up.o, in),
 		ExtraHeaders: map[string]string{
 			"Authorization":    upload.AuthorizationToken,
 			"X-Bz-Part-Number": fmt.Sprintf("%d", part),

@@ -329,7 +331,7 @@ func (up *largeUpload) Stream(initialUploadBlock []byte) (err error) {
 	errs := make(chan error, 1)
 	hasMoreParts := true
 	var wg sync.WaitGroup
-	fs.AccountByPart(up.o) // Cancel whole file accounting before reading
+	accounting.AccountByPart(up.o) // Cancel whole file accounting before reading

 	// Transfer initial chunk
 	up.size = int64(len(initialUploadBlock))

@@ -390,7 +392,7 @@ func (up *largeUpload) Upload() error {
 	errs := make(chan error, 1)
 	var wg sync.WaitGroup
 	var err error
-	fs.AccountByPart(up.o) // Cancel whole file accounting before reading
+	accounting.AccountByPart(up.o) // Cancel whole file accounting before reading
outer:
	for part := int64(1); part <= up.parts; part++ {
		// Check any errors
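hashAppendingReader builds on io.TeeReader: every byte read from the wrapped reader is also written into the hash, so the digest is ready the moment the body has been streamed. A self-contained sketch of that core idea, using only the standard library; it is simplified relative to the real type, which also appends the hex sum after EOF:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"io"
	"io/ioutil"
	"strings"
)

func main() {
	h := sha1.New()
	// TeeReader copies everything read from the source into h as a side effect.
	in := io.TeeReader(strings.NewReader("hello b2"), h)

	// Stream the body somewhere (discarded here); the hash accumulates as we go.
	if _, err := io.Copy(ioutil.Discard, in); err != nil {
		panic(err)
	}
	fmt.Println(hex.EncodeToString(h.Sum(nil))) // SHA-1 of the streamed bytes
}
```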
backend/box/box.go

@@ -22,9 +22,11 @@ import (
 	"time"

-	"github.com/ncw/rclone/box/api"
-	"github.com/ncw/rclone/dircache"
+	"github.com/ncw/rclone/backend/box/api"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/ncw/rclone/fs/fserrors"
+	"github.com/ncw/rclone/fs/hash"
 	"github.com/ncw/rclone/lib/dircache"
 	"github.com/ncw/rclone/lib/oauthutil"
 	"github.com/ncw/rclone/lib/pacer"

@@ -56,7 +58,7 @@ var (
 			TokenURL: "https://app.box.com/api/oauth2/token",
 		},
 		ClientID:     rcloneClientID,
-		ClientSecret: fs.MustReveal(rcloneEncryptedClientSecret),
+		ClientSecret: config.MustReveal(rcloneEncryptedClientSecret),
 		RedirectURL:  oauthutil.RedirectURL,
 	}
 	uploadCutoff = fs.SizeSuffix(50 * 1024 * 1024)

@@ -75,14 +77,14 @@ func init() {
 			}
 		},
 		Options: []fs.Option{{
-			Name: fs.ConfigClientID,
+			Name: config.ConfigClientID,
 			Help: "Box App Client Id - leave blank normally.",
 		}, {
-			Name: fs.ConfigClientSecret,
+			Name: config.ConfigClientSecret,
 			Help: "Box App Client Secret - leave blank normally.",
 		}},
 	})
-	fs.VarP(&uploadCutoff, "box-upload-cutoff", "", "Cutoff for switching to multipart upload")
+	flags.VarP(&uploadCutoff, "box-upload-cutoff", "", "Cutoff for switching to multipart upload")
 }

 // Fs represents a remote box

@@ -160,7 +162,7 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
 		authRety = true
 		fs.Debugf(nil, "Should retry: %v", err)
 	}
-	return authRety || fs.ShouldRetry(err) || fs.ShouldRetryHTTP(resp, retryErrorCodes), err
+	return authRety || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
 }

 // substitute reserved characters for box

@@ -827,8 +829,8 @@ func (f *Fs) DirCacheFlush() {
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
-	return fs.HashSet(fs.HashSHA1)
+func (f *Fs) Hashes() hash.Set {
+	return hash.Set(hash.HashSHA1)
 }

 // ------------------------------------------------------------

@@ -857,9 +859,9 @@ func (o *Object) srvPath() string {
 }

 // Hash returns the SHA-1 of an object returning a lowercase hex string
-func (o *Object) Hash(t fs.HashType) (string, error) {
-	if t != fs.HashSHA1 {
-		return "", fs.ErrHashUnsupported
+func (o *Object) Hash(t hash.Type) (string, error) {
+	if t != hash.HashSHA1 {
+		return "", hash.ErrHashUnsupported
 	}
 	return o.sha1, nil
 }
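The same two-method pattern recurs in every backend touched here: Hashes() advertises the supported set and Hash() returns a single digest, both now expressed with the fs/hash types instead of the old fs.HashSet/fs.HashType. A minimal sketch using only identifiers from the diffs; the Object type and its sha1 field are illustrative:

```go
package example

import "github.com/ncw/rclone/fs/hash"

// Object is an illustrative stand-in for a backend object that
// caches a SHA-1 as a lowercase hex string.
type Object struct {
	sha1 string
}

// Hashes returns the supported hash sets, as in the box backend above.
func Hashes() hash.Set {
	return hash.Set(hash.HashSHA1)
}

// Hash returns the requested digest, refusing types the backend
// does not support with hash.ErrHashUnsupported.
func (o *Object) Hash(t hash.Type) (string, error) {
	if t != hash.HashSHA1 {
		return "", hash.ErrHashUnsupported
	}
	return o.sha1, nil
}
```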
backend/cache/cache.go

@@ -18,6 +18,10 @@ import (

 	"github.com/ncw/rclone/backend/crypt"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/ncw/rclone/fs/hash"
+	"github.com/ncw/rclone/fs/walk"
 	"github.com/pkg/errors"
 	"golang.org/x/net/context"
 	"golang.org/x/time/rate"

@@ -47,18 +51,18 @@ const (
 // Globals
 var (
 	// Flags
-	cacheDbPath             = fs.StringP("cache-db-path", "", filepath.Join(fs.CacheDir, "cache-backend"), "Directory to cache DB")
-	cacheChunkPath          = fs.StringP("cache-chunk-path", "", filepath.Join(fs.CacheDir, "cache-backend"), "Directory to cached chunk files")
-	cacheDbPurge            = fs.BoolP("cache-db-purge", "", false, "Purge the cache DB before")
-	cacheChunkSize          = fs.StringP("cache-chunk-size", "", DefCacheChunkSize, "The size of a chunk")
-	cacheTotalChunkSize     = fs.StringP("cache-total-chunk-size", "", DefCacheTotalChunkSize, "The total size which the chunks can take up from the disk")
-	cacheChunkCleanInterval = fs.StringP("cache-chunk-clean-interval", "", DefCacheChunkCleanInterval, "Interval at which chunk cleanup runs")
-	cacheInfoAge            = fs.StringP("cache-info-age", "", DefCacheInfoAge, "How much time should object info be stored in cache")
-	cacheReadRetries        = fs.IntP("cache-read-retries", "", DefCacheReadRetries, "How many times to retry a read from a cache storage")
-	cacheTotalWorkers       = fs.IntP("cache-workers", "", DefCacheTotalWorkers, "How many workers should run in parallel to download chunks")
-	cacheChunkNoMemory      = fs.BoolP("cache-chunk-no-memory", "", DefCacheChunkNoMemory, "Disable the in-memory cache for storing chunks during streaming")
-	cacheRps                = fs.IntP("cache-rps", "", int(DefCacheRps), "Limits the number of requests per second to the source FS. -1 disables the rate limiter")
-	cacheStoreWrites        = fs.BoolP("cache-writes", "", DefCacheWrites, "Will cache file data on writes through the FS")
+	cacheDbPath             = flags.StringP("cache-db-path", "", filepath.Join(config.CacheDir, "cache-backend"), "Directory to cache DB")
+	cacheChunkPath          = flags.StringP("cache-chunk-path", "", filepath.Join(config.CacheDir, "cache-backend"), "Directory to cached chunk files")
+	cacheDbPurge            = flags.BoolP("cache-db-purge", "", false, "Purge the cache DB before")
+	cacheChunkSize          = flags.StringP("cache-chunk-size", "", DefCacheChunkSize, "The size of a chunk")
+	cacheTotalChunkSize     = flags.StringP("cache-total-chunk-size", "", DefCacheTotalChunkSize, "The total size which the chunks can take up from the disk")
+	cacheChunkCleanInterval = flags.StringP("cache-chunk-clean-interval", "", DefCacheChunkCleanInterval, "Interval at which chunk cleanup runs")
+	cacheInfoAge            = flags.StringP("cache-info-age", "", DefCacheInfoAge, "How much time should object info be stored in cache")
+	cacheReadRetries        = flags.IntP("cache-read-retries", "", DefCacheReadRetries, "How many times to retry a read from a cache storage")
+	cacheTotalWorkers       = flags.IntP("cache-workers", "", DefCacheTotalWorkers, "How many workers should run in parallel to download chunks")
+	cacheChunkNoMemory      = flags.BoolP("cache-chunk-no-memory", "", DefCacheChunkNoMemory, "Disable the in-memory cache for storing chunks during streaming")
+	cacheRps                = flags.IntP("cache-rps", "", int(DefCacheRps), "Limits the number of requests per second to the source FS. -1 disables the rate limiter")
+	cacheStoreWrites        = flags.BoolP("cache-writes", "", DefCacheWrites, "Will cache file data on writes through the FS")
 )

 // Register with Fs

@@ -223,7 +227,7 @@ type Fs struct {

 // NewFs contstructs an Fs from the path, container:path
 func NewFs(name, rpath string) (fs.Fs, error) {
-	remote := fs.ConfigFileGet(name, "remote")
+	remote := config.FileGet(name, "remote")
 	if strings.HasPrefix(remote, name+":") {
 		return nil, errors.New("can't point cache remote at itself - check the value of the remote setting")
 	}

@@ -235,10 +239,10 @@ func NewFs(name, rpath string) (fs.Fs, error) {
 	}
 	fs.Debugf(name, "wrapped %v:%v at root %v", wrappedFs.Name(), wrappedFs.Root(), rpath)

-	plexURL := fs.ConfigFileGet(name, "plex_url")
-	plexToken := fs.ConfigFileGet(name, "plex_token")
+	plexURL := config.FileGet(name, "plex_url")
+	plexToken := config.FileGet(name, "plex_token")
 	var chunkSize fs.SizeSuffix
-	chunkSizeString := fs.ConfigFileGet(name, "chunk_size", DefCacheChunkSize)
+	chunkSizeString := config.FileGet(name, "chunk_size", DefCacheChunkSize)
 	if *cacheChunkSize != DefCacheChunkSize {
 		chunkSizeString = *cacheChunkSize
 	}

@@ -247,7 +251,7 @@ func NewFs(name, rpath string) (fs.Fs, error) {
 		return nil, errors.Wrapf(err, "failed to understand chunk size", chunkSizeString)
 	}
 	var chunkTotalSize fs.SizeSuffix
-	chunkTotalSizeString := fs.ConfigFileGet(name, "chunk_total_size", DefCacheTotalChunkSize)
+	chunkTotalSizeString := config.FileGet(name, "chunk_total_size", DefCacheTotalChunkSize)
 	if *cacheTotalChunkSize != DefCacheTotalChunkSize {
 		chunkTotalSizeString = *cacheTotalChunkSize
 	}

@@ -260,7 +264,7 @@ func NewFs(name, rpath string) (fs.Fs, error) {
 	if err != nil {
 		return nil, errors.Wrapf(err, "failed to understand duration %v", chunkCleanIntervalStr)
 	}
-	infoAge := fs.ConfigFileGet(name, "info_age", DefCacheInfoAge)
+	infoAge := config.FileGet(name, "info_age", DefCacheInfoAge)
 	if *cacheInfoAge != DefCacheInfoAge {
 		infoAge = *cacheInfoAge
 	}

@@ -301,10 +305,10 @@ func NewFs(name, rpath string) (fs.Fs, error) {
 			return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", plexURL)
 		}
 	} else {
-		plexUsername := fs.ConfigFileGet(name, "plex_username")
-		plexPassword := fs.ConfigFileGet(name, "plex_password")
+		plexUsername := config.FileGet(name, "plex_username")
+		plexPassword := config.FileGet(name, "plex_password")
 		if plexPassword != "" && plexUsername != "" {
-			decPass, err := fs.Reveal(plexPassword)
+			decPass, err := config.Reveal(plexPassword)
 			if err != nil {
 				decPass = plexPassword
 			}

@@ -319,8 +323,8 @@ func NewFs(name, rpath string) (fs.Fs, error) {
 	dbPath := *cacheDbPath
 	chunkPath := *cacheChunkPath
 	// if the dbPath is non default but the chunk path is default, we overwrite the last to follow the same one as dbPath
-	if dbPath != filepath.Join(fs.CacheDir, "cache-backend") &&
-		chunkPath == filepath.Join(fs.CacheDir, "cache-backend") {
+	if dbPath != filepath.Join(config.CacheDir, "cache-backend") &&
+		chunkPath == filepath.Join(config.CacheDir, "cache-backend") {
 		chunkPath = dbPath
 	}
 	if filepath.Ext(dbPath) != "" {

@@ -506,7 +510,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
 	return cachedEntries, nil
 }

-func (f *Fs) recurse(dir string, list *fs.ListRHelper) error {
+func (f *Fs) recurse(dir string, list *walk.ListRHelper) error {
 	entries, err := f.List(dir)
 	if err != nil {
 		return err

@@ -558,7 +562,7 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
 	}

 	// if we're here, we're gonna do a standard recursive traversal and cache everything
-	list := fs.NewListRHelper(callback)
+	list := walk.NewListRHelper(callback)
 	err = f.recurse(dir, list)
 	if err != nil {
 		return err

@@ -895,7 +899,7 @@ func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
+func (f *Fs) Hashes() hash.Set {
 	return f.Fs.Hashes()
 }
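The ListR implementations above all funnel entries through walk.ListRHelper, which batches entries before invoking the caller's callback. A sketch of the shape of such an implementation; walk.NewListRHelper appears verbatim in the diffs, while the Add/Flush methods are assumed from the rclone source and the lister function is a placeholder:

```go
package example

import (
	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fs/walk"
)

// listRecursively shows the helper pattern used by the reorganised
// backends: wrap the callback, Add each entry, then Flush the rest.
func listRecursively(entries []fs.DirEntry, callback fs.ListRCallback) error {
	list := walk.NewListRHelper(callback) // batches entries for the callback
	for _, entry := range entries {
		if err := list.Add(entry); err != nil {
			return err
		}
	}
	return list.Flush() // deliver any remaining buffered entries
}
```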
60
backend/cache/cache_internal_test.go
vendored
60
backend/cache/cache_internal_test.go
vendored
|
@ -20,6 +20,8 @@ import (
|
||||||
//"strings"
|
//"strings"
|
||||||
|
|
||||||
"github.com/ncw/rclone/backend/cache"
|
"github.com/ncw/rclone/backend/cache"
|
||||||
|
"github.com/ncw/rclone/fs/config"
|
||||||
|
"github.com/ncw/rclone/fs/object"
|
||||||
//"github.com/ncw/rclone/cmd/mount"
|
//"github.com/ncw/rclone/cmd/mount"
|
||||||
//_ "github.com/ncw/rclone/cmd/cmount"
|
//_ "github.com/ncw/rclone/cmd/cmount"
|
||||||
//"github.com/ncw/rclone/cmd/mountlib"
|
//"github.com/ncw/rclone/cmd/mountlib"
|
||||||
|
@ -492,7 +494,7 @@ func writeObjectString(t *testing.T, f fs.Fs, remote, content string) fs.Object
|
||||||
func writeObjectBytes(t *testing.T, f fs.Fs, remote string, data []byte) fs.Object {
|
func writeObjectBytes(t *testing.T, f fs.Fs, remote string, data []byte) fs.Object {
|
||||||
in := bytes.NewReader(data)
|
in := bytes.NewReader(data)
|
||||||
modTime := time.Now()
|
modTime := time.Now()
|
||||||
objInfo := fs.NewStaticObjectInfo(remote, modTime, int64(len(data)), true, nil, f)
|
objInfo := object.NewStaticObjectInfo(remote, modTime, int64(len(data)), true, nil, f)
|
||||||
|
|
||||||
obj, err := f.Put(in, objInfo)
|
obj, err := f.Put(in, objInfo)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
@ -503,8 +505,8 @@ func writeObjectBytes(t *testing.T, f fs.Fs, remote string, data []byte) fs.Obje
|
||||||
func updateObjectBytes(t *testing.T, f fs.Fs, remote string, data1 []byte, data2 []byte) fs.Object {
|
func updateObjectBytes(t *testing.T, f fs.Fs, remote string, data1 []byte, data2 []byte) fs.Object {
|
||||||
in1 := bytes.NewReader(data1)
|
in1 := bytes.NewReader(data1)
|
||||||
in2 := bytes.NewReader(data2)
|
in2 := bytes.NewReader(data2)
|
||||||
objInfo1 := fs.NewStaticObjectInfo(remote, time.Now(), int64(len(data1)), true, nil, f)
|
objInfo1 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data1)), true, nil, f)
|
||||||
objInfo2 := fs.NewStaticObjectInfo(remote, time.Now(), int64(len(data2)), true, nil, f)
|
objInfo2 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data2)), true, nil, f)
|
||||||
|
|
||||||
obj, err := f.Put(in1, objInfo1)
|
obj, err := f.Put(in1, objInfo1)
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
@ -540,15 +542,15 @@ func cleanupFs(t *testing.T, f fs.Fs, b *cache.Persistent) {
|
||||||
|
|
||||||
func newLocalCacheCryptFs(t *testing.T, localRemote, cacheRemote, cryptRemote string, purge bool, cfg map[string]string) (fs.Fs, *cache.Persistent) {
|
func newLocalCacheCryptFs(t *testing.T, localRemote, cacheRemote, cryptRemote string, purge bool, cfg map[string]string) (fs.Fs, *cache.Persistent) {
|
||||||
fstest.Initialise()
|
fstest.Initialise()
|
||||||
dbPath := filepath.Join(fs.CacheDir, "cache-backend", cacheRemote+".db")
|
dbPath := filepath.Join(config.CacheDir, "cache-backend", cacheRemote+".db")
|
||||||
chunkPath := filepath.Join(fs.CacheDir, "cache-backend", cacheRemote)
|
chunkPath := filepath.Join(config.CacheDir, "cache-backend", cacheRemote)
|
||||||
boltDb, err := cache.GetPersistent(dbPath, chunkPath, &cache.Features{PurgeDb: true})
|
boltDb, err := cache.GetPersistent(dbPath, chunkPath, &cache.Features{PurgeDb: true})
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
localExists := false
|
localExists := false
|
||||||
cacheExists := false
|
cacheExists := false
|
||||||
cryptExists := false
|
cryptExists := false
|
||||||
for _, s := range fs.ConfigFileSections() {
|
for _, s := range config.FileSections() {
|
||||||
if s == localRemote {
|
if s == localRemote {
|
||||||
localExists = true
|
localExists = true
|
||||||
}
|
}
|
||||||
|
@ -563,28 +565,28 @@ func newLocalCacheCryptFs(t *testing.T, localRemote, cacheRemote, cryptRemote st
|
||||||
localRemoteWrap := ""
|
localRemoteWrap := ""
|
||||||
if !localExists {
|
if !localExists {
|
||||||
localRemoteWrap = localRemote + ":/var/tmp/" + localRemote
|
localRemoteWrap = localRemote + ":/var/tmp/" + localRemote
|
||||||
fs.ConfigFileSet(localRemote, "type", "local")
|
config.FileSet(localRemote, "type", "local")
|
||||||
fs.ConfigFileSet(localRemote, "nounc", "true")
|
config.FileSet(localRemote, "nounc", "true")
|
||||||
}
|
}
|
||||||
|
|
||||||
if !cacheExists {
|
if !cacheExists {
|
||||||
fs.ConfigFileSet(cacheRemote, "type", "cache")
|
config.FileSet(cacheRemote, "type", "cache")
|
||||||
fs.ConfigFileSet(cacheRemote, "remote", localRemoteWrap)
|
config.FileSet(cacheRemote, "remote", localRemoteWrap)
|
||||||
}
|
}
|
||||||
if c, ok := cfg["chunk_size"]; ok {
|
if c, ok := cfg["chunk_size"]; ok {
|
||||||
fs.ConfigFileSet(cacheRemote, "chunk_size", c)
|
config.FileSet(cacheRemote, "chunk_size", c)
|
||||||
} else {
|
} else {
|
||||||
fs.ConfigFileSet(cacheRemote, "chunk_size", "1m")
|
config.FileSet(cacheRemote, "chunk_size", "1m")
|
||||||
}
|
}
|
||||||
if c, ok := cfg["chunk_total_size"]; ok {
|
if c, ok := cfg["chunk_total_size"]; ok {
|
||||||
fs.ConfigFileSet(cacheRemote, "chunk_total_size", c)
|
config.FileSet(cacheRemote, "chunk_total_size", c)
|
||||||
} else {
|
} else {
|
||||||
fs.ConfigFileSet(cacheRemote, "chunk_total_size", "2m")
|
+        config.FileSet(cacheRemote, "chunk_total_size", "2m")
     }
     if c, ok := cfg["info_age"]; ok {
-        fs.ConfigFileSet(cacheRemote, "info_age", c)
+        config.FileSet(cacheRemote, "info_age", c)
     } else {
-        fs.ConfigFileSet(cacheRemote, "info_age", infoAge.String())
+        config.FileSet(cacheRemote, "info_age", infoAge.String())
     }

     if !cryptExists {
@@ -627,14 +629,14 @@ func newLocalCacheCryptFs(t *testing.T, localRemote, cacheRemote, cryptRemote st

 func newLocalCacheFs(t *testing.T, localRemote, cacheRemote string, cfg map[string]string) (fs.Fs, *cache.Persistent) {
     fstest.Initialise()
-    dbPath := filepath.Join(fs.CacheDir, "cache-backend", cacheRemote+".db")
+    dbPath := filepath.Join(config.CacheDir, "cache-backend", cacheRemote+".db")
-    chunkPath := filepath.Join(fs.CacheDir, "cache-backend", cacheRemote)
+    chunkPath := filepath.Join(config.CacheDir, "cache-backend", cacheRemote)
     boltDb, err := cache.GetPersistent(dbPath, chunkPath, &cache.Features{PurgeDb: true})
     require.NoError(t, err)

     localExists := false
     cacheExists := false
-    for _, s := range fs.ConfigFileSections() {
+    for _, s := range config.FileSections() {
         if s == localRemote {
             localExists = true
         }
@@ -646,28 +648,28 @@ func newLocalCacheFs(t *testing.T, localRemote, cacheRemote string, cfg map[stri
     localRemoteWrap := ""
     if !localExists {
         localRemoteWrap = localRemote + ":/var/tmp/" + localRemote
-        fs.ConfigFileSet(localRemote, "type", "local")
+        config.FileSet(localRemote, "type", "local")
-        fs.ConfigFileSet(localRemote, "nounc", "true")
+        config.FileSet(localRemote, "nounc", "true")
     }

     if !cacheExists {
-        fs.ConfigFileSet(cacheRemote, "type", "cache")
+        config.FileSet(cacheRemote, "type", "cache")
-        fs.ConfigFileSet(cacheRemote, "remote", localRemoteWrap)
+        config.FileSet(cacheRemote, "remote", localRemoteWrap)
     }
     if c, ok := cfg["chunk_size"]; ok {
-        fs.ConfigFileSet(cacheRemote, "chunk_size", c)
+        config.FileSet(cacheRemote, "chunk_size", c)
     } else {
-        fs.ConfigFileSet(cacheRemote, "chunk_size", "1m")
+        config.FileSet(cacheRemote, "chunk_size", "1m")
     }
     if c, ok := cfg["chunk_total_size"]; ok {
-        fs.ConfigFileSet(cacheRemote, "chunk_total_size", c)
+        config.FileSet(cacheRemote, "chunk_total_size", c)
     } else {
-        fs.ConfigFileSet(cacheRemote, "chunk_total_size", "2m")
+        config.FileSet(cacheRemote, "chunk_total_size", "2m")
     }
     if c, ok := cfg["info_age"]; ok {
-        fs.ConfigFileSet(cacheRemote, "info_age", c)
+        config.FileSet(cacheRemote, "info_age", c)
     } else {
-        fs.ConfigFileSet(cacheRemote, "info_age", infoAge.String())
+        config.FileSet(cacheRemote, "info_age", infoAge.String())
     }

     if c, ok := cfg["cache-chunk-no-memory"]; ok {
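Note the pattern running through these hunks: every `fs.ConfigFileGet`/`fs.ConfigFileSet` call becomes `config.FileGet`/`config.FileSet` from the new `fs/config` package. A minimal sketch of the new call sites, assuming only what this diff shows (the remote name and key below are illustrative, not from the commit):

```go
package main

import (
	"fmt"

	"github.com/ncw/rclone/fs/config"
)

func main() {
	// Load the config file first, as the http backend test does.
	config.LoadConfig()

	// Write a value for a (hypothetical) remote into the config file.
	config.FileSet("myremote", "chunk_size", "1m")

	// Read it back; the trailing argument is a default value,
	// matching the three-argument form used elsewhere in this diff.
	fmt.Println(config.FileGet("myremote", "chunk_size", "1m"))
}
```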
backend/cache/object.go (15 changes, vendored)
@@ -13,6 +13,7 @@ import (
     "strconv"

     "github.com/ncw/rclone/fs"
+    "github.com/ncw/rclone/fs/hash"
 )

 // Object is a generic file like object that stores basic information about it
@@ -27,7 +28,7 @@ type Object struct {
     CacheStorable bool `json:"storable"` // says whether this object can be stored
     CacheType string `json:"cacheType"`
     CacheTs time.Time `json:"cacheTs"`
-    cacheHashes map[fs.HashType]string // all supported hashes cached
+    cacheHashes map[hash.Type]string // all supported hashes cached

     refreshMutex sync.Mutex
 }
@@ -80,10 +81,10 @@ func (o *Object) UnmarshalJSON(b []byte) error {
         return err
     }

-    o.cacheHashes = make(map[fs.HashType]string)
+    o.cacheHashes = make(map[hash.Type]string)
     for k, v := range aux.Hashes {
         ht, _ := strconv.Atoi(k)
-        o.cacheHashes[fs.HashType(ht)] = v
+        o.cacheHashes[hash.Type(ht)] = v
     }

     return nil
@@ -112,7 +113,7 @@ func (o *Object) updateData(source fs.Object) {
     o.CacheSize = source.Size()
     o.CacheStorable = source.Storable()
     o.CacheTs = time.Now()
-    o.cacheHashes = make(map[fs.HashType]string)
+    o.cacheHashes = make(map[hash.Type]string)
 }

 // Fs returns its FS info
@@ -251,7 +252,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio

     o.CacheModTime = src.ModTime().UnixNano()
     o.CacheSize = src.Size()
-    o.cacheHashes = make(map[fs.HashType]string)
+    o.cacheHashes = make(map[hash.Type]string)
     o.persist()

     return nil
@@ -274,9 +275,9 @@ func (o *Object) Remove() error {

 // Hash requests a hash of the object and stores in the cache
 // since it might or might not be called, this is lazy loaded
-func (o *Object) Hash(ht fs.HashType) (string, error) {
+func (o *Object) Hash(ht hash.Type) (string, error) {
     if o.cacheHashes == nil {
-        o.cacheHashes = make(map[fs.HashType]string)
+        o.cacheHashes = make(map[hash.Type]string)
     }

     cachedHash, found := o.cacheHashes[ht]
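The hash type itself moves from `fs.HashType` to `hash.Type` in the new `fs/hash` package, so caches keyed on it change type too. A small sketch of the new map shape, assuming nothing beyond what the hunks above show (the digest value is just the well-known MD5 of the empty string):

```go
package main

import (
	"fmt"

	"github.com/ncw/rclone/fs/hash"
)

func main() {
	// Hashes are now cached in a map keyed by hash.Type (formerly fs.HashType).
	cached := make(map[hash.Type]string)
	cached[hash.HashMD5] = "d41d8cd98f00b204e9800998ecf8427e" // MD5 of ""

	if sum, ok := cached[hash.HashMD5]; ok {
		fmt.Println("md5:", sum)
	}
}
```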
backend/cache/plex.go (5 changes, vendored)
@@ -13,6 +13,7 @@ import (
     "sync"

     "github.com/ncw/rclone/fs"
+    "github.com/ncw/rclone/fs/config"
 )

 const (
@@ -107,8 +108,8 @@ func (p *plexConnector) authenticate() error {
     }
     p.token = token
     if p.token != "" {
-        fs.ConfigFileSet(p.f.Name(), "plex_token", p.token)
+        config.FileSet(p.f.Name(), "plex_token", p.token)
-        fs.SaveConfig()
+        config.SaveConfig()
         fs.Infof(p.f.Name(), "Connected to Plex server: %v", p.url.String())
     }

backend/crypt/crypt.go
@@ -10,13 +10,16 @@ import (
     "time"

     "github.com/ncw/rclone/fs"
+    "github.com/ncw/rclone/fs/config"
+    "github.com/ncw/rclone/fs/config/flags"
+    "github.com/ncw/rclone/fs/hash"
     "github.com/pkg/errors"
 )

 // Globals
 var (
     // Flags
-    cryptShowMapping = fs.BoolP("crypt-show-mapping", "", false, "For all files listed show how the names encrypt.")
+    cryptShowMapping = flags.BoolP("crypt-show-mapping", "", false, "For all files listed show how the names encrypt.")
 )

 // Register with Fs
@@ -71,25 +74,25 @@ func init() {

 // NewFs contstructs an Fs from the path, container:path
 func NewFs(name, rpath string) (fs.Fs, error) {
-    mode, err := NewNameEncryptionMode(fs.ConfigFileGet(name, "filename_encryption", "standard"))
+    mode, err := NewNameEncryptionMode(config.FileGet(name, "filename_encryption", "standard"))
     if err != nil {
         return nil, err
     }
-    dirNameEncrypt, err := strconv.ParseBool(fs.ConfigFileGet(name, "directory_name_encryption", "true"))
+    dirNameEncrypt, err := strconv.ParseBool(config.FileGet(name, "directory_name_encryption", "true"))
     if err != nil {
         return nil, err
     }
-    password := fs.ConfigFileGet(name, "password", "")
+    password := config.FileGet(name, "password", "")
     if password == "" {
         return nil, errors.New("password not set in config file")
     }
-    password, err = fs.Reveal(password)
+    password, err = config.Reveal(password)
     if err != nil {
         return nil, errors.Wrap(err, "failed to decrypt password")
     }
-    salt := fs.ConfigFileGet(name, "password2", "")
+    salt := config.FileGet(name, "password2", "")
     if salt != "" {
-        salt, err = fs.Reveal(salt)
+        salt, err = config.Reveal(salt)
         if err != nil {
             return nil, errors.Wrap(err, "failed to decrypt password2")
         }
@@ -98,7 +101,7 @@ func NewFs(name, rpath string) (fs.Fs, error) {
     if err != nil {
         return nil, errors.Wrap(err, "failed to make cipher")
     }
-    remote := fs.ConfigFileGet(name, "remote")
+    remote := config.FileGet(name, "remote")
     if strings.HasPrefix(remote, name+":") {
         return nil, errors.New("can't point crypt remote at itself - check the value of the remote setting")
     }
@@ -305,8 +308,8 @@ func (f *Fs) PutStream(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
+func (f *Fs) Hashes() hash.Set {
-    return fs.HashSet(fs.HashNone)
+    return hash.Set(hash.HashNone)
 }

 // Mkdir makes the directory (container, bucket)
@@ -459,7 +462,7 @@ func (f *Fs) DecryptFileName(encryptedFileName string) (string, error) {
 // src with it, and calcuates the hash given by HashType on the fly
 //
 // Note that we break lots of encapsulation in this function.
-func (f *Fs) ComputeHash(o *Object, src fs.Object, hashType fs.HashType) (hash string, err error) {
+func (f *Fs) ComputeHash(o *Object, src fs.Object, hashType hash.Type) (hashStr string, err error) {
     // Read the nonce - opening the file is sufficient to read the nonce in
     in, err := o.Open()
     if err != nil {
@@ -499,7 +502,7 @@ func (f *Fs) ComputeHash(o *Object, src fs.Object, hashType fs.HashType) (hash s
     }

     // pipe into hash
-    m := fs.NewMultiHasher()
+    m := hash.NewMultiHasher()
     _, err = io.Copy(m, out)
     if err != nil {
         return "", errors.Wrap(err, "failed to hash data")
@@ -558,8 +561,8 @@ func (o *Object) Size() int64 {

 // Hash returns the selected checksum of the file
 // If no checksum is available it returns ""
-func (o *Object) Hash(hash fs.HashType) (string, error) {
+func (o *Object) Hash(ht hash.Type) (string, error) {
-    return "", fs.ErrHashUnsupported
+    return "", hash.ErrHashUnsupported
 }

 // UnWrap returns the wrapped Object
@@ -652,7 +655,7 @@ func (o *ObjectInfo) Size() int64 {

 // Hash returns the selected checksum of the file
 // If no checksum is available it returns ""
-func (o *ObjectInfo) Hash(hash fs.HashType) (string, error) {
+func (o *ObjectInfo) Hash(hash hash.Type) (string, error) {
     return "", nil
 }

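The password obscure/reveal helpers move along with the rest of the config code. A minimal sketch of the round trip, assuming the semantics are unchanged by the move (the plaintext is illustrative):

```go
package main

import (
	"fmt"
	"log"

	"github.com/ncw/rclone/fs/config"
)

func main() {
	// Obscure a password for storage in the config file (was fs.MustObscure).
	obscured := config.MustObscure("potato")

	// Reveal it again when a backend needs the plaintext (was fs.Reveal).
	plain, err := config.Reveal(obscured)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(plain) // potato
}
```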
backend/crypt/crypt_test.go
@@ -4,7 +4,7 @@ import (
     "os"
     "path/filepath"

-    "github.com/ncw/rclone/fs"
+    "github.com/ncw/rclone/fs/config"
     "github.com/ncw/rclone/fstest/fstests"
 )

@@ -19,15 +19,15 @@ func init() {
     fstests.ExtraConfig = []fstests.ExtraConfigItem{
         {Name: name, Key: "type", Value: "crypt"},
         {Name: name, Key: "remote", Value: tempdir},
-        {Name: name, Key: "password", Value: fs.MustObscure("potato")},
+        {Name: name, Key: "password", Value: config.MustObscure("potato")},
         {Name: name, Key: "filename_encryption", Value: "standard"},
         {Name: name2, Key: "type", Value: "crypt"},
         {Name: name2, Key: "remote", Value: tempdir2},
-        {Name: name2, Key: "password", Value: fs.MustObscure("potato2")},
+        {Name: name2, Key: "password", Value: config.MustObscure("potato2")},
         {Name: name2, Key: "filename_encryption", Value: "off"},
         {Name: name3, Key: "type", Value: "crypt"},
         {Name: name3, Key: "remote", Value: tempdir3},
-        {Name: name3, Key: "password", Value: fs.MustObscure("potato2")},
+        {Name: name3, Key: "password", Value: config.MustObscure("potato2")},
         {Name: name3, Key: "filename_encryption", Value: "obfuscate"},
     }
     fstests.SkipBadWindowsCharacters[name3+":"] = true
backend/drive/drive.go
@@ -21,11 +21,15 @@ import (
     "time"

     "github.com/ncw/rclone/fs"
+    "github.com/ncw/rclone/fs/config"
+    "github.com/ncw/rclone/fs/config/flags"
+    "github.com/ncw/rclone/fs/fserrors"
+    "github.com/ncw/rclone/fs/fshttp"
+    "github.com/ncw/rclone/fs/hash"
     "github.com/ncw/rclone/lib/dircache"
     "github.com/ncw/rclone/lib/oauthutil"
     "github.com/ncw/rclone/lib/pacer"
     "github.com/pkg/errors"
-    "github.com/spf13/pflag"
     "golang.org/x/oauth2"
     "golang.org/x/oauth2/google"
     "google.golang.org/api/drive/v2"
@@ -46,13 +50,13 @@ const (
 // Globals
 var (
     // Flags
-    driveAuthOwnerOnly = fs.BoolP("drive-auth-owner-only", "", false, "Only consider files owned by the authenticated user.")
+    driveAuthOwnerOnly = flags.BoolP("drive-auth-owner-only", "", false, "Only consider files owned by the authenticated user.")
-    driveUseTrash = fs.BoolP("drive-use-trash", "", true, "Send files to the trash instead of deleting permanently.")
+    driveUseTrash = flags.BoolP("drive-use-trash", "", true, "Send files to the trash instead of deleting permanently.")
-    driveSkipGdocs = fs.BoolP("drive-skip-gdocs", "", false, "Skip google documents in all listings.")
+    driveSkipGdocs = flags.BoolP("drive-skip-gdocs", "", false, "Skip google documents in all listings.")
-    driveSharedWithMe = fs.BoolP("drive-shared-with-me", "", false, "Only show files that are shared with me")
+    driveSharedWithMe = flags.BoolP("drive-shared-with-me", "", false, "Only show files that are shared with me")
-    driveTrashedOnly = fs.BoolP("drive-trashed-only", "", false, "Only show files that are in the trash")
+    driveTrashedOnly = flags.BoolP("drive-trashed-only", "", false, "Only show files that are in the trash")
-    driveExtensions = fs.StringP("drive-formats", "", defaultExtensions, "Comma separated list of preferred formats for downloading Google docs.")
+    driveExtensions = flags.StringP("drive-formats", "", defaultExtensions, "Comma separated list of preferred formats for downloading Google docs.")
-    driveListChunk = pflag.Int64P("drive-list-chunk", "", 1000, "Size of listing chunk 100-1000. 0 to disable.")
+    driveListChunk = flags.Int64P("drive-list-chunk", "", 1000, "Size of listing chunk 100-1000. 0 to disable.")
     // chunkSize is the size of the chunks created during a resumable upload and should be a power of two.
     // 1<<18 is the minimum size supported by the Google uploader, and there is no maximum.
     chunkSize = fs.SizeSuffix(8 * 1024 * 1024)
@@ -62,7 +66,7 @@ var (
         Scopes: []string{"https://www.googleapis.com/auth/drive"},
         Endpoint: google.Endpoint,
         ClientID: rcloneClientID,
-        ClientSecret: fs.MustReveal(rcloneEncryptedClientSecret),
+        ClientSecret: config.MustReveal(rcloneEncryptedClientSecret),
         RedirectURL: oauthutil.TitleBarRedirectURL,
     }
     mimeTypeToExtension = map[string]string{
@@ -99,7 +103,7 @@ func init() {
         NewFs: NewFs,
         Config: func(name string) {
             var err error
-            if fs.ConfigFileGet(name, "service_account_file") == "" {
+            if config.FileGet(name, "service_account_file") == "" {
                 err = oauthutil.Config("drive", name, driveConfig)
                 if err != nil {
                     log.Fatalf("Failed to configure token: %v", err)
@@ -111,18 +115,18 @@ func init() {
             }
         },
         Options: []fs.Option{{
-            Name: fs.ConfigClientID,
+            Name: config.ConfigClientID,
             Help: "Google Application Client Id - leave blank normally.",
         }, {
-            Name: fs.ConfigClientSecret,
+            Name: config.ConfigClientSecret,
             Help: "Google Application Client Secret - leave blank normally.",
         }, {
             Name: "service_account_file",
             Help: "Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.",
         }},
     })
-    fs.VarP(&driveUploadCutoff, "drive-upload-cutoff", "", "Cutoff for switching to chunked upload")
+    flags.VarP(&driveUploadCutoff, "drive-upload-cutoff", "", "Cutoff for switching to chunked upload")
-    fs.VarP(&chunkSize, "drive-chunk-size", "", "Upload chunk size. Must a power of 2 >= 256k.")
+    flags.VarP(&chunkSize, "drive-chunk-size", "", "Upload chunk size. Must a power of 2 >= 256k.")

     // Invert mimeTypeToExtension
     extensionToMimeType = make(map[string]string, len(mimeTypeToExtension))
@@ -185,7 +189,7 @@ func (f *Fs) Features() *fs.Features {
 func shouldRetry(err error) (again bool, errOut error) {
     again = false
     if err != nil {
-        if fs.ShouldRetry(err) {
+        if fserrors.ShouldRetry(err) {
             again = true
         } else {
             switch gerr := err.(type) {
@@ -337,13 +341,13 @@ func (f *Fs) parseExtensions(extensions string) error {

 // Figure out if the user wants to use a team drive
 func configTeamDrive(name string) error {
-    teamDrive := fs.ConfigFileGet(name, "team_drive")
+    teamDrive := config.FileGet(name, "team_drive")
     if teamDrive == "" {
         fmt.Printf("Configure this as a team drive?\n")
     } else {
         fmt.Printf("Change current team drive ID %q?\n", teamDrive)
     }
-    if !fs.Confirm() {
+    if !config.Confirm() {
         return nil
     }
     client, err := authenticate(name)
@@ -379,9 +383,9 @@ func configTeamDrive(name string) error {
     if len(driveIDs) == 0 {
         fmt.Printf("No team drives found in your account")
     } else {
-        driveID = fs.Choose("Enter a Team Drive ID", driveIDs, driveNames, true)
+        driveID = config.Choose("Enter a Team Drive ID", driveIDs, driveNames, true)
     }
-    fs.ConfigFileSet(name, "team_drive", driveID)
+    config.FileSet(name, "team_drive", driveID)
     return nil
 }

@@ -399,7 +403,7 @@ func getServiceAccountClient(keyJsonfilePath string) (*http.Client, error) {
     if err != nil {
         return nil, errors.Wrap(err, "error processing credentials")
     }
-    ctxWithSpecialClient := oauthutil.Context(fs.Config.Client())
+    ctxWithSpecialClient := oauthutil.Context(fshttp.NewClient(fs.Config))
     return oauth2.NewClient(ctxWithSpecialClient, conf.TokenSource(ctxWithSpecialClient)), nil
 }

@@ -407,7 +411,7 @@ func authenticate(name string) (*http.Client, error) {
     var oAuthClient *http.Client
     var err error

-    serviceAccountPath := fs.ConfigFileGet(name, "service_account_file")
+    serviceAccountPath := config.FileGet(name, "service_account_file")
     if serviceAccountPath != "" {
         oAuthClient, err = getServiceAccountClient(serviceAccountPath)
         if err != nil {
@@ -444,7 +448,7 @@ func NewFs(name, path string) (fs.Fs, error) {
         root: root,
         pacer: newPacer(),
     }
-    f.teamDriveID = fs.ConfigFileGet(name, "team_drive")
+    f.teamDriveID = config.FileGet(name, "team_drive")
     f.isTeamDrive = f.teamDriveID != ""
     f.features = (&fs.Features{
         DuplicateFiles: true,
@@ -1188,8 +1192,8 @@ func (f *Fs) DirCacheFlush() {
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
+func (f *Fs) Hashes() hash.Set {
-    return fs.HashSet(fs.HashMD5)
+    return hash.Set(hash.HashMD5)
 }

 // ------------------------------------------------------------
@@ -1213,9 +1217,9 @@ func (o *Object) Remote() string {
 }

 // Hash returns the Md5sum of an object returning a lowercase hex string
-func (o *Object) Hash(t fs.HashType) (string, error) {
+func (o *Object) Hash(t hash.Type) (string, error) {
-    if t != fs.HashMD5 {
+    if t != hash.HashMD5 {
-        return "", fs.ErrHashUnsupported
+        return "", hash.ErrHashUnsupported
     }
     return o.md5sum, nil
 }
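Flag registration moves out of `fs` (and out of raw `pflag`) into `fs/config/flags`. A hedged sketch of the new registration style, using only the `BoolP`/`Int64P` forms visible above (the flag names here are hypothetical):

```go
package main

import (
	"fmt"

	"github.com/ncw/rclone/fs/config/flags"
)

// These mirror the declarations in the diff; defaults apply until flags are parsed.
var (
	exampleBool  = flags.BoolP("example-flag", "", false, "An illustrative bool flag.")
	exampleChunk = flags.Int64P("example-chunk", "", 1000, "An illustrative int64 flag.")
)

func main() {
	fmt.Println(*exampleBool, *exampleChunk)
}
```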
backend/drive/upload.go
@@ -20,6 +20,8 @@ import (
     "strconv"

     "github.com/ncw/rclone/fs"
+    "github.com/ncw/rclone/fs/fserrors"
+    "github.com/ncw/rclone/lib/readers"
     "github.com/pkg/errors"
     "google.golang.org/api/drive/v2"
     "google.golang.org/api/googleapi"
@@ -201,7 +203,7 @@ func (rx *resumableUpload) Upload() (*drive.File, error) {
         if reqSize >= int64(chunkSize) {
             reqSize = int64(chunkSize)
         }
-        chunk := fs.NewRepeatableLimitReaderBuffer(rx.Media, buf, reqSize)
+        chunk := readers.NewRepeatableLimitReaderBuffer(rx.Media, buf, reqSize)

         // Transfer the chunk
         err = rx.f.pacer.Call(func() (bool, error) {
@@ -241,7 +243,7 @@ func (rx *resumableUpload) Upload() (*drive.File, error) {
     // Handle 404 Not Found errors when doing resumable uploads by starting
     // the entire upload over from the beginning.
     if rx.ret == nil {
-        return nil, fs.RetryErrorf("Incomplete upload - retry, last error %d", StatusCode)
+        return nil, fserrors.RetryErrorf("Incomplete upload - retry, last error %d", StatusCode)
     }
     return rx.ret, nil
 }
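Retry classification becomes its own package, `fs/fserrors`. A small sketch of the pairing seen in this diff, assuming `ShouldRetry` recognises errors created by `RetryErrorf` (the status code is illustrative):

```go
package main

import (
	"fmt"

	"github.com/ncw/rclone/fs/fserrors"
)

func main() {
	// Mark an error as retryable (was fs.RetryErrorf).
	err := fserrors.RetryErrorf("Incomplete upload - retry, last error %d", 500)

	// Ask whether an error should be retried (was fs.ShouldRetry).
	fmt.Println(fserrors.ShouldRetry(err))
}
```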
backend/dropbox/dropbox.go
@@ -34,8 +34,13 @@ import (
     "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox"
     "github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/files"
     "github.com/ncw/rclone/fs"
+    "github.com/ncw/rclone/fs/config"
+    "github.com/ncw/rclone/fs/config/flags"
+    "github.com/ncw/rclone/fs/fserrors"
+    "github.com/ncw/rclone/fs/hash"
     "github.com/ncw/rclone/lib/oauthutil"
     "github.com/ncw/rclone/lib/pacer"
+    "github.com/ncw/rclone/lib/readers"
     "github.com/pkg/errors"
     "golang.org/x/oauth2"
 )
@@ -59,7 +64,7 @@ var (
         // },
         Endpoint: dropbox.OAuthEndpoint(""),
         ClientID: rcloneClientID,
-        ClientSecret: fs.MustReveal(rcloneEncryptedClientSecret),
+        ClientSecret: config.MustReveal(rcloneEncryptedClientSecret),
         RedirectURL: oauthutil.RedirectLocalhostURL,
     }
     // A regexp matching path names for files Dropbox ignores
@@ -112,7 +117,7 @@ func init() {
             Help: "Dropbox App Secret - leave blank normally.",
         }},
     })
-    fs.VarP(&uploadChunkSize, "dropbox-chunk-size", "", fmt.Sprintf("Upload chunk size. Max %v.", maxUploadChunkSize))
+    flags.VarP(&uploadChunkSize, "dropbox-chunk-size", "", fmt.Sprintf("Upload chunk size. Max %v.", maxUploadChunkSize))
 }

 // Fs represents a remote dropbox server
@@ -170,7 +175,7 @@ func shouldRetry(err error) (bool, error) {
     if strings.Contains(baseErrString, "too_many_write_operations") || strings.Contains(baseErrString, "too_many_requests") {
         return true, err
     }
-    return fs.ShouldRetry(err), err
+    return fserrors.ShouldRetry(err), err
 }

 // NewFs contstructs an Fs from the path, container:path
@@ -181,11 +186,11 @@ func NewFs(name, root string) (fs.Fs, error) {

     // Convert the old token if it exists. The old token was just
     // just a string, the new one is a JSON blob
-    oldToken := strings.TrimSpace(fs.ConfigFileGet(name, fs.ConfigToken))
+    oldToken := strings.TrimSpace(config.FileGet(name, config.ConfigToken))
     if oldToken != "" && oldToken[0] != '{' {
         fs.Infof(name, "Converting token to new format")
         newToken := fmt.Sprintf(`{"access_token":"%s","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
-        err := fs.ConfigSetValueAndSave(name, fs.ConfigToken, newToken)
+        err := config.SetValueAndSave(name, config.ConfigToken, newToken)
         if err != nil {
             return nil, errors.Wrap(err, "NewFS convert token")
         }
@@ -675,8 +680,8 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
+func (f *Fs) Hashes() hash.Set {
-    return fs.HashSet(fs.HashDropbox)
+    return hash.Set(hash.HashDropbox)
 }

 // ------------------------------------------------------------
@@ -700,9 +705,9 @@ func (o *Object) Remote() string {
 }

 // Hash returns the dropbox special hash
-func (o *Object) Hash(t fs.HashType) (string, error) {
+func (o *Object) Hash(t hash.Type) (string, error) {
-    if t != fs.HashDropbox {
+    if t != hash.HashDropbox {
-        return "", fs.ErrHashUnsupported
+        return "", hash.ErrHashUnsupported
     }
     err := o.readMetaData()
     if err != nil {
@@ -813,7 +818,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
     case files.DownloadAPIError:
         // Don't attempt to retry copyright violation errors
         if e.EndpointError.Path.Tag == files.LookupErrorRestrictedContent {
            return nil, fserrors.NoRetryError(err)
         }
     }

@@ -831,7 +836,7 @@ func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size
     if size != -1 {
         chunks = int(size/chunkSize) + 1
     }
-    in := fs.NewCountingReader(in0)
+    in := readers.NewCountingReader(in0)
     buf := make([]byte, int(chunkSize))

     fmtChunk := func(cur int, last bool) {
@@ -847,7 +852,7 @@ func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size
     // write the first chunk
     fmtChunk(1, false)
     var res *files.UploadSessionStartResult
-    chunk := fs.NewRepeatableLimitReaderBuffer(in, buf, chunkSize)
+    chunk := readers.NewRepeatableLimitReaderBuffer(in, buf, chunkSize)
     err = o.fs.pacer.Call(func() (bool, error) {
         // seek to the start in case this is a retry
         if _, err = chunk.Seek(0, 0); err != nil {
@@ -883,7 +888,7 @@ func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size
     }
     cursor.Offset = in.BytesRead()
     fmtChunk(currentChunk, false)
-    chunk = fs.NewRepeatableLimitReaderBuffer(in, buf, chunkSize)
+    chunk = readers.NewRepeatableLimitReaderBuffer(in, buf, chunkSize)
     err = o.fs.pacer.Call(func() (bool, error) {
         // seek to the start in case this is a retry
         if _, err = chunk.Seek(0, 0); err != nil {
@@ -906,7 +911,7 @@ func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size
         Commit: commitInfo,
     }
     fmtChunk(currentChunk, true)
-    chunk = fs.NewRepeatableReaderBuffer(in, buf)
+    chunk = readers.NewRepeatableReaderBuffer(in, buf)
     err = o.fs.pacer.Call(func() (bool, error) {
         // seek to the start in case this is a retry
         if _, err = chunk.Seek(0, 0); err != nil {
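The reader helpers used by the chunked uploaders move to `lib/readers`. A minimal sketch of the retry-friendly chunk pattern above, assuming only the calls this diff shows (the input string and chunk size are illustrative):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"strings"

	"github.com/ncw/rclone/lib/readers"
)

func main() {
	// Count bytes as they are consumed (was fs.NewCountingReader).
	in := readers.NewCountingReader(strings.NewReader("hello world"))

	// Buffer the first 5 bytes so a retry can seek back to the chunk start,
	// as the uploaders above do (was fs.NewRepeatableLimitReaderBuffer).
	buf := make([]byte, 5)
	chunk := readers.NewRepeatableLimitReaderBuffer(in, buf, 5)

	data, _ := ioutil.ReadAll(chunk)
	fmt.Printf("%s (read %d bytes so far)\n", data, in.BytesRead())

	// Seek back and the same chunk can be re-read without touching the source.
	_, _ = chunk.Seek(0, 0)
}
```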
backend/ftp/ftp.go
@@ -13,6 +13,8 @@ import (

     "github.com/jlaffaye/ftp"
     "github.com/ncw/rclone/fs"
+    "github.com/ncw/rclone/fs/config"
+    "github.com/ncw/rclone/fs/hash"
     "github.com/pkg/errors"
 )

@@ -160,33 +162,33 @@ func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
 func NewFs(name, root string) (ff fs.Fs, err error) {
     // defer fs.Trace(nil, "name=%q, root=%q", name, root)("fs=%v, err=%v", &ff, &err)
     // FIXME Convert the old scheme used for the first beta - remove after release
-    if ftpURL := fs.ConfigFileGet(name, "url"); ftpURL != "" {
+    if ftpURL := config.FileGet(name, "url"); ftpURL != "" {
         fs.Infof(name, "Converting old configuration")
         u, err := url.Parse(ftpURL)
         if err != nil {
             return nil, errors.Wrapf(err, "Failed to parse old url %q", ftpURL)
         }
         parts := strings.Split(u.Host, ":")
-        fs.ConfigFileSet(name, "host", parts[0])
+        config.FileSet(name, "host", parts[0])
         if len(parts) > 1 {
-            fs.ConfigFileSet(name, "port", parts[1])
+            config.FileSet(name, "port", parts[1])
         }
-        fs.ConfigFileSet(name, "host", u.Host)
+        config.FileSet(name, "host", u.Host)
-        fs.ConfigFileSet(name, "user", fs.ConfigFileGet(name, "username"))
+        config.FileSet(name, "user", config.FileGet(name, "username"))
-        fs.ConfigFileSet(name, "pass", fs.ConfigFileGet(name, "password"))
+        config.FileSet(name, "pass", config.FileGet(name, "password"))
-        fs.ConfigFileDeleteKey(name, "username")
+        config.FileDeleteKey(name, "username")
-        fs.ConfigFileDeleteKey(name, "password")
+        config.FileDeleteKey(name, "password")
-        fs.ConfigFileDeleteKey(name, "url")
+        config.FileDeleteKey(name, "url")
-        fs.SaveConfig()
+        config.SaveConfig()
         if u.Path != "" && u.Path != "/" {
             fs.Errorf(name, "Path %q in FTP URL no longer supported - put it on the end of the remote %s:%s", u.Path, name, u.Path)
         }
     }
-    host := fs.ConfigFileGet(name, "host")
+    host := config.FileGet(name, "host")
-    user := fs.ConfigFileGet(name, "user")
+    user := config.FileGet(name, "user")
-    pass := fs.ConfigFileGet(name, "pass")
+    pass := config.FileGet(name, "pass")
-    port := fs.ConfigFileGet(name, "port")
+    port := config.FileGet(name, "port")
-    pass, err = fs.Reveal(pass)
+    pass, err = config.Reveal(pass)
     if err != nil {
         return nil, errors.Wrap(err, "NewFS decrypt password")
     }
@@ -346,7 +348,7 @@ func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
 }

 // Hashes are not supported
-func (f *Fs) Hashes() fs.HashSet {
+func (f *Fs) Hashes() hash.Set {
     return 0
 }

@@ -565,8 +567,8 @@ func (o *Object) Remote() string {
 }

 // Hash returns the hash of an object returning a lowercase hex string
-func (o *Object) Hash(t fs.HashType) (string, error) {
+func (o *Object) Hash(t hash.Type) (string, error) {
-    return "", fs.ErrHashUnsupported
+    return "", hash.ErrHashUnsupported
 }

 // Size returns the size of an object in bytes
backend/googlecloudstorage/googlecloudstorage.go
@@ -28,6 +28,11 @@ import (
     "time"

     "github.com/ncw/rclone/fs"
+    "github.com/ncw/rclone/fs/config"
+    "github.com/ncw/rclone/fs/config/flags"
+    "github.com/ncw/rclone/fs/fshttp"
+    "github.com/ncw/rclone/fs/hash"
+    "github.com/ncw/rclone/fs/walk"
     "github.com/ncw/rclone/lib/oauthutil"
     "github.com/pkg/errors"
     "golang.org/x/oauth2"
@@ -46,14 +51,14 @@ const (
 )

 var (
-    gcsLocation = fs.StringP("gcs-location", "", "", "Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).")
+    gcsLocation = flags.StringP("gcs-location", "", "", "Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).")
-    gcsStorageClass = fs.StringP("gcs-storage-class", "", "", "Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).")
+    gcsStorageClass = flags.StringP("gcs-storage-class", "", "", "Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).")
     // Description of how to auth for this app
     storageConfig = &oauth2.Config{
         Scopes: []string{storage.DevstorageFullControlScope},
         Endpoint: google.Endpoint,
         ClientID: rcloneClientID,
-        ClientSecret: fs.MustReveal(rcloneEncryptedClientSecret),
+        ClientSecret: config.MustReveal(rcloneEncryptedClientSecret),
         RedirectURL: oauthutil.TitleBarRedirectURL,
     }
 )
@@ -65,7 +70,7 @@ func init() {
         Description: "Google Cloud Storage (this is not Google Drive)",
         NewFs: NewFs,
         Config: func(name string) {
-            if fs.ConfigFileGet(name, "service_account_file") != "" {
+            if config.FileGet(name, "service_account_file") != "" {
                 return
             }
             err := oauthutil.Config("google cloud storage", name, storageConfig)
@@ -74,10 +79,10 @@ func init() {
             }
         },
         Options: []fs.Option{{
-            Name: fs.ConfigClientID,
+            Name: config.ConfigClientID,
             Help: "Google Application Client Id - leave blank normally.",
         }, {
-            Name: fs.ConfigClientSecret,
+            Name: config.ConfigClientSecret,
             Help: "Google Application Client Secret - leave blank normally.",
         }, {
             Name: "project_number",
@@ -280,7 +285,7 @@ func getServiceAccountClient(keyJsonfilePath string) (*http.Client, error) {
     if err != nil {
         return nil, errors.Wrap(err, "error processing credentials")
     }
-    ctxWithSpecialClient := oauthutil.Context(fs.Config.Client())
+    ctxWithSpecialClient := oauthutil.Context(fshttp.NewClient(fs.Config))
     return oauth2.NewClient(ctxWithSpecialClient, conf.TokenSource(ctxWithSpecialClient)), nil
 }

@@ -289,7 +294,7 @@ func NewFs(name, root string) (fs.Fs, error) {
     var oAuthClient *http.Client
     var err error

-    serviceAccountPath := fs.ConfigFileGet(name, "service_account_file")
+    serviceAccountPath := config.FileGet(name, "service_account_file")
     if serviceAccountPath != "" {
         oAuthClient, err = getServiceAccountClient(serviceAccountPath)
         if err != nil {
@@ -311,11 +316,11 @@ func NewFs(name, root string) (fs.Fs, error) {
         name: name,
         bucket: bucket,
         root: directory,
-        projectNumber: fs.ConfigFileGet(name, "project_number"),
+        projectNumber: config.FileGet(name, "project_number"),
-        objectACL: fs.ConfigFileGet(name, "object_acl"),
+        objectACL: config.FileGet(name, "object_acl"),
-        bucketACL: fs.ConfigFileGet(name, "bucket_acl"),
+        bucketACL: config.FileGet(name, "bucket_acl"),
-        location: fs.ConfigFileGet(name, "location"),
+        location: config.FileGet(name, "location"),
-        storageClass: fs.ConfigFileGet(name, "storage_class"),
+        storageClass: config.FileGet(name, "storage_class"),
     }
     f.features = (&fs.Features{
         ReadMimeType: true,
@@ -538,7 +543,7 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
     if f.bucket == "" {
         return fs.ErrorListBucketRequired
     }
-    list := fs.NewListRHelper(callback)
+    list := walk.NewListRHelper(callback)
     err = f.list(dir, true, func(remote string, object *storage.Object, isDirectory bool) error {
         entry, err := f.itemToDirEntry(remote, object, isDirectory)
         if err != nil {
@@ -669,8 +674,8 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
+func (f *Fs) Hashes() hash.Set {
-    return fs.HashSet(fs.HashMD5)
+    return hash.Set(hash.HashMD5)
 }

 // ------------------------------------------------------------
@@ -694,9 +699,9 @@ func (o *Object) Remote() string {
 }

 // Hash returns the Md5sum of an object returning a lowercase hex string
-func (o *Object) Hash(t fs.HashType) (string, error) {
+func (o *Object) Hash(t hash.Type) (string, error) {
-    if t != fs.HashMD5 {
+    if t != hash.HashMD5 {
-        return "", fs.ErrHashUnsupported
+        return "", hash.ErrHashUnsupported
     }
     return o.md5sum, nil
 }
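The recursive-listing helper moves to the new `fs/walk` package. A hedged sketch of constructing it, based only on the `walk.NewListRHelper(callback)` call and the `fs.ListRCallback` signature visible in this diff; how entries are fed in is up to the backend's lister:

```go
package main

import (
	"fmt"

	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fs/walk"
)

func main() {
	// Was fs.NewListRHelper; the callback receives batches of directory entries.
	list := walk.NewListRHelper(func(entries fs.DirEntries) error {
		fmt.Println("got", len(entries), "entries")
		return nil
	})
	_ = list // a backend feeds entries in and the helper batches calls to the callback
}
```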
backend/http/http.go
@@ -17,6 +17,9 @@ import (
     "time"

     "github.com/ncw/rclone/fs"
+    "github.com/ncw/rclone/fs/config"
+    "github.com/ncw/rclone/fs/fshttp"
+    "github.com/ncw/rclone/fs/hash"
     "github.com/ncw/rclone/lib/rest"
     "github.com/pkg/errors"
     "golang.org/x/net/html"
@@ -79,7 +82,7 @@ func statusError(res *http.Response, err error) error {
 // NewFs creates a new Fs object from the name and root. It connects to
 // the host specified in the config file.
 func NewFs(name, root string) (fs.Fs, error) {
-    endpoint := fs.ConfigFileGet(name, "url")
+    endpoint := config.FileGet(name, "url")
     if !strings.HasSuffix(endpoint, "/") {
         endpoint += "/"
     }
@@ -94,7 +97,7 @@ func NewFs(name, root string) (fs.Fs, error) {
         return nil, err
     }

-    client := fs.Config.Client()
+    client := fshttp.NewClient(fs.Config)

     var isFile = false
     if !strings.HasSuffix(u.String(), "/") {
@@ -363,8 +366,8 @@ func (o *Object) Remote() string {
 }

 // Hash returns "" since HTTP (in Go or OpenSSH) doesn't support remote calculation of hashes
-func (o *Object) Hash(r fs.HashType) (string, error) {
+func (o *Object) Hash(r hash.Type) (string, error) {
-    return "", fs.ErrHashUnsupported
+    return "", hash.ErrHashUnsupported
 }

 // Size returns the size in bytes of the remote http file
@@ -434,9 +437,9 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
     return res.Body, nil
 }

-// Hashes returns fs.HashNone to indicate remote hashing is unavailable
+// Hashes returns hash.HashNone to indicate remote hashing is unavailable
-func (f *Fs) Hashes() fs.HashSet {
+func (f *Fs) Hashes() hash.Set {
-    return fs.HashSet(fs.HashNone)
+    return hash.Set(hash.HashNone)
 }

 // Mkdir makes the root directory of the Fs object
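HTTP client construction moves to `fs/fshttp`: `fs.Config.Client()` becomes `fshttp.NewClient(fs.Config)`, and `fs.Config.Transport()` becomes `fshttp.NewTransport(fs.Config)` in the hubic hunk below. A minimal sketch, assuming the global `fs.Config` has been initialised (the URL is illustrative):

```go
package main

import (
	"log"

	"github.com/ncw/rclone/fs"
	"github.com/ncw/rclone/fs/fshttp"
)

func main() {
	// Build an *http.Client honouring rclone's global options (was fs.Config.Client()).
	client := fshttp.NewClient(fs.Config)

	resp, err := client.Get("https://example.com/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Status)
}
```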
backend/http/http_test.go
@@ -15,6 +15,7 @@ import (
     "time"

     "github.com/ncw/rclone/fs"
+    "github.com/ncw/rclone/fs/config"
     "github.com/ncw/rclone/fstest"
     "github.com/ncw/rclone/lib/rest"
     "github.com/stretchr/testify/assert"
@@ -36,12 +37,12 @@ func prepareServer(t *testing.T) func() {
     ts := httptest.NewServer(fileServer)

     // Configure the remote
-    fs.LoadConfig()
+    config.LoadConfig()
     // fs.Config.LogLevel = fs.LogLevelDebug
     // fs.Config.DumpHeaders = true
     // fs.Config.DumpBodies = true
-    fs.ConfigFileSet(remoteName, "type", "http")
+    config.FileSet(remoteName, "type", "http")
-    fs.ConfigFileSet(remoteName, "url", ts.URL)
+    config.FileSet(remoteName, "url", ts.URL)

     // return a function to tidy up
     return ts.Close
backend/hubic/hubic.go
@@ -15,9 +15,9 @@ import (

     "github.com/ncw/rclone/backend/swift"
     "github.com/ncw/rclone/fs"
+    "github.com/ncw/rclone/fs/config"
+    "github.com/ncw/rclone/fs/fshttp"
     "github.com/ncw/rclone/lib/oauthutil"
-    "github.com/ncw/rclone/oauthutil"
-    "github.com/ncw/rclone/swift"
     swiftLib "github.com/ncw/swift"
     "github.com/pkg/errors"
     "golang.org/x/oauth2"
@@ -40,7 +40,7 @@ var (
             TokenURL: "https://api.hubic.com/oauth/token/",
         },
         ClientID: rcloneClientID,
-        ClientSecret: fs.MustReveal(rcloneEncryptedClientSecret),
+        ClientSecret: config.MustReveal(rcloneEncryptedClientSecret),
         RedirectURL: oauthutil.RedirectLocalhostURL,
     }
 )
@@ -58,10 +58,10 @@ func init() {
             }
         },
         Options: []fs.Option{{
-            Name: fs.ConfigClientID,
+            Name: config.ConfigClientID,
             Help: "Hubic Client Id - leave blank normally.",
         }, {
-            Name: fs.ConfigClientSecret,
+            Name: config.ConfigClientSecret,
             Help: "Hubic Client Secret - leave blank normally.",
         }},
     })
@@ -159,7 +159,7 @@ func NewFs(name, root string) (fs.Fs, error) {
         Auth: newAuth(f),
         ConnectTimeout: 10 * fs.Config.ConnectTimeout, // Use the timeouts in the transport
         Timeout: 10 * fs.Config.Timeout, // Use the timeouts in the transport
-        Transport: fs.Config.Transport(),
+        Transport: fshttp.NewTransport(fs.Config),
     }
     err = c.Authenticate()
     if err != nil {
backend/local/local.go
@@ -16,14 +16,17 @@ import (
     "unicode/utf8"

     "github.com/ncw/rclone/fs"
+    "github.com/ncw/rclone/fs/config"
+    "github.com/ncw/rclone/fs/config/flags"
+    "github.com/ncw/rclone/fs/hash"
     "github.com/pkg/errors"
     "google.golang.org/appengine/log"
 )

 var (
-    followSymlinks = fs.BoolP("copy-links", "L", false, "Follow symlinks and copy the pointed to item.")
+    followSymlinks = flags.BoolP("copy-links", "L", false, "Follow symlinks and copy the pointed to item.")
-    skipSymlinks = fs.BoolP("skip-links", "", false, "Don't warn about skipped symlinks.")
+    skipSymlinks = flags.BoolP("skip-links", "", false, "Don't warn about skipped symlinks.")
-    noUTFNorm = fs.BoolP("local-no-unicode-normalization", "", false, "Don't apply unicode normalization to paths and filenames")
+    noUTFNorm = flags.BoolP("local-no-unicode-normalization", "", false, "Don't apply unicode normalization to paths and filenames")
 )

 // Constants
@@ -72,7 +75,7 @@ type Object struct {
     size int64 // file metadata - always present
     mode os.FileMode
     modTime time.Time
-    hashes map[fs.HashType]string // Hashes
+    hashes map[hash.Type]string // Hashes
 }

 // ------------------------------------------------------------
@@ -85,7 +88,7 @@ func NewFs(name, root string) (fs.Fs, error) {
         log.Errorf(nil, "The --local-no-unicode-normalization flag is deprecated and will be removed")
     }

-    nounc := fs.ConfigFileGet(name, "nounc")
+    nounc := config.FileGet(name, "nounc")
     f := &Fs{
         name: name,
         warned: make(map[string]struct{}),
@@ -532,8 +535,8 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
+func (f *Fs) Hashes() hash.Set {
-    return fs.SupportedHashes
+    return hash.SupportedHashes
 }

 // ------------------------------------------------------------
@@ -557,7 +560,7 @@ func (o *Object) Remote() string {
 }

 // Hash returns the requested hash of a file as a lowercase hex string
-func (o *Object) Hash(r fs.HashType) (string, error) {
+func (o *Object) Hash(r hash.Type) (string, error) {
     // Check that the underlying file hasn't changed
     oldtime := o.modTime
     oldsize := o.size
@@ -571,12 +574,12 @@ func (o *Object) Hash(r fs.HashType) (string, error) {
     }

     if o.hashes == nil {
-        o.hashes = make(map[fs.HashType]string)
+        o.hashes = make(map[hash.Type]string)
         in, err := os.Open(o.path)
         if err != nil {
             return "", errors.Wrap(err, "hash: failed to open")
         }
-        o.hashes, err = fs.HashStream(in)
+        o.hashes, err = hash.Stream(in)
         closeErr := in.Close()
         if err != nil {
             return "", errors.Wrap(err, "hash: failed to read")
@@ -643,7 +646,7 @@ func (o *Object) Storable() bool {
 type localOpenFile struct {
     o *Object // object that is open
     in io.ReadCloser // handle we are wrapping
-    hash *fs.MultiHasher // currently accumulating hashes
+    hash *hash.MultiHasher // currently accumulating hashes
 }

 // Read bytes from the object - see io.Reader
@@ -670,7 +673,7 @@ func (file *localOpenFile) Close() (err error) {
 // Open an object for read
 func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
     var offset int64
-    hashes := fs.SupportedHashes
+    hashes := hash.SupportedHashes
     for _, option := range options {
         switch x := option.(type) {
         case *fs.SeekOption:
@@ -694,7 +697,7 @@ func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
         // don't attempt to make checksums
         return fd, err
     }
-    hash, err := fs.NewMultiHasherTypes(hashes)
+    hash, err := hash.NewMultiHasherTypes(hashes)
     if err != nil {
         return nil, err
     }
@@ -715,7 +718,7 @@ func (o *Object) mkdirAll() error {

 // Update the object from in with modTime and size
 func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
-    hashes := fs.SupportedHashes
+    hashes := hash.SupportedHashes
     for _, option := range options {
         switch x := option.(type) {
         case *fs.HashesOption:
@@ -734,7 +737,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
     }

     // Calculate the hash of the object we are reading as we go along
-    hash, err := fs.NewMultiHasherTypes(hashes)
+    hash, err := hash.NewMultiHasherTypes(hashes)
     if err != nil {
         return err
     }
|
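The local backend changes above are a mechanical rename: everything hash-related moved out of fs into the new fs/hash package. A minimal sketch of the two usage patterns visible in these hunks, assuming only what the diff itself shows (hash.Stream returning a map keyed by hash.Type, and the MultiHasher accepting writes; the Sums accessor is assumed from the old fs implementation):

    package main

    import (
    	"fmt"
    	"io"
    	"io/ioutil"
    	"os"

    	"github.com/ncw/rclone/fs/hash"
    )

    func main() {
    	in, err := os.Open("/tmp/example.bin") // any local file
    	if err != nil {
    		panic(err)
    	}
    	defer func() { _ = in.Close() }()

    	// One-shot: read the whole stream and get every supported hash at
    	// once, as the local backend's Hash method does above.
    	sums, err := hash.Stream(in)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(sums) // map of hash.Type to lowercase hex string

    	// Incremental: accumulate hashes while the data flows elsewhere, as
    	// Open/Update do. Rewind first since hash.Stream read to EOF.
    	if _, err := in.Seek(0, io.SeekStart); err != nil {
    		panic(err)
    	}
    	hasher, err := hash.NewMultiHasherTypes(hash.SupportedHashes)
    	if err != nil {
    		panic(err)
    	}
    	if _, err := io.Copy(ioutil.Discard, io.TeeReader(in, hasher)); err != nil {
    		panic(err)
    	}
    	fmt.Println(hasher.Sums())
    }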
@@ -9,10 +9,11 @@ import (
 	"syscall"

 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config/flags"
 )

 var (
-	oneFileSystem = fs.BoolP("one-file-system", "x", false, "Don't cross filesystem boundaries.")
+	oneFileSystem = flags.BoolP("one-file-system", "x", false, "Don't cross filesystem boundaries.")
 )

 // readDevice turns a valid os.FileInfo into a device number,
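The one-file-system hunk shows the new home for flag registration: fs.BoolP becomes flags.BoolP in fs/config/flags, with the same pflag-style signature. A hedged sketch with an invented flag name:

    package mybackend

    import "github.com/ncw/rclone/fs/config/flags"

    // Registered on rclone's global flag set at package load time; the flag
    // name here is invented for illustration.
    var exampleOneFS = flags.BoolP("example-one-file-system", "", false, "Don't cross filesystem boundaries.")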
@@ -15,16 +15,16 @@ import (
 	"time"

 	"github.com/ncw/rclone/backend/onedrive/api"
-	"github.com/ncw/rclone/dircache"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/ncw/rclone/fs/fserrors"
+	"github.com/ncw/rclone/fs/hash"
 	"github.com/ncw/rclone/lib/dircache"
 	"github.com/ncw/rclone/lib/oauthutil"
 	"github.com/ncw/rclone/lib/pacer"
+	"github.com/ncw/rclone/lib/readers"
 	"github.com/ncw/rclone/lib/rest"
-	"github.com/ncw/rclone/oauthutil"
-	"github.com/ncw/rclone/onedrive/api"
-	"github.com/ncw/rclone/pacer"
-	"github.com/ncw/rclone/rest"
 	"github.com/pkg/errors"
 	"golang.org/x/oauth2"
 )
@@ -56,7 +56,7 @@ var (
 			TokenURL: "https://login.live.com/oauth20_token.srf",
 		},
 		ClientID:     rclonePersonalClientID,
-		ClientSecret: fs.MustReveal(rclonePersonalEncryptedClientSecret),
+		ClientSecret: config.MustReveal(rclonePersonalEncryptedClientSecret),
 		RedirectURL:  oauthutil.RedirectLocalhostURL,
 	}
@@ -67,7 +67,7 @@ var (
 			TokenURL: "https://login.microsoftonline.com/common/oauth2/token",
 		},
 		ClientID:     rcloneBusinessClientID,
-		ClientSecret: fs.MustReveal(rcloneBusinessEncryptedClientSecret),
+		ClientSecret: config.MustReveal(rcloneBusinessEncryptedClientSecret),
 		RedirectURL:  oauthutil.RedirectLocalhostURL,
 	}
 	oauthBusinessResource = oauth2.SetAuthURLParam("resource", discoveryServiceURL)
@@ -87,7 +87,7 @@ func init() {
 			fmt.Printf("Choose OneDrive account type?\n")
 			fmt.Printf(" * Say b for a OneDrive business account\n")
 			fmt.Printf(" * Say p for a personal OneDrive account\n")
-			isPersonal := fs.Command([]string{"bBusiness", "pPersonal"}) == 'p'
+			isPersonal := config.Command([]string{"bBusiness", "pPersonal"}) == 'p'

 			if isPersonal {
 				// for personal accounts we don't safe a field about the account
@@ -103,7 +103,7 @@ func init() {
 			}

 			// Are we running headless?
-			if fs.ConfigFileGet(name, fs.ConfigAutomatic) != "" {
+			if config.FileGet(name, config.ConfigAutomatic) != "" {
 				// Yes, okay we are done
 				return
 			}
@@ -159,10 +159,10 @@ func init() {
 			} else if len(resourcesID) == 1 {
 				foundService = resourcesID[0]
 			} else {
-				foundService = fs.Choose("Choose resource URL", resourcesID, resourcesURL, false)
+				foundService = config.Choose("Choose resource URL", resourcesID, resourcesURL, false)
 			}

-			fs.ConfigFileSet(name, configResourceURL, foundService)
+			config.FileSet(name, configResourceURL, foundService)
 			oauthBusinessResource = oauth2.SetAuthURLParam("resource", foundService)

 			// get the token from the inital config
@@ -218,16 +218,16 @@ func init() {
 			}
 		},
 		Options: []fs.Option{{
-			Name: fs.ConfigClientID,
+			Name: config.ConfigClientID,
 			Help: "Microsoft App Client Id - leave blank normally.",
 		}, {
-			Name: fs.ConfigClientSecret,
+			Name: config.ConfigClientSecret,
 			Help: "Microsoft App Client Secret - leave blank normally.",
 		}},
 	})

-	fs.VarP(&chunkSize, "onedrive-chunk-size", "", "Above this size files will be chunked - must be multiple of 320k.")
-	fs.VarP(&uploadCutoff, "onedrive-upload-cutoff", "", "Cutoff for switching to chunked upload - must be <= 100MB")
+	flags.VarP(&chunkSize, "onedrive-chunk-size", "", "Above this size files will be chunked - must be multiple of 320k.")
+	flags.VarP(&uploadCutoff, "onedrive-upload-cutoff", "", "Cutoff for switching to chunked upload - must be <= 100MB")
 }

 // Fs represents a remote one drive
@@ -306,7 +306,7 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
 		authRety = true
 		fs.Debugf(nil, "Should retry: %v", err)
 	}
-	return authRety || fs.ShouldRetry(err) || fs.ShouldRetryHTTP(resp, retryErrorCodes), err
+	return authRety || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
 }

 // readMetaDataForPath reads the metadata from the path
@@ -339,7 +339,7 @@ func errorHandler(resp *http.Response) error {
 // NewFs constructs an Fs from the path, container:path
 func NewFs(name, root string) (fs.Fs, error) {
 	// get the resource URL from the config file0
-	resourceURL := fs.ConfigFileGet(name, configResourceURL, "")
+	resourceURL := config.FileGet(name, configResourceURL, "")
 	// if we have a resource URL it's a business account otherwise a personal one
 	var rootURL string
 	var oauthConfig *oauth2.Config
@@ -743,10 +743,10 @@ func (f *Fs) waitForJob(location string, o *Object) error {
 		err = f.pacer.Call(func() (bool, error) {
 			resp, err = f.srv.Call(&opts)
 			if err != nil {
-				return fs.ShouldRetry(err), err
+				return fserrors.ShouldRetry(err), err
 			}
 			body, err = rest.ReadBody(resp)
-			return fs.ShouldRetry(err), err
+			return fserrors.ShouldRetry(err), err
 		})
 		if err != nil {
 			return err
@@ -915,8 +915,8 @@ func (f *Fs) DirCacheFlush() {
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
-	return fs.HashSet(fs.HashSHA1)
+func (f *Fs) Hashes() hash.Set {
+	return hash.Set(hash.HashSHA1)
 }

 // ------------------------------------------------------------
@@ -945,9 +945,9 @@ func (o *Object) srvPath() string {
 }

 // Hash returns the SHA-1 of an object returning a lowercase hex string
-func (o *Object) Hash(t fs.HashType) (string, error) {
-	if t != fs.HashSHA1 {
-		return "", fs.ErrHashUnsupported
+func (o *Object) Hash(t hash.Type) (string, error) {
+	if t != hash.HashSHA1 {
+		return "", hash.ErrHashUnsupported
 	}
 	return o.sha1, nil
 }
@@ -1161,7 +1161,7 @@ func (o *Object) uploadMultipart(in io.Reader, size int64) (err error) {
 		if remaining < n {
 			n = remaining
 		}
-		seg := fs.NewRepeatableReader(io.LimitReader(in, n))
+		seg := readers.NewRepeatableReader(io.LimitReader(in, n))
 		fs.Debugf(o, "Uploading segment %d/%d size %d", position, size, n)
 		err = o.uploadFragment(uploadURL, position, size, seg, n)
 		if err != nil {
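The onedrive hunks show the retry plumbing after the split: fserrors decides whether an error is transient, and lib/pacer re-runs the request with backoff. A reduced sketch of the waitForJob pattern above; doRequest stands in for the real REST call, and the *pacer.Pacer type name is assumed from lib/pacer:

    package example

    import (
    	"github.com/ncw/rclone/fs/fserrors"
    	"github.com/ncw/rclone/lib/pacer"
    )

    // callWithRetries re-runs doRequest until it succeeds or the error is
    // not retryable. The callback's bool return tells the pacer to retry.
    func callWithRetries(p *pacer.Pacer, doRequest func() error) error {
    	return p.Call(func() (bool, error) {
    		err := doRequest()
    		return fserrors.ShouldRetry(err), err
    	})
    }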
@@ -22,16 +22,15 @@ import (
 	"time"

 	"github.com/ncw/rclone/backend/pcloud/api"
-	"github.com/ncw/rclone/dircache"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/ncw/rclone/fs/fserrors"
+	"github.com/ncw/rclone/fs/hash"
 	"github.com/ncw/rclone/lib/dircache"
 	"github.com/ncw/rclone/lib/oauthutil"
 	"github.com/ncw/rclone/lib/pacer"
 	"github.com/ncw/rclone/lib/rest"
-	"github.com/ncw/rclone/oauthutil"
-	"github.com/ncw/rclone/pacer"
-	"github.com/ncw/rclone/pcloud/api"
-	"github.com/ncw/rclone/rest"
 	"github.com/pkg/errors"
 	"golang.org/x/oauth2"
 )
@@ -56,7 +55,7 @@ var (
 			TokenURL: "https://api.pcloud.com/oauth2_token",
 		},
 		ClientID:     rcloneClientID,
-		ClientSecret: fs.MustReveal(rcloneEncryptedClientSecret),
+		ClientSecret: config.MustReveal(rcloneEncryptedClientSecret),
 		RedirectURL:  oauthutil.RedirectLocalhostURL,
 	}
 	uploadCutoff = fs.SizeSuffix(50 * 1024 * 1024)
@@ -75,14 +74,14 @@ func init() {
 			}
 		},
 		Options: []fs.Option{{
-			Name: fs.ConfigClientID,
+			Name: config.ConfigClientID,
 			Help: "Pcloud App Client Id - leave blank normally.",
 		}, {
-			Name: fs.ConfigClientSecret,
+			Name: config.ConfigClientSecret,
 			Help: "Pcloud App Client Secret - leave blank normally.",
 		}},
 	})
-	fs.VarP(&uploadCutoff, "pcloud-upload-cutoff", "", "Cutoff for switching to multipart upload")
+	flags.VarP(&uploadCutoff, "pcloud-upload-cutoff", "", "Cutoff for switching to multipart upload")
 }

 // Fs represents a remote pcloud
@@ -174,7 +173,7 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
 		doRetry = true
 		fs.Debugf(nil, "Should retry: %v", err)
 	}
-	return doRetry || fs.ShouldRetry(err) || fs.ShouldRetryHTTP(resp, retryErrorCodes), err
+	return doRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
 }

 // substitute reserved characters for pcloud
@@ -812,8 +811,8 @@ func (f *Fs) DirCacheFlush() {
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
-	return fs.HashSet(fs.HashMD5 | fs.HashSHA1)
+func (f *Fs) Hashes() hash.Set {
+	return hash.Set(hash.HashMD5 | hash.HashSHA1)
 }

 // ------------------------------------------------------------
@@ -859,9 +858,9 @@ func (o *Object) getHashes() (err error) {
 }

 // Hash returns the SHA-1 of an object returning a lowercase hex string
-func (o *Object) Hash(t fs.HashType) (string, error) {
-	if t != fs.HashMD5 && t != fs.HashSHA1 {
-		return "", fs.ErrHashUnsupported
+func (o *Object) Hash(t hash.Type) (string, error) {
+	if t != hash.HashMD5 && t != hash.HashSHA1 {
+		return "", hash.ErrHashUnsupported
 	}
 	if o.md5 == "" && o.sha1 == "" {
 		err := o.getHashes()
@@ -869,7 +868,7 @@ func (o *Object) Hash(t fs.HashType) (string, error) {
 			return "", errors.Wrap(err, "failed to get hash")
 		}
 	}
-	if t == fs.HashMD5 {
+	if t == hash.HashMD5 {
 		return o.md5, nil
 	}
 	return o.sha1, nil
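pcloud advertises two hash types, which makes visible that the new hash.Set is a plain bitmask: types compose with |, exactly as the Hashes hunk above does. Illustrative only; the Contains call is assumed from the set API rather than taken from this diff:

    package main

    import (
    	"fmt"

    	"github.com/ncw/rclone/fs/hash"
    )

    func main() {
    	set := hash.Set(hash.HashMD5 | hash.HashSHA1) // as in Hashes() above
    	fmt.Println(set.Contains(hash.HashMD5))       // expected: true
    	fmt.Println(set.Contains(hash.HashSHA1))      // expected: true
    }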
@@ -17,8 +17,12 @@ import (
 	"time"

 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/fshttp"
+	"github.com/ncw/rclone/fs/hash"
+	"github.com/ncw/rclone/fs/walk"
 	"github.com/pkg/errors"
-	"github.com/yunify/qingstor-sdk-go/config"
+	qsConfig "github.com/yunify/qingstor-sdk-go/config"
 	qsErr "github.com/yunify/qingstor-sdk-go/request/errors"
 	qs "github.com/yunify/qingstor-sdk-go/service"
 )
@@ -162,11 +166,11 @@ func qsParseEndpoint(endpoint string) (protocol, host, port string, err error) {

 // qsConnection makes a connection to qingstor
 func qsServiceConnection(name string) (*qs.Service, error) {
-	accessKeyID := fs.ConfigFileGet(name, "access_key_id")
-	secretAccessKey := fs.ConfigFileGet(name, "secret_access_key")
+	accessKeyID := config.FileGet(name, "access_key_id")
+	secretAccessKey := config.FileGet(name, "secret_access_key")

 	switch {
-	case fs.ConfigFileGetBool(name, "env_auth", false):
+	case config.FileGetBool(name, "env_auth", false):
 		// No need for empty checks if "env_auth" is true
 	case accessKeyID == "" && secretAccessKey == "":
 		// if no access key/secret and iam is explicitly disabled then fall back to anon interaction
@@ -180,7 +184,7 @@ func qsServiceConnection(name string) (*qs.Service, error) {
 	host := "qingstor.com"
 	port := 443

-	endpoint := fs.ConfigFileGet(name, "endpoint", "")
+	endpoint := config.FileGet(name, "endpoint", "")
 	if endpoint != "" {
 		_protocol, _host, _port, err := qsParseEndpoint(endpoint)
@@ -201,19 +205,19 @@ func qsServiceConnection(name string) (*qs.Service, error) {
 	}

 	connectionRetries := 3
-	retries := fs.ConfigFileGet(name, "connection_retries", "")
+	retries := config.FileGet(name, "connection_retries", "")
 	if retries != "" {
 		connectionRetries, _ = strconv.Atoi(retries)
 	}

-	cf, err := config.NewDefault()
+	cf, err := qsConfig.NewDefault()
 	cf.AccessKeyID = accessKeyID
 	cf.SecretAccessKey = secretAccessKey
 	cf.Protocol = protocol
 	cf.Host = host
 	cf.Port = port
 	cf.ConnectionRetries = connectionRetries
-	cf.Connection = fs.Config.Client()
+	cf.Connection = fshttp.NewClient(fs.Config)

 	svc, _ := qs.Init(cf)
@@ -231,7 +235,7 @@ func NewFs(name, root string) (fs.Fs, error) {
 		return nil, err
 	}

-	zone := fs.ConfigFileGet(name, "zone")
+	zone := config.FileGet(name, "zone")
 	if zone == "" {
 		zone = "pek3a"
 	}
@@ -302,9 +306,9 @@ func (f *Fs) Precision() time.Duration {
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
-	return fs.HashSet(fs.HashMD5)
-	//return fs.HashSet(fs.HashNone)
+func (f *Fs) Hashes() hash.Set {
+	return hash.Set(hash.HashMD5)
+	//return hash.HashSet(hash.HashNone)
 }

 // Features returns the optional features of this Fs
@@ -591,7 +595,7 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
 	if f.bucket == "" {
 		return fs.ErrorListBucketRequired
 	}
-	list := fs.NewListRHelper(callback)
+	list := walk.NewListRHelper(callback)
 	err = f.list(dir, true, func(remote string, object *qs.KeyType, isDirectory bool) error {
 		entry, err := f.itemToDirEntry(remote, object, isDirectory)
 		if err != nil {
@@ -925,9 +929,9 @@ var matchMd5 = regexp.MustCompile(`^[0-9a-f]{32}$`)

 // Hash returns the selected checksum of the file
 // If no checksum is available it returns ""
-func (o *Object) Hash(t fs.HashType) (string, error) {
-	if t != fs.HashMD5 {
-		return "", fs.ErrHashUnsupported
+func (o *Object) Hash(t hash.Type) (string, error) {
+	if t != hash.HashMD5 {
+		return "", hash.ErrHashUnsupported
 	}
 	etag := strings.Trim(strings.ToLower(o.etag), `"`)
 	// Check the etag is a valid md5sum
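The qingstor change is mostly about a name clash: the SDK ships its own config package, which would collide with the newly imported fs/config, so the SDK import gains the qsConfig alias. A minimal sketch of the resulting pattern, with the *qsConfig.Config return type assumed from the NewDefault call in the hunk above:

    package example

    import (
    	"github.com/ncw/rclone/fs/config"                   // rclone's config file helpers
    	qsConfig "github.com/yunify/qingstor-sdk-go/config" // the SDK's config, aliased
    )

    // newSDKConfig pulls credentials from the rclone config file into the
    // SDK's own config struct, mirroring qsServiceConnection above.
    func newSDKConfig(name string) (*qsConfig.Config, error) {
    	cf, err := qsConfig.NewDefault()
    	if err != nil {
    		return nil, err
    	}
    	cf.AccessKeyID = config.FileGet(name, "access_key_id")
    	cf.SecretAccessKey = config.FileGet(name, "secret_access_key")
    	return cf, nil
    }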
@@ -37,6 +37,11 @@ import (
 	"github.com/aws/aws-sdk-go/service/s3"
 	"github.com/aws/aws-sdk-go/service/s3/s3manager"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/ncw/rclone/fs/fshttp"
+	"github.com/ncw/rclone/fs/hash"
+	"github.com/ncw/rclone/fs/walk"
 	"github.com/ncw/rclone/lib/rest"
 	"github.com/ncw/swift"
 	"github.com/pkg/errors"
@@ -233,8 +238,8 @@ const (
 // Globals
 var (
 	// Flags
-	s3ACL          = fs.StringP("s3-acl", "", "", "Canned ACL used when creating buckets and/or storing objects in S3")
-	s3StorageClass = fs.StringP("s3-storage-class", "", "", "Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)")
+	s3ACL          = flags.StringP("s3-acl", "", "", "Canned ACL used when creating buckets and/or storing objects in S3")
+	s3StorageClass = flags.StringP("s3-storage-class", "", "", "Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)")
 )

 // Fs represents a remote s3 server
@@ -316,9 +321,9 @@ func s3ParsePath(path string) (bucket, directory string, err error) {
 func s3Connection(name string) (*s3.S3, *session.Session, error) {
 	// Make the auth
 	v := credentials.Value{
-		AccessKeyID:     fs.ConfigFileGet(name, "access_key_id"),
-		SecretAccessKey: fs.ConfigFileGet(name, "secret_access_key"),
-		SessionToken:    fs.ConfigFileGet(name, "session_token"),
+		AccessKeyID:     config.FileGet(name, "access_key_id"),
+		SecretAccessKey: config.FileGet(name, "secret_access_key"),
+		SessionToken:    config.FileGet(name, "session_token"),
 	}

 	lowTimeoutClient := &http.Client{Timeout: 1 * time.Second} // low timeout to ec2 metadata service
@@ -348,7 +353,7 @@ func s3Connection(name string) (*s3.S3, *session.Session, error) {
 	cred := credentials.NewChainCredentials(providers)

 	switch {
-	case fs.ConfigFileGetBool(name, "env_auth", false):
+	case config.FileGetBool(name, "env_auth", false):
 		// No need for empty checks if "env_auth" is true
 	case v.AccessKeyID == "" && v.SecretAccessKey == "":
 		// if no access key/secret and iam is explicitly disabled then fall back to anon interaction
@@ -359,8 +364,8 @@ func s3Connection(name string) (*s3.S3, *session.Session, error) {
 		return nil, nil, errors.New("secret_access_key not found")
 	}

-	endpoint := fs.ConfigFileGet(name, "endpoint")
-	region := fs.ConfigFileGet(name, "region")
+	endpoint := config.FileGet(name, "endpoint")
+	region := config.FileGet(name, "region")
 	if region == "" && endpoint == "" {
 		endpoint = "https://s3.amazonaws.com/"
 	}
@@ -372,7 +377,7 @@ func s3Connection(name string) (*s3.S3, *session.Session, error) {
 		WithMaxRetries(maxRetries).
 		WithCredentials(cred).
 		WithEndpoint(endpoint).
-		WithHTTPClient(fs.Config.Client()).
+		WithHTTPClient(fshttp.NewClient(fs.Config)).
 		WithS3ForcePathStyle(true)
 	// awsConfig.WithLogLevel(aws.LogDebugWithSigning)
 	ses := session.New()
@@ -408,11 +413,11 @@ func NewFs(name, root string) (fs.Fs, error) {
 		c:      c,
 		bucket: bucket,
 		ses:    ses,
-		acl:    fs.ConfigFileGet(name, "acl"),
+		acl:    config.FileGet(name, "acl"),
 		root:   directory,
-		locationConstraint: fs.ConfigFileGet(name, "location_constraint"),
-		sse:                fs.ConfigFileGet(name, "server_side_encryption"),
-		storageClass:       fs.ConfigFileGet(name, "storage_class"),
+		locationConstraint: config.FileGet(name, "location_constraint"),
+		sse:                config.FileGet(name, "server_side_encryption"),
+		storageClass:       config.FileGet(name, "storage_class"),
 	}
 	f.features = (&fs.Features{
 		ReadMimeType: true,
@@ -657,7 +662,7 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
 	if f.bucket == "" {
 		return fs.ErrorListBucketRequired
 	}
-	list := fs.NewListRHelper(callback)
+	list := walk.NewListRHelper(callback)
 	err = f.list(dir, true, func(remote string, object *s3.Object, isDirectory bool) error {
 		entry, err := f.itemToDirEntry(remote, object, isDirectory)
 		if err != nil {
@@ -804,8 +809,8 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
-	return fs.HashSet(fs.HashMD5)
+func (f *Fs) Hashes() hash.Set {
+	return hash.Set(hash.HashMD5)
 }

 // ------------------------------------------------------------
@@ -831,9 +836,9 @@ func (o *Object) Remote() string {
 var matchMd5 = regexp.MustCompile(`^[0-9a-f]{32}$`)

 // Hash returns the Md5sum of an object returning a lowercase hex string
-func (o *Object) Hash(t fs.HashType) (string, error) {
-	if t != fs.HashMD5 {
-		return "", fs.ErrHashUnsupported
+func (o *Object) Hash(t hash.Type) (string, error) {
+	if t != hash.HashMD5 {
+		return "", hash.ErrHashUnsupported
 	}
 	hash := strings.Trim(strings.ToLower(o.etag), `"`)
 	// Check the etag is a valid md5sum
@@ -1027,7 +1032,7 @@ func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOptio
 	}

 	if size > uploader.PartSize {
-		hash, err := src.Hash(fs.HashMD5)
+		hash, err := src.Hash(hash.HashMD5)
 		if err == nil && matchMd5.MatchString(hash) {
 			hashBytes, err := hex.DecodeString(hash)
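A thread running through all of these backends: HTTP clients are no longer minted by a method on fs.Config but by the new fs/fshttp package, which builds the client from the global options. A minimal sketch of the replacement call seen on the left-hand side of these hunks:

    package example

    import (
    	"net/http"

    	"github.com/ncw/rclone/fs"
    	"github.com/ncw/rclone/fs/fshttp"
    )

    // newHTTPClient replaces the old fs.Config.Client() call.
    func newHTTPClient() *http.Client {
    	return fshttp.NewClient(fs.Config)
    }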
@@ -16,6 +16,9 @@ import (
 	"time"

 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/fshttp"
+	"github.com/ncw/rclone/fs/hash"
 	"github.com/pkg/errors"
 	"github.com/pkg/sftp"
 	"github.com/xanzy/ssh-agent"
@@ -94,7 +97,7 @@ type Fs struct {
 	port         string
 	url          string
 	mkdirLock    *stringLock
-	cachedHashes *fs.HashSet
+	cachedHashes *hash.Set
 	poolMu       sync.Mutex
 	pool         []*conn
 	connLimit    *rate.Limiter // for limiting number of connections per second
@@ -134,13 +137,13 @@ func readCurrentUser() (userName string) {
 // Dial starts a client connection to the given SSH server. It is a
 // convenience function that connects to the given network address,
 // initiates the SSH handshake, and then sets up a Client.
-func Dial(network, addr string, config *ssh.ClientConfig) (*ssh.Client, error) {
-	dialer := fs.Config.NewDialer()
+func Dial(network, addr string, sshConfig *ssh.ClientConfig) (*ssh.Client, error) {
+	dialer := fshttp.NewDialer(fs.Config)
 	conn, err := dialer.Dial(network, addr)
 	if err != nil {
 		return nil, err
 	}
-	c, chans, reqs, err := ssh.NewClientConn(conn, addr, config)
+	c, chans, reqs, err := ssh.NewClientConn(conn, addr, sshConfig)
 	if err != nil {
 		return nil, err
 	}
@@ -263,19 +266,19 @@ func (f *Fs) putSftpConnection(pc **conn, err error) {
 // NewFs creates a new Fs object from the name and root. It connects to
 // the host specified in the config file.
 func NewFs(name, root string) (fs.Fs, error) {
-	user := fs.ConfigFileGet(name, "user")
-	host := fs.ConfigFileGet(name, "host")
-	port := fs.ConfigFileGet(name, "port")
-	pass := fs.ConfigFileGet(name, "pass")
-	keyFile := fs.ConfigFileGet(name, "key_file")
-	insecureCipher := fs.ConfigFileGetBool(name, "use_insecure_cipher")
+	user := config.FileGet(name, "user")
+	host := config.FileGet(name, "host")
+	port := config.FileGet(name, "port")
+	pass := config.FileGet(name, "pass")
+	keyFile := config.FileGet(name, "key_file")
+	insecureCipher := config.FileGetBool(name, "use_insecure_cipher")
 	if user == "" {
 		user = currentUser
 	}
 	if port == "" {
 		port = "22"
 	}
-	config := &ssh.ClientConfig{
+	sshConfig := &ssh.ClientConfig{
 		User:            user,
 		Auth:            []ssh.AuthMethod{},
 		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
@@ -283,8 +286,8 @@ func NewFs(name, root string) (fs.Fs, error) {
 	}

 	if insecureCipher {
-		config.Config.SetDefaults()
-		config.Config.Ciphers = append(config.Config.Ciphers, "aes128-cbc")
+		sshConfig.Config.SetDefaults()
+		sshConfig.Config.Ciphers = append(sshConfig.Config.Ciphers, "aes128-cbc")
 	}

 	// Add ssh agent-auth if no password or file specified
@@ -297,7 +300,7 @@ func NewFs(name, root string) (fs.Fs, error) {
 		if err != nil {
 			return nil, errors.Wrap(err, "couldn't read ssh agent signers")
 		}
-		config.Auth = append(config.Auth, ssh.PublicKeys(signers...))
+		sshConfig.Auth = append(sshConfig.Auth, ssh.PublicKeys(signers...))
 	}

 	// Load key file if specified
@@ -310,22 +313,22 @@ func NewFs(name, root string) (fs.Fs, error) {
 		if err != nil {
 			return nil, errors.Wrap(err, "failed to parse private key file")
 		}
-		config.Auth = append(config.Auth, ssh.PublicKeys(signer))
+		sshConfig.Auth = append(sshConfig.Auth, ssh.PublicKeys(signer))
 	}

 	// Auth from password if specified
 	if pass != "" {
-		clearpass, err := fs.Reveal(pass)
+		clearpass, err := config.Reveal(pass)
 		if err != nil {
 			return nil, err
 		}
-		config.Auth = append(config.Auth, ssh.Password(clearpass))
+		sshConfig.Auth = append(sshConfig.Auth, ssh.Password(clearpass))
 	}

 	f := &Fs{
 		name:      name,
 		root:      root,
-		config:    config,
+		config:    sshConfig,
 		host:      host,
 		port:      port,
 		url:       "sftp://" + user + "@" + host + ":" + port + "/" + root,
@@ -631,25 +634,25 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
 }

 // Hashes returns the supported hash types of the filesystem
-func (f *Fs) Hashes() fs.HashSet {
+func (f *Fs) Hashes() hash.Set {
 	if f.cachedHashes != nil {
 		return *f.cachedHashes
 	}

-	hashcheckDisabled := fs.ConfigFileGetBool(f.name, "disable_hashcheck")
+	hashcheckDisabled := config.FileGetBool(f.name, "disable_hashcheck")
 	if hashcheckDisabled {
-		return fs.HashSet(fs.HashNone)
+		return hash.Set(hash.HashNone)
 	}

 	c, err := f.getSftpConnection()
 	if err != nil {
 		fs.Errorf(f, "Couldn't get SSH connection to figure out Hashes: %v", err)
-		return fs.HashSet(fs.HashNone)
+		return hash.Set(hash.HashNone)
 	}
 	defer f.putSftpConnection(&c, err)
 	session, err := c.sshClient.NewSession()
 	if err != nil {
-		return fs.HashSet(fs.HashNone)
+		return hash.Set(hash.HashNone)
 	}
 	sha1Output, _ := session.Output("echo 'abc' | sha1sum")
 	expectedSha1 := "03cfd743661f07975fa2f1220c5194cbaff48451"
@@ -657,7 +660,7 @@ func (f *Fs) Hashes() fs.HashSet {

 	session, err = c.sshClient.NewSession()
 	if err != nil {
-		return fs.HashSet(fs.HashNone)
+		return hash.Set(hash.HashNone)
 	}
 	md5Output, _ := session.Output("echo 'abc' | md5sum")
 	expectedMd5 := "0bee89b07a248e27c83fc3d5951213c1"
@@ -666,15 +669,15 @@ func (f *Fs) Hashes() fs.HashSet {
 	sha1Works := parseHash(sha1Output) == expectedSha1
 	md5Works := parseHash(md5Output) == expectedMd5

-	set := fs.NewHashSet()
+	set := hash.NewHashSet()
 	if !sha1Works && !md5Works {
-		set.Add(fs.HashNone)
+		set.Add(hash.HashNone)
 	}
 	if sha1Works {
-		set.Add(fs.HashSHA1)
+		set.Add(hash.HashSHA1)
 	}
 	if md5Works {
-		set.Add(fs.HashMD5)
+		set.Add(hash.HashMD5)
 	}

 	_ = session.Close()
@@ -702,10 +705,10 @@ func (o *Object) Remote() string {

 // Hash returns the selected checksum of the file
 // If no checksum is available it returns ""
-func (o *Object) Hash(r fs.HashType) (string, error) {
-	if r == fs.HashMD5 && o.md5sum != nil {
+func (o *Object) Hash(r hash.Type) (string, error) {
+	if r == hash.HashMD5 && o.md5sum != nil {
 		return *o.md5sum, nil
-	} else if r == fs.HashSHA1 && o.sha1sum != nil {
+	} else if r == hash.HashSHA1 && o.sha1sum != nil {
 		return *o.sha1sum, nil
 	}

@@ -717,29 +720,29 @@ func (o *Object) Hash(r fs.HashType) (string, error) {
 	o.fs.putSftpConnection(&c, err)
 	if err != nil {
 		o.fs.cachedHashes = nil // Something has changed on the remote system
-		return "", fs.ErrHashUnsupported
+		return "", hash.ErrHashUnsupported
 	}

-	err = fs.ErrHashUnsupported
+	err = hash.ErrHashUnsupported
 	var outputBytes []byte
 	escapedPath := shellEscape(o.path())
-	if r == fs.HashMD5 {
+	if r == hash.HashMD5 {
 		outputBytes, err = session.Output("md5sum " + escapedPath)
-	} else if r == fs.HashSHA1 {
+	} else if r == hash.HashSHA1 {
 		outputBytes, err = session.Output("sha1sum " + escapedPath)
 	}

 	if err != nil {
 		o.fs.cachedHashes = nil // Something has changed on the remote system
 		_ = session.Close()
-		return "", fs.ErrHashUnsupported
+		return "", hash.ErrHashUnsupported
 	}

 	_ = session.Close()
 	str := parseHash(outputBytes)
-	if r == fs.HashMD5 {
+	if r == hash.HashMD5 {
 		o.md5sum = &str
-	} else if r == fs.HashSHA1 {
+	} else if r == hash.HashSHA1 {
 		o.sha1sum = &str
 	}
 	return str, nil
@@ -812,7 +815,7 @@ func (o *Object) SetModTime(modTime time.Time) error {
 	if err != nil {
 		return errors.Wrap(err, "SetModTime")
 	}
-	if fs.ConfigFileGetBool(o.fs.name, "set_modtime", true) {
+	if config.FileGetBool(o.fs.name, "set_modtime", true) {
 		err = c.sftpClient.Chtimes(o.path(), modTime, modTime)
 		o.fs.putSftpConnection(&c, err)
 		if err != nil {
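The sftp edits are larger than the usual renames because NewFs used a local variable called config; once the fs/config package is imported, that name would shadow the package, so the variable becomes sshConfig throughout. A reduced sketch of why the rename is needed:

    package example

    import (
    	"golang.org/x/crypto/ssh"

    	"github.com/ncw/rclone/fs/config"
    )

    // newSSHConfig shows the shape of the fixed code: the package identifier
    // config stays usable because the local variable is named sshConfig.
    func newSSHConfig(name string) *ssh.ClientConfig {
    	sshConfig := &ssh.ClientConfig{
    		User:            config.FileGet(name, "user"),
    		Auth:            []ssh.AuthMethod{},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	return sshConfig
    }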
@@ -14,6 +14,13 @@ import (
 	"time"

 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/ncw/rclone/fs/fserrors"
+	"github.com/ncw/rclone/fs/fshttp"
+	"github.com/ncw/rclone/fs/hash"
+	"github.com/ncw/rclone/fs/operations"
+	"github.com/ncw/rclone/fs/walk"
 	"github.com/ncw/swift"
 	"github.com/pkg/errors"
 )
@@ -118,7 +125,7 @@ func init() {
 			},
 		},
 	})
-	fs.VarP(&chunkSize, "swift-chunk-size", "", "Above this size files will be chunked into a _segments container.")
+	flags.VarP(&chunkSize, "swift-chunk-size", "", "Above this size files will be chunked into a _segments container.")
 }

 // Fs represents a remote swift server
@@ -191,24 +198,24 @@ func parsePath(path string) (container, directory string, err error) {
 func swiftConnection(name string) (*swift.Connection, error) {
 	c := &swift.Connection{
 		// Keep these in the same order as the Config for ease of checking
-		UserName:       fs.ConfigFileGet(name, "user"),
-		ApiKey:         fs.ConfigFileGet(name, "key"),
-		AuthUrl:        fs.ConfigFileGet(name, "auth"),
-		UserId:         fs.ConfigFileGet(name, "user_id"),
-		Domain:         fs.ConfigFileGet(name, "domain"),
-		Tenant:         fs.ConfigFileGet(name, "tenant"),
-		TenantId:       fs.ConfigFileGet(name, "tenant_id"),
-		TenantDomain:   fs.ConfigFileGet(name, "tenant_domain"),
-		Region:         fs.ConfigFileGet(name, "region"),
-		StorageUrl:     fs.ConfigFileGet(name, "storage_url"),
-		AuthToken:      fs.ConfigFileGet(name, "auth_token"),
-		AuthVersion:    fs.ConfigFileGetInt(name, "auth_version", 0),
-		EndpointType:   swift.EndpointType(fs.ConfigFileGet(name, "endpoint_type", "public")),
+		UserName:       config.FileGet(name, "user"),
+		ApiKey:         config.FileGet(name, "key"),
+		AuthUrl:        config.FileGet(name, "auth"),
+		UserId:         config.FileGet(name, "user_id"),
+		Domain:         config.FileGet(name, "domain"),
+		Tenant:         config.FileGet(name, "tenant"),
+		TenantId:       config.FileGet(name, "tenant_id"),
+		TenantDomain:   config.FileGet(name, "tenant_domain"),
+		Region:         config.FileGet(name, "region"),
+		StorageUrl:     config.FileGet(name, "storage_url"),
+		AuthToken:      config.FileGet(name, "auth_token"),
+		AuthVersion:    config.FileGetInt(name, "auth_version", 0),
+		EndpointType:   swift.EndpointType(config.FileGet(name, "endpoint_type", "public")),
 		ConnectTimeout: 10 * fs.Config.ConnectTimeout, // Use the timeouts in the transport
 		Timeout:        10 * fs.Config.Timeout,        // Use the timeouts in the transport
-		Transport:      fs.Config.Transport(),
+		Transport:      fshttp.NewTransport(fs.Config),
 	}
-	if fs.ConfigFileGetBool(name, "env_auth", false) {
+	if config.FileGetBool(name, "env_auth", false) {
 		err := c.ApplyEnvironment()
 		if err != nil {
 			return nil, errors.Wrap(err, "failed to read environment variables")
@@ -466,7 +473,7 @@ func (f *Fs) ListR(dir string, callback fs.ListRCallback) (err error) {
 	if f.container == "" {
 		return errors.New("container needed for recursive list")
 	}
-	list := fs.NewListRHelper(callback)
+	list := walk.NewListRHelper(callback)
 	err = f.list(dir, true, func(entry fs.DirEntry) error {
 		return list.Add(entry)
 	})
@@ -549,7 +556,7 @@ func (f *Fs) Purge() error {
 	toBeDeleted := make(chan fs.Object, fs.Config.Transfers)
 	delErr := make(chan error, 1)
 	go func() {
-		delErr <- fs.DeleteFiles(toBeDeleted)
+		delErr <- operations.DeleteFiles(toBeDeleted)
 	}()
 	err := f.list("", true, func(entry fs.DirEntry) error {
 		if o, ok := entry.(*Object); ok {
@@ -596,8 +603,8 @@ func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
-	return fs.HashSet(fs.HashMD5)
+func (f *Fs) Hashes() hash.Set {
+	return hash.Set(hash.HashMD5)
 }

 // ------------------------------------------------------------
@@ -621,9 +628,9 @@ func (o *Object) Remote() string {
 }

 // Hash returns the Md5sum of an object returning a lowercase hex string
-func (o *Object) Hash(t fs.HashType) (string, error) {
-	if t != fs.HashMD5 {
-		return "", fs.ErrHashUnsupported
+func (o *Object) Hash(t hash.Type) (string, error) {
+	if t != hash.HashMD5 {
+		return "", hash.ErrHashUnsupported
 	}
 	isDynamicLargeObject, err := o.isDynamicLargeObject()
 	if err != nil {
@@ -855,7 +862,7 @@ func (o *Object) updateChunks(in0 io.Reader, headers swift.Headers, size int64,
 // The new object may have been created if an error is returned
 func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
 	if o.fs.container == "" {
-		return fs.FatalError(errors.New("container name needed in remote"))
+		return fserrors.FatalError(errors.New("container name needed in remote"))
 	}
 	err := o.fs.Mkdir("")
 	if err != nil {
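The swift Update hunk shows fserrors from the other side: wrapping an error in fserrors.FatalError marks it as not worth retrying, so the calling machinery gives up immediately instead of looping. Sketch:

    package example

    import (
    	"github.com/ncw/rclone/fs/fserrors"
    	"github.com/pkg/errors"
    )

    // requireContainer mirrors the guard at the top of Update above.
    func requireContainer(container string) error {
    	if container == "" {
    		// A fatal error tells rclone's retry logic not to try again.
    		return fserrors.FatalError(errors.New("container name needed in remote"))
    	}
    	return nil
    }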
@@ -30,11 +30,12 @@ import (
 	"github.com/ncw/rclone/backend/webdav/api"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/fserrors"
+	"github.com/ncw/rclone/fs/fshttp"
+	"github.com/ncw/rclone/fs/hash"
 	"github.com/ncw/rclone/lib/pacer"
 	"github.com/ncw/rclone/lib/rest"
-	"github.com/ncw/rclone/pacer"
-	"github.com/ncw/rclone/rest"
-	"github.com/ncw/rclone/webdav/api"
 	"github.com/pkg/errors"
 )
@@ -159,7 +160,7 @@ var retryErrorCodes = []int{
 // shouldRetry returns a boolean as to whether this resp and err
 // deserve to be retried. It returns the err as a convenience
 func shouldRetry(resp *http.Response, err error) (bool, error) {
-	return fs.ShouldRetry(err) || fs.ShouldRetryHTTP(resp, retryErrorCodes), err
+	return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
 }

 // itemIsDir returns true if the item is a directory
@@ -250,21 +251,21 @@ func (o *Object) filePath() string {

 // NewFs constructs an Fs from the path, container:path
 func NewFs(name, root string) (fs.Fs, error) {
-	endpoint := fs.ConfigFileGet(name, "url")
+	endpoint := config.FileGet(name, "url")
 	if !strings.HasSuffix(endpoint, "/") {
 		endpoint += "/"
 	}

-	user := fs.ConfigFileGet(name, "user")
-	pass := fs.ConfigFileGet(name, "pass")
+	user := config.FileGet(name, "user")
+	pass := config.FileGet(name, "pass")
 	if pass != "" {
 		var err error
-		pass, err = fs.Reveal(pass)
+		pass, err = config.Reveal(pass)
 		if err != nil {
 			return nil, errors.Wrap(err, "couldn't decrypt password")
 		}
 	}
-	vendor := fs.ConfigFileGet(name, "vendor")
+	vendor := config.FileGet(name, "vendor")

 	// Parse the endpoint
 	u, err := url.Parse(endpoint)
@@ -277,7 +278,7 @@ func NewFs(name, root string) (fs.Fs, error) {
 		root:        root,
 		endpoint:    u,
 		endpointURL: u.String(),
-		srv:         rest.NewClient(fs.Config.Client()).SetRoot(u.String()).SetUserPass(user, pass),
+		srv:         rest.NewClient(fshttp.NewClient(fs.Config)).SetRoot(u.String()).SetUserPass(user, pass),
 		pacer:       pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
 		user:        user,
 		pass:        pass,
@@ -765,8 +766,8 @@ func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
 }

 // Hashes returns the supported hash sets.
-func (f *Fs) Hashes() fs.HashSet {
-	return fs.HashSet(fs.HashNone)
+func (f *Fs) Hashes() hash.Set {
+	return hash.Set(hash.HashNone)
 }

 // ------------------------------------------------------------
@@ -790,9 +791,9 @@ func (o *Object) Remote() string {
 }

 // Hash returns the SHA-1 of an object returning a lowercase hex string
-func (o *Object) Hash(t fs.HashType) (string, error) {
-	if t != fs.HashSHA1 {
-		return "", fs.ErrHashUnsupported
+func (o *Object) Hash(t hash.Type) (string, error) {
+	if t != hash.HashSHA1 {
+		return "", hash.ErrHashUnsupported
 	}
 	return o.sha1, nil
 }
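The webdav constructor shows how the pieces line up after the split: fshttp builds the HTTP client, lib/rest wraps it, and lib/pacer spaces the calls. A reduced sketch of the srv line above; the URL is a placeholder and the *rest.Client return type is assumed:

    package example

    import (
    	"github.com/ncw/rclone/fs"
    	"github.com/ncw/rclone/fs/fshttp"
    	"github.com/ncw/rclone/lib/rest"
    )

    // newRestClient builds a REST client on top of rclone's configured
    // HTTP client, as NewFs does above.
    func newRestClient(user, pass string) *rest.Client {
    	return rest.NewClient(fshttp.NewClient(fs.Config)).
    		SetRoot("https://example.com/webdav/").
    		SetUserPass(user, pass)
    }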
@ -15,9 +15,11 @@ import (
|
||||||
|
|
||||||
yandex "github.com/ncw/rclone/backend/yandex/api"
|
yandex "github.com/ncw/rclone/backend/yandex/api"
|
||||||
"github.com/ncw/rclone/fs"
|
"github.com/ncw/rclone/fs"
|
||||||
|
"github.com/ncw/rclone/fs/config"
|
||||||
|
"github.com/ncw/rclone/fs/fshttp"
|
||||||
|
"github.com/ncw/rclone/fs/hash"
|
||||||
"github.com/ncw/rclone/lib/oauthutil"
|
"github.com/ncw/rclone/lib/oauthutil"
|
||||||
"github.com/ncw/rclone/oauthutil"
|
"github.com/ncw/rclone/lib/readers"
|
||||||
yandex "github.com/ncw/rclone/yandex/api"
|
|
||||||
"github.com/pkg/errors"
|
"github.com/pkg/errors"
|
||||||
"golang.org/x/oauth2"
|
"golang.org/x/oauth2"
|
||||||
)
|
)
|
||||||
|
@ -37,7 +39,7 @@ var (
|
||||||
TokenURL: "https://oauth.yandex.com/token", //same as https://oauth.yandex.ru/token
|
TokenURL: "https://oauth.yandex.com/token", //same as https://oauth.yandex.ru/token
|
||||||
},
|
},
|
||||||
ClientID: rcloneClientID,
|
ClientID: rcloneClientID,
|
||||||
ClientSecret: fs.MustReveal(rcloneEncryptedClientSecret),
|
ClientSecret: config.MustReveal(rcloneEncryptedClientSecret),
|
||||||
RedirectURL: oauthutil.RedirectURL,
|
RedirectURL: oauthutil.RedirectURL,
|
||||||
}
|
}
|
||||||
)
|
)
|
||||||
|
@ -55,10 +57,10 @@ func init() {
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
Options: []fs.Option{{
|
Options: []fs.Option{{
|
||||||
Name: fs.ConfigClientID,
|
Name: config.ConfigClientID,
|
||||||
Help: "Yandex Client Id - leave blank normally.",
|
Help: "Yandex Client Id - leave blank normally.",
|
||||||
}, {
|
}, {
|
||||||
Name: fs.ConfigClientSecret,
|
Name: config.ConfigClientSecret,
|
||||||
Help: "Yandex Client Secret - leave blank normally.",
|
Help: "Yandex Client Secret - leave blank normally.",
|
||||||
}},
|
}},
|
||||||
})
|
})
|
||||||
|
@ -109,7 +111,7 @@ func (f *Fs) Features() *fs.Features {
|
||||||
// read access token from ConfigFile string
|
// read access token from ConfigFile string
|
||||||
func getAccessToken(name string) (*oauth2.Token, error) {
|
func getAccessToken(name string) (*oauth2.Token, error) {
|
||||||
// Read the token from the config file
|
// Read the token from the config file
|
||||||
tokenConfig := fs.ConfigFileGet(name, "token")
|
tokenConfig := config.FileGet(name, "token")
|
||||||
//Get access token from config string
|
//Get access token from config string
|
||||||
decoder := json.NewDecoder(strings.NewReader(tokenConfig))
|
decoder := json.NewDecoder(strings.NewReader(tokenConfig))
|
||||||
var result *oauth2.Token
|
var result *oauth2.Token
|
||||||
|
@ -129,7 +131,7 @@ func NewFs(name, root string) (fs.Fs, error) {
|
||||||
}
|
}
|
||||||
|
|
||||||
//create new client
|
//create new client
|
||||||
yandexDisk := yandex.NewClient(token.AccessToken, fs.Config.Client())
|
yandexDisk := yandex.NewClient(token.AccessToken, fshttp.NewClient(fs.Config))
|
||||||
|
|
||||||
f := &Fs{
|
f := &Fs{
|
||||||
name: name,
|
name: name,
|
||||||
|
@ -487,8 +489,8 @@ func (f *Fs) CleanUp() error {
|
||||||
}
|
}
|
||||||
|
|
||||||
// Hashes returns the supported hash sets.
|
// Hashes returns the supported hash sets.
|
||||||
func (f *Fs) Hashes() fs.HashSet {
|
func (f *Fs) Hashes() hash.Set {
|
||||||
return fs.HashSet(fs.HashMD5)
|
return hash.Set(hash.HashMD5)
|
||||||
}
|
}
|
||||||
|
|
||||||
// ------------------------------------------------------------
|
// ------------------------------------------------------------
|
||||||
|
@ -512,9 +514,9 @@ func (o *Object) Remote() string {
|
||||||
}
|
}
|
||||||
|
|
||||||
// Hash returns the Md5sum of an object returning a lowercase hex string
|
// Hash returns the Md5sum of an object returning a lowercase hex string
|
||||||
func (o *Object) Hash(t fs.HashType) (string, error) {
|
func (o *Object) Hash(t hash.Type) (string, error) {
|
||||||
if t != fs.HashMD5 {
|
if t != hash.HashMD5 {
|
||||||
return "", fs.ErrHashUnsupported
|
return "", hash.ErrHashUnsupported
|
||||||
}
|
}
|
||||||
return o.md5sum, nil
|
return o.md5sum, nil
|
||||||
}
|
}
|
||||||
|
@ -578,7 +580,7 @@ func (o *Object) remotePath() string {
|
||||||
//
|
//
|
||||||
// The new object may have been created if an error is returned
|
// The new object may have been created if an error is returned
|
||||||
func (o *Object) Update(in0 io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
|
func (o *Object) Update(in0 io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
|
||||||
in := fs.NewCountingReader(in0)
|
in := readers.NewCountingReader(in0)
|
||||||
modTime := src.ModTime()
|
modTime := src.ModTime()
|
||||||
|
|
||||||
remote := o.remotePath()
|
remote := o.remotePath()
|
||||||
|
|
|
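The per-backend change is mechanical: hash types move from fs to the new fs/hash package. A minimal sketch of the resulting methods, using only identifiers visible in the hunks above; Object here is a stand-in type, and in a real backend Hashes lives on the Fs rather than the Object:

    package example

    import "github.com/ncw/rclone/fs/hash"

    // Object stands in for a backend object with a cached MD5.
    type Object struct{ md5sum string }

    // Hashes advertises supported checksums as a hash.Set (was fs.HashSet).
    func (o *Object) Hashes() hash.Set {
        return hash.Set(hash.HashMD5)
    }

    // Hash returns the requested checksum, rejecting other types with
    // the relocated sentinel error (was fs.ErrHashUnsupported).
    func (o *Object) Hash(t hash.Type) (string, error) {
        if t != hash.HashMD5 {
            return "", hash.ErrHashUnsupported
        }
        return o.md5sum, nil
    }
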
@@ -2,7 +2,7 @@ package authorize
 
 import (
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
 	"github.com/spf13/cobra"
 )
 
@@ -19,6 +19,6 @@ rclone from a machine with a browser - use as instructed by
 rclone config.`,
	Run: func(command *cobra.Command, args []string) {
 		cmd.CheckArgs(1, 3, command, args)
-		fs.Authorize(args)
+		config.Authorize(args)
 	},
 }

@@ -9,6 +9,7 @@ import (
 	"github.com/ncw/rclone/backend/cache"
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
 	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
 )
@@ -32,9 +33,9 @@ Print cache stats for a remote in JSON format
 			return
 		}
 
-		if !fs.ConfigFileGetBool(configName, "read_only", false) {
-			fs.ConfigFileSet(configName, "read_only", "true")
-			defer fs.ConfigFileDeleteKey(configName, "read_only")
+		if !config.FileGetBool(configName, "read_only", false) {
+			config.FileSet(configName, "read_only", "true")
+			defer config.FileDeleteKey(configName, "read_only")
 		}
 
 		fsrc := cmd.NewFsSrc(args)

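Config-file access follows the same rename scheme (fs.ConfigFileX becomes config.FileX) with unchanged semantics. A minimal sketch using only the helpers shown above, mirroring what cachestats does; the remote name passed in is whatever the user called the remote:

    package example

    import "github.com/ncw/rclone/fs/config"

    // forceReadOnly marks a remote read-only in the config file and
    // returns a function that restores the previous state.
    func forceReadOnly(name string) (restore func()) {
        if config.FileGetBool(name, "read_only", false) {
            return func() {} // already read-only - nothing to undo
        }
        config.FileSet(name, "read_only", "true")
        return func() { config.FileDeleteKey(name, "read_only") }
    }
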
@@ -7,7 +7,7 @@ import (
 	"os"
 
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -74,7 +74,7 @@ Note that if offset is negative it will count from the end, so
 			w = ioutil.Discard
 		}
 		cmd.Run(false, false, command, func() error {
-			return fs.Cat(fsrc, w, offset, count)
+			return operations.Cat(fsrc, w, offset, count)
 		})
 	},
 }

@@ -2,7 +2,7 @@ package check
 
 import (
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -37,9 +37,9 @@ to check all the data.
 		fsrc, fdst := cmd.NewFsSrcDst(args)
 		cmd.Run(false, false, command, func() error {
 			if download {
-				return fs.CheckDownload(fdst, fsrc)
+				return operations.CheckDownload(fdst, fsrc)
 			}
-			return fs.Check(fdst, fsrc)
+			return operations.Check(fdst, fsrc)
 		})
 	},
 }

@@ -2,7 +2,7 @@ package cleanup
 
 import (
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -21,7 +21,7 @@ versions. Not supported by all remotes.
 		cmd.CheckArgs(1, 1, command, args)
 		fsrc := cmd.NewFsSrc(args)
 		cmd.Run(true, false, command, func() error {
-			return fs.CleanUp(fsrc)
+			return operations.CleanUp(fsrc)
 		})
 	},
 }

cmd/cmd.go (97 changed lines)
@@ -21,17 +21,26 @@ import (
 	"github.com/spf13/pflag"
 
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/accounting"
+	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/config/configflags"
+	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/ncw/rclone/fs/filter"
+	"github.com/ncw/rclone/fs/filter/filterflags"
+	"github.com/ncw/rclone/fs/fserrors"
+	"github.com/ncw/rclone/fs/fspath"
+	fslog "github.com/ncw/rclone/fs/log"
 )
 
 // Globals
 var (
 	// Flags
-	cpuProfile    = fs.StringP("cpuprofile", "", "", "Write cpu profile to file")
-	memProfile    = fs.StringP("memprofile", "", "", "Write memory profile to file")
-	statsInterval = fs.DurationP("stats", "", time.Minute*1, "Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable)")
-	dataRateUnit  = fs.StringP("stats-unit", "", "bytes", "Show data rate in stats as either 'bits' or 'bytes'/s")
+	cpuProfile    = flags.StringP("cpuprofile", "", "", "Write cpu profile to file")
+	memProfile    = flags.StringP("memprofile", "", "", "Write memory profile to file")
+	statsInterval = flags.DurationP("stats", "", time.Minute*1, "Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable)")
+	dataRateUnit  = flags.StringP("stats-unit", "", "bytes", "Show data rate in stats as either 'bits' or 'bytes'/s")
 	version       bool
-	retries       = fs.IntP("retries", "", 3, "Retry operations this many times if they fail")
+	retries       = flags.IntP("retries", "", 3, "Retry operations this many times if they fail")
 	// Errors
 	errorCommandNotFound = errors.New("command not found")
 	errorUncategorized   = errors.New("uncategorized error")
@@ -113,6 +122,10 @@ func runRoot(cmd *cobra.Command, args []string) {
 }
 
 func init() {
+	// Add global flags
+	configflags.AddFlags(pflag.CommandLine)
+	filterflags.AddFlags(pflag.CommandLine)
+
 	Root.Run = runRoot
 	Root.Flags().BoolVarP(&version, "version", "V", false, "Print the version number")
 	cobra.OnInitialize(initConfig)
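Flag registration is now distributed: each subsystem exposes an AddFlags, and command-local flags come from fs/config/flags. A minimal sketch of the pattern, using only the calls visible above:

    package example

    import (
        "github.com/ncw/rclone/fs/config/configflags"
        "github.com/ncw/rclone/fs/config/flags"
        "github.com/ncw/rclone/fs/filter/filterflags"
        "github.com/spf13/pflag"
    )

    // Command-local flags now come from fs/config/flags...
    var retries = flags.IntP("retries", "", 3, "Retry operations this many times if they fail")

    func init() {
        // ...while each subsystem registers its own global flags on the
        // shared flag set, instead of the old monolithic fs package.
        configflags.AddFlags(pflag.CommandLine)
        filterflags.AddFlags(pflag.CommandLine)
    }
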
@@ -131,7 +144,7 @@ func ShowVersion() {
 func newFsFile(remote string) (fs.Fs, string) {
 	fsInfo, configName, fsPath, err := fs.ParseRemote(remote)
 	if err != nil {
-		fs.Stats.Error(err)
+		fs.CountError(err)
 		log.Fatalf("Failed to create file system for %q: %v", remote, err)
 	}
 	f, err := fsInfo.NewFs(configName, fsPath)
@@ -141,7 +154,7 @@ func newFsFile(remote string) (fs.Fs, string) {
 	case nil:
 		return f, ""
 	default:
-		fs.Stats.Error(err)
+		fs.CountError(err)
 		log.Fatalf("Failed to create file system for %q: %v", remote, err)
 	}
 	return nil, ""
@@ -155,15 +168,15 @@ func newFsFile(remote string) (fs.Fs, string) {
 func newFsSrc(remote string) (fs.Fs, string) {
 	f, fileName := newFsFile(remote)
 	if fileName != "" {
-		if !fs.Config.Filter.InActive() {
+		if !filter.Active.InActive() {
 			err := errors.Errorf("Can't limit to single files when using filters: %v", remote)
-			fs.Stats.Error(err)
+			fs.CountError(err)
 			log.Fatalf(err.Error())
 		}
 		// Limit transfers to this file
-		err := fs.Config.Filter.AddFile(fileName)
+		err := filter.Active.AddFile(fileName)
 		if err != nil {
-			fs.Stats.Error(err)
+			fs.CountError(err)
 			log.Fatalf("Failed to limit to single file %q: %v", remote, err)
 		}
 		// Set --no-traverse as only one file
@@ -178,7 +191,7 @@ func newFsSrc(remote string) (fs.Fs, string) {
 func newFsDst(remote string) fs.Fs {
 	f, err := fs.NewFs(remote)
 	if err != nil {
-		fs.Stats.Error(err)
+		fs.CountError(err)
 		log.Fatalf("Failed to create file system for %q: %v", remote, err)
 	}
 	return f
@@ -201,7 +214,7 @@ func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs
 	// If file exists then srcFileName != "", however if the file
 	// doesn't exist then we assume it is a directory...
 	if srcFileName != "" {
-		dstRemote, dstFileName = fs.RemoteSplit(dstRemote)
+		dstRemote, dstFileName = fspath.RemoteSplit(dstRemote)
 		if dstRemote == "" {
 			dstRemote = "."
 		}
@@ -212,11 +225,11 @@ func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs
 	fdst, err := fs.NewFs(dstRemote)
 	switch err {
 	case fs.ErrorIsFile:
-		fs.Stats.Error(err)
+		fs.CountError(err)
 		log.Fatalf("Source doesn't exist or is a directory and destination is a file")
 	case nil:
 	default:
-		fs.Stats.Error(err)
+		fs.CountError(err)
 		log.Fatalf("Failed to create file system for destination %q: %v", dstRemote, err)
 	}
 	fs.CalculateModifyWindow(fdst, fsrc)
@@ -241,7 +254,7 @@ func NewFsDst(args []string) fs.Fs {
 
 // NewFsDstFile creates a new dst fs with a destination file name from the arguments
 func NewFsDstFile(args []string) (fdst fs.Fs, dstFileName string) {
-	dstRemote, dstFileName := fs.RemoteSplit(args[0])
+	dstRemote, dstFileName := fspath.RemoteSplit(args[0])
 	if dstRemote == "" {
 		dstRemote = "."
 	}
@@ -274,27 +287,27 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
 	}
 	for try := 1; try <= *retries; try++ {
 		err = f()
-		if !Retry || (err == nil && !fs.Stats.Errored()) {
+		if !Retry || (err == nil && !accounting.Stats.Errored()) {
 			if try > 1 {
 				fs.Errorf(nil, "Attempt %d/%d succeeded", try, *retries)
 			}
 			break
 		}
-		if fs.IsFatalError(err) {
+		if fserrors.IsFatalError(err) {
 			fs.Errorf(nil, "Fatal error received - not attempting retries")
 			break
 		}
-		if fs.IsNoRetryError(err) {
+		if fserrors.IsNoRetryError(err) {
 			fs.Errorf(nil, "Can't retry this error - not attempting retries")
 			break
 		}
 		if err != nil {
-			fs.Errorf(nil, "Attempt %d/%d failed with %d errors and: %v", try, *retries, fs.Stats.GetErrors(), err)
+			fs.Errorf(nil, "Attempt %d/%d failed with %d errors and: %v", try, *retries, accounting.Stats.GetErrors(), err)
 		} else {
-			fs.Errorf(nil, "Attempt %d/%d failed with %d errors", try, *retries, fs.Stats.GetErrors())
+			fs.Errorf(nil, "Attempt %d/%d failed with %d errors", try, *retries, accounting.Stats.GetErrors())
 		}
 		if try < *retries {
-			fs.Stats.ResetErrors()
+			accounting.Stats.ResetErrors()
 		}
 	}
 	if showStats {
@@ -304,12 +317,12 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
 		log.Printf("Failed to %s: %v", cmd.Name(), err)
 		resolveExitCode(err)
 	}
-	if showStats && (fs.Stats.Errored() || *statsInterval > 0) {
-		fs.Stats.Log()
+	if showStats && (accounting.Stats.Errored() || *statsInterval > 0) {
+		accounting.Stats.Log()
 	}
 	fs.Debugf(nil, "Go routines at exit %d\n", runtime.NumGoroutine())
-	if fs.Stats.Errored() {
-		resolveExitCode(fs.Stats.GetLastError())
+	if accounting.Stats.Errored() {
+		resolveExitCode(accounting.Stats.GetLastError())
 	}
 }

@@ -339,7 +352,7 @@ func StartStats() chan struct{} {
 		for {
 			select {
 			case <-ticker.C:
-				fs.Stats.Log()
+				accounting.Stats.Log()
 			case <-stopStats:
 				ticker.Stop()
 				return
@@ -353,10 +366,20 @@ func StartStats() chan struct{} {
 // initConfig is run by cobra after initialising the flags
 func initConfig() {
 	// Start the logger
-	fs.InitLogging()
+	fslog.InitLogging()
 
+	// Finish parsing any command line flags
+	configflags.SetFlags()
+
 	// Load the rest of the config now we have started the logger
-	fs.LoadConfig()
+	config.LoadConfig()
 
+	// Load filters
+	var err error
+	filter.Active, err = filter.NewFilter(&filterflags.Opt)
+	if err != nil {
+		log.Fatalf("Failed to load filters: %v", err)
+	}
+
 	// Write the args for debug purposes
 	fs.Debugf("rclone", "Version %q starting with parameters %q", fs.Version, os.Args)
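Filters are now an explicit singleton built from the parsed flags. A minimal sketch combining initConfig's construction with newFsSrc's use of filter.Active, using only calls shown in this file's hunks:

    package example

    import (
        "log"

        "github.com/ncw/rclone/fs/filter"
        "github.com/ncw/rclone/fs/filter/filterflags"
    )

    // limitToFile builds the global filter from the parsed filter flags
    // (as initConfig does) and then narrows it to a single file (as
    // newFsSrc does). filter.Active is the package-level filter the
    // rest of rclone consults.
    func limitToFile(fileName string) {
        var err error
        filter.Active, err = filter.NewFilter(&filterflags.Opt)
        if err != nil {
            log.Fatalf("Failed to load filters: %v", err)
        }
        if filter.Active.InActive() {
            if err := filter.Active.AddFile(fileName); err != nil {
                log.Fatalf("Failed to limit to single file %q: %v", fileName, err)
            }
        }
    }
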
|
@ -366,12 +389,12 @@ func initConfig() {
|
||||||
fs.Infof(nil, "Creating CPU profile %q\n", *cpuProfile)
|
fs.Infof(nil, "Creating CPU profile %q\n", *cpuProfile)
|
||||||
f, err := os.Create(*cpuProfile)
|
f, err := os.Create(*cpuProfile)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
fs.Stats.Error(err)
|
fs.CountError(err)
|
||||||
log.Fatal(err)
|
log.Fatal(err)
|
||||||
}
|
}
|
||||||
err = pprof.StartCPUProfile(f)
|
err = pprof.StartCPUProfile(f)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
fs.Stats.Error(err)
|
fs.CountError(err)
|
||||||
log.Fatal(err)
|
log.Fatal(err)
|
||||||
}
|
}
|
||||||
AtExit(func() {
|
AtExit(func() {
|
||||||
|
@ -385,17 +408,17 @@ func initConfig() {
|
||||||
fs.Infof(nil, "Saving Memory profile %q\n", *memProfile)
|
fs.Infof(nil, "Saving Memory profile %q\n", *memProfile)
|
||||||
f, err := os.Create(*memProfile)
|
f, err := os.Create(*memProfile)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
fs.Stats.Error(err)
|
fs.CountError(err)
|
||||||
log.Fatal(err)
|
log.Fatal(err)
|
||||||
}
|
}
|
||||||
err = pprof.WriteHeapProfile(f)
|
err = pprof.WriteHeapProfile(f)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
fs.Stats.Error(err)
|
fs.CountError(err)
|
||||||
log.Fatal(err)
|
log.Fatal(err)
|
||||||
}
|
}
|
||||||
err = f.Close()
|
err = f.Close()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
fs.Stats.Error(err)
|
fs.CountError(err)
|
||||||
log.Fatal(err)
|
log.Fatal(err)
|
||||||
}
|
}
|
||||||
})
|
})
|
||||||
|
@ -423,11 +446,11 @@ func resolveExitCode(err error) {
|
||||||
os.Exit(exitCodeFileNotFound)
|
os.Exit(exitCodeFileNotFound)
|
||||||
case err == errorUncategorized:
|
case err == errorUncategorized:
|
||||||
os.Exit(exitCodeUncategorizedError)
|
os.Exit(exitCodeUncategorizedError)
|
||||||
case fs.ShouldRetry(err):
|
case fserrors.ShouldRetry(err):
|
||||||
os.Exit(exitCodeRetryError)
|
os.Exit(exitCodeRetryError)
|
||||||
case fs.IsNoRetryError(err):
|
case fserrors.IsNoRetryError(err):
|
||||||
os.Exit(exitCodeNoRetryError)
|
os.Exit(exitCodeNoRetryError)
|
||||||
case fs.IsFatalError(err):
|
case fserrors.IsFatalError(err):
|
||||||
os.Exit(exitCodeFatalError)
|
os.Exit(exitCodeFatalError)
|
||||||
default:
|
default:
|
||||||
os.Exit(exitCodeUsageError)
|
os.Exit(exitCodeUsageError)
|
||||||
|
|
|
@@ -14,6 +14,7 @@ import (
 
 	"github.com/billziss-gh/cgofuse/fuse"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/log"
 	"github.com/ncw/rclone/vfs"
 	"github.com/ncw/rclone/vfs/vfsflags"
 	"github.com/pkg/errors"
@@ -178,7 +179,7 @@ func (fsys *FS) stat(node vfs.Node, stat *fuse.Stat_t) (errc int) {
 
 // Init is called after the filesystem is ready
 func (fsys *FS) Init() {
-	defer fs.Trace(fsys.f, "")("")
+	defer log.Trace(fsys.f, "")("")
 	close(fsys.ready)
 }
 
@@ -186,12 +187,12 @@ func (fsys *FS) Init() {
 // the file system is terminated the file system may not receive the
 // Destroy call).
 func (fsys *FS) Destroy() {
-	defer fs.Trace(fsys.f, "")("")
+	defer log.Trace(fsys.f, "")("")
 }
 
 // Getattr reads the attributes for path
 func (fsys *FS) Getattr(path string, stat *fuse.Stat_t, fh uint64) (errc int) {
-	defer fs.Trace(path, "fh=0x%X", fh)("errc=%v", &errc)
+	defer log.Trace(path, "fh=0x%X", fh)("errc=%v", &errc)
 	node, _, errc := fsys.getNode(path, fh)
 	if errc == 0 {
 		errc = fsys.stat(node, stat)
@@ -201,7 +202,7 @@ func (fsys *FS) Getattr(path string, stat *fuse.Stat_t, fh uint64) (errc int) {
 
 // Opendir opens path as a directory
 func (fsys *FS) Opendir(path string) (errc int, fh uint64) {
-	defer fs.Trace(path, "")("errc=%d, fh=0x%X", &errc, &fh)
+	defer log.Trace(path, "")("errc=%d, fh=0x%X", &errc, &fh)
 	handle, err := fsys.VFS.OpenFile(path, os.O_RDONLY, 0777)
 	if errc != 0 {
 		return translateError(err), fhUnset
@@ -215,7 +216,7 @@ func (fsys *FS) Readdir(dirPath string,
 	ofst int64,
 	fh uint64) (errc int) {
 	itemsRead := -1
-	defer fs.Trace(dirPath, "ofst=%d, fh=0x%X", ofst, fh)("items=%d, errc=%d", &itemsRead, &errc)
+	defer log.Trace(dirPath, "ofst=%d, fh=0x%X", ofst, fh)("items=%d, errc=%d", &itemsRead, &errc)
 
 	node, errc := fsys.getHandle(fh)
 	if errc != 0 {
@@ -254,13 +255,13 @@ func (fsys *FS) Readdir(dirPath string,
 
 // Releasedir finished reading the directory
 func (fsys *FS) Releasedir(path string, fh uint64) (errc int) {
-	defer fs.Trace(path, "fh=0x%X", fh)("errc=%d", &errc)
+	defer log.Trace(path, "fh=0x%X", fh)("errc=%d", &errc)
 	return fsys.closeHandle(fh)
 }
 
 // Statfs reads overall stats on the filessystem
 func (fsys *FS) Statfs(path string, stat *fuse.Statfs_t) (errc int) {
-	defer fs.Trace(path, "")("stat=%+v, errc=%d", stat, &errc)
+	defer log.Trace(path, "")("stat=%+v, errc=%d", stat, &errc)
 	const blockSize = 4096
 	fsBlocks := uint64(1 << 50)
 	if runtime.GOOS == "windows" {
@@ -279,7 +280,7 @@ func (fsys *FS) Statfs(path string, stat *fuse.Statfs_t) (errc int) {
 
 // Open opens a file
 func (fsys *FS) Open(path string, flags int) (errc int, fh uint64) {
-	defer fs.Trace(path, "flags=0x%X", flags)("errc=%d, fh=0x%X", &errc, &fh)
+	defer log.Trace(path, "flags=0x%X", flags)("errc=%d, fh=0x%X", &errc, &fh)
 
 	// translate the fuse flags to os flags
 	flags = translateOpenFlags(flags) | os.O_CREATE
@@ -293,7 +294,7 @@ func (fsys *FS) Open(path string, flags int) (errc int, fh uint64) {
 
 // Create creates and opens a file.
 func (fsys *FS) Create(filePath string, flags int, mode uint32) (errc int, fh uint64) {
-	defer fs.Trace(filePath, "flags=0x%X, mode=0%o", flags, mode)("errc=%d, fh=0x%X", &errc, &fh)
+	defer log.Trace(filePath, "flags=0x%X, mode=0%o", flags, mode)("errc=%d, fh=0x%X", &errc, &fh)
 	leaf, parentDir, errc := fsys.lookupParentDir(filePath)
 	if errc != 0 {
 		return errc, fhUnset
@@ -313,7 +314,7 @@ func (fsys *FS) Create(filePath string, flags int, mode uint32) (errc int, fh ui
 
 // Truncate truncates a file to size
 func (fsys *FS) Truncate(path string, size int64, fh uint64) (errc int) {
-	defer fs.Trace(path, "size=%d, fh=0x%X", size, fh)("errc=%d", &errc)
+	defer log.Trace(path, "size=%d, fh=0x%X", size, fh)("errc=%d", &errc)
 	node, handle, errc := fsys.getNode(path, fh)
 	if errc != 0 {
 		return errc
@@ -332,7 +333,7 @@ func (fsys *FS) Truncate(path string, size int64, fh uint64) (errc int) {
 
 // Read data from file handle
 func (fsys *FS) Read(path string, buff []byte, ofst int64, fh uint64) (n int) {
-	defer fs.Trace(path, "ofst=%d, fh=0x%X", ofst, fh)("n=%d", &n)
+	defer log.Trace(path, "ofst=%d, fh=0x%X", ofst, fh)("n=%d", &n)
 	handle, errc := fsys.getHandle(fh)
 	if errc != 0 {
 		return errc
@@ -348,7 +349,7 @@ func (fsys *FS) Read(path string, buff []byte, ofst int64, fh uint64) (n int) {
 
 // Write data to file handle
 func (fsys *FS) Write(path string, buff []byte, ofst int64, fh uint64) (n int) {
-	defer fs.Trace(path, "ofst=%d, fh=0x%X", ofst, fh)("n=%d", &n)
+	defer log.Trace(path, "ofst=%d, fh=0x%X", ofst, fh)("n=%d", &n)
 	handle, errc := fsys.getHandle(fh)
 	if errc != 0 {
 		return errc
@@ -362,7 +363,7 @@ func (fsys *FS) Write(path string, buff []byte, ofst int64, fh uint64) (n int) {
 
 // Flush flushes an open file descriptor or path
 func (fsys *FS) Flush(path string, fh uint64) (errc int) {
-	defer fs.Trace(path, "fh=0x%X", fh)("errc=%d", &errc)
+	defer log.Trace(path, "fh=0x%X", fh)("errc=%d", &errc)
 	handle, errc := fsys.getHandle(fh)
 	if errc != 0 {
 		return errc
@@ -372,7 +373,7 @@ func (fsys *FS) Flush(path string, fh uint64) (errc int) {
 
 // Release closes the file if still open
 func (fsys *FS) Release(path string, fh uint64) (errc int) {
-	defer fs.Trace(path, "fh=0x%X", fh)("errc=%d", &errc)
+	defer log.Trace(path, "fh=0x%X", fh)("errc=%d", &errc)
 	handle, errc := fsys.getHandle(fh)
 	if errc != 0 {
 		return errc
@@ -383,7 +384,7 @@ func (fsys *FS) Release(path string, fh uint64) (errc int) {
 
 // Unlink removes a file.
 func (fsys *FS) Unlink(filePath string) (errc int) {
-	defer fs.Trace(filePath, "")("errc=%d", &errc)
+	defer log.Trace(filePath, "")("errc=%d", &errc)
 	leaf, parentDir, errc := fsys.lookupParentDir(filePath)
 	if errc != 0 {
 		return errc
@@ -393,7 +394,7 @@ func (fsys *FS) Unlink(filePath string) (errc int) {
 
 // Mkdir creates a directory.
 func (fsys *FS) Mkdir(dirPath string, mode uint32) (errc int) {
-	defer fs.Trace(dirPath, "mode=0%o", mode)("errc=%d", &errc)
+	defer log.Trace(dirPath, "mode=0%o", mode)("errc=%d", &errc)
 	leaf, parentDir, errc := fsys.lookupParentDir(dirPath)
 	if errc != 0 {
 		return errc
@@ -404,7 +405,7 @@ func (fsys *FS) Mkdir(dirPath string, mode uint32) (errc int) {
 
 // Rmdir removes a directory
 func (fsys *FS) Rmdir(dirPath string) (errc int) {
-	defer fs.Trace(dirPath, "")("errc=%d", &errc)
+	defer log.Trace(dirPath, "")("errc=%d", &errc)
 	leaf, parentDir, errc := fsys.lookupParentDir(dirPath)
 	if errc != 0 {
 		return errc
@@ -414,13 +415,13 @@ func (fsys *FS) Rmdir(dirPath string) (errc int) {
 
 // Rename renames a file.
 func (fsys *FS) Rename(oldPath string, newPath string) (errc int) {
-	defer fs.Trace(oldPath, "newPath=%q", newPath)("errc=%d", &errc)
+	defer log.Trace(oldPath, "newPath=%q", newPath)("errc=%d", &errc)
 	return translateError(fsys.VFS.Rename(oldPath, newPath))
 }
 
 // Utimens changes the access and modification times of a file.
 func (fsys *FS) Utimens(path string, tmsp []fuse.Timespec) (errc int) {
-	defer fs.Trace(path, "tmsp=%+v", tmsp)("errc=%d", &errc)
+	defer log.Trace(path, "tmsp=%+v", tmsp)("errc=%d", &errc)
 	node, errc := fsys.lookupNode(path)
 	if errc != 0 {
 		return errc
@@ -436,59 +437,59 @@ func (fsys *FS) Utimens(path string, tmsp []fuse.Timespec) (errc int) {
 
 // Mknod creates a file node.
 func (fsys *FS) Mknod(path string, mode uint32, dev uint64) (errc int) {
-	defer fs.Trace(path, "mode=0x%X, dev=0x%X", mode, dev)("errc=%d", &errc)
+	defer log.Trace(path, "mode=0x%X, dev=0x%X", mode, dev)("errc=%d", &errc)
 	return -fuse.ENOSYS
 }
 
 // Fsync synchronizes file contents.
 func (fsys *FS) Fsync(path string, datasync bool, fh uint64) (errc int) {
-	defer fs.Trace(path, "datasync=%v, fh=0x%X", datasync, fh)("errc=%d", &errc)
+	defer log.Trace(path, "datasync=%v, fh=0x%X", datasync, fh)("errc=%d", &errc)
 	// This is a no-op for rclone
 	return 0
 }
 
 // Link creates a hard link to a file.
 func (fsys *FS) Link(oldpath string, newpath string) (errc int) {
-	defer fs.Trace(oldpath, "newpath=%q", newpath)("errc=%d", &errc)
+	defer log.Trace(oldpath, "newpath=%q", newpath)("errc=%d", &errc)
 	return -fuse.ENOSYS
 }
 
 // Symlink creates a symbolic link.
 func (fsys *FS) Symlink(target string, newpath string) (errc int) {
-	defer fs.Trace(target, "newpath=%q", newpath)("errc=%d", &errc)
+	defer log.Trace(target, "newpath=%q", newpath)("errc=%d", &errc)
 	return -fuse.ENOSYS
 }
 
 // Readlink reads the target of a symbolic link.
 func (fsys *FS) Readlink(path string) (errc int, linkPath string) {
-	defer fs.Trace(path, "")("linkPath=%q, errc=%d", &linkPath, &errc)
+	defer log.Trace(path, "")("linkPath=%q, errc=%d", &linkPath, &errc)
 	return -fuse.ENOSYS, ""
 }
 
 // Chmod changes the permission bits of a file.
 func (fsys *FS) Chmod(path string, mode uint32) (errc int) {
-	defer fs.Trace(path, "mode=0%o", mode)("errc=%d", &errc)
+	defer log.Trace(path, "mode=0%o", mode)("errc=%d", &errc)
 	// This is a no-op for rclone
 	return 0
 }
 
 // Chown changes the owner and group of a file.
 func (fsys *FS) Chown(path string, uid uint32, gid uint32) (errc int) {
-	defer fs.Trace(path, "uid=%d, gid=%d", uid, gid)("errc=%d", &errc)
+	defer log.Trace(path, "uid=%d, gid=%d", uid, gid)("errc=%d", &errc)
 	// This is a no-op for rclone
 	return 0
 }
 
 // Access checks file access permissions.
 func (fsys *FS) Access(path string, mask uint32) (errc int) {
-	defer fs.Trace(path, "mask=0%o", mask)("errc=%d", &errc)
+	defer log.Trace(path, "mask=0%o", mask)("errc=%d", &errc)
 	// This is a no-op for rclone
 	return 0
 }
 
 // Fsyncdir synchronizes directory contents.
 func (fsys *FS) Fsyncdir(path string, datasync bool, fh uint64) (errc int) {
-	defer fs.Trace(path, "datasync=%v, fh=0x%X", datasync, fh)("errc=%d", &errc)
+	defer log.Trace(path, "datasync=%v, fh=0x%X", datasync, fh)("errc=%d", &errc)
 	// This is a no-op for rclone
 	return 0
 }

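Every cmount method changes the same way: fs.Trace becomes log.Trace from the new fs/log package. The call logs entry and returns a function which, deferred with pointers to the results, logs them on exit. A minimal sketch with a hypothetical method body:

    package example

    import "github.com/ncw/rclone/fs/log"

    // open shows the relocated tracer: entry is logged immediately,
    // and the deferred returned function logs errc when open returns.
    func open(path string, flags int) (errc int) {
        defer log.Trace(path, "flags=0x%X", flags)("errc=%d", &errc)
        // ... the real work goes here, assigning errc ...
        return errc
    }
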
@@ -2,7 +2,7 @@ package config
 
 import (
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
 	"github.com/spf13/cobra"
 )
 
@@ -28,7 +28,7 @@ password to protect your configuration.
 `,
	Run: func(command *cobra.Command, args []string) {
 		cmd.CheckArgs(0, 0, command, args)
-		fs.EditConfig()
+		config.EditConfig()
 	},
 }
 
@@ -44,7 +44,7 @@ var configFileCommand = &cobra.Command{
 	Short: `Show path of configuration file in use.`,
 	Run: func(command *cobra.Command, args []string) {
 		cmd.CheckArgs(0, 0, command, args)
-		fs.ShowConfigLocation()
+		config.ShowConfigLocation()
 	},
 }
 
@@ -54,9 +54,9 @@ var configShowCommand = &cobra.Command{
 	Run: func(command *cobra.Command, args []string) {
 		cmd.CheckArgs(0, 1, command, args)
 		if len(args) == 0 {
-			fs.ShowConfig()
+			config.ShowConfig()
 		} else {
-			fs.ShowRemote(args[0])
+			config.ShowRemote(args[0])
 		}
 	},
 }
@@ -66,7 +66,7 @@ var configDumpCommand = &cobra.Command{
 	Short: `Dump the config file as JSON.`,
 	RunE: func(command *cobra.Command, args []string) error {
 		cmd.CheckArgs(0, 0, command, args)
-		return fs.ConfigDump()
+		return config.Dump()
 	},
 }
 
@@ -75,7 +75,7 @@ var configProvidersCommand = &cobra.Command{
 	Short: `List in JSON format all the providers and options.`,
 	RunE: func(command *cobra.Command, args []string) error {
 		cmd.CheckArgs(0, 0, command, args)
-		return fs.JSONListProviders()
+		return config.JSONListProviders()
 	},
 }
 
@@ -93,7 +93,7 @@ you would do:
 `,
	RunE: func(command *cobra.Command, args []string) error {
 		cmd.CheckArgs(2, 256, command, args)
-		return fs.CreateRemote(args[0], args[1], args[2:])
+		return config.CreateRemote(args[0], args[1], args[2:])
 	},
 }
 
@@ -110,7 +110,7 @@ For example to update the env_auth field of a remote of name myremote you would
 `,
	RunE: func(command *cobra.Command, args []string) error {
 		cmd.CheckArgs(3, 256, command, args)
-		return fs.UpdateRemote(args[0], args[1:])
+		return config.UpdateRemote(args[0], args[1:])
 	},
 }
 
@@ -119,7 +119,7 @@ var configDeleteCommand = &cobra.Command{
 	Short: `Delete an existing remote <name>.`,
 	Run: func(command *cobra.Command, args []string) {
 		cmd.CheckArgs(1, 1, command, args)
-		fs.DeleteRemote(args[0])
+		config.DeleteRemote(args[0])
 	},
 }
 
@@ -136,6 +136,6 @@ For example to set password of a remote of name myremote you would do:
 `,
	RunE: func(command *cobra.Command, args []string) error {
 		cmd.CheckArgs(3, 256, command, args)
-		return fs.PasswordRemote(args[0], args[1:])
+		return config.PasswordRemote(args[0], args[1:])
 	},
 }

@@ -2,7 +2,7 @@ package copy
 
 import (
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/sync"
 	"github.com/spf13/cobra"
 )
 
@@ -57,7 +57,7 @@ the destination directory or not.
 		cmd.CheckArgs(2, 2, command, args)
 		fsrc, fdst := cmd.NewFsSrcDst(args)
 		cmd.Run(true, true, command, func() error {
-			return fs.CopyDir(fdst, fsrc)
+			return sync.CopyDir(fdst, fsrc)
 		})
 	},
 }

@@ -2,7 +2,8 @@ package copyto
 
 import (
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
+	"github.com/ncw/rclone/fs/sync"
 	"github.com/spf13/cobra"
 )
 
@@ -45,9 +46,9 @@ destination.
 		fsrc, srcFileName, fdst, dstFileName := cmd.NewFsSrcDstFiles(args)
 		cmd.Run(true, true, command, func() error {
 			if srcFileName == "" {
-				return fs.CopyDir(fdst, fsrc)
+				return sync.CopyDir(fdst, fsrc)
 			}
-			return fs.CopyFile(fdst, fsrc, dstFileName, srcFileName)
+			return operations.CopyFile(fdst, fsrc, dstFileName, srcFileName)
 		})
 	},
 }

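This hunk shows the new division of labour: directory-tree copies live in fs/sync while single-object primitives live in fs/operations. The same dispatch as a stand-alone sketch:

    package example

    import (
        "github.com/ncw/rclone/fs"
        "github.com/ncw/rclone/fs/operations"
        "github.com/ncw/rclone/fs/sync"
    )

    // copyTo dispatches exactly as the copyto command above does:
    // whole-tree copies through fs/sync, single files through
    // fs/operations.
    func copyTo(fdst, fsrc fs.Fs, dstFileName, srcFileName string) error {
        if srcFileName == "" {
            return sync.CopyDir(fdst, fsrc)
        }
        return operations.CopyFile(fdst, fsrc, dstFileName, srcFileName)
    }
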
@@ -4,6 +4,8 @@ import (
 	"github.com/ncw/rclone/backend/crypt"
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/hash"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
 )
@@ -58,7 +60,7 @@ func cryptCheck(fdst, fsrc fs.Fs) error {
 	// Find a hash to use
 	funderlying := fcrypt.UnWrap()
 	hashType := funderlying.Hashes().GetOne()
-	if hashType == fs.HashNone {
+	if hashType == hash.HashNone {
 		return errors.Errorf("%s:%s does not support any hashes", funderlying.Name(), funderlying.Root())
 	}
 	fs.Infof(nil, "Using %v for hash comparisons", hashType)
@@ -72,7 +74,7 @@ func cryptCheck(fdst, fsrc fs.Fs) error {
 		underlyingDst := cryptDst.UnWrap()
 		underlyingHash, err := underlyingDst.Hash(hashType)
 		if err != nil {
-			fs.Stats.Error(err)
+			fs.CountError(err)
 			fs.Errorf(dst, "Error reading hash from underlying %v: %v", underlyingDst, err)
 			return true, false
 		}
@@ -81,7 +83,7 @@ func cryptCheck(fdst, fsrc fs.Fs) error {
 		}
 		cryptHash, err := fcrypt.ComputeHash(cryptDst, src, hashType)
 		if err != nil {
-			fs.Stats.Error(err)
+			fs.CountError(err)
 			fs.Errorf(dst, "Error computing hash: %v", err)
 			return true, false
 		}
@@ -90,7 +92,7 @@ func cryptCheck(fdst, fsrc fs.Fs) error {
 		}
 		if cryptHash != underlyingHash {
 			err = errors.Errorf("hashes differ (%s:%s) %q vs (%s:%s) %q", fdst.Name(), fdst.Root(), cryptHash, fsrc.Name(), fsrc.Root(), underlyingHash)
-			fs.Stats.Error(err)
+			fs.CountError(err)
 			fs.Errorf(src, err.Error())
 			return true, false
 		}
@@ -98,5 +100,5 @@ func cryptCheck(fdst, fsrc fs.Fs) error {
 		return false, false
 	}
 
-	return fs.CheckFn(fcrypt, fsrc, checkIdentical)
+	return operations.CheckFn(fcrypt, fsrc, checkIdentical)
 }

@@ -6,6 +6,7 @@ import (
 	"github.com/ncw/rclone/backend/crypt"
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config/flags"
 	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
 )
@@ -17,8 +18,8 @@ var (
 
 func init() {
 	cmd.Root.AddCommand(commandDefinition)
-	flags := commandDefinition.Flags()
-	fs.BoolVarP(flags, &Reverse, "reverse", "", Reverse, "Reverse cryptdecode, encrypts filenames")
+	flagSet := commandDefinition.Flags()
+	flags.BoolVarP(flagSet, &Reverse, "reverse", "", Reverse, "Reverse cryptdecode, encrypts filenames")
 }
 
 var commandDefinition = &cobra.Command{

@@ -4,7 +4,7 @@ import (
 	"os"
 
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -25,7 +25,7 @@ The output is in the same format as md5sum and sha1sum.
 		cmd.CheckArgs(1, 1, command, args)
 		fsrc := cmd.NewFsSrc(args)
 		cmd.Run(false, false, command, func() error {
-			return fs.DropboxHashSum(fsrc, os.Stdout)
+			return operations.DropboxHashSum(fsrc, os.Stdout)
 		})
 	},
 }

@@ -4,12 +4,12 @@ import (
 	"log"
 
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
 var (
-	dedupeMode = fs.DeduplicateInteractive
+	dedupeMode = operations.DeduplicateInteractive
 )
 
 func init() {
@@ -111,7 +111,7 @@ Or
 		}
 		fdst := cmd.NewFsSrc(args)
 		cmd.Run(false, false, command, func() error {
-			return fs.Deduplicate(fdst, dedupeMode)
+			return operations.Deduplicate(fdst, dedupeMode)
 		})
 	},
 }

@@ -2,7 +2,7 @@ package delete
 
 import (
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -35,7 +35,7 @@ delete all files bigger than 100MBytes.
 		cmd.CheckArgs(1, 1, command, args)
 		fsrc := cmd.NewFsSrc(args)
 		cmd.Run(true, false, command, func() error {
-			return fs.Delete(fsrc)
+			return operations.Delete(fsrc)
 		})
 	},
 }

@@ -14,6 +14,8 @@ import (
 
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/hash"
+	"github.com/ncw/rclone/fs/object"
 	"github.com/ncw/rclone/fstest"
 	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
@@ -103,7 +105,7 @@ func (r *results) Print() {
 // writeFile writes a file with some random contents
 func (r *results) writeFile(path string) (fs.Object, error) {
 	contents := fstest.RandomString(50)
-	src := fs.NewStaticObjectInfo(path, time.Now(), int64(len(contents)), true, nil, r.f)
+	src := object.NewStaticObjectInfo(path, time.Now(), int64(len(contents)), true, nil, r.f)
 	return r.f.Put(bytes.NewBufferString(contents), src)
 }
 
@@ -210,10 +212,10 @@ func (r *results) checkStreaming() {
 
 	contents := "thinking of test strings is hard"
 	buf := bytes.NewBufferString(contents)
-	hashIn := fs.NewMultiHasher()
+	hashIn := hash.NewMultiHasher()
 	in := io.TeeReader(buf, hashIn)
 
-	objIn := fs.NewStaticObjectInfo("checkStreamingTest", time.Now(), -1, true, nil, r.f)
+	objIn := object.NewStaticObjectInfo("checkStreamingTest", time.Now(), -1, true, nil, r.f)
 	objR, err := putter(in, objIn)
 	if err != nil {
 		fs.Infof(r.f, "Streamed file failed to upload (%v)", err)
@@ -223,15 +225,15 @@ func (r *results) checkStreaming() {
 
 	hashes := hashIn.Sums()
 	types := objR.Fs().Hashes().Array()
-	for _, hash := range types {
-		sum, err := objR.Hash(hash)
+	for _, Hash := range types {
+		sum, err := objR.Hash(Hash)
 		if err != nil {
-			fs.Infof(r.f, "Streamed file failed when getting hash %v (%v)", hash, err)
+			fs.Infof(r.f, "Streamed file failed when getting hash %v (%v)", Hash, err)
 			r.canStream = false
 			return
 		}
-		if !fs.HashEquals(hashes[hash], sum) {
-			fs.Infof(r.f, "Streamed file has incorrect hash %v: expecting %q got %q", hash, hashes[hash], sum)
+		if !hash.Equals(hashes[Hash], sum) {
+			fs.Infof(r.f, "Streamed file has incorrect hash %v: expecting %q got %q", Hash, hashes[Hash], sum)
 			r.canStream = false
 			return
 		}

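The MultiHasher moves to fs/hash but keeps its shape: tee the stream through it while it is consumed, then read the sums back. A minimal sketch assuming, as the hunks above imply, that Sums() returns a map keyed by hash.Type; the io.Copy stands in for the real upload:

    package example

    import (
        "bytes"
        "fmt"
        "io"
        "io/ioutil"

        "github.com/ncw/rclone/fs/hash"
    )

    // hashWhileStreaming hashes a stream as a side effect of reading it.
    func hashWhileStreaming() error {
        buf := bytes.NewBufferString("some file contents")
        hashIn := hash.NewMultiHasher()
        in := io.TeeReader(buf, hashIn)
        if _, err := io.Copy(ioutil.Discard, in); err != nil {
            return err
        }
        for t, sum := range hashIn.Sums() {
            fmt.Printf("%v: %s\n", t, sum)
        }
        return nil
    }
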
@@ -6,6 +6,7 @@ import (
 
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
 	"github.com/spf13/cobra"
 )
 
@@ -29,7 +30,7 @@ When uses with the -l flag it lists the types too.
 `,
	Run: func(command *cobra.Command, args []string) {
 		cmd.CheckArgs(0, 0, command, args)
-		remotes := fs.ConfigFileSections()
+		remotes := config.FileSections()
 		sort.Strings(remotes)
 		maxlen := 1
 		for _, remote := range remotes {

@@ -5,7 +5,7 @@ import (
 
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/cmd/ls/lshelp"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -24,7 +24,7 @@ readable format with size and path. Recurses by default.
 		cmd.CheckArgs(1, 1, command, args)
 		fsrc := cmd.NewFsSrc(args)
 		cmd.Run(false, false, command, func() error {
-			return fs.List(fsrc, os.Stdout)
+			return operations.List(fsrc, os.Stdout)
 		})
 	},
 }

@@ -5,7 +5,7 @@ import (
 
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/cmd/ls/lshelp"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -24,7 +24,7 @@ by default.
 		cmd.CheckArgs(1, 1, command, args)
 		fsrc := cmd.NewFsSrc(args)
 		cmd.Run(false, false, command, func() error {
-			return fs.ListDir(fsrc, os.Stdout)
+			return operations.ListDir(fsrc, os.Stdout)
 		})
 	},
 }

@@ -8,6 +8,9 @@ import (
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/cmd/ls/lshelp"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/hash"
+	"github.com/ncw/rclone/fs/operations"
+	"github.com/ncw/rclone/fs/walk"
 	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
 )
@@ -17,7 +20,7 @@ var (
 	separator string
 	dirSlash  bool
 	recurse   bool
-	hashType  = fs.HashMD5
+	hashType  = hash.HashMD5
 	filesOnly bool
 	dirsOnly  bool
 )
@@ -84,7 +87,7 @@ putting it last is a good strategy.
 // Lsf lists all the objects in the path with modification time, size
 // and path in specific format.
 func Lsf(fsrc fs.Fs, out io.Writer) error {
-	var list fs.ListFormat
+	var list operations.ListFormat
 	list.SetSeparator(separator)
 	list.SetDirSlash(dirSlash)
 
@@ -103,9 +106,9 @@ func Lsf(fsrc fs.Fs, out io.Writer) error {
 		}
 	}
 
-	return fs.Walk(fsrc, "", false, fs.ConfigMaxDepth(recurse), func(path string, entries fs.DirEntries, err error) error {
+	return walk.Walk(fsrc, "", false, operations.ConfigMaxDepth(recurse), func(path string, entries fs.DirEntries, err error) error {
 		if err != nil {
-			fs.Stats.Error(err)
+			fs.CountError(err)
 			fs.Errorf(path, "error listing: %v", err)
 			return nil
 		}
@@ -120,7 +123,7 @@ func Lsf(fsrc fs.Fs, out io.Writer) error {
 				continue
 			}
 		}
-			fmt.Fprintln(out, fs.ListFormatted(&entry, &list))
+			fmt.Fprintln(out, operations.ListFormatted(&entry, &list))
 		}
 		return nil
 	})

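Directory walking moves to the new fs/walk package with the callback signature unchanged. A minimal sketch; passing -1 for unlimited depth is an assumption, since Lsf itself derives the depth from its flags via operations.ConfigMaxDepth(recurse):

    package example

    import (
        "fmt"

        "github.com/ncw/rclone/fs"
        "github.com/ncw/rclone/fs/walk"
    )

    // listAll prints every entry under fsrc using the relocated walker,
    // counting listing errors the same way Lsf does above.
    func listAll(fsrc fs.Fs) error {
        return walk.Walk(fsrc, "", false, -1, func(path string, entries fs.DirEntries, err error) error {
            if err != nil {
                fs.CountError(err)
                fs.Errorf(path, "error listing: %v", err)
                return nil
            }
            for _, entry := range entries {
                fmt.Println(entry.Remote())
            }
            return nil
        })
    }
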
@@ -5,6 +5,7 @@ import (
 	"testing"
 
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/list"
 	"github.com/ncw/rclone/fstest"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
@@ -137,7 +138,7 @@ file3
 	err = Lsf(f, buf)
 	require.NoError(t, err)
 
-	items, _ := fs.ListDirSorted(f, true, "")
+	items, _ := list.DirSorted(f, true, "")
 	var expectedOutput string
 	for _, item := range items {
 		expectedOutput += item.ModTime().Format("2006-01-02 15:04:05") + "\n"
@@ -198,8 +199,8 @@ func TestWholeLsf(t *testing.T) {
 	err = Lsf(f, buf)
 	require.NoError(t, err)
 
-	items, _ := fs.ListDirSorted(f, true, "")
-	itemsInSubdir, _ := fs.ListDirSorted(f, true, "subdir")
+	items, _ := list.DirSorted(f, true, "")
+	itemsInSubdir, _ := list.DirSorted(f, true, "subdir")
 	var expectedOutput []string
 	for _, item := range items {
 		expectedOutput = append(expectedOutput, item.ModTime().Format("2006-01-02 15:04:05"))

@@ -10,6 +10,8 @@ import (
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/cmd/ls/lshelp"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
+	"github.com/ncw/rclone/fs/walk"
 	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
 )
@@ -84,9 +86,9 @@ can be processed line by line as each item is written one to a line.
 		cmd.Run(false, false, command, func() error {
 			fmt.Println("[")
 			first := true
-			err := fs.Walk(fsrc, "", false, fs.ConfigMaxDepth(recurse), func(dirPath string, entries fs.DirEntries, err error) error {
+			err := walk.Walk(fsrc, "", false, operations.ConfigMaxDepth(recurse), func(dirPath string, entries fs.DirEntries, err error) error {
 				if err != nil {
-					fs.Stats.Error(err)
+					fs.CountError(err)
 					fs.Errorf(dirPath, "error listing: %v", err)
 					return nil
 				}
@@ -5,7 +5,7 @@ import (
 
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/cmd/ls/lshelp"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -24,7 +24,7 @@ readable format with modification time, size and path. Recurses by default.
 	cmd.CheckArgs(1, 1, command, args)
 		fsrc := cmd.NewFsSrc(args)
 		cmd.Run(false, false, command, func() error {
-			return fs.ListLong(fsrc, os.Stdout)
+			return operations.ListLong(fsrc, os.Stdout)
 		})
 	},
 }
@@ -4,7 +4,7 @@ import (
 	"os"
 
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -23,7 +23,7 @@ is in the same format as the standard md5sum tool produces.
 		cmd.CheckArgs(1, 1, command, args)
 		fsrc := cmd.NewFsSrc(args)
 		cmd.Run(false, false, command, func() error {
-			return fs.Md5sum(fsrc, os.Stdout)
+			return operations.Md5sum(fsrc, os.Stdout)
 		})
 	},
 }
@@ -6,6 +6,7 @@ import (
 
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -21,7 +22,7 @@ var commandDefintion = &cobra.Command{
 		cmd.CheckArgs(1, 1, command, args)
 		fsrc := cmd.NewFsSrc(args)
 		cmd.Run(false, false, command, func() error {
-			objects, _, err := fs.Count(fsrc)
+			objects, _, err := operations.Count(fsrc)
 			if err != nil {
 				return err
 			}
@@ -30,7 +31,7 @@ var commandDefintion = &cobra.Command{
 			runtime.GC()
 			runtime.ReadMemStats(&before)
 			var mu sync.Mutex
-			err = fs.ListFn(fsrc, func(o fs.Object) {
+			err = operations.ListFn(fsrc, func(o fs.Object) {
 				mu.Lock()
 				objs = append(objs, o)
 				mu.Unlock()
@@ -2,7 +2,7 @@ package mkdir
 
 import (
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -17,7 +17,7 @@ var commandDefintion = &cobra.Command{
 		cmd.CheckArgs(1, 1, command, args)
 		fdst := cmd.NewFsDst(args)
 		cmd.Run(true, false, command, func() error {
-			return fs.Mkdir(fdst, "")
+			return operations.Mkdir(fdst, "")
 		})
 	},
 }
@@ -8,7 +8,7 @@ import (
 
 	"bazil.org/fuse"
 	fusefs "bazil.org/fuse/fs"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/log"
 	"github.com/ncw/rclone/vfs"
 	"github.com/pkg/errors"
 	"golang.org/x/net/context"
@@ -24,7 +24,7 @@ var _ fusefs.Node = (*Dir)(nil)
 
 // Attr updates the attributes of a directory
 func (d *Dir) Attr(ctx context.Context, a *fuse.Attr) (err error) {
-	defer fs.Trace(d, "")("attr=%+v, err=%v", a, &err)
+	defer log.Trace(d, "")("attr=%+v, err=%v", a, &err)
 	a.Gid = d.VFS().Opt.GID
 	a.Uid = d.VFS().Opt.UID
 	a.Mode = os.ModeDir | d.VFS().Opt.DirPerms
@@ -43,7 +43,7 @@ var _ fusefs.NodeSetattrer = (*Dir)(nil)
 
 // Setattr handles attribute changes from FUSE. Currently supports ModTime only.
 func (d *Dir) Setattr(ctx context.Context, req *fuse.SetattrRequest, resp *fuse.SetattrResponse) (err error) {
-	defer fs.Trace(d, "stat=%+v", req)("err=%v", &err)
+	defer log.Trace(d, "stat=%+v", req)("err=%v", &err)
 	if d.VFS().Opt.NoModTime {
 		return nil
 	}
@@ -67,7 +67,7 @@ var _ fusefs.NodeRequestLookuper = (*Dir)(nil)
 //
 // Lookup need not to handle the names "." and "..".
 func (d *Dir) Lookup(ctx context.Context, req *fuse.LookupRequest, resp *fuse.LookupResponse) (node fusefs.Node, err error) {
-	defer fs.Trace(d, "name=%q", req.Name)("node=%+v, err=%v", &node, &err)
+	defer log.Trace(d, "name=%q", req.Name)("node=%+v, err=%v", &node, &err)
 	mnode, err := d.Dir.Stat(req.Name)
 	if err != nil {
 		return nil, translateError(err)
@@ -87,7 +87,7 @@ var _ fusefs.HandleReadDirAller = (*Dir)(nil)
 // ReadDirAll reads the contents of the directory
 func (d *Dir) ReadDirAll(ctx context.Context) (dirents []fuse.Dirent, err error) {
 	itemsRead := -1
-	defer fs.Trace(d, "")("item=%d, err=%v", &itemsRead, &err)
+	defer log.Trace(d, "")("item=%d, err=%v", &itemsRead, &err)
 	items, err := d.Dir.ReadDirAll()
 	if err != nil {
 		return nil, translateError(err)
@@ -111,7 +111,7 @@ var _ fusefs.NodeCreater = (*Dir)(nil)
 
 // Create makes a new file
 func (d *Dir) Create(ctx context.Context, req *fuse.CreateRequest, resp *fuse.CreateResponse) (node fusefs.Node, handle fusefs.Handle, err error) {
-	defer fs.Trace(d, "name=%q", req.Name)("node=%v, handle=%v, err=%v", &node, &handle, &err)
+	defer log.Trace(d, "name=%q", req.Name)("node=%v, handle=%v, err=%v", &node, &handle, &err)
 	file, err := d.Dir.Create(req.Name)
 	if err != nil {
 		return nil, nil, translateError(err)
@@ -127,7 +127,7 @@ var _ fusefs.NodeMkdirer = (*Dir)(nil)
 
 // Mkdir creates a new directory
 func (d *Dir) Mkdir(ctx context.Context, req *fuse.MkdirRequest) (node fusefs.Node, err error) {
-	defer fs.Trace(d, "name=%q", req.Name)("node=%+v, err=%v", &node, &err)
+	defer log.Trace(d, "name=%q", req.Name)("node=%+v, err=%v", &node, &err)
 	dir, err := d.Dir.Mkdir(req.Name)
 	if err != nil {
 		return nil, translateError(err)
@@ -141,7 +141,7 @@ var _ fusefs.NodeRemover = (*Dir)(nil)
 // the receiver, which must be a directory. The entry to be removed
 // may correspond to a file (unlink) or to a directory (rmdir).
 func (d *Dir) Remove(ctx context.Context, req *fuse.RemoveRequest) (err error) {
-	defer fs.Trace(d, "name=%q", req.Name)("err=%v", &err)
+	defer log.Trace(d, "name=%q", req.Name)("err=%v", &err)
 	err = d.Dir.RemoveName(req.Name)
 	if err != nil {
 		return translateError(err)
@@ -154,7 +154,7 @@ var _ fusefs.NodeRenamer = (*Dir)(nil)
 
 // Rename the file
 func (d *Dir) Rename(ctx context.Context, req *fuse.RenameRequest, newDir fusefs.Node) (err error) {
-	defer fs.Trace(d, "oldName=%q, newName=%q, newDir=%+v", req.OldName, req.NewName, newDir)("err=%v", &err)
+	defer log.Trace(d, "oldName=%q, newName=%q, newDir=%+v", req.OldName, req.NewName, newDir)("err=%v", &err)
 	destDir, ok := newDir.(*Dir)
 	if !ok {
 		return errors.Errorf("Unknown Dir type %T", newDir)
@@ -173,7 +173,7 @@ var _ fusefs.NodeFsyncer = (*Dir)(nil)
 
 // Fsync the directory
 func (d *Dir) Fsync(ctx context.Context, req *fuse.FsyncRequest) (err error) {
-	defer fs.Trace(d, "")("err=%v", &err)
+	defer log.Trace(d, "")("err=%v", &err)
 	err = d.Dir.Sync()
 	if err != nil {
 		return translateError(err)
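
Every mount method above follows the same deferred-trace idiom, now sourced from `fs/log`. A sketch of how it reads in isolation: `log.Trace` logs on entry and returns a function which, when deferred, logs the named results on exit (taking pointers so the final values are captured). The function and argument names here are illustrative only:

    package main

    import (
    	"github.com/ncw/rclone/fs/log"
    )

    func lookup(name string) (found bool, err error) {
    	// Logs "name=..." on entry and "found=..., err=..." on return.
    	defer log.Trace(nil, "name=%q", name)("found=%v, err=%v", &found, &err)
    	// ... do the work ...
    	return true, nil
    }

    func main() {
    	_, _ = lookup("example")
    }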
@@ -8,7 +8,7 @@ import (
 
 	"bazil.org/fuse"
 	fusefs "bazil.org/fuse/fs"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/log"
 	"github.com/ncw/rclone/vfs"
 	"golang.org/x/net/context"
 )
@@ -23,7 +23,7 @@ var _ fusefs.Node = (*File)(nil)
 
 // Attr fills out the attributes for the file
 func (f *File) Attr(ctx context.Context, a *fuse.Attr) (err error) {
-	defer fs.Trace(f, "")("a=%+v, err=%v", a, &err)
+	defer log.Trace(f, "")("a=%+v, err=%v", a, &err)
 	modTime := f.File.ModTime()
 	Size := uint64(f.File.Size())
 	Blocks := (Size + 511) / 512
@@ -44,7 +44,7 @@ var _ fusefs.NodeSetattrer = (*File)(nil)
 
 // Setattr handles attribute changes from FUSE. Currently supports ModTime and Size only
 func (f *File) Setattr(ctx context.Context, req *fuse.SetattrRequest, resp *fuse.SetattrResponse) (err error) {
-	defer fs.Trace(f, "a=%+v", req)("err=%v", &err)
+	defer log.Trace(f, "a=%+v", req)("err=%v", &err)
 	if !f.VFS().Opt.NoModTime {
 		if req.Valid.MtimeNow() {
 			err = f.File.SetModTime(time.Now())
@@ -64,7 +64,7 @@ var _ fusefs.NodeOpener = (*File)(nil)
 
 // Open the file for read or write
 func (f *File) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.OpenResponse) (fh fusefs.Handle, err error) {
-	defer fs.Trace(f, "flags=%v", req.Flags)("fh=%v, err=%v", &fh, &err)
+	defer log.Trace(f, "flags=%v", req.Flags)("fh=%v, err=%v", &fh, &err)
 
 	// fuse flags are based off syscall flags as are os flags, so
 	// should be compatible
@@ -91,6 +91,6 @@ var _ fusefs.NodeFsyncer = (*File)(nil)
 //
 // Note that we don't do anything except return OK
 func (f *File) Fsync(ctx context.Context, req *fuse.FsyncRequest) (err error) {
-	defer fs.Trace(f, "")("err=%v", &err)
+	defer log.Trace(f, "")("err=%v", &err)
 	return nil
 }
@@ -10,6 +10,7 @@ import (
 	"bazil.org/fuse"
 	fusefs "bazil.org/fuse/fs"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/log"
 	"github.com/ncw/rclone/vfs"
 	"github.com/ncw/rclone/vfs/vfsflags"
 	"github.com/pkg/errors"
@@ -36,7 +37,7 @@ func NewFS(f fs.Fs) *FS {
 
 // Root returns the root node
 func (f *FS) Root() (node fusefs.Node, err error) {
-	defer fs.Trace("", "")("node=%+v, err=%v", &node, &err)
+	defer log.Trace("", "")("node=%+v, err=%v", &node, &err)
 	root, err := f.VFS.Root()
 	if err != nil {
 		return nil, translateError(err)
@@ -50,7 +51,7 @@ var _ fusefs.FSStatfser = (*FS)(nil)
 // Statfs is called to obtain file system metadata.
 // It should write that data to resp.
 func (f *FS) Statfs(ctx context.Context, req *fuse.StatfsRequest, resp *fuse.StatfsResponse) (err error) {
-	defer fs.Trace("", "")("stat=%+v, err=%v", resp, &err)
+	defer log.Trace("", "")("stat=%+v, err=%v", resp, &err)
 	const blockSize = 4096
 	const fsBlocks = (1 << 50) / blockSize
 	resp.Blocks = fsBlocks // Total data blocks in file system.
@@ -7,7 +7,7 @@ import (
 
 	"bazil.org/fuse"
 	fusefs "bazil.org/fuse/fs"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/log"
 	"github.com/ncw/rclone/vfs"
 	"golang.org/x/net/context"
 )
@@ -23,7 +23,7 @@ var _ fusefs.HandleReader = (*FileHandle)(nil)
 // Read from the file handle
 func (fh *FileHandle) Read(ctx context.Context, req *fuse.ReadRequest, resp *fuse.ReadResponse) (err error) {
 	var n int
-	defer fs.Trace(fh, "len=%d, offset=%d", req.Size, req.Offset)("read=%d, err=%v", &n, &err)
+	defer log.Trace(fh, "len=%d, offset=%d", req.Size, req.Offset)("read=%d, err=%v", &n, &err)
 	data := make([]byte, req.Size)
 	n, err = fh.Handle.ReadAt(data, req.Offset)
 	if err == io.EOF {
@@ -40,7 +40,7 @@ var _ fusefs.HandleWriter = (*FileHandle)(nil)
 
 // Write data to the file handle
 func (fh *FileHandle) Write(ctx context.Context, req *fuse.WriteRequest, resp *fuse.WriteResponse) (err error) {
-	defer fs.Trace(fh, "len=%d, offset=%d", len(req.Data), req.Offset)("written=%d, err=%v", &resp.Size, &err)
+	defer log.Trace(fh, "len=%d, offset=%d", len(req.Data), req.Offset)("written=%d, err=%v", &resp.Size, &err)
 	n, err := fh.Handle.WriteAt(req.Data, req.Offset)
 	if err != nil {
 		return translateError(err)
@@ -68,7 +68,7 @@ var _ fusefs.HandleFlusher = (*FileHandle)(nil)
 // Filesystems shouldn't assume that flush will always be called after
 // some writes, or that if will be called at all.
 func (fh *FileHandle) Flush(ctx context.Context, req *fuse.FlushRequest) (err error) {
-	defer fs.Trace(fh, "")("err=%v", &err)
+	defer log.Trace(fh, "")("err=%v", &err)
 	return translateError(fh.Handle.Flush())
 }
 
@@ -79,6 +79,6 @@ var _ fusefs.HandleReleaser = (*FileHandle)(nil)
 // It isn't called directly from userspace so the error is ignored by
 // the kernel
 func (fh *FileHandle) Release(ctx context.Context, req *fuse.ReleaseRequest) (err error) {
-	defer fs.Trace(fh, "")("err=%v", &err)
+	defer log.Trace(fh, "")("err=%v", &err)
 	return translateError(fh.Handle.Release())
 }
@@ -8,6 +8,7 @@ import (
 
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config/flags"
 	"github.com/ncw/rclone/vfs"
 	"github.com/ncw/rclone/vfs/vfsflags"
 	"github.com/pkg/errors"
@@ -181,21 +182,21 @@ will see all files and folders immediately in this mode.
 	cmd.Root.AddCommand(commandDefintion)
 
 	// Add flags
-	flags := commandDefintion.Flags()
-	fs.BoolVarP(flags, &DebugFUSE, "debug-fuse", "", DebugFUSE, "Debug the FUSE internals - needs -v.")
+	flagSet := commandDefintion.Flags()
+	flags.BoolVarP(flagSet, &DebugFUSE, "debug-fuse", "", DebugFUSE, "Debug the FUSE internals - needs -v.")
 	// mount options
-	fs.BoolVarP(flags, &AllowNonEmpty, "allow-non-empty", "", AllowNonEmpty, "Allow mounting over a non-empty directory.")
-	fs.BoolVarP(flags, &AllowRoot, "allow-root", "", AllowRoot, "Allow access to root user.")
-	fs.BoolVarP(flags, &AllowOther, "allow-other", "", AllowOther, "Allow access to other users.")
-	fs.BoolVarP(flags, &DefaultPermissions, "default-permissions", "", DefaultPermissions, "Makes kernel enforce access control based on the file mode.")
-	fs.BoolVarP(flags, &WritebackCache, "write-back-cache", "", WritebackCache, "Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.")
-	fs.FlagsVarP(flags, &MaxReadAhead, "max-read-ahead", "", "The number of bytes that can be prefetched for sequential reads.")
-	fs.StringArrayVarP(flags, &ExtraOptions, "option", "o", []string{}, "Option for libfuse/WinFsp. Repeat if required.")
-	fs.StringArrayVarP(flags, &ExtraFlags, "fuse-flag", "", []string{}, "Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.")
-	//fs.BoolVarP(flags, &foreground, "foreground", "", foreground, "Do not detach.")
+	flags.BoolVarP(flagSet, &AllowNonEmpty, "allow-non-empty", "", AllowNonEmpty, "Allow mounting over a non-empty directory.")
+	flags.BoolVarP(flagSet, &AllowRoot, "allow-root", "", AllowRoot, "Allow access to root user.")
+	flags.BoolVarP(flagSet, &AllowOther, "allow-other", "", AllowOther, "Allow access to other users.")
+	flags.BoolVarP(flagSet, &DefaultPermissions, "default-permissions", "", DefaultPermissions, "Makes kernel enforce access control based on the file mode.")
+	flags.BoolVarP(flagSet, &WritebackCache, "write-back-cache", "", WritebackCache, "Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.")
+	flags.FVarP(flagSet, &MaxReadAhead, "max-read-ahead", "", "The number of bytes that can be prefetched for sequential reads.")
+	flags.StringArrayVarP(flagSet, &ExtraOptions, "option", "o", []string{}, "Option for libfuse/WinFsp. Repeat if required.")
+	flags.StringArrayVarP(flagSet, &ExtraFlags, "fuse-flag", "", []string{}, "Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.")
+	//flags.BoolVarP(flagSet, &foreground, "foreground", "", foreground, "Do not detach.")
 
 	// Add in the generic flags
-	vfsflags.AddFlags(flags)
+	vfsflags.AddFlags(flagSet)
 
 	return commandDefintion
 }
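
The local variable has to be renamed to `flagSet` above because the new `fs/config/flags` package takes the package name `flags`, and its helpers take the pflag.FlagSet explicitly instead of going through the fs package. A hedged sketch of the new shape (the flag name and variable are hypothetical):

    package main

    import (
    	"github.com/ncw/rclone/fs/config/flags"
    	"github.com/spf13/cobra"
    )

    var verbose bool

    func defineFlags(command *cobra.Command) {
    	flagSet := command.Flags()
    	// Name, shorthand, default and usage follow the pflag conventions.
    	flags.BoolVarP(flagSet, &verbose, "verbose-example", "", verbose, "Hypothetical example flag.")
    }

    func main() {
    	defineFlags(&cobra.Command{})
    }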
@@ -19,6 +19,7 @@ import (
 
 	_ "github.com/ncw/rclone/backend/all" // import all the backends
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/walk"
 	"github.com/ncw/rclone/fstest"
 	"github.com/ncw/rclone/vfs"
 	"github.com/stretchr/testify/assert"
@@ -268,7 +269,7 @@ func (r *Run) readLocal(t *testing.T, dir dirMap, filePath string) {
 
 // reads the remote tree into dir
 func (r *Run) readRemote(t *testing.T, dir dirMap, filepath string) {
-	objs, dirs, err := fs.WalkGetAll(r.fremote, filepath, true, 1)
+	objs, dirs, err := walk.GetAll(r.fremote, filepath, true, 1)
 	if err == fs.ErrorDirNotFound {
 		return
 	}
@@ -2,7 +2,7 @@ package move
 
 import (
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/sync"
 	"github.com/spf13/cobra"
 )
 
@@ -44,7 +44,7 @@ If you want to delete empty source directories after move, use the --delete-empt
 		fsrc, fdst := cmd.NewFsSrcDst(args)
 		cmd.Run(true, true, command, func() error {
 
-			return fs.MoveDir(fdst, fsrc, deleteEmptySrcDirs)
+			return sync.MoveDir(fdst, fsrc, deleteEmptySrcDirs)
 		})
 	},
 }
@@ -2,7 +2,8 @@ package moveto
 
 import (
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
+	"github.com/ncw/rclone/fs/sync"
 	"github.com/spf13/cobra"
 )
 
@@ -49,9 +50,9 @@ transfer.
 
 		cmd.Run(true, true, command, func() error {
 			if srcFileName == "" {
-				return fs.MoveDir(fdst, fsrc, false)
+				return sync.MoveDir(fdst, fsrc, false)
 			}
-			return fs.MoveFile(fdst, fsrc, dstFileName, srcFileName)
+			return operations.MoveFile(fdst, fsrc, dstFileName, srcFileName)
 		})
 	},
 }
@@ -6,6 +6,7 @@ import (
 	"sync"
 
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/walk"
 	"github.com/pkg/errors"
 )
 
@@ -129,7 +130,7 @@ func Scan(f fs.Fs) (chan *Dir, chan error, chan struct{}) {
 	updated := make(chan struct{}, 1)
 	go func() {
 		parents := map[string]*Dir{}
-		err := fs.Walk(f, "", false, fs.Config.MaxDepth, func(dirPath string, entries fs.DirEntries, err error) error {
+		err := walk.Walk(f, "", false, fs.Config.MaxDepth, func(dirPath string, entries fs.DirEntries, err error) error {
 			if err != nil {
 				return err // FIXME mark directory as errored instead of aborting
 			}
@@ -4,7 +4,7 @@ import (
 	"fmt"
 
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
 	"github.com/spf13/cobra"
 )
 
@@ -18,7 +18,7 @@ var commandDefintion = &cobra.Command{
 	Run: func(command *cobra.Command, args []string) {
 		cmd.CheckArgs(1, 1, command, args)
 		cmd.Run(false, false, command, func() error {
-			obscure := fs.MustObscure(args[0])
+			obscure := config.MustObscure(args[0])
 			fmt.Println(obscure)
 			return nil
 		})
@@ -2,7 +2,7 @@ package purge
 
 import (
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -22,7 +22,7 @@ you want to selectively delete files.
 		cmd.CheckArgs(1, 1, command, args)
 		fdst := cmd.NewFsDst(args)
 		cmd.Run(true, false, command, func() error {
-			return fs.Purge(fdst)
+			return operations.Purge(fdst)
 		})
 	},
 }
@@ -6,7 +6,7 @@ import (
 	"time"
 
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -50,7 +50,7 @@ a lot of data, you're better off caching locally and then
 
 		fdst, dstFileName := cmd.NewFsDstFile(args)
 		cmd.Run(false, false, command, func() error {
-			_, err := fs.Rcat(fdst, dstFileName, os.Stdin, time.Now())
+			_, err := operations.Rcat(fdst, dstFileName, os.Stdin, time.Now())
 			return err
 		})
 	},
@@ -2,7 +2,7 @@ package rmdir
 
 import (
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -20,7 +20,7 @@ objects in it, use purge for that.`,
 		cmd.CheckArgs(1, 1, command, args)
 		fdst := cmd.NewFsDst(args)
 		cmd.Run(true, false, command, func() error {
-			return fs.Rmdir(fdst, "")
+			return operations.Rmdir(fdst, "")
 		})
 	},
 }
@@ -2,7 +2,7 @@ package rmdir
 
 import (
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -32,7 +32,7 @@ empty directories in.
 		cmd.CheckArgs(1, 1, command, args)
 		fdst := cmd.NewFsDst(args)
 		cmd.Run(true, false, command, func() error {
-			return fs.Rmdirs(fdst, "", leaveRoot)
+			return operations.Rmdirs(fdst, "", leaveRoot)
 		})
 	},
 }
@@ -11,6 +11,7 @@ import (
 
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/accounting"
 	"github.com/ncw/rclone/lib/rest"
 	"github.com/ncw/rclone/vfs"
 	"github.com/ncw/rclone/vfs/vfsflags"
@@ -159,7 +160,7 @@ type indexData struct {
 
 // error returns an http.StatusInternalServerError and logs the error
 func internalError(what interface{}, w http.ResponseWriter, text string, err error) {
-	fs.Stats.Error(err)
+	fs.CountError(err)
 	fs.Errorf(what, "%s: %v", text, err)
 	http.Error(w, text+".", http.StatusInternalServerError)
 }
@@ -192,8 +193,8 @@ func (s *server) serveDir(w http.ResponseWriter, r *http.Request, dirRemote stri
 	}
 
 	// Account the transfer
-	fs.Stats.Transferring(dirRemote)
-	defer fs.Stats.DoneTransferring(dirRemote, true)
+	accounting.Stats.Transferring(dirRemote)
+	defer accounting.Stats.DoneTransferring(dirRemote, true)
 
 	fs.Infof(dirRemote, "%s: Serving directory", r.RemoteAddr)
 	err = indexTemplate.Execute(w, indexData{
@@ -259,8 +260,8 @@ func (s *server) serveFile(w http.ResponseWriter, r *http.Request, remote string
 	}()
 
 	// Account the transfer
-	fs.Stats.Transferring(remote)
-	defer fs.Stats.DoneTransferring(remote, true)
+	accounting.Stats.Transferring(remote)
+	defer accounting.Stats.DoneTransferring(remote, true)
 	// FIXME in = fs.NewAccount(in, obj).WithBuffer() // account the transfer
 
 	// Serve the file
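
Transfer statistics move from `fs.Stats` to the global `accounting.Stats` here. A sketch of the bracketing pattern the serve code now uses, directly mirroring the calls in the hunks above (the function name is hypothetical):

    package main

    import (
    	"github.com/ncw/rclone/fs/accounting"
    )

    func serveObject(remote string) {
    	// Mark the transfer as in progress so it shows up in the stats...
    	accounting.Stats.Transferring(remote)
    	// ...and mark it done (true = success) when we finish.
    	defer accounting.Stats.DoneTransferring(remote, true)
    	// ... copy the data ...
    }

    func main() {
    	serveObject("file.txt")
    }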
@@ -14,6 +14,8 @@ import (
 
 	_ "github.com/ncw/rclone/backend/local"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/filter"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
@@ -47,14 +49,14 @@ func startServer(t *testing.T, f fs.Fs) {
 
 func TestInit(t *testing.T) {
 	// Configure the remote
-	fs.LoadConfig()
+	config.LoadConfig()
 	// fs.Config.LogLevel = fs.LogLevelDebug
 	// fs.Config.DumpHeaders = true
 	// fs.Config.DumpBodies = true
 
 	// exclude files called hidden.txt and directories called hidden
-	require.NoError(t, fs.Config.Filter.AddRule("- hidden.txt"))
-	require.NoError(t, fs.Config.Filter.AddRule("- hidden/**"))
+	require.NoError(t, filter.Active.AddRule("- hidden.txt"))
+	require.NoError(t, filter.Active.AddRule("- hidden/**"))
 
 	// Create a test Fs
 	f, err := fs.NewFs("testdata/files")
@@ -9,6 +9,7 @@ import (
 
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/log"
 	"github.com/ncw/rclone/vfs"
 	"github.com/ncw/rclone/vfs/vfsflags"
 	"github.com/spf13/cobra"
@@ -96,7 +97,7 @@ func (w *WebDAV) logRequest(r *http.Request, err error) {
 
 // Mkdir creates a directory
 func (w *WebDAV) Mkdir(ctx context.Context, name string, perm os.FileMode) (err error) {
-	defer fs.Trace(name, "perm=%v", perm)("err = %v", &err)
+	defer log.Trace(name, "perm=%v", perm)("err = %v", &err)
 	dir, leaf, err := w.vfs.StatParent(name)
 	if err != nil {
 		return err
@@ -107,13 +108,13 @@ func (w *WebDAV) Mkdir(ctx context.Context, name string, perm os.FileMode) (err
 
 // OpenFile opens a file or a directory
 func (w *WebDAV) OpenFile(ctx context.Context, name string, flags int, perm os.FileMode) (file webdav.File, err error) {
-	defer fs.Trace(name, "flags=%v, perm=%v", flags, perm)("err = %v", &err)
+	defer log.Trace(name, "flags=%v, perm=%v", flags, perm)("err = %v", &err)
 	return w.vfs.OpenFile(name, flags, perm)
 }
 
 // RemoveAll removes a file or a directory and its contents
 func (w *WebDAV) RemoveAll(ctx context.Context, name string) (err error) {
-	defer fs.Trace(name, "")("err = %v", &err)
+	defer log.Trace(name, "")("err = %v", &err)
 	node, err := w.vfs.Stat(name)
 	if err != nil {
 		return err
@@ -127,13 +128,13 @@ func (w *WebDAV) RemoveAll(ctx context.Context, name string) (err error) {
 
 // Rename a file or a directory
 func (w *WebDAV) Rename(ctx context.Context, oldName, newName string) (err error) {
-	defer fs.Trace(oldName, "newName=%q", newName)("err = %v", &err)
+	defer log.Trace(oldName, "newName=%q", newName)("err = %v", &err)
 	return w.vfs.Rename(oldName, newName)
 }
 
 // Stat returns info about the file or directory
 func (w *WebDAV) Stat(ctx context.Context, name string) (fi os.FileInfo, err error) {
-	defer fs.Trace(name, "")("fi=%+v, err = %v", &fi, &err)
+	defer log.Trace(name, "")("fi=%+v, err = %v", &fi, &err)
 	return w.vfs.Stat(name)
 }
 
@@ -4,7 +4,7 @@ import (
 	"os"
 
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -23,7 +23,7 @@ is in the same format as the standard sha1sum tool produces.
 		cmd.CheckArgs(1, 1, command, args)
 		fsrc := cmd.NewFsSrc(args)
 		cmd.Run(false, false, command, func() error {
-			return fs.Sha1sum(fsrc, os.Stdout)
+			return operations.Sha1sum(fsrc, os.Stdout)
 		})
 	},
 }
@@ -5,6 +5,7 @@ import (
 
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/operations"
 	"github.com/spf13/cobra"
 )
 
@@ -19,7 +20,7 @@ var commandDefintion = &cobra.Command{
 		cmd.CheckArgs(1, 1, command, args)
 		fsrc := cmd.NewFsSrc(args)
 		cmd.Run(false, false, command, func() error {
-			objects, size, err := fs.Count(fsrc)
+			objects, size, err := operations.Count(fsrc)
 			if err != nil {
 				return err
 			}
@@ -2,7 +2,7 @@ package sync
 
 import (
 	"github.com/ncw/rclone/cmd"
-	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/sync"
 	"github.com/spf13/cobra"
 )
 
@@ -37,7 +37,7 @@ go there.
 		cmd.CheckArgs(2, 2, command, args)
 		fsrc, fdst := cmd.NewFsSrcDst(args)
 		cmd.Run(true, true, command, func() error {
-			return fs.Sync(fdst, fsrc)
+			return sync.Sync(fdst, fsrc)
 		})
 	},
 }
@@ -6,6 +6,7 @@ import (
 
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/object"
 	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
 )
 
@@ -55,7 +56,7 @@ func Touch(fsrc fs.Fs, srcFileName string) error {
 	if err != nil {
 		if !notCreateNewFile {
 			var buffer []byte
-			src := fs.NewStaticObjectInfo(srcFileName, timeAtr, int64(len(buffer)), true, nil, fsrc)
+			src := object.NewStaticObjectInfo(srcFileName, timeAtr, int64(len(buffer)), true, nil, fsrc)
 			_, err = fsrc.Put(bytes.NewBuffer(buffer), src)
 			if err != nil {
 				return err
@@ -3,7 +3,6 @@ package tree
 import (
 	"fmt"
 	"io"
-	"log"
 	"os"
 	"path"
 	"path/filepath"
@@ -13,6 +12,8 @@ import (
 	"github.com/a8m/tree"
 	"github.com/ncw/rclone/cmd"
 	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/log"
+	"github.com/ncw/rclone/fs/walk"
 	"github.com/pkg/errors"
 	"github.com/spf13/cobra"
 )
@@ -88,7 +89,7 @@ The tree command has many options for controlling the listing which
 are compatible with the tree command. Note that not all of them have
 short options as they conflict with rclone's short options.
 `,
-	Run: func(command *cobra.Command, args []string) {
+	RunE: func(command *cobra.Command, args []string) error {
 		cmd.CheckArgs(1, 1, command, args)
 		fsrc := cmd.NewFsSrc(args)
 		outFile := os.Stdout
@@ -96,7 +97,7 @@ short options as they conflict with rclone's short options.
 			var err error
 			outFile, err = os.Create(outFileName)
 			if err != nil {
-				log.Fatalf("Failed to create output file: %v", err)
+				return errors.Errorf("failed to create output file: %v", err)
 			}
 		}
 		opts.VerSort = opts.VerSort || sort == "version"
@@ -110,12 +111,13 @@ short options as they conflict with rclone's short options.
 		cmd.Run(false, false, command, func() error {
 			return Tree(fsrc, outFile, &opts)
 		})
+		return nil
 	},
 }
 
 // Tree lists fsrc to outFile using the Options passed in
 func Tree(fsrc fs.Fs, outFile io.Writer, opts *tree.Options) error {
-	dirs, err := fs.NewDirTree(fsrc, "", false, opts.DeepLevel)
+	dirs, err := walk.NewDirTree(fsrc, "", false, opts.DeepLevel)
 	if err != nil {
 		return err
 	}
@@ -183,22 +185,22 @@ func (to *FileInfo) String() string {
 }
 
 // Fs maps an fs.Fs into a tree.Fs
-type Fs fs.DirTree
+type Fs walk.DirTree
 
 // NewFs creates a new tree
-func NewFs(dirs fs.DirTree) Fs {
+func NewFs(dirs walk.DirTree) Fs {
 	return Fs(dirs)
 }
 
 // Stat returns info about the file
 func (dirs Fs) Stat(filePath string) (fi os.FileInfo, err error) {
-	defer fs.Trace(nil, "filePath=%q", filePath)("fi=%+v, err=%v", &fi, &err)
+	defer log.Trace(nil, "filePath=%q", filePath)("fi=%+v, err=%v", &fi, &err)
 	filePath = filepath.ToSlash(filePath)
 	filePath = strings.TrimLeft(filePath, "/")
 	if filePath == "" {
 		return &FileInfo{fs.NewDir("", time.Now())}, nil
 	}
-	_, entry := fs.DirTree(dirs).Find(filePath)
+	_, entry := walk.DirTree(dirs).Find(filePath)
 	if entry == nil {
 		return nil, errors.Errorf("Couldn't find %q in directory cache", filePath)
 	}
@@ -207,7 +209,7 @@ func (dirs Fs) Stat(filePath string) (fi os.FileInfo, err error) {
 
 // ReadDir returns info about the directory and fills up the directory cache
 func (dirs Fs) ReadDir(dir string) (names []string, err error) {
-	defer fs.Trace(nil, "dir=%s", dir)("names=%+v, err=%v", &names, &err)
+	defer log.Trace(nil, "dir=%s", dir)("names=%+v, err=%v", &names, &err)
 	dir = filepath.ToSlash(dir)
 	dir = strings.TrimLeft(dir, "/")
 	entries, ok := dirs[dir]
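
Beyond the package moves, the tree command also switches from cobra's `Run` to `RunE`, returning errors to the caller instead of calling `log.Fatalf` inside the command. A sketch of that pattern in isolation (the command and message are hypothetical):

    package main

    import (
    	"github.com/pkg/errors"
    	"github.com/spf13/cobra"
    )

    var commandDefinition = &cobra.Command{
    	Use: "example",
    	RunE: func(command *cobra.Command, args []string) error {
    		if len(args) == 0 {
    			// Returning the error lets cobra report it consistently.
    			return errors.New("need at least one argument")
    		}
    		return nil
    	},
    }

    func main() {
    	_ = commandDefinition.Execute()
    }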
@@ -1,6 +1,5 @@
-// Accounting and limiting reader
-package fs
+// Package accounting providers an accounting and limiting reader
+package accounting
 
 import (
 	"bytes"
@@ -12,6 +11,8 @@ import (
 	"time"
 
 	"github.com/VividCortex/ewma"
+	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/asyncreader"
 	"golang.org/x/net/context" // switch to "context" when we stop supporting go1.6
 	"golang.org/x/time/rate"
 )
@@ -24,31 +25,36 @@ var (
 	prevTokenBucket   = tokenBucket
 	bwLimitToggledOff = false
 	currLimitMu       sync.Mutex // protects changes to the timeslot
-	currLimit         BwTimeSlot
+	currLimit         fs.BwTimeSlot
 )
 
+func init() {
+	// Set the function pointer up in fs
+	fs.CountError = Stats.Error
+}
+
 const maxBurstSize = 1 * 1024 * 1024 // must be bigger than the biggest request
 
 // make a new empty token bucket with the bandwidth given
-func newTokenBucket(bandwidth SizeSuffix) *rate.Limiter {
+func newTokenBucket(bandwidth fs.SizeSuffix) *rate.Limiter {
 	newTokenBucket := rate.NewLimiter(rate.Limit(bandwidth), maxBurstSize)
 	// empty the bucket
 	err := newTokenBucket.WaitN(context.Background(), maxBurstSize)
 	if err != nil {
-		Errorf(nil, "Failed to empty token bucket: %v", err)
+		fs.Errorf(nil, "Failed to empty token bucket: %v", err)
 	}
 	return newTokenBucket
 }
 
-// Start the token bucket if necessary
-func startTokenBucket() {
+// StartTokenBucket starts the token bucket if necessary
+func StartTokenBucket() {
 	currLimitMu.Lock()
-	currLimit := bwLimit.LimitAt(time.Now())
+	currLimit := fs.Config.BwLimit.LimitAt(time.Now())
 	currLimitMu.Unlock()
 
-	if currLimit.bandwidth > 0 {
-		tokenBucket = newTokenBucket(currLimit.bandwidth)
-		Infof(nil, "Starting bandwidth limiter at %vBytes/s", &currLimit.bandwidth)
+	if currLimit.Bandwidth > 0 {
+		tokenBucket = newTokenBucket(currLimit.Bandwidth)
+		fs.Infof(nil, "Starting bandwidth limiter at %vBytes/s", &currLimit.Bandwidth)
 
 		// Start the SIGUSR2 signal handler to toggle bandwidth.
 		// This function does nothing in windows systems.
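
The `fs.CountError = Stats.Error` assignment in the new `init()` is the glue that lets command code report errors without importing accounting, which would otherwise create an import cycle now that accounting imports fs. A self-contained, single-file analogue of the pattern (all names here are hypothetical stand-ins for the two real packages):

    package main

    import "fmt"

    // In the real code this lives in fs: a function variable that is a
    // no-op until a higher-level package wires it up.
    var CountError = func(err error) {}

    // In the real code accounting assigns fs.CountError = Stats.Error
    // from its init(), so fs never needs to import accounting.
    func init() {
    	CountError = func(err error) { fmt.Println("counted:", err) }
    }

    func main() {
    	CountError(fmt.Errorf("boom"))
    }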
@@ -56,21 +62,21 @@ func startTokenBucket() {
 	}
 }
 
-// startTokenTicker creates a ticker to update the bandwidth limiter every minute.
-func startTokenTicker() {
+// StartTokenTicker creates a ticker to update the bandwidth limiter every minute.
+func StartTokenTicker() {
 	// If the timetable has a single entry or was not specified, we don't need
 	// a ticker to update the bandwidth.
-	if len(bwLimit) <= 1 {
+	if len(fs.Config.BwLimit) <= 1 {
 		return
 	}
 
 	ticker := time.NewTicker(time.Minute)
 	go func() {
 		for range ticker.C {
-			limitNow := bwLimit.LimitAt(time.Now())
+			limitNow := fs.Config.BwLimit.LimitAt(time.Now())
 			currLimitMu.Lock()
 
-			if currLimit.bandwidth != limitNow.bandwidth {
+			if currLimit.Bandwidth != limitNow.Bandwidth {
 				tokenBucketMu.Lock()
 
 				// If bwlimit is toggled off, the change should only
@@ -84,17 +90,17 @@ func startTokenTicker() {
 				}
 
 				// Set new bandwidth. If unlimited, set tokenbucket to nil.
-				if limitNow.bandwidth > 0 {
-					*targetBucket = newTokenBucket(limitNow.bandwidth)
+				if limitNow.Bandwidth > 0 {
+					*targetBucket = newTokenBucket(limitNow.Bandwidth)
 					if bwLimitToggledOff {
-						Logf(nil, "Scheduled bandwidth change. "+
-							"Limit will be set to %vBytes/s when toggled on again.", &limitNow.bandwidth)
+						fs.Logf(nil, "Scheduled bandwidth change. "+
+							"Limit will be set to %vBytes/s when toggled on again.", &limitNow.Bandwidth)
 					} else {
-						Logf(nil, "Scheduled bandwidth change. Limit set to %vBytes/s", &limitNow.bandwidth)
+						fs.Logf(nil, "Scheduled bandwidth change. Limit set to %vBytes/s", &limitNow.Bandwidth)
 					}
 				} else {
 					*targetBucket = nil
-					Logf(nil, "Scheduled bandwidth change. Bandwidth limits disabled")
+					fs.Logf(nil, "Scheduled bandwidth change. Bandwidth limits disabled")
 				}
 
 				currLimit = limitNow
@@ -117,7 +123,7 @@ type inProgress struct {
 // newInProgress makes a new inProgress object
 func newInProgress() *inProgress {
 	return &inProgress{
-		m: make(map[string]*Account, Config.Transfers),
+		m: make(map[string]*Account, fs.Config.Transfers),
 	}
 }
 
@@ -181,8 +187,8 @@ type StatsInfo struct {
 // NewStats cretates an initialised StatsInfo
 func NewStats() *StatsInfo {
 	return &StatsInfo{
-		checking:     make(stringSet, Config.Checkers),
-		transferring: make(stringSet, Config.Transfers),
+		checking:     make(stringSet, fs.Config.Checkers),
+		transferring: make(stringSet, fs.Config.Transfers),
 		start:        time.Now(),
 		inProgress:   newInProgress(),
 	}
@@ -201,7 +207,7 @@ func (s *StatsInfo) String() string {
 	dtRounded := dt - (dt % (time.Second / 10))
 	buf := &bytes.Buffer{}
 
-	if Config.DataRateUnit == "bits" {
+	if fs.Config.DataRateUnit == "bits" {
 		speed = speed * 8
 	}
 
@@ -212,7 +218,7 @@ Checks: %10d
 Transferred: %10d
 Elapsed time: %10v
 `,
-		SizeSuffix(s.bytes).Unit("Bytes"), SizeSuffix(speed).Unit(strings.Title(Config.DataRateUnit)+"/s"),
+		fs.SizeSuffix(s.bytes).Unit("Bytes"), fs.SizeSuffix(speed).Unit(strings.Title(fs.Config.DataRateUnit)+"/s"),
 		s.errors,
 		s.checks,
 		s.transfers,
@@ -228,7 +234,7 @@ Elapsed time: %10v
 
 // Log outputs the StatsInfo to the log
 func (s *StatsInfo) Log() {
-	LogLevelPrintf(Config.StatsLogLevel, nil, "%v\n", s)
+	fs.LogLevelPrintf(fs.Config.StatsLogLevel, nil, "%v\n", s)
 }
 
 // Bytes updates the stats for bytes bytes
@@ -375,7 +381,7 @@ func NewAccountSizeName(in io.ReadCloser, size int64, name string) *Account {
 }
 
 // NewAccount makes a Account reader for an object
-func NewAccount(in io.ReadCloser, obj Object) *Account {
+func NewAccount(in io.ReadCloser, obj fs.Object) *Account {
 	return NewAccountSizeName(in, obj.Size(), obj.Remote())
 }
 
@@ -383,16 +389,16 @@ func NewAccount(in io.ReadCloser, obj Object) *Account {
 func (acc *Account) WithBuffer() *Account {
 	acc.withBuf = true
 	var buffers int
-	if acc.size >= int64(Config.BufferSize) || acc.size == -1 {
-		buffers = int(int64(Config.BufferSize) / asyncBufferSize)
+	if acc.size >= int64(fs.Config.BufferSize) || acc.size == -1 {
+		buffers = int(int64(fs.Config.BufferSize) / asyncreader.BufferSize)
 	} else {
-		buffers = int(acc.size / asyncBufferSize)
+		buffers = int(acc.size / asyncreader.BufferSize)
 	}
 	// On big files add a buffer
 	if buffers > 0 {
-		in, err := newAsyncReader(acc.in, buffers)
+		in, err := asyncreader.New(acc.in, buffers)
 		if err != nil {
-			Errorf(acc.name, "Failed to make buffer: %v", err)
+			fs.Errorf(acc.name, "Failed to make buffer: %v", err)
 		} else {
 			acc.in = in
 		}
@@ -409,7 +415,7 @@ func (acc *Account) GetReader() io.ReadCloser {
 
 // StopBuffering stops the async buffer doing any more buffering
 func (acc *Account) StopBuffering() {
-	if asyncIn, ok := acc.in.(*asyncReader); ok {
+	if asyncIn, ok := acc.in.(*asyncreader.AsyncReader); ok {
 		asyncIn.Abandon()
 	}
 }
@@ -484,7 +490,7 @@ func (acc *Account) read(in io.Reader, p []byte) (n int, err error) {
 	if tokenBucket != nil {
 		tbErr := tokenBucket.WaitN(context.Background(), n)
 		if tbErr != nil {
-			Errorf(nil, "Token bucket error: %v", err)
+			fs.Errorf(nil, "Token bucket error: %v", err)
 		}
 	}
 	tokenBucketMu.Unlock()
@@ -572,14 +578,14 @@ func (acc *Account) String() string {
 		}
 	}
 	name := []rune(acc.name)
-	if Config.StatsFileNameLength > 0 {
-		if len(name) > Config.StatsFileNameLength {
-			where := len(name) - Config.StatsFileNameLength
+	if fs.Config.StatsFileNameLength > 0 {
+		if len(name) > fs.Config.StatsFileNameLength {
+			where := len(name) - fs.Config.StatsFileNameLength
 			name = append([]rune{'.', '.', '.'}, name[where:]...)
 		}
 	}
 
-	if Config.DataRateUnit == "bits" {
+	if fs.Config.DataRateUnit == "bits" {
 		cur = cur * 8
 	}
 
@@ -588,12 +594,12 @@ func (acc *Account) String() string {
|
||||||
percentageDone = int(100 * float64(a) / float64(b))
|
percentageDone = int(100 * float64(a) / float64(b))
|
||||||
}
|
}
|
||||||
|
|
||||||
done := fmt.Sprintf("%2d%% /%s", percentageDone, SizeSuffix(b))
|
done := fmt.Sprintf("%2d%% /%s", percentageDone, fs.SizeSuffix(b))
|
||||||
|
|
||||||
return fmt.Sprintf("%45s: %s, %s/s, %s",
|
return fmt.Sprintf("%45s: %s, %s/s, %s",
|
||||||
string(name),
|
string(name),
|
||||||
done,
|
done,
|
||||||
SizeSuffix(cur),
|
fs.SizeSuffix(cur),
|
||||||
etas,
|
etas,
|
||||||
)
|
)
|
||||||
}
|
}
|
||||||
|
@ -633,10 +639,10 @@ func (a *accountStream) Read(p []byte) (n int, err error) {
|
||||||
// AccountByPart turns off whole file accounting
|
// AccountByPart turns off whole file accounting
|
||||||
//
|
//
|
||||||
// Returns the current account or nil if not found
|
// Returns the current account or nil if not found
|
||||||
func AccountByPart(obj Object) *Account {
|
func AccountByPart(obj fs.Object) *Account {
|
||||||
acc := Stats.inProgress.get(obj.Remote())
|
acc := Stats.inProgress.get(obj.Remote())
|
||||||
if acc == nil {
|
if acc == nil {
|
||||||
Debugf(obj, "Didn't find object to account part transfer")
|
fs.Debugf(obj, "Didn't find object to account part transfer")
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
acc.disableWholeFileAccounting()
|
acc.disableWholeFileAccounting()
|
||||||
|
@ -647,7 +653,7 @@ func AccountByPart(obj Object) *Account {
|
||||||
//
|
//
|
||||||
// It disables the whole file counter and returns an io.Reader to wrap
|
// It disables the whole file counter and returns an io.Reader to wrap
|
||||||
// a segment of the transfer.
|
// a segment of the transfer.
|
||||||
func AccountPart(obj Object, in io.Reader) io.Reader {
|
func AccountPart(obj fs.Object, in io.Reader) io.Reader {
|
||||||
acc := AccountByPart(obj)
|
acc := AccountByPart(obj)
|
||||||
if acc == nil {
|
if acc == nil {
|
||||||
return in
|
return in
|
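
The hunks above are mechanical: package-level names move behind the new fs import and the timetable fields gain exported names, while the throttling logic is untouched. Account.read still pays for every byte out of a shared token bucket via tokenBucket.WaitN. For orientation, a minimal standalone sketch of that pattern, assuming golang.org/x/time/rate as the limiter (rclone's newTokenBucket is not shown in this diff, so the limiter construction is an assumption):

    package main

    import (
        "context"
        "io"

        "golang.org/x/time/rate"
    )

    // throttledReader charges each byte read against a token bucket,
    // the same shape as Account.read waiting on tokenBucket.WaitN.
    type throttledReader struct {
        in io.Reader
        tb *rate.Limiter // assumed stand-in for rclone's token bucket
    }

    func (r *throttledReader) Read(p []byte) (n int, err error) {
        n, err = r.in.Read(p)
        if n > 0 {
            // Block until n tokens (bytes) are available.
            if tbErr := r.tb.WaitN(context.Background(), n); tbErr != nil {
                return n, tbErr
            }
        }
        return n, err
    }
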
@@ -3,7 +3,7 @@
 
 // +build !darwin,!dragonfly,!freebsd,!linux,!netbsd,!openbsd,!solaris
 
-package fs
+package accounting
 
 // startSignalHandler() is Unix specific and does nothing under non-Unix
 // platforms.
@@ -3,12 +3,14 @@
 
 // +build darwin dragonfly freebsd linux netbsd openbsd solaris
 
-package fs
+package accounting
 
 import (
 	"os"
 	"os/signal"
 	"syscall"
+
+	"github.com/ncw/rclone/fs"
 )
 
 // startSignalHandler() sets a signal handler to catch SIGUSR2 and toggle throttling.
@@ -28,7 +30,7 @@ func startSignalHandler() {
 				s = "enabled"
 			}
 			tokenBucketMu.Unlock()
-			Logf(nil, "Bandwidth limit %s by user", s)
+			fs.Logf(nil, "Bandwidth limit %s by user", s)
 		}
 	}()
 }
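
Only the log call changes here; the SIGUSR2 plumbing stays as it was. As a reminder of the shape of that plumbing, a self-contained sketch (the toggle callback is illustrative, not rclone's actual state handling):

    package main

    import (
        "log"
        "os"
        "os/signal"
        "syscall"
    )

    // watchToggle flips a limiter on or off each time SIGUSR2 arrives.
    // The toggle func is a hypothetical stand-in for the real handler body.
    func watchToggle(toggle func() (enabled bool)) {
        signals := make(chan os.Signal, 1)
        signal.Notify(signals, syscall.SIGUSR2)
        go func() {
            for range signals {
                s := "disabled"
                if toggle() {
                    s = "enabled"
                }
                log.Printf("Bandwidth limit %s by user", s)
            }
        }()
    }
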
@@ -1,14 +1,18 @@
-package fs
+// Package asyncreader provides an asynchronous reader which reads
+// independently of write
+package asyncreader
 
 import (
 	"io"
 	"sync"
 
+	"github.com/ncw/rclone/lib/readers"
 	"github.com/pkg/errors"
 )
 
 const (
-	asyncBufferSize = 1024 * 1024
+	// BufferSize is the default size of the async buffer
+	BufferSize       = 1024 * 1024
 	softStartInitial = 4 * 1024
 )
 
@@ -18,11 +22,11 @@ var asyncBufferPool = sync.Pool{
 
 var errorStreamAbandoned = errors.New("stream abandoned")
 
-// asyncReader will do async read-ahead from the input reader
+// AsyncReader will do async read-ahead from the input reader
 // and make the data available as an io.Reader.
 // This should be fully transparent, except that once an error
 // has been returned from the Reader, it will not recover.
-type asyncReader struct {
+type AsyncReader struct {
 	in    io.ReadCloser // Input reader
 	ready chan *buffer  // Buffers ready to be handed to the reader
 	token chan struct{} // Tokens which allow a buffer to be taken
@@ -36,25 +40,25 @@ type AsyncReader struct {
 	mu sync.Mutex // lock for Read/WriteTo/Abandon/Close
 }
 
-// newAsyncReader returns a reader that will asynchronously read from
+// New returns a reader that will asynchronously read from
-// the supplied Reader into a number of buffers each of size asyncBufferSize
+// the supplied Reader into a number of buffers each of size BufferSize
 // It will start reading from the input at once, maybe even before this
 // function has returned.
 // The input can be read from the returned reader.
 // When done use Close to release the buffers and close the supplied input.
-func newAsyncReader(rd io.ReadCloser, buffers int) (*asyncReader, error) {
+func New(rd io.ReadCloser, buffers int) (*AsyncReader, error) {
 	if buffers <= 0 {
 		return nil, errors.New("number of buffers too small")
 	}
 	if rd == nil {
 		return nil, errors.New("nil reader supplied")
 	}
-	a := &asyncReader{}
+	a := &AsyncReader{}
 	a.init(rd, buffers)
 	return a, nil
 }
 
-func (a *asyncReader) init(rd io.ReadCloser, buffers int) {
+func (a *AsyncReader) init(rd io.ReadCloser, buffers int) {
 	a.in = rd
 	a.ready = make(chan *buffer, buffers)
 	a.token = make(chan struct{}, buffers)
@@ -78,7 +82,7 @@ func (a *AsyncReader) init(rd io.ReadCloser, buffers int) {
 		select {
 		case <-a.token:
 			b := a.getBuffer()
-			if a.size < asyncBufferSize {
+			if a.size < BufferSize {
 				b.buf = b.buf[:a.size]
 				a.size <<= 1
 			}
@@ -95,19 +99,19 @@ func (a *AsyncReader) init(rd io.ReadCloser, buffers int) {
 }
 
 // return the buffer to the pool (clearing it)
-func (a *asyncReader) putBuffer(b *buffer) {
+func (a *AsyncReader) putBuffer(b *buffer) {
 	b.clear()
 	asyncBufferPool.Put(b)
 }
 
 // get a buffer from the pool
-func (a *asyncReader) getBuffer() *buffer {
+func (a *AsyncReader) getBuffer() *buffer {
 	b := asyncBufferPool.Get().(*buffer)
 	return b
 }
 
 // Read will return the next available data.
-func (a *asyncReader) fill() (err error) {
+func (a *AsyncReader) fill() (err error) {
 	if a.cur.isEmpty() {
 		if a.cur != nil {
 			a.putBuffer(a.cur)
@@ -128,7 +132,7 @@ func (a *AsyncReader) fill() (err error) {
 }
 
 // Read will return the next available data.
-func (a *asyncReader) Read(p []byte) (n int, err error) {
+func (a *AsyncReader) Read(p []byte) (n int, err error) {
 	a.mu.Lock()
 	defer a.mu.Unlock()
 
@@ -153,7 +157,7 @@ func (a *AsyncReader) Read(p []byte) (n int, err error) {
 // WriteTo writes data to w until there's no more data to write or when an error occurs.
 // The return value n is the number of bytes written.
 // Any error encountered during the write is also returned.
-func (a *asyncReader) WriteTo(w io.Writer) (n int64, err error) {
+func (a *AsyncReader) WriteTo(w io.Writer) (n int64, err error) {
 	a.mu.Lock()
 	defer a.mu.Unlock()
 
@@ -177,8 +181,8 @@ func (a *AsyncReader) WriteTo(w io.Writer) (n int64, err error) {
 }
 
 // Abandon will ensure that the underlying async reader is shut down.
-// It will NOT close the input supplied on newAsyncReader.
+// It will NOT close the input supplied on New.
-func (a *asyncReader) Abandon() {
+func (a *AsyncReader) Abandon() {
 	select {
 	case <-a.exit:
 		// Do nothing if reader routine already exited
@@ -202,8 +206,8 @@ func (a *AsyncReader) Abandon() {
 }
 
 // Close will ensure that the underlying async reader is shut down.
-// It will also close the input supplied on newAsyncReader.
+// It will also close the input supplied on New.
-func (a *asyncReader) Close() (err error) {
+func (a *AsyncReader) Close() (err error) {
 	a.Abandon()
 	if a.closed {
 		return nil
@@ -223,7 +227,7 @@ type buffer struct {
 
 func newBuffer() *buffer {
 	return &buffer{
-		buf: make([]byte, asyncBufferSize),
+		buf: make([]byte, BufferSize),
 		err: nil,
 	}
 }
@@ -252,7 +256,7 @@ func (b *buffer) isEmpty() bool {
 // Any error encountered during the read is returned.
 func (b *buffer) read(rd io.Reader) error {
 	var n int
-	n, b.err = ReadFill(rd, b.buf)
+	n, b.err = readers.ReadFill(rd, b.buf)
 	b.buf = b.buf[0:n]
 	b.offset = 0
 	return b.err
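
With the rename, the read-ahead buffering becomes a public API. A usage sketch based only on the exported signatures visible above (New, Read, Close):

    package main

    import (
        "io/ioutil"
        "log"
        "strings"

        "github.com/ncw/rclone/fs/asyncreader"
    )

    func main() {
        in := ioutil.NopCloser(strings.NewReader("some data to read ahead"))
        // 4 read-ahead buffers of up to BufferSize bytes each.
        ar, err := asyncreader.New(in, 4)
        if err != nil {
            log.Fatal(err)
        }
        buf := make([]byte, 8)
        n, _ := ar.Read(buf) // filling happens asynchronously behind this call
        log.Printf("read %d bytes: %q", n, buf[:n])
        _ = ar.Close() // also closes the wrapped input
    }
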
@@ -1,4 +1,4 @@
-package fs
+package asyncreader
 
 import (
 	"bufio"
@@ -17,7 +17,7 @@ import (
 
 func TestAsyncReader(t *testing.T) {
 	buf := ioutil.NopCloser(bytes.NewBufferString("Testbuffer"))
-	ar, err := newAsyncReader(buf, 4)
+	ar, err := New(buf, 4)
 	require.NoError(t, err)
 
 	var dst = make([]byte, 100)
@@ -42,7 +42,7 @@ func TestAsyncReader(t *testing.T) {
 
 	// Test Close without reading everything
 	buf = ioutil.NopCloser(bytes.NewBuffer(make([]byte, 50000)))
-	ar, err = newAsyncReader(buf, 4)
+	ar, err = New(buf, 4)
 	require.NoError(t, err)
 	err = ar.Close()
 	require.NoError(t, err)
@@ -51,7 +51,7 @@ func TestAsyncReader(t *testing.T) {
 
 func TestAsyncWriteTo(t *testing.T) {
 	buf := ioutil.NopCloser(bytes.NewBufferString("Testbuffer"))
-	ar, err := newAsyncReader(buf, 4)
+	ar, err := New(buf, 4)
 	require.NoError(t, err)
 
 	var dst = &bytes.Buffer{}
@@ -70,14 +70,14 @@ func TestAsyncWriteTo(t *testing.T) {
 
 func TestAsyncReaderErrors(t *testing.T) {
 	// test nil reader
-	_, err := newAsyncReader(nil, 4)
+	_, err := New(nil, 4)
 	require.Error(t, err)
 
 	// invalid buffer number
 	buf := ioutil.NopCloser(bytes.NewBufferString("Testbuffer"))
-	_, err = newAsyncReader(buf, 0)
+	_, err = New(buf, 0)
 	require.Error(t, err)
-	_, err = newAsyncReader(buf, -1)
+	_, err = New(buf, -1)
 	require.Error(t, err)
 }
 
@@ -157,9 +157,9 @@ func TestAsyncReaderSizes(t *testing.T) {
 			bufsize := bufsizes[k]
 			read := readmaker.fn(strings.NewReader(text))
 			buf := bufio.NewReaderSize(read, bufsize)
-			ar, _ := newAsyncReader(ioutil.NopCloser(buf), l)
+			ar, _ := New(ioutil.NopCloser(buf), l)
 			s := bufreader.fn(ar)
-			// "timeout" expects the Reader to recover, asyncReader does not.
+			// "timeout" expects the Reader to recover, AsyncReader does not.
 			if s != text && readmaker.name != "timeout" {
 				t.Errorf("reader=%s fn=%s bufsize=%d want=%q got=%q",
 					readmaker.name, bufreader.name, bufsize, text, s)
@@ -196,14 +196,14 @@ func TestAsyncReaderWriteTo(t *testing.T) {
 			bufsize := bufsizes[k]
 			read := readmaker.fn(strings.NewReader(text))
 			buf := bufio.NewReaderSize(read, bufsize)
-			ar, _ := newAsyncReader(ioutil.NopCloser(buf), l)
+			ar, _ := New(ioutil.NopCloser(buf), l)
 			dst := &bytes.Buffer{}
 			_, err := ar.WriteTo(dst)
 			if err != nil && err != io.EOF && err != iotest.ErrTimeout {
 				t.Fatal("Copy:", err)
 			}
 			s := dst.String()
-			// "timeout" expects the Reader to recover, asyncReader does not.
+			// "timeout" expects the Reader to recover, AsyncReader does not.
 			if s != text && readmaker.name != "timeout" {
 				t.Errorf("reader=%s fn=%s bufsize=%d want=%q got=%q",
 					readmaker.name, bufreader.name, bufsize, text, s)
@@ -243,7 +243,7 @@ func (z *zeroReader) Close() error {
 // Test closing and abandoning
 func testAsyncReaderClose(t *testing.T, writeto bool) {
 	zr := &zeroReader{}
-	a, err := newAsyncReader(zr, 16)
+	a, err := New(zr, 16)
 	require.NoError(t, err)
 	var copyN int64
 	var copyErr error
fs/bwtimetable.go (new file, 132 lines)
@@ -0,0 +1,132 @@
+package fs
+
+import (
+	"fmt"
+	"strconv"
+	"strings"
+	"time"
+
+	"github.com/pkg/errors"
+)
+
+// BwTimeSlot represents a bandwidth configuration at a point in time.
+type BwTimeSlot struct {
+	HHMM      int
+	Bandwidth SizeSuffix
+}
+
+// BwTimetable contains all configured time slots.
+type BwTimetable []BwTimeSlot
+
+// String returns a printable representation of BwTimetable.
+func (x BwTimetable) String() string {
+	ret := []string{}
+	for _, ts := range x {
+		ret = append(ret, fmt.Sprintf("%04.4d,%s", ts.HHMM, ts.Bandwidth.String()))
+	}
+	return strings.Join(ret, " ")
+}
+
+// Set the bandwidth timetable.
+func (x *BwTimetable) Set(s string) error {
+	// The timetable is formatted as:
+	// "hh:mm,bandwidth hh:mm,banwidth..." ex: "10:00,10G 11:30,1G 18:00,off"
+	// If only a single bandwidth identifier is provided, we assume constant bandwidth.
+
+	if len(s) == 0 {
+		return errors.New("empty string")
+	}
+	// Single value without time specification.
+	if !strings.Contains(s, " ") && !strings.Contains(s, ",") {
+		ts := BwTimeSlot{}
+		if err := ts.Bandwidth.Set(s); err != nil {
+			return err
+		}
+		ts.HHMM = 0
+		*x = BwTimetable{ts}
+		return nil
+	}
+
+	for _, tok := range strings.Split(s, " ") {
+		tv := strings.Split(tok, ",")
+
+		// Format must be HH:MM,BW
+		if len(tv) != 2 {
+			return errors.Errorf("invalid time/bandwidth specification: %q", tok)
+		}
+
+		// Basic timespec sanity checking
+		HHMM := tv[0]
+		if len(HHMM) != 5 {
+			return errors.Errorf("invalid time specification (hh:mm): %q", HHMM)
+		}
+		hh, err := strconv.Atoi(HHMM[0:2])
+		if err != nil {
+			return errors.Errorf("invalid hour in time specification %q: %v", HHMM, err)
+		}
+		if hh < 0 || hh > 23 {
+			return errors.Errorf("invalid hour (must be between 00 and 23): %q", hh)
+		}
+		mm, err := strconv.Atoi(HHMM[3:])
+		if err != nil {
+			return errors.Errorf("invalid minute in time specification: %q: %v", HHMM, err)
+		}
+		if mm < 0 || mm > 59 {
+			return errors.Errorf("invalid minute (must be between 00 and 59): %q", hh)
+		}
+
+		ts := BwTimeSlot{
+			HHMM: (hh * 100) + mm,
+		}
+		// Bandwidth limit for this time slot.
+		if err := ts.Bandwidth.Set(tv[1]); err != nil {
+			return err
+		}
+		*x = append(*x, ts)
+	}
+	return nil
+}
+
+// LimitAt returns a BwTimeSlot for the time requested.
+func (x BwTimetable) LimitAt(tt time.Time) BwTimeSlot {
+	// If the timetable is empty, we return an unlimited BwTimeSlot starting at midnight.
+	if len(x) == 0 {
+		return BwTimeSlot{HHMM: 0, Bandwidth: -1}
+	}
+
+	HHMM := tt.Hour()*100 + tt.Minute()
+
+	// By default, we return the last element in the timetable. This
+	// satisfies two conditions: 1) If there's only one element it
+	// will always be selected, and 2) The last element of the table
+	// will "wrap around" until overriden by an earlier time slot.
+	// there's only one time slot in the timetable.
+	ret := x[len(x)-1]
+
+	mindif := 0
+	first := true
+
+	// Look for most recent time slot.
+	for _, ts := range x {
+		// Ignore the past
+		if HHMM < ts.HHMM {
+			continue
+		}
+		dif := ((HHMM / 100 * 60) + (HHMM % 100)) - ((ts.HHMM / 100 * 60) + (ts.HHMM % 100))
+		if first {
+			mindif = dif
+			first = false
+		}
+		if dif <= mindif {
+			mindif = dif
+			ret = ts
+		}
+	}
+
+	return ret
+}
+
+// Type of the value
+func (x BwTimetable) Type() string {
+	return "BwTimetable"
+}
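
A short usage sketch of the new type, grounded in the file above (Bandwidth is a SizeSuffix in bytes/s; -1 means the limit is off):

    package main

    import (
        "fmt"
        "log"
        "time"

        "github.com/ncw/rclone/fs"
    )

    func main() {
        var tt fs.BwTimetable
        // 512 kByte/s from 08:00, 10 MByte/s from 18:00, unlimited from 23:00.
        if err := tt.Set("08:00,512k 18:00,10M 23:00,off"); err != nil {
            log.Fatal(err)
        }
        slot := tt.LimitAt(time.Now())
        if slot.Bandwidth < 0 {
            fmt.Println("no bandwidth limit right now")
        } else {
            fmt.Printf("limit since %04d: %v/s\n", slot.HHMM, slot.Bandwidth)
        }
    }
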
fs/bwtimetable_test.go (new file, 113 lines)
@@ -0,0 +1,113 @@
+package fs
+
+import (
+	"testing"
+	"time"
+
+	"github.com/spf13/pflag"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+)
+
+// Check it satisfies the interface
+var _ pflag.Value = (*BwTimetable)(nil)
+
+func TestBwTimetableSet(t *testing.T) {
+	for _, test := range []struct {
+		in   string
+		want BwTimetable
+		err  bool
+	}{
+		{"", BwTimetable{}, true},
+		{"0", BwTimetable{BwTimeSlot{HHMM: 0, Bandwidth: 0}}, false},
+		{"666", BwTimetable{BwTimeSlot{HHMM: 0, Bandwidth: 666 * 1024}}, false},
+		{"10:20,666", BwTimetable{BwTimeSlot{HHMM: 1020, Bandwidth: 666 * 1024}}, false},
+		{
+			"11:00,333 13:40,666 23:50,10M 23:59,off",
+			BwTimetable{
+				BwTimeSlot{HHMM: 1100, Bandwidth: 333 * 1024},
+				BwTimeSlot{HHMM: 1340, Bandwidth: 666 * 1024},
+				BwTimeSlot{HHMM: 2350, Bandwidth: 10 * 1024 * 1024},
+				BwTimeSlot{HHMM: 2359, Bandwidth: -1},
+			},
+			false,
+		},
+		{"bad,bad", BwTimetable{}, true},
+		{"bad bad", BwTimetable{}, true},
+		{"bad", BwTimetable{}, true},
+		{"1000X", BwTimetable{}, true},
+		{"2401,666", BwTimetable{}, true},
+		{"1061,666", BwTimetable{}, true},
+	} {
+		tt := BwTimetable{}
+		err := tt.Set(test.in)
+		if test.err {
+			require.Error(t, err)
+		} else {
+			require.NoError(t, err)
+		}
+		assert.Equal(t, test.want, tt)
+	}
+}
+
+func TestBwTimetableLimitAt(t *testing.T) {
+	for _, test := range []struct {
+		tt   BwTimetable
+		now  time.Time
+		want BwTimeSlot
+	}{
+		{
+			BwTimetable{},
+			time.Date(2017, time.April, 20, 15, 0, 0, 0, time.UTC),
+			BwTimeSlot{HHMM: 0, Bandwidth: -1},
+		},
+		{
+			BwTimetable{BwTimeSlot{HHMM: 1100, Bandwidth: 333 * 1024}},
+			time.Date(2017, time.April, 20, 15, 0, 0, 0, time.UTC),
+			BwTimeSlot{HHMM: 1100, Bandwidth: 333 * 1024},
+		},
+		{
+			BwTimetable{
+				BwTimeSlot{HHMM: 1100, Bandwidth: 333 * 1024},
+				BwTimeSlot{HHMM: 1300, Bandwidth: 666 * 1024},
+				BwTimeSlot{HHMM: 2301, Bandwidth: 1024 * 1024},
+				BwTimeSlot{HHMM: 2350, Bandwidth: -1},
+			},
+			time.Date(2017, time.April, 20, 10, 15, 0, 0, time.UTC),
+			BwTimeSlot{HHMM: 2350, Bandwidth: -1},
+		},
+		{
+			BwTimetable{
+				BwTimeSlot{HHMM: 1100, Bandwidth: 333 * 1024},
+				BwTimeSlot{HHMM: 1300, Bandwidth: 666 * 1024},
+				BwTimeSlot{HHMM: 2301, Bandwidth: 1024 * 1024},
+				BwTimeSlot{HHMM: 2350, Bandwidth: -1},
+			},
+			time.Date(2017, time.April, 20, 11, 0, 0, 0, time.UTC),
+			BwTimeSlot{HHMM: 1100, Bandwidth: 333 * 1024},
+		},
+		{
+			BwTimetable{
+				BwTimeSlot{HHMM: 1100, Bandwidth: 333 * 1024},
+				BwTimeSlot{HHMM: 1300, Bandwidth: 666 * 1024},
+				BwTimeSlot{HHMM: 2301, Bandwidth: 1024 * 1024},
+				BwTimeSlot{HHMM: 2350, Bandwidth: -1},
+			},
+			time.Date(2017, time.April, 20, 13, 1, 0, 0, time.UTC),
+			BwTimeSlot{HHMM: 1300, Bandwidth: 666 * 1024},
+		},
+		{
+			BwTimetable{
+				BwTimeSlot{HHMM: 1100, Bandwidth: 333 * 1024},
+				BwTimeSlot{HHMM: 1300, Bandwidth: 666 * 1024},
+				BwTimeSlot{HHMM: 2301, Bandwidth: 1024 * 1024},
+				BwTimeSlot{HHMM: 2350, Bandwidth: -1},
+			},
+			time.Date(2017, time.April, 20, 23, 59, 0, 0, time.UTC),
+			BwTimeSlot{HHMM: 2350, Bandwidth: -1},
+		},
+	} {
+		slot := test.tt.LimitAt(test.now)
+		assert.Equal(t, test.want, slot)
+	}
+}
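
The third LimitAt case is the subtle one. At 10:15 every slot in the table is still in the future that day (1015 is less than 1100, 1300, 2301 and 2350), so the loop's "ignore the past" filter skips them all and the default ret = x[len(x)-1] stands: the 23:50 "off" slot wraps around from the previous evening. At exactly 11:00, by contrast, the 11:00 slot passes the filter with a minute difference of (11*60+0) - (11*60+0) = 0, the minimum, and is selected.
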
fs/config.go (1538 lines changed; diff suppressed because it is too large)

fs/config/config.go (new file, 1159 lines; diff suppressed because it is too large)
@@ -3,7 +3,7 @@
 
 // +build !darwin,!dragonfly,!freebsd,!linux,!netbsd,!openbsd,!solaris
 
-package fs
+package config
 
 // attemptCopyGroups tries to keep the group the same, which only makes sense
 // for system with user-group-world permission model.
@@ -4,7 +4,7 @@
 
 // +build !solaris,!plan9
 
-package fs
+package config
 
 import (
 	"fmt"
@@ -4,7 +4,7 @@
 
 // +build solaris plan9
 
-package fs
+package config
 
 // ReadPassword reads a password with echoing it to the terminal.
 func ReadPassword() string {
@@ -1,45 +1,15 @@
-package fs
+package config
 
 import (
-	"bytes"
-	"crypto/rand"
 	"io/ioutil"
 	"os"
 	"testing"
 
+	"github.com/ncw/rclone/fs"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
 
-func TestObscure(t *testing.T) {
-	for _, test := range []struct {
-		in   string
-		want string
-		iv   string
-	}{
-		{"", "YWFhYWFhYWFhYWFhYWFhYQ", "aaaaaaaaaaaaaaaa"},
-		{"potato", "YWFhYWFhYWFhYWFhYWFhYXMaGgIlEQ", "aaaaaaaaaaaaaaaa"},
-		{"potato", "YmJiYmJiYmJiYmJiYmJiYp3gcEWbAw", "bbbbbbbbbbbbbbbb"},
-	} {
-		cryptRand = bytes.NewBufferString(test.iv)
-		got, err := Obscure(test.in)
-		cryptRand = rand.Reader
-		assert.NoError(t, err)
-		assert.Equal(t, test.want, got)
-		recoveredIn, err := Reveal(got)
-		assert.NoError(t, err)
-		assert.Equal(t, test.in, recoveredIn, "not bidirectional")
-		// Now the Must variants
-		cryptRand = bytes.NewBufferString(test.iv)
-		got = MustObscure(test.in)
-		cryptRand = rand.Reader
-		assert.Equal(t, test.want, got)
-		recoveredIn = MustReveal(got)
-		assert.Equal(t, test.in, recoveredIn, "not bidirectional")
-
-	}
-}
-
 func TestCRUD(t *testing.T) {
 	configKey = nil // reset password
 	// create temp config file
@@ -54,39 +24,47 @@ func TestCRUD(t *testing.T) {
 
 	// temporarily adapt configuration
 	oldOsStdout := os.Stdout
-	oldConfigFile := configFile
-	oldConfig := Config
+	oldConfigPath := ConfigPath
+	oldConfig := fs.Config
 	oldConfigData := configData
 	oldReadLine := ReadLine
 	os.Stdout = nil
-	configFile = &path
-	Config = &ConfigInfo{}
+	ConfigPath = path
+	fs.Config = &fs.ConfigInfo{}
 	configData = nil
 	defer func() {
 		os.Stdout = oldOsStdout
-		configFile = oldConfigFile
+		ConfigPath = oldConfigPath
 		ReadLine = oldReadLine
-		Config = oldConfig
+		fs.Config = oldConfig
 		configData = oldConfigData
 	}()
 
 	LoadConfig()
 	assert.Equal(t, []string{}, configData.GetSectionList())
 
+	// Fake a remote
+	fs.Register(&fs.RegInfo{Name: "config_test_remote"})
+
 	// add new remote
 	i := 0
 	ReadLine = func() string {
 		answers := []string{
-			"local", // type is local
-			"1",     // yes, disable long filenames
+			"config_test_remote", // type
 			"y", // looks good, save
 		}
 		i = i + 1
 		return answers[i-1]
 	}
 
 	NewRemote("test")
 	assert.Equal(t, []string{"test"}, configData.GetSectionList())
 
+	// Reload the config file to workaround this bug
+	// https://github.com/Unknwon/goconfig/issues/39
+	configData, err = loadConfigFile()
+	require.NoError(t, err)
+
 	// normal rename, test → asdf
 	ReadLine = func() string { return "asdf" }
 	RenameRemote("test")
@@ -226,50 +204,3 @@ func hashedKeyCompare(t *testing.T, a, b string, shouldMatch bool) {
 		assert.NotEqual(t, k1, k2)
 	}
 }
-
-func TestDumpFlagsString(t *testing.T) {
-	assert.Equal(t, "", DumpFlags(0).String())
-	assert.Equal(t, "headers", (DumpHeaders).String())
-	assert.Equal(t, "headers,bodies", (DumpHeaders | DumpBodies).String())
-	assert.Equal(t, "headers,bodies,requests,responses,auth,filters", (DumpHeaders | DumpBodies | DumpRequests | DumpResponses | DumpAuth | DumpFilters).String())
-	assert.Equal(t, "headers,Unknown-0x8000", (DumpHeaders | DumpFlags(0x8000)).String())
-}
-
-func TestDumpFlagsSet(t *testing.T) {
-	for _, test := range []struct {
-		in      string
-		want    DumpFlags
-		wantErr string
-	}{
-		{"", DumpFlags(0), ""},
-		{"bodies", DumpBodies, ""},
-		{"bodies,headers,auth", DumpBodies | DumpHeaders | DumpAuth, ""},
-		{"bodies,headers,auth", DumpBodies | DumpHeaders | DumpAuth, ""},
-		{"headers,bodies,requests,responses,auth,filters", DumpHeaders | DumpBodies | DumpRequests | DumpResponses | DumpAuth | DumpFilters, ""},
-		{"headers,bodies,unknown,auth", 0, "Unknown dump flag \"unknown\""},
-	} {
-		f := DumpFlags(-1)
-		initial := f
-		err := f.Set(test.in)
-		if err != nil {
-			if test.wantErr == "" {
-				t.Errorf("Got an error when not expecting one on %q: %v", test.in, err)
-			} else {
-				assert.Contains(t, err.Error(), test.wantErr)
-			}
-			assert.Equal(t, initial, f, test.want)
-		} else {
-			if test.wantErr != "" {
-				t.Errorf("Got no error when expecting one on %q", test.in)
-			} else {
-				assert.Equal(t, test.want, f)
-			}
-		}
-
-	}
-}
-
-func TestDumpFlagsType(t *testing.T) {
-	f := DumpFlags(0)
-	assert.Equal(t, "string", f.Type())
-}
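
TestCRUD drives the interactive wizard by swapping the package-level ReadLine hook and scripting the answers, which is the pattern to copy when testing against the relocated config package. A minimal sketch of the same trick from outside the package, assuming only the exported ReadLine variable seen above:

    // Script the wizard's prompts, then restore the hook.
    answers := []string{"config_test_remote", "y"}
    i := 0
    oldReadLine := config.ReadLine
    config.ReadLine = func() string {
        i++
        return answers[i-1]
    }
    defer func() { config.ReadLine = oldReadLine }()
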
@@ -3,13 +3,15 @@
 
 // +build darwin dragonfly freebsd linux netbsd openbsd solaris
 
-package fs
+package config
 
 import (
 	"os"
 	"os/user"
 	"strconv"
 	"syscall"
 
+	"github.com/ncw/rclone/fs"
 )
 
 // attemptCopyGroups tries to keep the group the same. User will be the one
@@ -29,7 +31,7 @@ func attemptCopyGroup(fromPath, toPath string) {
 		}
 	}
 	if err = os.Chown(toPath, uid, int(stat.Gid)); err != nil {
-		Debugf(nil, "Failed to keep previous owner of config file: %v", err)
+		fs.Debugf(nil, "Failed to keep previous owner of config file: %v", err)
 	}
 }
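
Most of attemptCopyGroup sits outside this hunk; it stats the old config file and re-applies its ownership to the freshly written one, logging failures at debug level. A hedged, Unix-only sketch of that idea (simplified: the real function also consults os/user for the uid, and this is not the function from the diff):

    package config

    import (
        "os"
        "syscall"

        "github.com/ncw/rclone/fs"
    )

    // attemptCopyGroupSketch keeps toPath's owner/group matching
    // fromPath, best effort only. Illustrative name and body.
    func attemptCopyGroupSketch(fromPath, toPath string) {
        info, err := os.Stat(fromPath)
        if err != nil {
            return
        }
        if stat, ok := info.Sys().(*syscall.Stat_t); ok {
            if err = os.Chown(toPath, int(stat.Uid), int(stat.Gid)); err != nil {
                fs.Debugf(nil, "Failed to keep previous owner of config file: %v", err)
            }
        }
    }
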
fs/config/configflags/configflags.go (new file, 162 lines)
@@ -0,0 +1,162 @@
+// Package configflags defines the flags used by rclone. It is
+// decoupled into a separate package so it can be replaced.
+package configflags
+
+// Options set by command line flags
+import (
+	"log"
+	"net"
+	"path/filepath"
+	"strings"
+
+	"github.com/ncw/rclone/fs"
+	"github.com/ncw/rclone/fs/config"
+	"github.com/ncw/rclone/fs/config/flags"
+	"github.com/spf13/pflag"
+)
+
+var (
+	// these will get interpreted into fs.Config via SetFlags() below
+	verbose         int
+	quiet           bool
+	dumpHeaders     bool
+	dumpBodies      bool
+	deleteBefore    bool
+	deleteDuring    bool
+	deleteAfter     bool
+	bindAddr        string
+	disableFeatures string
+)
+
+// AddFlags adds the non filing system specific flags to the command
+func AddFlags(flagSet *pflag.FlagSet) {
+	// NB defaults which aren't the zero for the type should be set in fs/config.go NewConfig
+	flags.CountVarP(flagSet, &verbose, "verbose", "v", "Print lots more stuff (repeat for more)")
+	flags.BoolVarP(flagSet, &quiet, "quiet", "q", false, "Print as little stuff as possible")
+	flags.DurationVarP(flagSet, &fs.Config.ModifyWindow, "modify-window", "", fs.Config.ModifyWindow, "Max time diff to be considered the same")
+	flags.IntVarP(flagSet, &fs.Config.Checkers, "checkers", "", fs.Config.Checkers, "Number of checkers to run in parallel.")
+	flags.IntVarP(flagSet, &fs.Config.Transfers, "transfers", "", fs.Config.Transfers, "Number of file transfers to run in parallel.")
+	flags.StringVarP(flagSet, &config.ConfigPath, "config", "", config.ConfigPath, "Config file.")
+	flags.StringVarP(flagSet, &config.CacheDir, "cache-dir", "", config.CacheDir, "Directory rclone will use for caching.")
+	flags.BoolVarP(flagSet, &fs.Config.CheckSum, "checksum", "c", fs.Config.CheckSum, "Skip based on checksum & size, not mod-time & size")
+	flags.BoolVarP(flagSet, &fs.Config.SizeOnly, "size-only", "", fs.Config.SizeOnly, "Skip based on size only, not mod-time or checksum")
+	flags.BoolVarP(flagSet, &fs.Config.IgnoreTimes, "ignore-times", "I", fs.Config.IgnoreTimes, "Don't skip files that match size and time - transfer all files")
+	flags.BoolVarP(flagSet, &fs.Config.IgnoreExisting, "ignore-existing", "", fs.Config.IgnoreExisting, "Skip all files that exist on destination")
+	flags.BoolVarP(flagSet, &fs.Config.DryRun, "dry-run", "n", fs.Config.DryRun, "Do a trial run with no permanent changes")
+	flags.DurationVarP(flagSet, &fs.Config.ConnectTimeout, "contimeout", "", fs.Config.ConnectTimeout, "Connect timeout")
+	flags.DurationVarP(flagSet, &fs.Config.Timeout, "timeout", "", fs.Config.Timeout, "IO idle timeout")
+	flags.BoolVarP(flagSet, &dumpHeaders, "dump-headers", "", false, "Dump HTTP bodies - may contain sensitive info")
+	flags.BoolVarP(flagSet, &dumpBodies, "dump-bodies", "", false, "Dump HTTP headers and bodies - may contain sensitive info")
+	flags.BoolVarP(flagSet, &fs.Config.InsecureSkipVerify, "no-check-certificate", "", fs.Config.InsecureSkipVerify, "Do not verify the server SSL certificate. Insecure.")
+	flags.BoolVarP(flagSet, &fs.Config.AskPassword, "ask-password", "", fs.Config.AskPassword, "Allow prompt for password for encrypted configuration.")
+	flags.BoolVarP(flagSet, &deleteBefore, "delete-before", "", false, "When synchronizing, delete files on destination before transfering")
+	flags.BoolVarP(flagSet, &deleteDuring, "delete-during", "", false, "When synchronizing, delete files during transfer (default)")
+	flags.BoolVarP(flagSet, &deleteAfter, "delete-after", "", false, "When synchronizing, delete files on destination after transfering")
+	flags.BoolVarP(flagSet, &fs.Config.TrackRenames, "track-renames", "", fs.Config.TrackRenames, "When synchronizing, track file renames and do a server side move if possible")
+	flags.IntVarP(flagSet, &fs.Config.LowLevelRetries, "low-level-retries", "", fs.Config.LowLevelRetries, "Number of low level retries to do.")
+	flags.BoolVarP(flagSet, &fs.Config.UpdateOlder, "update", "u", fs.Config.UpdateOlder, "Skip files that are newer on the destination.")
+	flags.BoolVarP(flagSet, &fs.Config.NoGzip, "no-gzip-encoding", "", fs.Config.NoGzip, "Don't set Accept-Encoding: gzip.")
+	flags.IntVarP(flagSet, &fs.Config.MaxDepth, "max-depth", "", fs.Config.MaxDepth, "If set limits the recursion depth to this.")
+	flags.BoolVarP(flagSet, &fs.Config.IgnoreSize, "ignore-size", "", false, "Ignore size when skipping use mod-time or checksum.")
+	flags.BoolVarP(flagSet, &fs.Config.IgnoreChecksum, "ignore-checksum", "", fs.Config.IgnoreChecksum, "Skip post copy check of checksums.")
+	flags.BoolVarP(flagSet, &fs.Config.NoTraverse, "no-traverse", "", fs.Config.NoTraverse, "Don't traverse destination file system on copy.")
+	flags.BoolVarP(flagSet, &fs.Config.NoUpdateModTime, "no-update-modtime", "", fs.Config.NoUpdateModTime, "Don't update destination mod-time if files identical.")
+	flags.StringVarP(flagSet, &fs.Config.BackupDir, "backup-dir", "", fs.Config.BackupDir, "Make backups into hierarchy based in DIR.")
+	flags.StringVarP(flagSet, &fs.Config.Suffix, "suffix", "", fs.Config.Suffix, "Suffix for use with --backup-dir.")
+	flags.BoolVarP(flagSet, &fs.Config.UseListR, "fast-list", "", fs.Config.UseListR, "Use recursive list if available. Uses more memory but fewer transactions.")
+	flags.Float64VarP(flagSet, &fs.Config.TPSLimit, "tpslimit", "", fs.Config.TPSLimit, "Limit HTTP transactions per second to this.")
+	flags.IntVarP(flagSet, &fs.Config.TPSLimitBurst, "tpslimit-burst", "", fs.Config.TPSLimitBurst, "Max burst of transactions for --tpslimit.")
+	flags.StringVarP(flagSet, &bindAddr, "bind", "", "", "Local address to bind to for outgoing connections, IPv4, IPv6 or name.")
+	flags.StringVarP(flagSet, &disableFeatures, "disable", "", "", "Disable a comma separated list of features. Use help to see a list.")
+	flags.StringVarP(flagSet, &fs.Config.UserAgent, "user-agent", "", fs.Config.UserAgent, "Set the user-agent to a specified string. The default is rclone/ version")
+	flags.BoolVarP(flagSet, &fs.Config.Immutable, "immutable", "", fs.Config.Immutable, "Do not modify files. Fail if existing files have been modified.")
+	flags.BoolVarP(flagSet, &fs.Config.AutoConfirm, "auto-confirm", "", fs.Config.AutoConfirm, "If enabled, do not request console confirmation.")
+	flags.IntVarP(flagSet, &fs.Config.StatsFileNameLength, "stats-file-name-length", "", fs.Config.StatsFileNameLength, "Max file name length in stats. 0 for no limit")
+	flags.FVarP(flagSet, &fs.Config.LogLevel, "log-level", "", "Log level DEBUG|INFO|NOTICE|ERROR")
+	flags.FVarP(flagSet, &fs.Config.StatsLogLevel, "stats-log-level", "", "Log level to show --stats output DEBUG|INFO|NOTICE|ERROR")
+	flags.FVarP(flagSet, &fs.Config.BwLimit, "bwlimit", "", "Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.")
+	flags.FVarP(flagSet, &fs.Config.BufferSize, "buffer-size", "", "Buffer size when copying files.")
+	flags.FVarP(flagSet, &fs.Config.StreamingUploadCutoff, "streaming-upload-cutoff", "", "Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends.")
+	flags.FVarP(flagSet, &fs.Config.Dump, "dump", "", "List of items to dump from: "+fs.DumpFlagsList)
+
+}
+
+// SetFlags converts any flags into config which weren't straight foward
+func SetFlags() {
+	fs.Config.LogLevel = fs.LogLevelNotice
+	if verbose >= 2 {
+		fs.Config.LogLevel = fs.LogLevelDebug
+	} else if verbose >= 1 {
+		fs.Config.LogLevel = fs.LogLevelInfo
+	}
+	if quiet {
+		if verbose > 0 {
+			log.Fatalf("Can't set -v and -q")
+		}
+		fs.Config.LogLevel = fs.LogLevelError
+	}
+	logLevelFlag := pflag.Lookup("log-level")
+	if logLevelFlag != nil && logLevelFlag.Changed {
+		if verbose > 0 {
+			log.Fatalf("Can't set -v and --log-level")
+		}
+		if quiet {
+			log.Fatalf("Can't set -q and --log-level")
+		}
+	}
+
+	if dumpHeaders {
+		fs.Config.Dump |= fs.DumpHeaders
+		fs.Infof(nil, "--dump-headers is obsolete - please use --dump headers instead")
+	}
+	if dumpBodies {
+		fs.Config.Dump |= fs.DumpBodies
+		fs.Infof(nil, "--dump-bodies is obsolete - please use --dump bodies instead")
+	}
+
+	switch {
+	case deleteBefore && (deleteDuring || deleteAfter),
+		deleteDuring && deleteAfter:
+		log.Fatalf(`Only one of --delete-before, --delete-during or --delete-after can be used.`)
+	case deleteBefore:
+		fs.Config.DeleteMode = fs.DeleteModeBefore
+	case deleteDuring:
+		fs.Config.DeleteMode = fs.DeleteModeDuring
+	case deleteAfter:
+		fs.Config.DeleteMode = fs.DeleteModeAfter
+	default:
+		fs.Config.DeleteMode = fs.DeleteModeDefault
+	}
+
+	if fs.Config.IgnoreSize && fs.Config.SizeOnly {
+		log.Fatalf(`Can't use --size-only and --ignore-size together.`)
+	}
+
+	if fs.Config.Suffix != "" && fs.Config.BackupDir == "" {
+		log.Fatalf(`Can only use --suffix with --backup-dir.`)
+	}
+
+	if bindAddr != "" {
+		addrs, err := net.LookupIP(bindAddr)
+		if err != nil {
+			log.Fatalf("--bind: Failed to parse %q as IP address: %v", bindAddr, err)
+		}
+		if len(addrs) != 1 {
+			log.Fatalf("--bind: Expecting 1 IP address for %q but got %d", bindAddr, len(addrs))
+		}
+		fs.Config.BindAddr = addrs[0]
+	}
+
+	if disableFeatures != "" {
+		if disableFeatures == "help" {
+			log.Fatalf("Possible backend features are: %s\n", strings.Join(new(fs.Features).List(), ", "))
+		}
+		fs.Config.DisableFeatures = strings.Split(disableFeatures, ",")
+	}
+
+	// Make the config file absolute
+	configPath, err := filepath.Abs(config.ConfigPath)
+	if err == nil {
+		config.ConfigPath = configPath
+	}
+}
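
How a binary wires these two functions together is not part of this diff; a plausible sketch using only what is defined above (the main function here is an assumption):

    package main

    import (
        "github.com/ncw/rclone/fs/config/configflags"
        "github.com/spf13/pflag"
    )

    func main() {
        // Register rclone's global flags, parse the command line,
        // then fold the parsed values into fs.Config.
        configflags.AddFlags(pflag.CommandLine)
        pflag.Parse()
        configflags.SetFlags()
    }
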
@ -1,239 +1,17 @@
|
||||||
// This contains helper functions for managing flags
|
// Package flags contains enahnced versions of spf13/pflag flag
|
||||||
|
// routines which will read from the environment also.
|
||||||
package fs
|
package flags
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"fmt"
|
|
||||||
"log"
|
"log"
|
||||||
"math"
|
|
||||||
"os"
|
"os"
|
||||||
"strconv"
|
|
||||||
"strings"
|
"strings"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"github.com/pkg/errors"
|
"github.com/ncw/rclone/fs"
|
||||||
"github.com/spf13/pflag"
|
"github.com/spf13/pflag"
|
||||||
)
|
)
|
||||||
|
|
||||||
// SizeSuffix is parsed by flag with k/M/G suffixes
|
|
||||||
type SizeSuffix int64
|
|
||||||
|
|
||||||
// Turn SizeSuffix into a string and a suffix
|
|
||||||
func (x SizeSuffix) string() (string, string) {
|
|
||||||
scaled := float64(0)
|
|
||||||
suffix := ""
|
|
||||||
switch {
|
|
||||||
case x < 0:
|
|
||||||
return "off", ""
|
|
||||||
case x == 0:
|
|
||||||
return "0", ""
|
|
||||||
case x < 1024:
|
|
||||||
scaled = float64(x)
|
|
||||||
suffix = ""
|
|
||||||
case x < 1024*1024:
|
|
||||||
scaled = float64(x) / 1024
|
|
||||||
suffix = "k"
|
|
||||||
case x < 1024*1024*1024:
|
|
||||||
scaled = float64(x) / 1024 / 1024
|
|
||||||
suffix = "M"
|
|
||||||
default:
|
|
||||||
scaled = float64(x) / 1024 / 1024 / 1024
|
|
||||||
suffix = "G"
|
|
||||||
}
|
|
||||||
if math.Floor(scaled) == scaled {
|
|
||||||
return fmt.Sprintf("%.0f", scaled), suffix
|
|
||||||
}
|
|
||||||
return fmt.Sprintf("%.3f", scaled), suffix
|
|
||||||
}
|
|
||||||
|
|
||||||
// String turns SizeSuffix into a string
|
|
||||||
func (x SizeSuffix) String() string {
|
|
||||||
val, suffix := x.string()
|
|
||||||
return val + suffix
|
|
||||||
}
|
|
||||||
|
|
||||||
// Unit turns SizeSuffix into a string with a unit
|
|
||||||
func (x SizeSuffix) Unit(unit string) string {
|
|
||||||
val, suffix := x.string()
|
|
||||||
if val == "off" {
|
|
||||||
return val
|
|
||||||
}
|
|
||||||
return val + " " + suffix + unit
|
|
||||||
}
|
|
||||||
|
|
||||||
// Set a SizeSuffix
|
|
||||||
func (x *SizeSuffix) Set(s string) error {
|
|
||||||
if len(s) == 0 {
|
|
||||||
return errors.New("empty string")
|
|
||||||
}
|
|
||||||
if strings.ToLower(s) == "off" {
|
|
||||||
*x = -1
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
suffix := s[len(s)-1]
|
|
||||||
suffixLen := 1
|
|
||||||
var multiplier float64
|
|
||||||
switch suffix {
|
|
||||||
case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '.':
|
|
||||||
suffixLen = 0
|
|
||||||
multiplier = 1 << 10
|
|
||||||
case 'b', 'B':
|
|
||||||
multiplier = 1
|
|
||||||
case 'k', 'K':
|
|
||||||
multiplier = 1 << 10
|
|
||||||
case 'm', 'M':
|
|
||||||
multiplier = 1 << 20
|
|
||||||
case 'g', 'G':
|
|
||||||
multiplier = 1 << 30
|
|
||||||
default:
|
|
||||||
return errors.Errorf("bad suffix %q", suffix)
|
|
||||||
}
|
|
||||||
s = s[:len(s)-suffixLen]
|
|
||||||
value, err := strconv.ParseFloat(s, 64)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
if value < 0 {
|
|
||||||
return errors.Errorf("size can't be negative %q", s)
|
|
||||||
}
|
|
||||||
value *= multiplier
|
|
||||||
*x = SizeSuffix(value)
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// Type of the value
|
|
||||||
func (x *SizeSuffix) Type() string {
|
|
||||||
return "int64"
|
|
||||||
}
|
|
||||||
|
|
||||||
// Check it satisfies the interface
|
|
||||||
var _ pflag.Value = (*SizeSuffix)(nil)
|
|
||||||
|
|
||||||
// BwTimeSlot represents a bandwidth configuration at a point in time.
|
|
||||||
type BwTimeSlot struct {
|
|
||||||
hhmm int
|
|
||||||
bandwidth SizeSuffix
|
|
||||||
}
|
|
||||||
|
|
||||||
// BwTimetable contains all configured time slots.
|
|
||||||
type BwTimetable []BwTimeSlot
|
|
||||||
|
|
||||||
// String returns a printable representation of BwTimetable.
|
|
||||||
func (x BwTimetable) String() string {
|
|
||||||
ret := []string{}
|
|
||||||
for _, ts := range x {
|
|
||||||
ret = append(ret, fmt.Sprintf("%04.4d,%s", ts.hhmm, ts.bandwidth.String()))
|
|
||||||
}
|
|
||||||
return strings.Join(ret, " ")
|
|
||||||
}
|
|
||||||
|
|
||||||
// Set the bandwidth timetable.
|
|
||||||
func (x *BwTimetable) Set(s string) error {
|
|
||||||
// The timetable is formatted as:
|
|
||||||
// "hh:mm,bandwidth hh:mm,banwidth..." ex: "10:00,10G 11:30,1G 18:00,off"
|
|
||||||
// If only a single bandwidth identifier is provided, we assume constant bandwidth.
|
|
||||||
|
|
||||||
if len(s) == 0 {
|
|
||||||
return errors.New("empty string")
|
|
||||||
}
|
|
||||||
// Single value without time specification.
|
|
||||||
if !strings.Contains(s, " ") && !strings.Contains(s, ",") {
|
|
||||||
ts := BwTimeSlot{}
|
|
||||||
if err := ts.bandwidth.Set(s); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
ts.hhmm = 0
|
|
||||||
*x = BwTimetable{ts}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
for _, tok := range strings.Split(s, " ") {
|
|
||||||
tv := strings.Split(tok, ",")
|
|
||||||
|
|
||||||
// Format must be HH:MM,BW
|
|
||||||
if len(tv) != 2 {
|
|
||||||
return errors.Errorf("invalid time/bandwidth specification: %q", tok)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Basic timespec sanity checking
|
|
||||||
hhmm := tv[0]
|
|
||||||
if len(hhmm) != 5 {
|
|
||||||
return errors.Errorf("invalid time specification (hh:mm): %q", hhmm)
|
|
||||||
}
|
|
||||||
hh, err := strconv.Atoi(hhmm[0:2])
|
|
||||||
if err != nil {
|
|
||||||
return errors.Errorf("invalid hour in time specification %q: %v", hhmm, err)
|
|
||||||
}
|
|
||||||
if hh < 0 || hh > 23 {
|
|
||||||
return errors.Errorf("invalid hour (must be between 00 and 23): %q", hh)
|
|
||||||
}
|
|
||||||
mm, err := strconv.Atoi(hhmm[3:])
|
|
||||||
if err != nil {
|
|
||||||
return errors.Errorf("invalid minute in time specification: %q: %v", hhmm, err)
|
|
||||||
}
|
|
||||||
if mm < 0 || mm > 59 {
|
|
||||||
return errors.Errorf("invalid minute (must be between 00 and 59): %q", hh)
|
|
||||||
}
|
|
||||||
|
|
||||||
ts := BwTimeSlot{
|
|
||||||
hhmm: (hh * 100) + mm,
|
|
||||||
}
|
|
||||||
// Bandwidth limit for this time slot.
|
|
||||||
if err := ts.bandwidth.Set(tv[1]); err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
*x = append(*x, ts)
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// LimitAt returns a BwTimeSlot for the time requested.
func (x BwTimetable) LimitAt(tt time.Time) BwTimeSlot {
    // If the timetable is empty, we return an unlimited BwTimeSlot starting at midnight.
    if len(x) == 0 {
        return BwTimeSlot{hhmm: 0, bandwidth: -1}
    }

    hhmm := tt.Hour()*100 + tt.Minute()

    // By default, we return the last element in the timetable. This
    // satisfies two conditions: 1) If there's only one element it
    // will always be selected, and 2) The last element of the table
    // will "wrap around" until overridden by an earlier time slot.
    ret := x[len(x)-1]

    mindif := 0
    first := true

    // Look for the most recent time slot.
    for _, ts := range x {
        // Ignore slots which start later than now.
        if hhmm < ts.hhmm {
            continue
        }
        // Minutes elapsed since this slot started.
        dif := ((hhmm / 100 * 60) + (hhmm % 100)) - ((ts.hhmm / 100 * 60) + (ts.hhmm % 100))
        if first {
            mindif = dif
            first = false
        }
        if dif <= mindif {
            mindif = dif
            ret = ts
        }
    }

    return ret
}

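A quick sketch of the wrap-around rule (same import-path assumption as above): before the day's first slot, the previous day's last slot still applies.

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/ncw/rclone/fs"
)

func main() {
    var tt fs.BwTimetable
    if err := tt.Set("10:00,10G 18:00,off"); err != nil {
        log.Fatal(err)
    }
    at := func(hh, mm int) time.Time {
        return time.Date(2017, 10, 1, hh, mm, 0, 0, time.UTC)
    }
    fmt.Println(tt.LimitAt(at(12, 30))) // 10:00 slot: 10G
    fmt.Println(tt.LimitAt(at(23, 0)))  // 18:00 slot: off
    fmt.Println(tt.LimitAt(at(9, 0)))   // 18:00 slot again, wrapped from yesterday
}
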
// Type of the value
func (x BwTimetable) Type() string {
    return "BwTimetable"
}

// Check it satisfies the interface
var _ pflag.Value = (*BwTimetable)(nil)

 // optionToEnv converts an option name, eg "ignore-size" into an
 // environment name "RCLONE_IGNORE_SIZE"
 func optionToEnv(name string) string {

@@ -254,7 +32,7 @@ func setDefaultFromEnv(name string) {
         if err != nil {
             log.Fatalf("Invalid value for environment variable %q: %v", key, err)
         }
-        Debugf(nil, "Set default for %q from %q to %q (%v)", name, key, newValue, flag.Value)
+        fs.Debugf(nil, "Set default for %q from %q to %q (%v)", name, key, newValue, flag.Value)
         flag.DefValue = newValue
     }
 }

@@ -302,6 +80,15 @@ func IntP(name, shorthand string, value int, usage string) (out *int) {
     return out
 }

+// Int64P defines a flag which can be overridden by an environment variable
+//
+// It is a thin wrapper around pflag.Int64P
+func Int64P(name, shorthand string, value int64, usage string) (out *int64) {
+    out = pflag.Int64P(name, shorthand, value, usage)
+    setDefaultFromEnv(name)
+    return out
+}
+
 // IntVarP defines a flag which can be overridden by an environment variable
 //
 // It is a thin wrapper around pflag.IntVarP

@@ -360,10 +147,10 @@ func VarP(value pflag.Value, name, shorthand, usage string) {
     setDefaultFromEnv(name)
 }

-// FlagsVarP defines a flag which can be overridden by an environment variable
+// FVarP defines a flag which can be overridden by an environment variable
 //
 // It is a thin wrapper around pflag.VarP
-func FlagsVarP(flags *pflag.FlagSet, value pflag.Value, name, shorthand, usage string) {
+func FVarP(flags *pflag.FlagSet, value pflag.Value, name, shorthand, usage string) {
     flags.VarP(value, name, shorthand, usage)
     setDefaultFromEnv(name)
 }
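All of these wrappers funnel through setDefaultFromEnv. Below is a self-contained sketch of the pattern using a hypothetical DEMO_ prefix in place of rclone's RCLONE_; the demoEnv helper and the checkers flag are illustrative, not rclone's API.

package main

import (
    "fmt"
    "os"
    "strings"

    "github.com/spf13/pflag"
)

// demoEnv converts a flag name, eg "ignore-size", into an
// environment name, eg "DEMO_IGNORE_SIZE".
func demoEnv(name string) string {
    return "DEMO_" + strings.ToUpper(strings.Replace(name, "-", "_", -1))
}

// setDefaultFromEnv mirrors the pattern in the diff above: if the
// environment variable is set, it becomes the flag's new default.
func setDefaultFromEnv(name string) {
    if newValue, found := os.LookupEnv(demoEnv(name)); found {
        flag := pflag.Lookup(name)
        if flag == nil {
            panic("flag not found: " + name)
        }
        if err := flag.Value.Set(newValue); err != nil {
            panic(err)
        }
        flag.DefValue = newValue
    }
}

func main() {
    os.Setenv("DEMO_CHECKERS", "16")
    checkers := pflag.Int64P("checkers", "", 8, "number of checkers")
    setDefaultFromEnv("checkers")
    pflag.Parse()
    fmt.Println(*checkers) // 16 unless --checkers is given explicitly
}
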
fs/config/obscure.go (new file, 95 lines)
@@ -0,0 +1,95 @@
// Obscure and Reveal config values

package config

import (
    "crypto/aes"
    "crypto/cipher"
    "crypto/rand"
    "encoding/base64"
    "io"
    "log"

    "github.com/pkg/errors"
)

// crypt internals
var (
    cryptKey = []byte{
        0x9c, 0x93, 0x5b, 0x48, 0x73, 0x0a, 0x55, 0x4d,
        0x6b, 0xfd, 0x7c, 0x63, 0xc8, 0x86, 0xa9, 0x2b,
        0xd3, 0x90, 0x19, 0x8e, 0xb8, 0x12, 0x8a, 0xfb,
        0xf4, 0xde, 0x16, 0x2b, 0x8b, 0x95, 0xf6, 0x38,
    }
    cryptBlock cipher.Block
    cryptRand  = rand.Reader
)

// crypt transforms in to out using iv under AES-CTR.
//
// in and out may be the same buffer.
//
// Note encryption and decryption are the same operation
func crypt(out, in, iv []byte) error {
    if cryptBlock == nil {
        var err error
        cryptBlock, err = aes.NewCipher(cryptKey)
        if err != nil {
            return err
        }
    }
    stream := cipher.NewCTR(cryptBlock, iv)
    stream.XORKeyStream(out, in)
    return nil
}

// Obscure a value
//
// This is done by encrypting with AES-CTR
func Obscure(x string) (string, error) {
    plaintext := []byte(x)
    ciphertext := make([]byte, aes.BlockSize+len(plaintext))
    iv := ciphertext[:aes.BlockSize]
    if _, err := io.ReadFull(cryptRand, iv); err != nil {
        return "", errors.Wrap(err, "failed to read iv")
    }
    if err := crypt(ciphertext[aes.BlockSize:], plaintext, iv); err != nil {
        return "", errors.Wrap(err, "encrypt failed")
    }
    return base64.RawURLEncoding.EncodeToString(ciphertext), nil
}

// MustObscure obscures a value, exiting with a fatal error if it failed
func MustObscure(x string) string {
    out, err := Obscure(x)
    if err != nil {
        log.Fatalf("Obscure failed: %v", err)
    }
    return out
}

// Reveal an obscured value
func Reveal(x string) (string, error) {
    ciphertext, err := base64.RawURLEncoding.DecodeString(x)
    if err != nil {
        return "", errors.Wrap(err, "base64 decode failed when revealing password - is it obscured?")
    }
    if len(ciphertext) < aes.BlockSize {
        return "", errors.New("input too short when revealing password - is it obscured?")
    }
    buf := ciphertext[aes.BlockSize:]
    iv := ciphertext[:aes.BlockSize]
    if err := crypt(buf, buf, iv); err != nil {
        return "", errors.Wrap(err, "decrypt failed when revealing password - is it obscured?")
    }
    return string(buf), nil
}

// MustReveal reveals an obscured value, exiting with a fatal error if it failed
func MustReveal(x string) string {
    out, err := Reveal(x)
    if err != nil {
        log.Fatalf("Reveal failed: %v", err)
    }
    return out
}
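For context, a minimal sketch of calling the new package, assuming this commit's import path github.com/ncw/rclone/fs/config. Note that the AES key is hardcoded above, so obscuring only keeps passwords from being read at a glance; it is reversible encoding, not secure storage.

package main

import (
    "fmt"
    "log"

    "github.com/ncw/rclone/fs/config"
)

func main() {
    obscured, err := config.Obscure("potato")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(obscured) // base64(iv + ciphertext), different on every run

    revealed, err := config.Reveal(obscured)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(revealed) // "potato"
}
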
fs/config/obscure_test.go (new file, 38 lines)
@@ -0,0 +1,38 @@
package config

import (
    "bytes"
    "crypto/rand"
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestObscure(t *testing.T) {
    for _, test := range []struct {
        in   string
        want string
        iv   string
    }{
        {"", "YWFhYWFhYWFhYWFhYWFhYQ", "aaaaaaaaaaaaaaaa"},
        {"potato", "YWFhYWFhYWFhYWFhYWFhYXMaGgIlEQ", "aaaaaaaaaaaaaaaa"},
        {"potato", "YmJiYmJiYmJiYmJiYmJiYp3gcEWbAw", "bbbbbbbbbbbbbbbb"},
    } {
        cryptRand = bytes.NewBufferString(test.iv)
        got, err := Obscure(test.in)
        cryptRand = rand.Reader
        assert.NoError(t, err)
        assert.Equal(t, test.want, got)
        recoveredIn, err := Reveal(got)
        assert.NoError(t, err)
        assert.Equal(t, test.in, recoveredIn, "not bidirectional")
        // Now the Must variants
        cryptRand = bytes.NewBufferString(test.iv)
        got = MustObscure(test.in)
        cryptRand = rand.Reader
        assert.Equal(t, test.want, got)
        recoveredIn = MustReveal(got)
        assert.Equal(t, test.in, recoveredIn, "not bidirectional")
    }
}