diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index b56b560e1..c548aeef3 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -241,10 +241,11 @@ Research
Getting going
* Create `remote/remote.go` (copy this from a similar remote)
- * onedrive is a good one to start from if you have a directory based remote
+ * box is a good one to start from if you have a directory based remote
* b2 is a good one to start from if you have a bucket based remote
* Add your remote to the imports in `fs/all/all.go`
* HTTP based remotes are easiest to maintain if they use rclone's rest module, but if there is a really good go SDK then use that instead.
+ * Try to implement as many optional methods as possible as it makes the remote more usable.
Unit tests
diff --git a/README.md b/README.md
index d885fe5b6..5b3b0c70b 100644
--- a/README.md
+++ b/README.md
@@ -28,6 +28,7 @@ Rclone is a command line program to sync files and directories to and from
* Microsoft Azure Blob Storage
* Microsoft OneDrive
* Openstack Swift / Rackspace cloud files / Memset Memstore / OVH / Oracle Cloud Storage
+ * pCloud
* QingStor
* SFTP
* Yandex Disk
diff --git a/bin/make_manual.py b/bin/make_manual.py
index adcabf446..f3f989cf1 100755
--- a/bin/make_manual.py
+++ b/bin/make_manual.py
@@ -36,6 +36,7 @@ docs = [
"onedrive.md",
"qingstor.md",
"swift.md",
+ "pcloud.md",
"sftp.md",
"yandex.md",
diff --git a/cmd/cmd.go b/cmd/cmd.go
index 1fa9734ab..710cc83f5 100644
--- a/cmd/cmd.go
+++ b/cmd/cmd.go
@@ -54,6 +54,7 @@ from various cloud storage systems and using file transfer services, such as:
* Microsoft Azure Blob Storage
* Microsoft OneDrive
* Openstack Swift / Rackspace cloud files / Memset Memstore
+ * pCloud
* QingStor
* SFTP
* Yandex Disk
diff --git a/docs/content/about.md b/docs/content/about.md
index 3d2402faa..de506a208 100644
--- a/docs/content/about.md
+++ b/docs/content/about.md
@@ -31,6 +31,7 @@ Rclone is a command line program to sync files and directories to and from:
* {{< provider name="Minio" home="https://www.minio.io/" config="/s3/#minio" >}}
* {{< provider name="OVH" home="https://www.ovh.co.uk/public-cloud/storage/object-storage/" config="/swift/" >}}
* {{< provider name="Openstack Swift" home="https://docs.openstack.org/swift/latest/" config="/swift/" >}}
+* {{< provider name="pCloud" home="https://www.pcloud.com/" config="/pcloud/" >}}
* {{< provider name="Oracle Cloud Storage" home="https://cloud.oracle.com/storage-opc" config="/swift/" >}}
* {{< provider name="QingStor" home="https://www.qingcloud.com/products/storage" config="/qingstor/" >}}
* {{< provider name="Rackspace Cloud Files" home="https://www.rackspace.com/cloud/files" config="/swift/" >}}
diff --git a/docs/content/docs.md b/docs/content/docs.md
index ad0f9408c..3ecb05ca6 100644
--- a/docs/content/docs.md
+++ b/docs/content/docs.md
@@ -33,6 +33,7 @@ See the following for detailed instructions for
* [Microsoft Azure Blob Storage](/azureblob/)
* [Microsoft OneDrive](/onedrive/)
* [Openstack Swift / Rackspace Cloudfiles / Memset Memstore](/swift/)
+ * [pCloud](/pcloud/)
* [QingStor](/qingstor/)
* [SFTP](/sftp/)
* [Yandex Disk](/yandex/)
diff --git a/docs/content/overview.md b/docs/content/overview.md
index 1ab53b79e..513d2cc17 100644
--- a/docs/content/overview.md
+++ b/docs/content/overview.md
@@ -30,6 +30,7 @@ Here is an overview of the major features of each cloud storage system.
| Microsoft Azure Blob Storage | MD5 | Yes | No | No | R/W |
| Microsoft OneDrive | SHA1 | Yes | Yes | No | R |
| Openstack Swift | MD5 | Yes | No | No | R/W |
+| pCloud | MD5, SHA1 | Yes | No | No | W |
| QingStor | MD5 | No | No | No | R/W |
| SFTP | MD5, SHA1 ‡ | Yes | Depends | No | - |
| Yandex Disk | MD5 | Yes | No | No | R/W |
@@ -130,6 +131,7 @@ operations more efficient.
| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | No |
| Microsoft OneDrive | Yes | Yes | Yes | No [#197](https://github.com/ncw/rclone/issues/197) | No [#575](https://github.com/ncw/rclone/issues/575) | No | No |
| Openstack Swift | Yes † | Yes | No | No | No | Yes | Yes |
+| pCloud | Yes | Yes | Yes | Yes | Yes | No | No |
| QingStor | No | Yes | No | No | No | Yes | No |
| SFTP | No | No | Yes | Yes | No | No | Yes |
| Yandex Disk | Yes | No | No | No | Yes | Yes | Yes |
diff --git a/docs/content/pcloud.md b/docs/content/pcloud.md
new file mode 100644
index 000000000..105c97856
--- /dev/null
+++ b/docs/content/pcloud.md
@@ -0,0 +1,135 @@
+---
+title: "pCloud"
+description: "Rclone docs for pCloud"
+date: "2017-10-01"
+---
+
+ pCloud
+-----------------------------------------
+
+Paths are specified as `remote:path`
+
+Paths may be as deep as required, eg `remote:directory/subdirectory`.
+
+The initial setup for pCloud involves getting a token from pCloud which you
+need to do in your browser. `rclone config` walks you through it.
+
+Here is an example of how to make a remote called `remote`. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Box
+ \ "box"
+ 5 / Dropbox
+ \ "dropbox"
+ 6 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 7 / FTP Connection
+ \ "ftp"
+ 8 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 9 / Google Drive
+ \ "drive"
+10 / Hubic
+ \ "hubic"
+11 / Local Disk
+ \ "local"
+12 / Microsoft Azure Blob Storage
+ \ "azureblob"
+13 / Microsoft OneDrive
+ \ "onedrive"
+14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+15 / Pcloud
+ \ "pcloud"
+16 / QingCloud Object Storage
+ \ "qingstor"
+17 / SSH/SFTP Connection
+ \ "sftp"
+18 / Yandex Disk
+ \ "yandex"
+19 / http Connection
+ \ "http"
+Storage> pcloud
+Pcloud App Client Id - leave blank normally.
+client_id>
+Pcloud App Client Secret - leave blank normally.
+client_secret>
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+client_id =
+client_secret =
+token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+See the [remote setup docs](/remote_setup/) for how to set it up on a
+machine with no Internet browser available.
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from pCloud. This only runs from the moment it opens
+your browser to the moment you get back the verification code. This
+is on `http://127.0.0.1:53682/` and it may require you to unblock
+it temporarily if you are running a host firewall.
+
+Once configured you can then use `rclone` like this,
+
+List directories in top level of your pCloud
+
+ rclone lsd remote:
+
+List all the files in your pCloud
+
+ rclone ls remote:
+
+To copy a local directory to a pCloud directory called backup
+
+ rclone copy /home/source remote:backup
+
+### Modified time and hashes ###
+
+pCloud allows modification times to be set on objects accurate to 1
+second. These will be used to detect whether objects need syncing or
+not. In order to set a modification time pCloud requires the object
+to be re-uploaded.
+
+pCloud supports MD5 and SHA1 type hashes, so you can use the
+`--checksum` flag.
+
+### Deleting files ###
+
+Deleted files will be moved to the trash. Your subscription level
+will determine how long items stay in the trash. `rclone cleanup` can
+be used to empty the trash.
diff --git a/docs/layouts/chrome/navbar.html b/docs/layouts/chrome/navbar.html
index 572d06943..27c6b23b2 100644
--- a/docs/layouts/chrome/navbar.html
+++ b/docs/layouts/chrome/navbar.html
@@ -64,6 +64,7 @@
Microsoft OneDrive
QingStor
Openstack Swift
+ pCloud
SFTP
Yandex Disk
The local filesystem
diff --git a/fs/all/all.go b/fs/all/all.go
index 51921b7b5..c0bcd9982 100644
--- a/fs/all/all.go
+++ b/fs/all/all.go
@@ -15,6 +15,7 @@ import (
_ "github.com/ncw/rclone/hubic"
_ "github.com/ncw/rclone/local"
_ "github.com/ncw/rclone/onedrive"
+ _ "github.com/ncw/rclone/pcloud"
_ "github.com/ncw/rclone/qingstor"
_ "github.com/ncw/rclone/s3"
_ "github.com/ncw/rclone/sftp"
diff --git a/fs/test_all.go b/fs/test_all.go
index a518b2c41..a890c8daf 100644
--- a/fs/test_all.go
+++ b/fs/test_all.go
@@ -113,6 +113,11 @@ var (
SubDir: true,
FastList: true,
},
+ {
+ Name: "TestPcloud:",
+ SubDir: false,
+ FastList: false,
+ },
}
binary = "fs.test"
// Flags
diff --git a/fstest/fstests/gen_tests.go b/fstest/fstests/gen_tests.go
index 4aa3565f0..2c9f1e128 100644
--- a/fstest/fstests/gen_tests.go
+++ b/fstest/fstests/gen_tests.go
@@ -164,5 +164,6 @@ func main() {
generateTestProgram(t, fns, "Box")
generateTestProgram(t, fns, "QingStor", buildConstraint("!plan9"))
generateTestProgram(t, fns, "AzureBlob", buildConstraint("go1.7"))
+ generateTestProgram(t, fns, "Pcloud")
log.Printf("Done")
}
diff --git a/pcloud/api/types.go b/pcloud/api/types.go
new file mode 100644
index 000000000..bf4ad3032
--- /dev/null
+++ b/pcloud/api/types.go
@@ -0,0 +1,153 @@
+// Package api has type definitions for pcloud
+//
+// Converted from the API docs with help from https://mholt.github.io/json-to-go/
+package api
+
+import (
+ "fmt"
+ "time"
+)
+
+const (
+ // Sun, 16 Mar 2014 17:26:04 +0000
+ timeFormat = `"` + time.RFC1123Z + `"`
+)
+
+// Time represents date and time information for the
+// pcloud API, using RFC1123Z
+type Time time.Time
+
+// MarshalJSON turns a Time into JSON (in UTC)
+func (t *Time) MarshalJSON() (out []byte, err error) {
+ timeString := (*time.Time)(t).Format(timeFormat)
+ return []byte(timeString), nil
+}
+
+// UnmarshalJSON turns JSON into a Time
+func (t *Time) UnmarshalJSON(data []byte) error {
+ newT, err := time.Parse(timeFormat, string(data))
+ if err != nil {
+ return err
+ }
+ *t = Time(newT)
+ return nil
+}
+
+// Error is returned from pcloud when things go wrong
+//
+// If result is 0 then everything is OK
+type Error struct {
+ Result int `json:"result"`
+ ErrorString string `json:"error"`
+}
+
+// Error returns a string for the error and satisfies the error interface
+func (e *Error) Error() string {
+ return fmt.Sprintf("pcloud error: %s (%d)", e.ErrorString, e.Result)
+}
+
+// Update returns err directly if it was != nil, otherwise it returns
+// an Error or nil if no error was detected
+func (e *Error) Update(err error) error {
+ if err != nil {
+ return err
+ }
+ if e.Result == 0 {
+ return nil
+ }
+ return e
+}
+
+// Check Error satisfies the error interface
+var _ error = (*Error)(nil)
+
+// Item describes a folder or a file as returned by Get Folder Items and others
+type Item struct {
+ Path string `json:"path"`
+ Name string `json:"name"`
+ Created Time `json:"created"`
+ IsMine bool `json:"ismine"`
+ Thumb bool `json:"thumb"`
+ Modified Time `json:"modified"`
+ Comments int `json:"comments"`
+ ID string `json:"id"`
+ IsShared bool `json:"isshared"`
+ IsDeleted bool `json:"isdeleted"`
+ Icon string `json:"icon"`
+ IsFolder bool `json:"isfolder"`
+ ParentFolderID int64 `json:"parentfolderid"`
+ FolderID int64 `json:"folderid,omitempty"`
+ Height int `json:"height,omitempty"`
+ FileID int64 `json:"fileid,omitempty"`
+ Width int `json:"width,omitempty"`
+ Hash uint64 `json:"hash,omitempty"`
+ Category int `json:"category,omitempty"`
+ Size int64 `json:"size,omitempty"`
+ ContentType string `json:"contenttype,omitempty"`
+ Contents []Item `json:"contents"`
+}
+
+// ModTime returns the modification time of the item
+func (i *Item) ModTime() (t time.Time) {
+ t = time.Time(i.Modified)
+ if t.IsZero() {
+ t = time.Time(i.Created)
+ }
+ return t
+}
+
+// ItemResult is returned from the /listfolder, /createfolder, /deletefolder, /deletefile etc methods
+type ItemResult struct {
+ Error
+ Metadata Item `json:"metadata"`
+}
+
+// Hashes contains the supported hashes
+type Hashes struct {
+ SHA1 string `json:"sha1"`
+ MD5 string `json:"md5"`
+}
+
+// UploadFileResponse is the response from /uploadfile
+type UploadFileResponse struct {
+ Error
+ Items []Item `json:"metadata"`
+ Checksums []Hashes `json:"checksums"`
+ Fileids []int64 `json:"fileids"`
+}
+
+// GetFileLinkResult is returned from /getfilelink
+type GetFileLinkResult struct {
+ Error
+ Dwltag string `json:"dwltag"`
+ Hash uint64 `json:"hash"`
+ Size int64 `json:"size"`
+ Expires Time `json:"expires"`
+ Path string `json:"path"`
+ Hosts []string `json:"hosts"`
+}
+
+// IsValid returns whether the link is valid and has not expired
+func (g *GetFileLinkResult) IsValid() bool {
+ if g == nil {
+ return false
+ }
+ if len(g.Hosts) == 0 {
+ return false
+ }
+ return time.Until(time.Time(g.Expires)) > 30*time.Second
+}
+
+// URL returns a URL from the Path and Hosts. Check with IsValid
+// before calling.
+func (g *GetFileLinkResult) URL() string {
+ // FIXME rotate the hosts?
+ return "https://" + g.Hosts[0] + g.Path
+}
+
+// ChecksumFileResult is returned from /checksumfile
+type ChecksumFileResult struct {
+ Error
+ Hashes
+ Metadata Item `json:"metadata"`
+}
diff --git a/pcloud/pcloud.go b/pcloud/pcloud.go
new file mode 100644
index 000000000..75f6e68d8
--- /dev/null
+++ b/pcloud/pcloud.go
@@ -0,0 +1,1111 @@
+// Package pcloud provides an interface to the Pcloud
+// object storage system.
+package pcloud
+
+// FIXME implement ListR? /listfolder can do recursive lists
+
+// FIXME cleanup returns login required?
+
+// FIXME mime type? Fix overview if implement.
+
+import (
+ "bytes"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "log"
+ "net/http"
+ "net/url"
+ "path"
+ "regexp"
+ "strings"
+ "time"
+
+ "github.com/ncw/rclone/dircache"
+ "github.com/ncw/rclone/fs"
+ "github.com/ncw/rclone/oauthutil"
+ "github.com/ncw/rclone/pacer"
+ "github.com/ncw/rclone/pcloud/api"
+ "github.com/ncw/rclone/rest"
+ "github.com/pkg/errors"
+ "golang.org/x/oauth2"
+)
+
+const (
+ rcloneClientID = "DnONSzyJXpm"
+ rcloneEncryptedClientSecret = "ej1OIF39VOQQ0PXaSdK9ztkLw3tdLNscW2157TKNQdQKkICR4uU7aFg4eFM"
+ minSleep = 10 * time.Millisecond
+ maxSleep = 2 * time.Second
+ decayConstant = 2 // bigger for slower decay, exponential
+ rootID = "d0" // ID of root folder is always this
+ rootURL = "https://api.pcloud.com"
+)
+
+// Globals
+var (
+ // Description of how to auth for this app
+ oauthConfig = &oauth2.Config{
+ Scopes: nil,
+ Endpoint: oauth2.Endpoint{
+ AuthURL: "https://my.pcloud.com/oauth2/authorize",
+ TokenURL: "https://api.pcloud.com/oauth2_token",
+ },
+ ClientID: rcloneClientID,
+ ClientSecret: fs.MustReveal(rcloneEncryptedClientSecret),
+ RedirectURL: oauthutil.RedirectLocalhostURL,
+ }
+ uploadCutoff = fs.SizeSuffix(50 * 1024 * 1024)
+)
+
+// Register with Fs
+func init() {
+ fs.Register(&fs.RegInfo{
+ Name: "pcloud",
+ Description: "Pcloud",
+ NewFs: NewFs,
+ Config: func(name string) {
+ err := oauthutil.Config("pcloud", name, oauthConfig)
+ if err != nil {
+ log.Fatalf("Failed to configure token: %v", err)
+ }
+ },
+ Options: []fs.Option{{
+ Name: fs.ConfigClientID,
+ Help: "Pcloud App Client Id - leave blank normally.",
+ }, {
+ Name: fs.ConfigClientSecret,
+ Help: "Pcloud App Client Secret - leave blank normally.",
+ }},
+ })
+ fs.VarP(&uploadCutoff, "pcloud-upload-cutoff", "", "Cutoff for switching to multipart upload")
+}
+
+// Fs represents a remote pcloud
+type Fs struct {
+ name string // name of this remote
+ root string // the path we are working on
+ features *fs.Features // optional features
+ srv *rest.Client // the connection to the server
+ dirCache *dircache.DirCache // Map of directory path to directory id
+ pacer *pacer.Pacer // pacer for API calls
+ tokenRenewer *oauthutil.Renew // renew the token on expiry
+ uploadToken *pacer.TokenDispenser // control concurrency
+}
+
+// Object describes a pcloud object
+//
+// Will definitely have info but maybe not meta
+type Object struct {
+ fs *Fs // what this object is part of
+ remote string // The remote path
+ hasMetaData bool // whether info below has been set
+ size int64 // size of the object
+ modTime time.Time // modification time of the object
+ id string // ID of the object
+ md5 string // MD5 if known
+ sha1 string // SHA1 if known
+ link *api.GetFileLinkResult
+}
+
+// ------------------------------------------------------------
+
+// Name of the remote (as passed into NewFs)
+func (f *Fs) Name() string {
+ return f.name
+}
+
+// Root of the remote (as passed into NewFs)
+func (f *Fs) Root() string {
+ return f.root
+}
+
+// String converts this Fs to a string
+func (f *Fs) String() string {
+ return fmt.Sprintf("pcloud root '%s'", f.root)
+}
+
+// Features returns the optional features of this Fs
+func (f *Fs) Features() *fs.Features {
+ return f.features
+}
+
+// Pattern to match a pcloud path
+var matcher = regexp.MustCompile(`^([^/]*)(.*)$`)
+
+// parsePath parses a pcloud 'url'
+func parsePath(path string) (root string) {
+ root = strings.Trim(path, "/")
+ return
+}
+
+// retryErrorCodes is a slice of error codes that we will retry
+var retryErrorCodes = []int{
+ 429, // Too Many Requests.
+ 500, // Internal Server Error
+ 502, // Bad Gateway
+ 503, // Service Unavailable
+ 504, // Gateway Timeout
+ 509, // Bandwidth Limit Exceeded
+}
+
+// shouldRetry returns a boolean as to whether this resp and err
+// deserve to be retried. It returns the err as a convenience
+func shouldRetry(resp *http.Response, err error) (bool, error) {
+ doRetry := false
+
+ // Check if it is an api.Error
+ if apiErr, ok := err.(*api.Error); ok {
+ // See https://docs.pcloud.com/errors/ for error treatment
+ // Errors are classified as 1xxx, 2xxx etc
+ switch apiErr.Result / 1000 {
+ case 4: // 4xxx: rate limiting
+ doRetry = true
+ case 5: // 5xxx: internal errors
+ doRetry = true
+ }
+ }
+
+ if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
+ doRetry = true
+ fs.Debugf(nil, "Should retry: %v", err)
+ }
+ return doRetry || fs.ShouldRetry(err) || fs.ShouldRetryHTTP(resp, retryErrorCodes), err
+}
+
+// substitute reserved characters for pcloud
+//
+// Generally all characters are allowed in filenames, except the NULL
+// byte, forward and backslash (/,\ and \0)
+func replaceReservedChars(x string) string {
+	// Backslash for FULLWIDTH REVERSE SOLIDUS
+	return strings.Replace(x, "\\", "＼", -1)
+}
+
+// restore reserved characters for pcloud
+func restoreReservedChars(x string) string {
+	// FULLWIDTH REVERSE SOLIDUS for Backslash
+	return strings.Replace(x, "＼", "\\", -1)
+}
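The substitution above maps the reserved backslash to FULLWIDTH REVERSE SOLIDUS (U+FF3C) and back again on listing. A standalone round-trip sketch:

```go
package main

import (
	"fmt"
	"strings"
)

// replaceReservedChars swaps backslash for FULLWIDTH REVERSE SOLIDUS (U+FF3C)
// so the name is acceptable to pcloud.
func replaceReservedChars(x string) string {
	return strings.Replace(x, "\\", "＼", -1)
}

// restoreReservedChars reverses the substitution on the way back.
func restoreReservedChars(x string) string {
	return strings.Replace(x, "＼", "\\", -1)
}

func main() {
	name := `dir\name`
	safe := replaceReservedChars(name)
	fmt.Println(safe)                               // dir＼name
	fmt.Println(restoreReservedChars(safe) == name) // true
}
```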
+
+// readMetaDataForPath reads the metadata from the path
+func (f *Fs) readMetaDataForPath(path string) (info *api.Item, err error) {
+ // defer fs.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err)
+ leaf, directoryID, err := f.dirCache.FindRootAndPath(path, false)
+ if err != nil {
+ if err == fs.ErrorDirNotFound {
+ return nil, fs.ErrorObjectNotFound
+ }
+ return nil, err
+ }
+
+ found, err := f.listAll(directoryID, false, true, func(item *api.Item) bool {
+ if item.Name == leaf {
+ info = item
+ return true
+ }
+ return false
+ })
+ if err != nil {
+ return nil, err
+ }
+ if !found {
+ return nil, fs.ErrorObjectNotFound
+ }
+ return info, nil
+}
+
+// errorHandler parses a non 2xx error response into an error
+func errorHandler(resp *http.Response) error {
+ // Decode error response
+ errResponse := new(api.Error)
+ err := rest.DecodeJSON(resp, &errResponse)
+ if err != nil {
+ fs.Debugf(nil, "Couldn't decode error response: %v", err)
+ }
+ if errResponse.ErrorString == "" {
+ errResponse.ErrorString = resp.Status
+ }
+ if errResponse.Result == 0 {
+ errResponse.Result = resp.StatusCode
+ }
+ return errResponse
+}
+
+// NewFs constructs an Fs from the path, container:path
+func NewFs(name, root string) (fs.Fs, error) {
+ root = parsePath(root)
+ oAuthClient, ts, err := oauthutil.NewClient(name, oauthConfig)
+ if err != nil {
+ log.Fatalf("Failed to configure Pcloud: %v", err)
+ }
+
+ f := &Fs{
+ name: name,
+ root: root,
+ srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
+ pacer: pacer.New().SetMinSleep(minSleep).SetMaxSleep(maxSleep).SetDecayConstant(decayConstant),
+ uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
+ }
+ f.features = (&fs.Features{
+ CaseInsensitive: false,
+ CanHaveEmptyDirectories: true,
+ }).Fill(f)
+ f.srv.SetErrorHandler(errorHandler)
+
+ // Renew the token in the background
+ f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
+ _, err := f.readMetaDataForPath("")
+ return err
+ })
+
+ // Get rootID
+ f.dirCache = dircache.New(root, rootID, f)
+
+ // Find the current root
+ err = f.dirCache.FindRoot(false)
+ if err != nil {
+ // Assume it is a file
+ newRoot, remote := dircache.SplitPath(root)
+ newF := *f
+ newF.dirCache = dircache.New(newRoot, rootID, &newF)
+ newF.root = newRoot
+ // Make new Fs which is the parent
+ err = newF.dirCache.FindRoot(false)
+ if err != nil {
+ // No root so return old f
+ return f, nil
+ }
+ _, err := newF.newObjectWithInfo(remote, nil)
+ if err != nil {
+ if err == fs.ErrorObjectNotFound {
+ // File doesn't exist so return old f
+ return f, nil
+ }
+ return nil, err
+ }
+ // return an error with an fs which points to the parent
+ return &newF, fs.ErrorIsFile
+ }
+ return f, nil
+}
+
+// Return an Object from a path
+//
+// If it can't be found it returns the error fs.ErrorObjectNotFound.
+func (f *Fs) newObjectWithInfo(remote string, info *api.Item) (fs.Object, error) {
+ o := &Object{
+ fs: f,
+ remote: remote,
+ }
+ var err error
+ if info != nil {
+ // Set info
+ err = o.setMetaData(info)
+ } else {
+ err = o.readMetaData() // reads info and meta, returning an error
+ }
+ if err != nil {
+ return nil, err
+ }
+ return o, nil
+}
+
+// NewObject finds the Object at remote. If it can't be found
+// it returns the error fs.ErrorObjectNotFound.
+func (f *Fs) NewObject(remote string) (fs.Object, error) {
+ return f.newObjectWithInfo(remote, nil)
+}
+
+// FindLeaf finds a directory of name leaf in the folder with ID pathID
+func (f *Fs) FindLeaf(pathID, leaf string) (pathIDOut string, found bool, err error) {
+ // Find the leaf in pathID
+ found, err = f.listAll(pathID, true, false, func(item *api.Item) bool {
+ if item.Name == leaf {
+ pathIDOut = item.ID
+ return true
+ }
+ return false
+ })
+ return pathIDOut, found, err
+}
+
+// CreateDir makes a directory with pathID as parent and name leaf
+func (f *Fs) CreateDir(pathID, leaf string) (newID string, err error) {
+ // fs.Debugf(f, "CreateDir(%q, %q)\n", pathID, leaf)
+ var resp *http.Response
+ var result api.ItemResult
+ opts := rest.Opts{
+ Method: "POST",
+ Path: "/createfolder",
+ Parameters: url.Values{},
+ }
+ opts.Parameters.Set("name", replaceReservedChars(leaf))
+ opts.Parameters.Set("folderid", dirIDtoNumber(pathID))
+ err = f.pacer.Call(func() (bool, error) {
+ resp, err = f.srv.CallJSON(&opts, nil, &result)
+ err = result.Error.Update(err)
+ return shouldRetry(resp, err)
+ })
+ if err != nil {
+ //fmt.Printf("...Error %v\n", err)
+ return "", err
+ }
+ // fmt.Printf("...Id %q\n", *info.Id)
+ return result.Metadata.ID, nil
+}
+
+// Converts a dirID which is usually 'd' followed by digits into just
+// the digits
+func dirIDtoNumber(dirID string) string {
+ if len(dirID) > 0 && dirID[0] == 'd' {
+ return dirID[1:]
+ }
+ fs.Debugf(nil, "Invalid directory id %q", dirID)
+ return dirID
+}
+
+// Converts a fileID which is usually 'f' followed by digits into just
+// the digits
+func fileIDtoNumber(fileID string) string {
+	if len(fileID) > 0 && fileID[0] == 'f' {
+		return fileID[1:]
+	}
+	fs.Debugf(nil, "Invalid file id %q", fileID)
+	return fileID
+}
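These helpers just strip the single-letter type prefix pcloud puts on its IDs (`d` for directories, `f` for files), since the API parameters want the bare number. A standalone sketch, with the `fs.Debugf` fallback logging dropped so it runs without rclone:

```go
package main

import "fmt"

// dirIDtoNumber strips the leading 'd' from a directory ID like "d12345".
func dirIDtoNumber(dirID string) string {
	if len(dirID) > 0 && dirID[0] == 'd' {
		return dirID[1:]
	}
	return dirID
}

// fileIDtoNumber strips the leading 'f' from a file ID like "f67890".
func fileIDtoNumber(fileID string) string {
	if len(fileID) > 0 && fileID[0] == 'f' {
		return fileID[1:]
	}
	return fileID
}

func main() {
	fmt.Println(dirIDtoNumber("d0"))      // 0  (the root folder)
	fmt.Println(fileIDtoNumber("f67890")) // 67890
}
```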
+
+// list the objects into the function supplied
+//
+// If directories is set it only sends directories
+// User function to process a File item from listAll
+//
+// Should return true to finish processing
+type listAllFn func(*api.Item) bool
+
+// Lists the directory required calling the user function on each item found
+//
+// If the user fn ever returns true then it early exits with found = true
+func (f *Fs) listAll(dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
+ opts := rest.Opts{
+ Method: "GET",
+ Path: "/listfolder",
+ Parameters: url.Values{},
+ }
+ opts.Parameters.Set("folderid", dirIDtoNumber(dirID))
+ // FIXME can do recursive
+
+ var result api.ItemResult
+ var resp *http.Response
+ err = f.pacer.Call(func() (bool, error) {
+ resp, err = f.srv.CallJSON(&opts, nil, &result)
+ err = result.Error.Update(err)
+ return shouldRetry(resp, err)
+ })
+ if err != nil {
+ return found, errors.Wrap(err, "couldn't list files")
+ }
+ for i := range result.Metadata.Contents {
+ item := &result.Metadata.Contents[i]
+ if item.IsFolder {
+ if filesOnly {
+ continue
+ }
+ } else {
+ if directoriesOnly {
+ continue
+ }
+ }
+ item.Name = restoreReservedChars(item.Name)
+ if fn(item) {
+ found = true
+ break
+ }
+ }
+ return
+}
+
+// List the objects and directories in dir into entries. The
+// entries can be returned in any order but should be for a
+// complete directory.
+//
+// dir should be "" to list the root, and should not have
+// trailing slashes.
+//
+// This should return ErrDirNotFound if the directory isn't
+// found.
+func (f *Fs) List(dir string) (entries fs.DirEntries, err error) {
+ err = f.dirCache.FindRoot(false)
+ if err != nil {
+ return nil, err
+ }
+ directoryID, err := f.dirCache.FindDir(dir, false)
+ if err != nil {
+ return nil, err
+ }
+ var iErr error
+ _, err = f.listAll(directoryID, false, false, func(info *api.Item) bool {
+ remote := path.Join(dir, info.Name)
+ if info.IsFolder {
+ // cache the directory ID for later lookups
+ f.dirCache.Put(remote, info.ID)
+ d := fs.NewDir(remote, info.ModTime()).SetID(info.ID)
+ // FIXME more info from dir?
+ entries = append(entries, d)
+ } else {
+ o, err := f.newObjectWithInfo(remote, info)
+ if err != nil {
+ iErr = err
+ return true
+ }
+ entries = append(entries, o)
+ }
+ return false
+ })
+ if err != nil {
+ return nil, err
+ }
+ if iErr != nil {
+ return nil, iErr
+ }
+ return entries, nil
+}
+
+// Creates from the parameters passed in a half finished Object which
+// must have setMetaData called on it
+//
+// Returns the object, leaf, directoryID and error
+//
+// Used to create new objects
+func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Object, leaf string, directoryID string, err error) {
+ // Create the directory for the object if it doesn't exist
+ leaf, directoryID, err = f.dirCache.FindRootAndPath(remote, true)
+ if err != nil {
+ return
+ }
+ // Temporary Object under construction
+ o = &Object{
+ fs: f,
+ remote: remote,
+ }
+ return o, leaf, directoryID, nil
+}
+
+// Put the object into the container
+//
+// Copy the reader in to the new object which is returned
+//
+// The new object may have been created if an error is returned
+func (f *Fs) Put(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
+ remote := src.Remote()
+ size := src.Size()
+ modTime := src.ModTime()
+
+ o, _, _, err := f.createObject(remote, modTime, size)
+ if err != nil {
+ return nil, err
+ }
+ return o, o.Update(in, src, options...)
+}
+
+// Mkdir creates the container if it doesn't exist
+func (f *Fs) Mkdir(dir string) error {
+ err := f.dirCache.FindRoot(true)
+ if err != nil {
+ return err
+ }
+ if dir != "" {
+ _, err = f.dirCache.FindDir(dir, true)
+ }
+ return err
+}
+
+// purgeCheck removes the root directory, if check is set then it
+// refuses to do so if it has anything in
+func (f *Fs) purgeCheck(dir string, check bool) error {
+ root := path.Join(f.root, dir)
+ if root == "" {
+ return errors.New("can't purge root directory")
+ }
+ dc := f.dirCache
+ err := dc.FindRoot(false)
+ if err != nil {
+ return err
+ }
+ rootID, err := dc.FindDir(dir, false)
+ if err != nil {
+ return err
+ }
+
+ opts := rest.Opts{
+ Method: "POST",
+ Path: "/deletefolder",
+ Parameters: url.Values{},
+ }
+ opts.Parameters.Set("folderid", dirIDtoNumber(rootID))
+ if !check {
+ opts.Path = "/deletefolderrecursive"
+ }
+ var resp *http.Response
+ var result api.ItemResult
+ err = f.pacer.Call(func() (bool, error) {
+ resp, err = f.srv.CallJSON(&opts, nil, &result)
+ err = result.Error.Update(err)
+ return shouldRetry(resp, err)
+ })
+ if err != nil {
+ return errors.Wrap(err, "rmdir failed")
+ }
+ f.dirCache.FlushDir(dir)
+ if err != nil {
+ return err
+ }
+ return nil
+}
+
+// Rmdir deletes the root folder
+//
+// Returns an error if it isn't empty
+func (f *Fs) Rmdir(dir string) error {
+ return f.purgeCheck(dir, true)
+}
+
+// Precision return the precision of this Fs
+func (f *Fs) Precision() time.Duration {
+ return time.Second
+}
+
+// Copy src to this remote using server side copy operations.
+//
+// This is stored with the remote path given
+//
+// It returns the destination Object and a possible error
+//
+// Will only be called if src.Fs().Name() == f.Name()
+//
+// If it isn't possible then return fs.ErrorCantCopy
+func (f *Fs) Copy(src fs.Object, remote string) (fs.Object, error) {
+ srcObj, ok := src.(*Object)
+ if !ok {
+ fs.Debugf(src, "Can't copy - not same remote type")
+ return nil, fs.ErrorCantCopy
+ }
+ err := srcObj.readMetaData()
+ if err != nil {
+ return nil, err
+ }
+
+ // Create temporary object
+ dstObj, leaf, directoryID, err := f.createObject(remote, srcObj.modTime, srcObj.size)
+ if err != nil {
+ return nil, err
+ }
+
+ // Copy the object
+ opts := rest.Opts{
+ Method: "POST",
+ Path: "/copyfile",
+ Parameters: url.Values{},
+ }
+ opts.Parameters.Set("fileid", fileIDtoNumber(srcObj.id))
+ opts.Parameters.Set("toname", replaceReservedChars(leaf))
+ opts.Parameters.Set("tofolderid", dirIDtoNumber(directoryID))
+ opts.Parameters.Set("mtime", fmt.Sprintf("%d", srcObj.modTime.Unix()))
+ var resp *http.Response
+ var result api.ItemResult
+ err = f.pacer.Call(func() (bool, error) {
+ resp, err = f.srv.CallJSON(&opts, nil, &result)
+ err = result.Error.Update(err)
+ return shouldRetry(resp, err)
+ })
+ if err != nil {
+ return nil, err
+ }
+ err = dstObj.setMetaData(&result.Metadata)
+ if err != nil {
+ return nil, err
+ }
+ return dstObj, nil
+}
+
+// Purge deletes all the files and the container
+//
+// Optional interface: Only implement this if you have a way of
+// deleting all the files quicker than just running Remove() on the
+// result of List()
+func (f *Fs) Purge() error {
+ return f.purgeCheck("", false)
+}
+
+// CleanUp empties the trash
+func (f *Fs) CleanUp() error {
+ err := f.dirCache.FindRoot(false)
+ if err != nil {
+ return err
+ }
+ opts := rest.Opts{
+ Method: "POST",
+ Path: "/trash_clear",
+ Parameters: url.Values{},
+ }
+ opts.Parameters.Set("folderid", dirIDtoNumber(f.dirCache.RootID()))
+ var resp *http.Response
+ var result api.Error
+ return f.pacer.Call(func() (bool, error) {
+ resp, err = f.srv.CallJSON(&opts, nil, &result)
+ err = result.Update(err)
+ return shouldRetry(resp, err)
+ })
+}
+
+// Move src to this remote using server side move operations.
+//
+// This is stored with the remote path given
+//
+// It returns the destination Object and a possible error
+//
+// Will only be called if src.Fs().Name() == f.Name()
+//
+// If it isn't possible then return fs.ErrorCantMove
+func (f *Fs) Move(src fs.Object, remote string) (fs.Object, error) {
+ srcObj, ok := src.(*Object)
+ if !ok {
+ fs.Debugf(src, "Can't move - not same remote type")
+ return nil, fs.ErrorCantMove
+ }
+
+ // Create temporary object
+ dstObj, leaf, directoryID, err := f.createObject(remote, srcObj.modTime, srcObj.size)
+ if err != nil {
+ return nil, err
+ }
+
+ // Do the move
+ opts := rest.Opts{
+ Method: "POST",
+ Path: "/renamefile",
+ Parameters: url.Values{},
+ }
+ opts.Parameters.Set("fileid", fileIDtoNumber(srcObj.id))
+ opts.Parameters.Set("toname", replaceReservedChars(leaf))
+ opts.Parameters.Set("tofolderid", dirIDtoNumber(directoryID))
+ var resp *http.Response
+ var result api.ItemResult
+ err = f.pacer.Call(func() (bool, error) {
+ resp, err = f.srv.CallJSON(&opts, nil, &result)
+ err = result.Error.Update(err)
+ return shouldRetry(resp, err)
+ })
+ if err != nil {
+ return nil, err
+ }
+
+ err = dstObj.setMetaData(&result.Metadata)
+ if err != nil {
+ return nil, err
+ }
+ return dstObj, nil
+}
+
+// DirMove moves src, srcRemote to this remote at dstRemote
+// using server side move operations.
+//
+// Will only be called if src.Fs().Name() == f.Name()
+//
+// If it isn't possible then return fs.ErrorCantDirMove
+//
+// If destination exists then return fs.ErrorDirExists
+func (f *Fs) DirMove(src fs.Fs, srcRemote, dstRemote string) error {
+ srcFs, ok := src.(*Fs)
+ if !ok {
+ fs.Debugf(srcFs, "Can't move directory - not same remote type")
+ return fs.ErrorCantDirMove
+ }
+ srcPath := path.Join(srcFs.root, srcRemote)
+ dstPath := path.Join(f.root, dstRemote)
+
+ // Refuse to move to or from the root
+ if srcPath == "" || dstPath == "" {
+ fs.Debugf(src, "DirMove error: Can't move root")
+ return errors.New("can't move root directory")
+ }
+
+ // find the root src directory
+ err := srcFs.dirCache.FindRoot(false)
+ if err != nil {
+ return err
+ }
+
+ // find the root dst directory
+ if dstRemote != "" {
+ err = f.dirCache.FindRoot(true)
+ if err != nil {
+ return err
+ }
+ } else {
+ if f.dirCache.FoundRoot() {
+ return fs.ErrorDirExists
+ }
+ }
+
+ // Find ID of dst parent, creating subdirs if necessary
+ var leaf, directoryID string
+ findPath := dstRemote
+ if dstRemote == "" {
+ findPath = f.root
+ }
+ leaf, directoryID, err = f.dirCache.FindPath(findPath, true)
+ if err != nil {
+ return err
+ }
+
+ // Check destination does not exist
+ if dstRemote != "" {
+ _, err = f.dirCache.FindDir(dstRemote, false)
+ if err == fs.ErrorDirNotFound {
+ // OK
+ } else if err != nil {
+ return err
+ } else {
+ return fs.ErrorDirExists
+ }
+ }
+
+ // Find ID of src
+ srcID, err := srcFs.dirCache.FindDir(srcRemote, false)
+ if err != nil {
+ return err
+ }
+
+ // Do the move
+ opts := rest.Opts{
+ Method: "POST",
+ Path: "/renamefolder",
+ Parameters: url.Values{},
+ }
+ opts.Parameters.Set("folderid", dirIDtoNumber(srcID))
+ opts.Parameters.Set("toname", replaceReservedChars(leaf))
+ opts.Parameters.Set("tofolderid", dirIDtoNumber(directoryID))
+ var resp *http.Response
+ var result api.ItemResult
+ err = f.pacer.Call(func() (bool, error) {
+ resp, err = f.srv.CallJSON(&opts, nil, &result)
+ err = result.Error.Update(err)
+ return shouldRetry(resp, err)
+ })
+ if err != nil {
+ return err
+ }
+
+ srcFs.dirCache.FlushDir(srcRemote)
+ return nil
+}
+
+// DirCacheFlush resets the directory cache - used in testing as an
+// optional interface
+func (f *Fs) DirCacheFlush() {
+ f.dirCache.ResetRoot()
+}
+
+// Hashes returns the supported hash sets.
+func (f *Fs) Hashes() fs.HashSet {
+ return fs.HashSet(fs.HashMD5 | fs.HashSHA1)
+}
+
+// ------------------------------------------------------------
+
+// Fs returns the parent Fs
+func (o *Object) Fs() fs.Info {
+ return o.fs
+}
+
+// Return a string version
+func (o *Object) String() string {
+ if o == nil {
+ return ""
+ }
+ return o.remote
+}
+
+// Remote returns the remote path
+func (o *Object) Remote() string {
+ return o.remote
+}
+
+// getHashes fetches the hashes into the object
+func (o *Object) getHashes() (err error) {
+ var resp *http.Response
+ var result api.ChecksumFileResult
+ opts := rest.Opts{
+ Method: "GET",
+ Path: "/checksumfile",
+ Parameters: url.Values{},
+ }
+ opts.Parameters.Set("fileid", fileIDtoNumber(o.id))
+ err = o.fs.pacer.Call(func() (bool, error) {
+ resp, err = o.fs.srv.CallJSON(&opts, nil, &result)
+ err = result.Error.Update(err)
+ return shouldRetry(resp, err)
+ })
+ if err != nil {
+ return err
+ }
+ o.setHashes(&result.Hashes)
+ return o.setMetaData(&result.Metadata)
+}
+
+// Hash returns the MD5 or SHA-1 checksum of an object as a lowercase hex string
+func (o *Object) Hash(t fs.HashType) (string, error) {
+ if t != fs.HashMD5 && t != fs.HashSHA1 {
+ return "", fs.ErrHashUnsupported
+ }
+ if o.md5 == "" && o.sha1 == "" {
+ err := o.getHashes()
+ if err != nil {
+ return "", errors.Wrap(err, "failed to get hash")
+ }
+ }
+ if t == fs.HashMD5 {
+ return o.md5, nil
+ }
+ return o.sha1, nil
+}
+
+// Size returns the size of an object in bytes
+func (o *Object) Size() int64 {
+ err := o.readMetaData()
+ if err != nil {
+ fs.Logf(o, "Failed to read metadata: %v", err)
+ return 0
+ }
+ return o.size
+}
+
+// setMetaData sets the metadata from info
+func (o *Object) setMetaData(info *api.Item) (err error) {
+ if info.IsFolder {
+ return errors.Wrapf(fs.ErrorNotAFile, "%q is a folder", o.remote)
+ }
+ o.hasMetaData = true
+ o.size = info.Size
+ o.modTime = info.ModTime()
+ o.id = info.ID
+ return nil
+}
+
+// setHashes sets the hashes from that passed in
+func (o *Object) setHashes(hashes *api.Hashes) {
+ o.sha1 = hashes.SHA1
+ o.md5 = hashes.MD5
+}
+
+// readMetaData gets the metadata if it hasn't already been fetched
+//
+// it also sets the info
+func (o *Object) readMetaData() (err error) {
+ if o.hasMetaData {
+ return nil
+ }
+ info, err := o.fs.readMetaDataForPath(o.remote)
+ if err != nil {
+ //if apiErr, ok := err.(*api.Error); ok {
+ // FIXME
+ // if apiErr.Code == "not_found" || apiErr.Code == "trashed" {
+ // return fs.ErrorObjectNotFound
+ // }
+ //}
+ return err
+ }
+ return o.setMetaData(info)
+}
+
+// ModTime returns the modification time of the object
+//
+// It attempts to read the object's mtime and if that isn't present
+// falls back to the LastModified returned in the HTTP headers
+func (o *Object) ModTime() time.Time {
+ err := o.readMetaData()
+ if err != nil {
+ fs.Logf(o, "Failed to read metadata: %v", err)
+ return time.Now()
+ }
+ return o.modTime
+}
+
+// SetModTime sets the modification time of the local fs object
+func (o *Object) SetModTime(modTime time.Time) error {
+ // Pcloud doesn't have a way of doing this so returning this
+ // error will cause the file to be re-uploaded to set the time.
+ return fs.ErrorCantSetModTime
+}
+
+// Storable returns a boolean showing whether this object is storable
+func (o *Object) Storable() bool {
+ return true
+}
+
+// downloadURL fetches the download link
+func (o *Object) downloadURL() (URL string, err error) {
+ if o.id == "" {
+ return "", errors.New("can't download - no id")
+ }
+ if o.link.IsValid() {
+ return o.link.URL(), nil
+ }
+ var resp *http.Response
+ var result api.GetFileLinkResult
+ opts := rest.Opts{
+ Method: "GET",
+ Path: "/getfilelink",
+ Parameters: url.Values{},
+ }
+ opts.Parameters.Set("fileid", fileIDtoNumber(o.id))
+ err = o.fs.pacer.Call(func() (bool, error) {
+ resp, err = o.fs.srv.CallJSON(&opts, nil, &result)
+ err = result.Error.Update(err)
+ return shouldRetry(resp, err)
+ })
+ if err != nil {
+ return "", err
+ }
+ if !result.IsValid() {
+ return "", errors.Errorf("fetched invalid link %+v", result)
+ }
+ o.link = &result
+ return o.link.URL(), nil
+}
+
+// Open an object for read
+func (o *Object) Open(options ...fs.OpenOption) (in io.ReadCloser, err error) {
+ url, err := o.downloadURL()
+ if err != nil {
+ return nil, err
+ }
+ var resp *http.Response
+ opts := rest.Opts{
+ Method: "GET",
+ RootURL: url,
+ Options: options,
+ }
+ err = o.fs.pacer.Call(func() (bool, error) {
+ resp, err = o.fs.srv.Call(&opts)
+ return shouldRetry(resp, err)
+ })
+ if err != nil {
+ return nil, err
+ }
+	return resp.Body, nil
+}
+
+// Update the object with the contents of the io.Reader, modTime and size
+//
+// If existing is set then it updates the object rather than creating a new one
+//
+// The new object may have been created if an error is returned
+func (o *Object) Update(in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
+ o.fs.tokenRenewer.Start()
+ defer o.fs.tokenRenewer.Stop()
+
+ size := src.Size() // NB can upload without size
+ modTime := src.ModTime()
+ remote := o.Remote()
+
+ // Create the directory for the object if it doesn't exist
+ leaf, directoryID, err := o.fs.dirCache.FindRootAndPath(remote, true)
+ if err != nil {
+ return err
+ }
+
+ // Experiments with pcloud indicate that it doesn't like any
+ // form of request which doesn't have a Content-Length.
+ // According to the docs if you close the connection at the
+ // end then it should work without Content-Length, but I
+ // couldn't get this to work using opts.Close (which sets
+ // http.Request.Close).
+ //
+ // This means that chunked transfer encoding needs to be
+ // disabled and a Content-Length needs to be supplied. This
+ // also rules out streaming.
+ //
+ // Docs: https://docs.pcloud.com/methods/file/uploadfile.html
+ var resp *http.Response
+ var result api.UploadFileResponse
+ opts := rest.Opts{
+ Method: "PUT",
+ Path: "/uploadfile",
+ Body: in,
+ ContentType: fs.MimeType(o),
+ ContentLength: &size,
+ Parameters: url.Values{},
+ TransferEncoding: []string{"identity"}, // pcloud doesn't like chunked encoding
+ }
+ leaf = replaceReservedChars(leaf)
+ opts.Parameters.Set("filename", leaf)
+ opts.Parameters.Set("folderid", dirIDtoNumber(directoryID))
+ opts.Parameters.Set("nopartial", "1")
+ opts.Parameters.Set("mtime", fmt.Sprintf("%d", modTime.Unix()))
+
+ // Special treatment for a 0 length upload. This doesn't work
+ // with PUT even with Content-Length set (by setting
+// opts.Body=0), so upload it as a multipart form POST with
+ // Content-Length set.
+ if size == 0 {
+ formReader, contentType, err := rest.MultipartUpload(in, opts.Parameters, "content", leaf)
+ if err != nil {
+ return errors.Wrap(err, "failed to make multipart upload for 0 length file")
+ }
+ formBody, err := ioutil.ReadAll(formReader)
+ if err != nil {
+ return errors.Wrap(err, "failed to read multipart upload for 0 length file")
+ }
+ length := int64(len(formBody))
+
+ opts.ContentType = contentType
+ opts.Body = bytes.NewBuffer(formBody)
+ opts.Method = "POST"
+ opts.Parameters = nil
+ opts.ContentLength = &length
+ }
+
+ err = o.fs.pacer.CallNoRetry(func() (bool, error) {
+ resp, err = o.fs.srv.CallJSON(&opts, nil, &result)
+ err = result.Error.Update(err)
+ return shouldRetry(resp, err)
+ })
+ if err != nil {
+ return err
+ }
+ if len(result.Items) != 1 {
+		return errors.Errorf("failed to upload %v - expected 1 item in response, got %d", o, len(result.Items))
+ }
+ o.setHashes(&result.Checksums[0])
+ return o.setMetaData(&result.Items[0])
+}
+
+// Remove an object
+func (o *Object) Remove() error {
+ opts := rest.Opts{
+ Method: "POST",
+ Path: "/deletefile",
+ Parameters: url.Values{},
+ }
+ var result api.ItemResult
+ opts.Parameters.Set("fileid", fileIDtoNumber(o.id))
+ return o.fs.pacer.Call(func() (bool, error) {
+ resp, err := o.fs.srv.CallJSON(&opts, nil, &result)
+ err = result.Error.Update(err)
+ return shouldRetry(resp, err)
+ })
+}
+
+// Check the interfaces are satisfied
+var (
+ _ fs.Fs = (*Fs)(nil)
+ _ fs.Purger = (*Fs)(nil)
+ _ fs.CleanUpper = (*Fs)(nil)
+ _ fs.Copier = (*Fs)(nil)
+ _ fs.Mover = (*Fs)(nil)
+ _ fs.DirMover = (*Fs)(nil)
+ _ fs.DirCacheFlusher = (*Fs)(nil)
+ _ fs.Object = (*Object)(nil)
+)
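The `var _ Iface = (*Type)(nil)` block above is the standard Go idiom for verifying at compile time that a type satisfies an interface. A minimal self-contained sketch of the idiom (the `Remover` interface and `Object` type here are illustrative stand-ins, not rclone's):

```go
package main

import "fmt"

// Remover stands in for one of the optional rclone interfaces.
type Remover interface {
	Remove() error
}

// Object is a toy type implementing Remover.
type Object struct{ removed bool }

// Remove marks the object as removed.
func (o *Object) Remove() error {
	o.removed = true
	return nil
}

// Compile-time check: the build fails if *Object ever stops
// satisfying Remover. A typed nil is enough - nothing is allocated.
var _ Remover = (*Object)(nil)

func main() {
	o := &Object{}
	fmt.Println(o.Remove(), o.removed)
}
```

Because the check is a package-level declaration, dropping a required method (say, during a refactor) is caught by `go build` rather than surfacing later as a runtime type-assertion failure.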
diff --git a/pcloud/pcloud_test.go b/pcloud/pcloud_test.go
new file mode 100644
index 000000000..d9fe37564
--- /dev/null
+++ b/pcloud/pcloud_test.go
@@ -0,0 +1,73 @@
+// Test Pcloud filesystem interface
+//
+// Automatically generated - DO NOT EDIT
+// Regenerate with: make gen_tests
+package pcloud_test
+
+import (
+ "testing"
+
+ "github.com/ncw/rclone/fs"
+ "github.com/ncw/rclone/fstest/fstests"
+ "github.com/ncw/rclone/pcloud"
+)
+
+func TestSetup(t *testing.T) {
+ fstests.NilObject = fs.Object((*pcloud.Object)(nil))
+ fstests.RemoteName = "TestPcloud:"
+}
+
+// Generic tests for the Fs
+func TestInit(t *testing.T) { fstests.TestInit(t) }
+func TestFsString(t *testing.T) { fstests.TestFsString(t) }
+func TestFsName(t *testing.T) { fstests.TestFsName(t) }
+func TestFsRoot(t *testing.T) { fstests.TestFsRoot(t) }
+func TestFsRmdirEmpty(t *testing.T) { fstests.TestFsRmdirEmpty(t) }
+func TestFsRmdirNotFound(t *testing.T) { fstests.TestFsRmdirNotFound(t) }
+func TestFsMkdir(t *testing.T) { fstests.TestFsMkdir(t) }
+func TestFsMkdirRmdirSubdir(t *testing.T) { fstests.TestFsMkdirRmdirSubdir(t) }
+func TestFsListEmpty(t *testing.T) { fstests.TestFsListEmpty(t) }
+func TestFsListDirEmpty(t *testing.T) { fstests.TestFsListDirEmpty(t) }
+func TestFsListRDirEmpty(t *testing.T) { fstests.TestFsListRDirEmpty(t) }
+func TestFsNewObjectNotFound(t *testing.T) { fstests.TestFsNewObjectNotFound(t) }
+func TestFsPutFile1(t *testing.T) { fstests.TestFsPutFile1(t) }
+func TestFsPutError(t *testing.T) { fstests.TestFsPutError(t) }
+func TestFsPutFile2(t *testing.T) { fstests.TestFsPutFile2(t) }
+func TestFsUpdateFile1(t *testing.T) { fstests.TestFsUpdateFile1(t) }
+func TestFsListDirFile2(t *testing.T) { fstests.TestFsListDirFile2(t) }
+func TestFsListRDirFile2(t *testing.T) { fstests.TestFsListRDirFile2(t) }
+func TestFsListDirRoot(t *testing.T) { fstests.TestFsListDirRoot(t) }
+func TestFsListRDirRoot(t *testing.T) { fstests.TestFsListRDirRoot(t) }
+func TestFsListSubdir(t *testing.T) { fstests.TestFsListSubdir(t) }
+func TestFsListRSubdir(t *testing.T) { fstests.TestFsListRSubdir(t) }
+func TestFsListLevel2(t *testing.T) { fstests.TestFsListLevel2(t) }
+func TestFsListRLevel2(t *testing.T) { fstests.TestFsListRLevel2(t) }
+func TestFsListFile1(t *testing.T) { fstests.TestFsListFile1(t) }
+func TestFsNewObject(t *testing.T) { fstests.TestFsNewObject(t) }
+func TestFsListFile1and2(t *testing.T) { fstests.TestFsListFile1and2(t) }
+func TestFsNewObjectDir(t *testing.T) { fstests.TestFsNewObjectDir(t) }
+func TestFsCopy(t *testing.T) { fstests.TestFsCopy(t) }
+func TestFsMove(t *testing.T) { fstests.TestFsMove(t) }
+func TestFsDirMove(t *testing.T) { fstests.TestFsDirMove(t) }
+func TestFsRmdirFull(t *testing.T) { fstests.TestFsRmdirFull(t) }
+func TestFsPrecision(t *testing.T) { fstests.TestFsPrecision(t) }
+func TestFsDirChangeNotify(t *testing.T) { fstests.TestFsDirChangeNotify(t) }
+func TestObjectString(t *testing.T) { fstests.TestObjectString(t) }
+func TestObjectFs(t *testing.T) { fstests.TestObjectFs(t) }
+func TestObjectRemote(t *testing.T) { fstests.TestObjectRemote(t) }
+func TestObjectHashes(t *testing.T) { fstests.TestObjectHashes(t) }
+func TestObjectModTime(t *testing.T) { fstests.TestObjectModTime(t) }
+func TestObjectMimeType(t *testing.T) { fstests.TestObjectMimeType(t) }
+func TestObjectSetModTime(t *testing.T) { fstests.TestObjectSetModTime(t) }
+func TestObjectSize(t *testing.T) { fstests.TestObjectSize(t) }
+func TestObjectOpen(t *testing.T) { fstests.TestObjectOpen(t) }
+func TestObjectOpenSeek(t *testing.T) { fstests.TestObjectOpenSeek(t) }
+func TestObjectPartialRead(t *testing.T) { fstests.TestObjectPartialRead(t) }
+func TestObjectUpdate(t *testing.T) { fstests.TestObjectUpdate(t) }
+func TestObjectStorable(t *testing.T) { fstests.TestObjectStorable(t) }
+func TestFsIsFile(t *testing.T) { fstests.TestFsIsFile(t) }
+func TestFsIsFileNotFound(t *testing.T) { fstests.TestFsIsFileNotFound(t) }
+func TestObjectRemove(t *testing.T) { fstests.TestObjectRemove(t) }
+func TestFsPutStream(t *testing.T) { fstests.TestFsPutStream(t) }
+func TestObjectPurge(t *testing.T) { fstests.TestObjectPurge(t) }
+func TestFinalise(t *testing.T) { fstests.TestFinalise(t) }