Compare commits


26 commits

Author SHA1 Message Date
Nick Craig-Wood
02f04b08bd Version v1.58.1 2022-04-29 12:15:41 +01:00
Zsolt Ero
04a20350f7 docs: Add --multi-thread-streams note to --transfers. 2022-04-28 16:26:07 +01:00
Nick Craig-Wood
90ca184338 webdav: don't override Referer if user sets it - fixes #6040 2022-04-28 16:26:07 +01:00
Nick Craig-Wood
02f7da4b0c mount: fix --devname and fusermount: unknown option 'fsname' when mounting via rc
In this commit

f4c40bf79d mount: add --devname to set the device name sent to FUSE for mount display

The --devname parameter was added. However, it was soon noticed that
attempting to mount via the rc gave this error:

    mount helper error: fusermount: unknown option 'fsname'
    mount FAILED: fusermount: exit status 1

This was because the DeviceName (and VolumeName) parameter was never
being initialised when the mount was called via the rc.

The fix for this was to refactor the rc interface so that it calls
the same Mount method as the command line mount, which initialises
the DeviceName and VolumeName parameters properly.

This also fixes the cmd/mount tests, which were breaking in the same
way, but since they aren't normally run on the CI we didn't notice.

Fixes #6044
2022-04-28 16:26:07 +01:00
Nick Craig-Wood
4ba1944186 vfs: remove wording which suggests VFS is only for mounting
See: https://forum.rclone.org/t/solved-rclone-serve-read-only/30377/2
2022-04-28 16:26:07 +01:00
albertony
5aa15459c9 jottacloud: fix scope in token request
The existing code in rclone set the value "offline_access+openid";
when encoded in the request body this becomes
"offline_access%2Bopenid". I think this is wrong, probably an
artifact of a "double urlencoding" mixup, either in rclone or in the
jottacloud cli tool version it was sniffed from. It does work,
though. The token received will have the scopes "email
offline_access" in it, and the same is true if I change to only
sending "offline_access" as the scope.

If a proper space-delimited list of "offline_access openid" is used
in the request, the response also includes the openid scope:
"openid email offline_access". I think this is more correct, so this
patch implements it.

See: #6107
2022-04-28 16:26:07 +01:00
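A minimal Go sketch, separate from rclone's actual request code, illustrating the encoding difference described above: url.Values percent-encodes a literal "+" as "%2B", while a space-delimited scope list encodes with "+" as the space separator.

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Old value: the "+" is a literal character, so it gets percent-encoded.
	old := url.Values{}
	old.Set("scope", "offline_access+openid")
	fmt.Println(old.Encode()) // scope=offline_access%2Bopenid

	// New value: a space-delimited scope list, encoded as expected.
	fixed := url.Values{}
	fixed.Set("scope", "openid offline_access")
	fmt.Println(fixed.Encode()) // scope=openid+offline_access
}
```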
Sơn Trần-Nguyễn
20343a7001 fs/rc/js: correct RC method names 2022-04-28 16:26:07 +01:00
Nick Craig-Wood
dc46c76e80 storj: fix bucket creation on Move picked up by integration tests 2022-04-28 16:26:07 +01:00
Nick Craig-Wood
c125792a78 build: update github.com/billziss-gh to github.com/winfsp 2022-04-28 16:26:07 +01:00
albertony
bc63bdd608 docs: fix some links to command pages 2022-04-28 16:12:03 +01:00
Adrien Rey-Jarthon
7c0e00a0c7 docs: Note that Scaleway C14 is deprecating SFTP in favor of S3
This updates the documentation to reflect that the new C14 Cold
Storage API works with S3 and no longer with SFTP.

See: https://github.com/rclone/rclone/issues/1080#issuecomment-1082088870
2022-04-28 16:11:47 +01:00
Nil Alexandrov
68ca748227 netstorage: add support contacts to netstorage doc 2022-04-13 23:08:20 +01:00
rafma0
3cddf996bb putio: fix multithread download and other ranged requests
Before this change, 206 responses from putio Range requests were
being returned as errors.

This change now accepts both 200 and 206 in the GET response.
2022-04-04 11:16:59 +01:00
GH
d54b99d101 onedrive: note that sharepoint also changes web files (.html, .aspx) 2022-04-04 11:16:59 +01:00
KARBOWSKI Piotr
f19f939abe sftp: Fix OpenSSH 8.8+ RSA keys incompatibility (#6076)
Updates golang.org/x/crypto to v0.0.0-20220331220935-ae2d96664a29.

Fixes the issues with connecting to OpenSSH 8.8+ remotes when the
client uses an RSA key pair, due to OpenSSH dropping support for the
SHA-1 based ssh-rsa signature.

Bug: https://github.com/rclone/rclone/issues/6076
Bug: https://github.com/golang/go/issues/37278
Signed-off-by: KARBOWSKI Piotr <piotr.karbowski@gmail.com>
2022-04-01 12:51:16 +01:00
Nick Craig-Wood
a64a37fb83 netstorage: make levels of headings consistent 2022-03-31 18:14:00 +01:00
Nick Craig-Wood
4061b12a95 s3: sync providers in config description with providers 2022-03-31 17:58:18 +01:00
Berkan Teber
92b77eb334 putio: handle rate limit errors
For rate limit errors, the "x-ratelimit-reset" header is now respected.
2022-03-31 17:58:18 +01:00
Nick Craig-Wood
5cdb6678ab filter: fix timezone of --min-age/-max-age from UTC to local as documented
Before this change, if the timezone was omitted in a
--min-age/--max-age time specifier, rclone defaulted to a UTC
timezone.

This is documented as using the local timezone if the time zone
specifier is omitted, which is a much more useful default, and this
patch corrects the implementation to agree with the documentation.

See: https://forum.rclone.org/t/problem-utc-windows-europe-1-summer-problem/29917
2022-03-31 17:58:18 +01:00
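A minimal Go sketch (not rclone's actual --min-age/--max-age parser) of the behaviour described above: time.Parse assumes UTC when the specifier carries no zone, while time.ParseInLocation with time.Local gives the documented local-timezone interpretation.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05"
	const spec = "2022-03-31 12:00:00" // no timezone in the specifier

	// Old behaviour: parsing without a zone silently assumes UTC.
	asUTC, err := time.Parse(layout, spec)
	if err != nil {
		panic(err)
	}

	// Documented behaviour: interpret the specifier in the local timezone.
	asLocal, err := time.ParseInLocation(layout, spec, time.Local)
	if err != nil {
		panic(err)
	}

	fmt.Println(asUTC, asLocal)       // differ unless the local zone is UTC
	fmt.Println(asUTC.Equal(asLocal)) // false on e.g. a CET machine
}
```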
GuoXingbin
76d5f02b2c s3: Add ChinaMobile EOS to provider list
China Mobile Ecloud Elastic Object Storage (EOS) is a cloud object storage service, and is fully compatible with S3.

Fixes #6054
2022-03-31 17:58:18 +01:00
Nick Craig-Wood
f4fd910c9a dropbox: fix retries of multipart uploads with incorrect_offset error
Before this fix, rclone retried chunks of multipart uploads; however,
if a chunk had been partially received, dropbox would reply with an
incorrect_offset error which rclone was ignoring.

This patch parses the new offset from the error response and uses it
to adjust the data that rclone sends so that it matches what dropbox
is expecting.

See: https://forum.rclone.org/t/dropbox-rate-limiting-for-upload/29779
2022-03-25 15:41:16 +00:00
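A small worked example with hypothetical numbers, not the real Dropbox SDK types, showing the offset correction described above: the difference between the reported correct offset and rclone's cursor is how many bytes of the chunk were already received and can be skipped on retry.

```go
package main

import "fmt"

func main() {
	// Hypothetical values for one failed append within an upload session.
	cursorOffset := int64(1048576)  // where rclone believed the session was
	correctOffset := int64(1310720) // offset reported in the incorrect_offset error

	delta := correctOffset - cursorOffset // bytes dropbox already received: 262144
	skip := delta                         // skip these bytes of the chunk on retry
	cursorOffset += delta                 // advance the cursor to match the server

	fmt.Printf("skip %d bytes, resume at offset %d\n", skip, cursorOffset)
}
```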
Nick Craig-Wood
5074bd0d51 googlecloudstorage: use the s3 pacer to speed up transactions
This commit switches Google Cloud Storage from the drive pacer to the
s3 pacer. The main difference between them is that the s3 pacer does
not limit transactions in the non-error case. This is appropriate for
a cloud storage backend where you pay for each transaction.
2022-03-25 15:32:28 +00:00
Nick Craig-Wood
9fabf40fc5 pacer: default the Google pacer to a burst of 100 to fix gcs pacing
Before this change the pacer defaulted to a burst of 1, which meant
that it kept being activated unnecessarily.

This affected Google Cloud Storage and Google Photos.

See: https://forum.rclone.org/t/no-traverse-too-slow-with-lot-of-files/29886/12
2022-03-25 15:32:28 +00:00
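For illustration only, the effect of a larger burst expressed with the standard golang.org/x/time/rate limiter rather than rclone's own pacer: with a burst of 1 every call after the first waits, while a burst of 100 lets short runs of calls through immediately.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	ctx := context.Background()

	// Burst of 1: only the first call is free; the rest wait their turn.
	tight := rate.NewLimiter(rate.Every(100*time.Millisecond), 1)
	start := time.Now()
	for i := 0; i < 3; i++ {
		_ = tight.Wait(ctx)
	}
	fmt.Println("burst 1:", time.Since(start).Round(time.Millisecond)) // ~200ms

	// Burst of 100: short runs of calls pass immediately.
	relaxed := rate.NewLimiter(rate.Every(100*time.Millisecond), 100)
	start = time.Now()
	for i := 0; i < 3; i++ {
		_ = relaxed.Wait(ctx)
	}
	fmt.Println("burst 100:", time.Since(start).Round(time.Millisecond)) // ~0ms
}
```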
Nick Craig-Wood
0832b932ef azureblob: fix lint error with golangci-lint 1.45.0 2022-03-25 15:32:28 +00:00
Nick Craig-Wood
1c4f790609 netstorage: fix unescaped HTML in documentation 2022-03-18 14:43:45 +00:00
Nick Craig-Wood
ea8a4f24bd Start v1.58.1-DEV development 2022-03-18 14:43:37 +00:00
52 changed files with 3931 additions and 745 deletions

MANUAL.html (generated, 865 lines changed): diff suppressed because it is too large

MANUAL.md (generated, 709 lines changed): diff suppressed because it is too large

MANUAL.txt (generated, 797 lines changed): diff suppressed because it is too large

View file

@ -28,6 +28,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
* Box [:page_facing_up:](https://rclone.org/box/)
* Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
* China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
* Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
* DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
* Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)

View file

@ -1 +1 @@
v1.58.0
v1.58.1

View file

@ -612,7 +612,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
serviceURL = azblob.NewServiceURL(*u, pipeline)
case opt.UseMSI:
var token adal.Token
var userMSI *userMSI = &userMSI{}
var userMSI = &userMSI{}
if len(opt.MSIClientID) > 0 || len(opt.MSIObjectID) > 0 || len(opt.MSIResourceID) > 0 {
// Specifying a user-assigned identity. Exactly one of the above IDs must be specified.
// Validate and ensure exactly one is set. (To do: better validation.)

View file

@ -1650,13 +1650,37 @@ func (o *Object) uploadChunked(ctx context.Context, in0 io.Reader, commitInfo *f
}
chunk := readers.NewRepeatableLimitReaderBuffer(in, buf, chunkSize)
skip := int64(0)
err = o.fs.pacer.Call(func() (bool, error) {
// seek to the start in case this is a retry
if _, err = chunk.Seek(0, io.SeekStart); err != nil {
return false, nil
if _, err = chunk.Seek(skip, io.SeekStart); err != nil {
return false, err
}
err = o.fs.srv.UploadSessionAppendV2(&appendArg, chunk)
// after session is started, we retry everything
if err != nil {
// Check for incorrect offset error and retry with new offset
if uErr, ok := err.(files.UploadSessionAppendV2APIError); ok {
if uErr.EndpointError != nil && uErr.EndpointError.IncorrectOffset != nil {
correctOffset := uErr.EndpointError.IncorrectOffset.CorrectOffset
delta := int64(correctOffset) - int64(cursor.Offset)
skip += delta
what := fmt.Sprintf("incorrect offset error receved: sent %d, need %d, skip %d", cursor.Offset, correctOffset, skip)
if skip < 0 {
return false, fmt.Errorf("can't seek backwards to correct offset: %s", what)
} else if skip == chunkSize {
fs.Debugf(o, "%s: chunk received OK - continuing", what)
return false, nil
} else if skip > chunkSize {
// This error should never happen
return false, fmt.Errorf("can't seek forwards by more than a chunk to correct offset: %s", what)
}
// Skip the sent data on next retry
cursor.Offset = uint64(int64(cursor.Offset) + delta)
fs.Debugf(o, "%s: skipping bytes on retry to fix offset", what)
}
}
}
return err != nil, err
})
if err != nil {

View file

@ -482,7 +482,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
name: name,
root: root,
opt: *opt,
pacer: fs.NewPacer(ctx, pacer.NewGoogleDrive(pacer.MinSleep(minSleep))),
pacer: fs.NewPacer(ctx, pacer.NewS3(pacer.MinSleep(minSleep))),
cache: bucket.NewCache(),
}
f.setRoot(root)

View file

@ -519,7 +519,7 @@ func doTokenAuth(ctx context.Context, apiSrv *rest.Client, loginTokenBase64 stri
values.Set("client_id", defaultClientID)
values.Set("grant_type", "password")
values.Set("password", loginToken.AuthToken)
values.Set("scope", "offline_access+openid")
values.Set("scope", "openid offline_access")
values.Set("username", loginToken.Username)
values.Encode()
opts = rest.Opts{

View file

@ -65,7 +65,7 @@ HTTP is provided primarily for debugging purposes.`,
Name: "host",
Help: `Domain+path of NetStorage host to connect to.
Format should be <domain>/<internal folders>`,
Format should be ` + "`<domain>/<internal folders>`",
Required: true,
}, {
Name: "account",
@ -94,7 +94,7 @@ files stored in any sub-directories that may exist.`,
Long: `The desired path location (including applicable sub-directories) ending in
the object that will be the target of the symlink (for example, /links/mylink).
Include the file extension for the object, if applicable.
rclone backend symlink <src> <path>`,
` + "`rclone backend symlink <src> <path>`",
},
}

View file

@ -4,16 +4,21 @@ import (
"context"
"fmt"
"net/http"
"strconv"
"time"
"github.com/putdotio/go-putio/putio"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/pacer"
)
func checkStatusCode(resp *http.Response, expected int) error {
if resp.StatusCode != expected {
return &statusCodeError{response: resp}
func checkStatusCode(resp *http.Response, expected ...int) error {
for _, code := range expected {
if resp.StatusCode == code {
return nil
}
}
return nil
return &statusCodeError{response: resp}
}
type statusCodeError struct {
@ -24,8 +29,10 @@ func (e *statusCodeError) Error() string {
return fmt.Sprintf("unexpected status code (%d) response while doing %s to %s", e.response.StatusCode, e.response.Request.Method, e.response.Request.URL.String())
}
// This method is called from fserrors.ShouldRetry() to determine if an error should be retried.
// Some errors (e.g. 429 Too Many Requests) are handled before this step, so they are not included here.
func (e *statusCodeError) Temporary() bool {
return e.response.StatusCode == 429 || e.response.StatusCode >= 500
return e.response.StatusCode >= 500
}
// shouldRetry returns a boolean as to whether this err deserves to be
@ -40,6 +47,16 @@ func shouldRetry(ctx context.Context, err error) (bool, error) {
if perr, ok := err.(*putio.ErrorResponse); ok {
err = &statusCodeError{response: perr.Response}
}
if scerr, ok := err.(*statusCodeError); ok && scerr.response.StatusCode == 429 {
delay := defaultRateLimitSleep
header := scerr.response.Header.Get("x-ratelimit-reset")
if header != "" {
if resetTime, cerr := strconv.ParseInt(header, 10, 64); cerr == nil {
delay = time.Until(time.Unix(resetTime+1, 0))
}
}
return true, pacer.RetryAfterError(scerr, delay)
}
if fserrors.ShouldRetry(err) {
return true, err
}

View file

@ -302,8 +302,8 @@ func (f *Fs) createUpload(ctx context.Context, name string, size int64, parentID
if err != nil {
return false, err
}
if resp.StatusCode != 201 {
return false, fmt.Errorf("unexpected status code from upload create: %d", resp.StatusCode)
if err := checkStatusCode(resp, 201); err != nil {
return shouldRetry(ctx, err)
}
location = resp.Header.Get("location")
if location == "" {

View file

@ -241,7 +241,13 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
// fs.Debugf(o, "opening file: id=%d", o.file.ID)
resp, err = o.fs.httpClient.Do(req)
return shouldRetry(ctx, err)
if err != nil {
return shouldRetry(ctx, err)
}
if err := checkStatusCode(resp, 200, 206); err != nil {
return shouldRetry(ctx, err)
}
return false, nil
})
if perr, ok := err.(*putio.ErrorResponse); ok && perr.Response.StatusCode >= 400 && perr.Response.StatusCode <= 499 {
_ = resp.Body.Close()

View file

@ -33,8 +33,9 @@ const (
rcloneObscuredClientSecret = "cMwrjWVmrHZp3gf1ZpCrlyGAmPpB-YY5BbVnO1fj-G9evcd8"
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
decayConstant = 1 // bigger for slower decay, exponential
defaultChunkSize = 48 * fs.Mebi
defaultRateLimitSleep = 60 * time.Second
)
var (

View file

@ -58,7 +58,7 @@ import (
func init() {
fs.Register(&fs.RegInfo{
Name: "s3",
Description: "Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS",
Description: "Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi",
NewFs: NewFs,
CommandHelp: commandHelp,
Options: []fs.Option{{
@ -75,6 +75,9 @@ func init() {
}, {
Value: "Ceph",
Help: "Ceph Object Storage",
}, {
Value: "ChinaMobile",
Help: "China Mobile Ecloud Elastic Object Storage (EOS)",
}, {
Value: "DigitalOcean",
Help: "Digital Ocean Spaces",
@ -294,7 +297,7 @@ func init() {
}, {
Name: "region",
Help: "Region to connect to.\n\nLeave blank if you are using an S3 clone and you don't have a region.",
Provider: "!AWS,Alibaba,RackCorp,Scaleway,Storj,TencentCOS",
Provider: "!AWS,Alibaba,ChinaMobile,RackCorp,Scaleway,Storj,TencentCOS",
Examples: []fs.OptionExample{{
Value: "",
Help: "Use this if unsure.\nWill use v4 signatures and an empty region.",
@ -306,6 +309,102 @@ func init() {
Name: "endpoint",
Help: "Endpoint for S3 API.\n\nLeave blank if using AWS to use the default endpoint for the region.",
Provider: "AWS",
}, {
// ChinaMobile endpoints: https://ecloud.10086.cn/op-help-center/doc/article/24534
Name: "endpoint",
Help: "Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.",
Provider: "ChinaMobile",
Examples: []fs.OptionExample{{
Value: "eos-wuxi-1.cmecloud.cn",
Help: "The default endpoint - a good choice if you are unsure.\nEast China (Suzhou)",
}, {
Value: "eos-jinan-1.cmecloud.cn",
Help: "East China (Jinan)",
}, {
Value: "eos-ningbo-1.cmecloud.cn",
Help: "East China (Hangzhou)",
}, {
Value: "eos-shanghai-1.cmecloud.cn",
Help: "East China (Shanghai-1)",
}, {
Value: "eos-zhengzhou-1.cmecloud.cn",
Help: "Central China (Zhengzhou)",
}, {
Value: "eos-hunan-1.cmecloud.cn",
Help: "Central China (Changsha-1)",
}, {
Value: "eos-zhuzhou-1.cmecloud.cn",
Help: "Central China (Changsha-2)",
}, {
Value: "eos-guangzhou-1.cmecloud.cn",
Help: "South China (Guangzhou-2)",
}, {
Value: "eos-dongguan-1.cmecloud.cn",
Help: "South China (Guangzhou-3)",
}, {
Value: "eos-beijing-1.cmecloud.cn",
Help: "North China (Beijing-1)",
}, {
Value: "eos-beijing-2.cmecloud.cn",
Help: "North China (Beijing-2)",
}, {
Value: "eos-beijing-4.cmecloud.cn",
Help: "North China (Beijing-3)",
}, {
Value: "eos-huhehaote-1.cmecloud.cn",
Help: "North China (Huhehaote)",
}, {
Value: "eos-chengdu-1.cmecloud.cn",
Help: "Southwest China (Chengdu)",
}, {
Value: "eos-chongqing-1.cmecloud.cn",
Help: "Southwest China (Chongqing)",
}, {
Value: "eos-guiyang-1.cmecloud.cn",
Help: "Southwest China (Guiyang)",
}, {
Value: "eos-xian-1.cmecloud.cn",
Help: "Nouthwest China (Xian)",
}, {
Value: "eos-yunnan.cmecloud.cn",
Help: "Yunnan China (Kunming)",
}, {
Value: "eos-yunnan-2.cmecloud.cn",
Help: "Yunnan China (Kunming-2)",
}, {
Value: "eos-tianjin-1.cmecloud.cn",
Help: "Tianjin China (Tianjin)",
}, {
Value: "eos-jilin-1.cmecloud.cn",
Help: "Jilin China (Changchun)",
}, {
Value: "eos-hubei-1.cmecloud.cn",
Help: "Hubei China (Xiangyan)",
}, {
Value: "eos-jiangxi-1.cmecloud.cn",
Help: "Jiangxi China (Nanchang)",
}, {
Value: "eos-gansu-1.cmecloud.cn",
Help: "Gansu China (Lanzhou)",
}, {
Value: "eos-shanxi-1.cmecloud.cn",
Help: "Shanxi China (Taiyuan)",
}, {
Value: "eos-liaoning-1.cmecloud.cn",
Help: "Liaoning China (Shenyang)",
}, {
Value: "eos-hebei-1.cmecloud.cn",
Help: "Hebei China (Shijiazhuang)",
}, {
Value: "eos-fujian-1.cmecloud.cn",
Help: "Fujian China (Xiamen)",
}, {
Value: "eos-guangxi-1.cmecloud.cn",
Help: "Guangxi China (Nanning)",
}, {
Value: "eos-anhui-1.cmecloud.cn",
Help: "Anhui China (Huainan)",
}},
}, {
Name: "endpoint",
Help: "Endpoint for IBM COS S3 API.\n\nSpecify if using an IBM COS On Premise.",
@ -746,7 +845,7 @@ func init() {
}, {
Name: "endpoint",
Help: "Endpoint for S3 API.\n\nRequired when using an S3 clone.",
Provider: "!AWS,IBMCOS,TencentCOS,Alibaba,Scaleway,StackPath,Storj,RackCorp",
Provider: "!AWS,IBMCOS,TencentCOS,Alibaba,ChinaMobile,Scaleway,StackPath,Storj,RackCorp",
Examples: []fs.OptionExample{{
Value: "objects-us-east-1.dream.io",
Help: "Dream Objects endpoint",
@ -880,6 +979,101 @@ func init() {
Value: "us-gov-west-1",
Help: "AWS GovCloud (US) Region",
}},
}, {
Name: "location_constraint",
Help: "Location constraint - must match endpoint.\n\nUsed when creating buckets only.",
Provider: "ChinaMobile",
Examples: []fs.OptionExample{{
Value: "wuxi1",
Help: "East China (Suzhou)",
}, {
Value: "jinan1",
Help: "East China (Jinan)",
}, {
Value: "ningbo1",
Help: "East China (Hangzhou)",
}, {
Value: "shanghai1",
Help: "East China (Shanghai-1)",
}, {
Value: "zhengzhou1",
Help: "Central China (Zhengzhou)",
}, {
Value: "hunan1",
Help: "Central China (Changsha-1)",
}, {
Value: "zhuzhou1",
Help: "Central China (Changsha-2)",
}, {
Value: "guangzhou1",
Help: "South China (Guangzhou-2)",
}, {
Value: "dongguan1",
Help: "South China (Guangzhou-3)",
}, {
Value: "beijing1",
Help: "North China (Beijing-1)",
}, {
Value: "beijing2",
Help: "North China (Beijing-2)",
}, {
Value: "beijing4",
Help: "North China (Beijing-3)",
}, {
Value: "huhehaote1",
Help: "North China (Huhehaote)",
}, {
Value: "chengdu1",
Help: "Southwest China (Chengdu)",
}, {
Value: "chongqing1",
Help: "Southwest China (Chongqing)",
}, {
Value: "guiyang1",
Help: "Southwest China (Guiyang)",
}, {
Value: "xian1",
Help: "Nouthwest China (Xian)",
}, {
Value: "yunnan",
Help: "Yunnan China (Kunming)",
}, {
Value: "yunnan2",
Help: "Yunnan China (Kunming-2)",
}, {
Value: "tianjin1",
Help: "Tianjin China (Tianjin)",
}, {
Value: "jilin1",
Help: "Jilin China (Changchun)",
}, {
Value: "hubei1",
Help: "Hubei China (Xiangyan)",
}, {
Value: "jiangxi1",
Help: "Jiangxi China (Nanchang)",
}, {
Value: "gansu1",
Help: "Gansu China (Lanzhou)",
}, {
Value: "shanxi1",
Help: "Shanxi China (Taiyuan)",
}, {
Value: "liaoning1",
Help: "Liaoning China (Shenyang)",
}, {
Value: "hebei1",
Help: "Hebei China (Shijiazhuang)",
}, {
Value: "fujian1",
Help: "Fujian China (Xiamen)",
}, {
Value: "guangxi1",
Help: "Guangxi China (Nanning)",
}, {
Value: "anhui1",
Help: "Anhui China (Huainan)",
}},
}, {
Name: "location_constraint",
Help: "Location constraint - must match endpoint when using IBM Cloud Public.\n\nFor on-prem COS, do not make a selection from this list, hit enter.",
@ -1046,7 +1240,7 @@ func init() {
}, {
Name: "location_constraint",
Help: "Location constraint - must be set to match the Region.\n\nLeave blank if not sure. Used when creating buckets only.",
Provider: "!AWS,IBMCOS,Alibaba,RackCorp,Scaleway,StackPath,Storj,TencentCOS",
Provider: "!AWS,IBMCOS,Alibaba,ChinaMobile,RackCorp,Scaleway,StackPath,Storj,TencentCOS",
}, {
Name: "acl",
Help: `Canned ACL used when creating buckets and storing or copying objects.
@ -1081,11 +1275,11 @@ doesn't copy the ACL from the source but rather writes a fresh one.`,
}, {
Value: "bucket-owner-read",
Help: "Object owner gets FULL_CONTROL.\nBucket owner gets READ access.\nIf you specify this canned ACL when creating a bucket, Amazon S3 ignores it.",
Provider: "!IBMCOS",
Provider: "!IBMCOS,ChinaMobile",
}, {
Value: "bucket-owner-full-control",
Help: "Both the object owner and the bucket owner get FULL_CONTROL over the object.\nIf you specify this canned ACL when creating a bucket, Amazon S3 ignores it.",
Provider: "!IBMCOS",
Provider: "!IBMCOS,ChinaMobile",
}, {
Value: "private",
Help: "Owner gets FULL_CONTROL.\nNo one else has access rights (default).\nThis acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS.",
@ -1134,7 +1328,7 @@ isn't set then "acl" is used instead.`,
}, {
Name: "server_side_encryption",
Help: "The server-side encryption algorithm used when storing this object in S3.",
Provider: "AWS,Ceph,Minio",
Provider: "AWS,Ceph,ChinaMobile,Minio",
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
@ -1142,13 +1336,14 @@ isn't set then "acl" is used instead.`,
Value: "AES256",
Help: "AES256",
}, {
Value: "aws:kms",
Help: "aws:kms",
Value: "aws:kms",
Help: "aws:kms",
Provider: "!ChinaMobile",
}},
}, {
Name: "sse_customer_algorithm",
Help: "If using SSE-C, the server-side encryption algorithm used when storing this object in S3.",
Provider: "AWS,Ceph,Minio",
Provider: "AWS,Ceph,ChinaMobile,Minio",
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
@ -1171,7 +1366,7 @@ isn't set then "acl" is used instead.`,
}, {
Name: "sse_customer_key",
Help: "If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.",
Provider: "AWS,Ceph,Minio",
Provider: "AWS,Ceph,ChinaMobile,Minio",
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
@ -1183,7 +1378,7 @@ isn't set then "acl" is used instead.`,
If you leave it blank, this is calculated automatically from the sse_customer_key provided.
`,
Provider: "AWS,Ceph,Minio",
Provider: "AWS,Ceph,ChinaMobile,Minio",
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
@ -1239,6 +1434,24 @@ If you leave it blank, this is calculated automatically from the sse_customer_ke
Value: "STANDARD_IA",
Help: "Infrequent access storage mode",
}},
}, {
// Mapping from here: https://ecloud.10086.cn/op-help-center/doc/article/24495
Name: "storage_class",
Help: "The storage class to use when storing new objects in ChinaMobile.",
Provider: "ChinaMobile",
Examples: []fs.OptionExample{{
Value: "",
Help: "Default",
}, {
Value: "STANDARD",
Help: "Standard storage class",
}, {
Value: "GLACIER",
Help: "Archive storage mode",
}, {
Value: "STANDARD_IA",
Help: "Infrequent access storage mode",
}},
}, {
// Mapping from here: https://intl.cloud.tencent.com/document/product/436/30925
Name: "storage_class",
@ -1950,6 +2163,10 @@ func setQuirks(opt *Options) {
listObjectsV2 = false
virtualHostStyle = false
urlEncodeListings = false
case "ChinaMobile":
listObjectsV2 = false
virtualHostStyle = false
urlEncodeListings = false
case "DigitalOcean":
urlEncodeListings = false
case "Dreamhost":

View file

@ -705,7 +705,16 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
// Do the move
err := f.project.MoveObject(ctx, srcBucket, srcKey, dstBucket, dstKey, &options)
if err != nil {
return nil, fmt.Errorf("rename object failed: %w", err)
// Make sure destination bucket exists
_, err := f.project.EnsureBucket(ctx, dstBucket)
if err != nil {
return nil, fmt.Errorf("rename object failed to create destination bucket: %w", err)
}
// And try again
err = f.project.MoveObject(ctx, srcBucket, srcKey, dstBucket, dstKey, &options)
if err != nil {
return nil, fmt.Errorf("rename object failed: %w", err)
}
}
// Read the new object

View file

@ -454,7 +454,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, err
}
f.srv.SetHeader("Referer", u.String())
if !f.findHeader(opt.Headers, "Referer") {
f.srv.SetHeader("Referer", u.String())
}
if root != "" && !rootIsDir {
// Check to see if the root actually an existing file
@ -517,6 +519,17 @@ func (f *Fs) addHeaders(headers fs.CommaSepList) {
}
}
// Returns true if the header was configured
func (f *Fs) findHeader(headers fs.CommaSepList, find string) bool {
for i := 0; i < len(headers); i += 2 {
key := f.opt.Headers[i]
if strings.EqualFold(key, find) {
return true
}
}
return false
}
// fetch the bearer token and set it if successful
func (f *Fs) fetchAndSetBearerToken() error {
if f.opt.BearerTokenCommand == "" {

View file

@ -1,5 +1,5 @@
#!/bin/bash
set -e
docker build -t rclone/xgo-cgofuse https://github.com/billziss-gh/cgofuse.git
docker build -t rclone/xgo-cgofuse https://github.com/winfsp/cgofuse.git
docker images
docker push rclone/xgo-cgofuse

View file

@ -13,7 +13,7 @@ import (
"sync/atomic"
"time"
"github.com/billziss-gh/cgofuse/fuse"
"github.com/winfsp/cgofuse/fuse"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"

View file

@ -18,7 +18,7 @@ import (
"sync/atomic"
"time"
"github.com/billziss-gh/cgofuse/fuse"
"github.com/winfsp/cgofuse/fuse"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/atexit"

View file

@ -65,10 +65,10 @@ at all, then 1 PiB is set as both the total and the free size.
To run rclone @ on Windows, you will need to
download and install [WinFsp](http://www.secfs.net/winfsp/).
[WinFsp](https://github.com/billziss-gh/winfsp) is an open-source
[WinFsp](https://github.com/winfsp/winfsp) is an open-source
Windows File System Proxy which makes it easy to write user space file
systems for Windows. It provides a FUSE emulation layer which rclone
uses combination with [cgofuse](https://github.com/billziss-gh/cgofuse).
uses combination with [cgofuse](https://github.com/winfsp/cgofuse).
Both of these packages are by Bill Zissimopoulos who was very helpful
during the implementation of rclone @ for Windows.
@ -218,7 +218,7 @@ from Microsoft's Sysinternals suite, which has option |-s| to start
processes as the SYSTEM account. Another alternative is to run the mount
command from a Windows Scheduled Task, or a Windows Service, configured
to run as the SYSTEM account. A third alternative is to use the
[WinFsp.Launcher infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture)).
[WinFsp.Launcher infrastructure](https://github.com/winfsp/winfsp/wiki/WinFsp-Service-Architecture)).
Note that when running rclone as another user, it will not use
the configuration file from your profile unless you tell it to
with the [|--config|](https://rclone.org/docs/#config-config-file) option.

View file

@ -78,6 +78,17 @@ type MountPoint struct {
ErrChan <-chan error
}
// NewMountPoint makes a new mounting structure
func NewMountPoint(mount MountFn, mountPoint string, f fs.Fs, mountOpt *Options, vfsOpt *vfscommon.Options) *MountPoint {
return &MountPoint{
MountFn: mount,
MountPoint: mountPoint,
Fs: f,
MountOpt: *mountOpt,
VFSOpt: *vfsOpt,
}
}
// Global constants
const (
MaxLeafSize = 1024 // don't pass file names longer than this
@ -167,14 +178,7 @@ func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Comm
defer cmd.StartStats()()
}
mnt := &MountPoint{
MountFn: mount,
MountPoint: args[1],
Fs: cmd.NewFsDir(args),
MountOpt: Opt,
VFSOpt: vfsflags.Opt,
}
mnt := NewMountPoint(mount, args[1], cmd.NewFsDir(args), &Opt, &vfsflags.Opt)
daemon, err := mnt.Mount()
// Wait for foreground mount, if any...
@ -253,6 +257,7 @@ func (m *MountPoint) Mount() (daemon *os.Process, err error) {
if err != nil {
return nil, fmt.Errorf("failed to mount FUSE fs: %w", err)
}
m.MountedOn = time.Now()
return nil, nil
}

View file

@ -10,7 +10,6 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
)
@ -117,23 +116,15 @@ func mountRc(ctx context.Context, in rc.Params) (out rc.Params, err error) {
return nil, err
}
VFS := vfs.New(fdst, &vfsOpt)
_, unmountFn, err := mountFn(VFS, mountPoint, &mountOpt)
mnt := NewMountPoint(mountFn, mountPoint, fdst, &mountOpt, &vfsOpt)
_, err = mnt.Mount()
if err != nil {
log.Printf("mount FAILED: %v", err)
return nil, err
}
// Add mount to list if mount point was successfully created
liveMounts[mountPoint] = &MountPoint{
MountPoint: mountPoint,
MountedOn: time.Now(),
MountFn: mountFn,
UnmountFn: unmountFn,
MountOpt: mountOpt,
VFSOpt: vfsOpt,
Fs: fdst,
}
liveMounts[mountPoint] = mnt
fs.Debugf(nil, "Mount for %s created at %s using %s", fdst.String(), mountPoint, mountType)
return nil, nil

View file

@ -274,7 +274,6 @@ func (vol *Volume) mount(id string) error {
if _, err := vol.mnt.Mount(); err != nil {
return err
}
vol.mnt.MountedOn = time.Now()
vol.mountReqs[id] = nil
vol.drv.monChan <- false // ask monitor to refresh channels
return nil

View file

@ -112,8 +112,9 @@ WebDAV or S3, that work out of the box.)
{{< provider name="Backblaze B2" home="https://www.backblaze.com/b2/cloud-storage.html" config="/b2/" >}}
{{< provider name="Box" home="https://www.box.com/" config="/box/" >}}
{{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
{{< provider name="China Mobile Ecloud Elastic Object Storage (EOS)" home="https://ecloud.10086.cn/home/product-introduction/eos/" config="/s3/#china-mobile-ecloud-eos" >}}
{{< provider name="Citrix ShareFile" home="http://sharefile.com/" config="/sharefile/" >}}
{{< provider name="C14" home="https://www.online.net/en/storage/c14-cold-storage" config="/sftp/#c14" >}}
{{< provider name="C14" home="https://www.online.net/en/storage/c14-cold-storage" config="/s3/#scaleway" >}}
{{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
{{< provider name="Digi Storage" home="https://storage.rcs-rds.ro/" config="/koofr/#digi-storage" >}}
{{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}

View file

@ -5,6 +5,48 @@ description: "Rclone Changelog"
# Changelog
## v1.58.1 - 2022-04-29
[See commits](https://github.com/rclone/rclone/compare/v1.58.0...v1.58.1)
* Bug Fixes
* build: Update github.com/billziss-gh to github.com/winfsp (Nick Craig-Wood)
* filter: Fix timezone of `--min-age`/`-max-age` from UTC to local as documented (Nick Craig-Wood)
* rc/js: Correct RC method names (Sơn Trần-Nguyễn)
* docs
* Fix some links to command pages (albertony)
* Add `--multi-thread-streams` note to `--transfers`. (Zsolt Ero)
* Mount
* Fix `--devname` and fusermount: unknown option 'fsname' when mounting via rc (Nick Craig-Wood)
* VFS
* Remove wording which suggests VFS is only for mounting (Nick Craig-Wood)
* Dropbox
* Fix retries of multipart uploads with incorrect_offset error (Nick Craig-Wood)
* Google Cloud Storage
* Use the s3 pacer to speed up transactions (Nick Craig-Wood)
* pacer: Default the Google pacer to a burst of 100 to fix gcs pacing (Nick Craig-Wood)
* Jottacloud
* Fix scope in token request (albertony)
* Netstorage
* Fix unescaped HTML in documentation (Nick Craig-Wood)
* Make levels of headings consistent (Nick Craig-Wood)
* Add support contacts to netstorage doc (Nil Alexandrov)
* Onedrive
* Note that sharepoint also changes web files (.html, .aspx) (GH)
* Putio
* Handle rate limit errors (Berkan Teber)
* Fix multithread download and other ranged requests (rafma0)
* S3
* Add ChinaMobile EOS to provider list (GuoXingbin)
* Sync providers in config description with providers (Nick Craig-Wood)
* SFTP
* Fix OpenSSH 8.8+ RSA keys incompatibility (KARBOWSKI Piotr)
* Note that Scaleway C14 is deprecating SFTP in favor of S3 (Adrien Rey-Jarthon)
* Storj
* Fix bucket creation on Move (Nick Craig-Wood)
* WebDAV
* Don't override Referer if user sets it (Nick Craig-Wood)
## v1.58.0 - 2022-03-18
[See commits](https://github.com/rclone/rclone/compare/v1.57.0...v1.58.0)

View file

@ -75,10 +75,10 @@ at all, then 1 PiB is set as both the total and the free size.
To run rclone mount on Windows, you will need to
download and install [WinFsp](http://www.secfs.net/winfsp/).
[WinFsp](https://github.com/billziss-gh/winfsp) is an open-source
[WinFsp](https://github.com/winfsp/winfsp) is an open-source
Windows File System Proxy which makes it easy to write user space file
systems for Windows. It provides a FUSE emulation layer which rclone
uses combination with [cgofuse](https://github.com/billziss-gh/cgofuse).
uses combination with [cgofuse](https://github.com/winfsp/cgofuse).
Both of these packages are by Bill Zissimopoulos who was very helpful
during the implementation of rclone mount for Windows.
@ -228,7 +228,7 @@ from Microsoft's Sysinternals suite, which has option `-s` to start
processes as the SYSTEM account. Another alternative is to run the mount
command from a Windows Scheduled Task, or a Windows Service, configured
to run as the SYSTEM account. A third alternative is to use the
[WinFsp.Launcher infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture)).
[WinFsp.Launcher infrastructure](https://github.com/winfsp/winfsp/wiki/WinFsp-Service-Architecture)).
Note that when running rclone as another user, it will not use
the configuration file from your profile unless you tell it to
with the [`--config`](https://rclone.org/docs/#config-config-file) option.
@ -410,7 +410,7 @@ about files and directories (but not the data) in memory.
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@ -607,7 +607,7 @@ read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@ -619,7 +619,7 @@ on disk cache file.
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
@ -636,22 +636,22 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
The `--vfs-case-insensitive` VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument refers
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if a file name with exactly the same
name is not found but a name differing only by case exists, rclone will
transparently fixup the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by an underlying mounted file system.
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system mounted by rclone (the source).
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends
@ -705,7 +705,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only)
-o, --option stringArray Option for libfuse/WinFsp (repeat if required)
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--read-only Only allow read-only access
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)

View file

@ -51,7 +51,7 @@ about files and directories (but not the data) in memory.
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@ -248,7 +248,7 @@ read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@ -260,7 +260,7 @@ on disk cache file.
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
@ -277,22 +277,22 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
The `--vfs-case-insensitive` VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument refers
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if a file name with exactly the same
name is not found but a name differing only by case exists, rclone will
transparently fixup the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by an underlying mounted file system.
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system mounted by rclone (the source).
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends
@ -332,7 +332,7 @@ rclone serve dlna remote:path [flags]
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--read-only Only allow read-only access
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)

View file

@ -69,7 +69,7 @@ about files and directories (but not the data) in memory.
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@ -266,7 +266,7 @@ read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@ -278,7 +278,7 @@ on disk cache file.
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
@ -295,22 +295,22 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
The `--vfs-case-insensitive` VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument refers
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if a file name with exactly the same
name is not found but a name differing only by case exists, rclone will
transparently fixup the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by an underlying mounted file system.
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system mounted by rclone (the source).
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends
@ -367,7 +367,7 @@ rclone serve docker [flags]
--noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only)
-o, --option stringArray Option for libfuse/WinFsp (repeat if required)
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--read-only Only allow read-only access
--socket-addr string Address <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
--socket-gid int GID for unix socket (default: current process GID) (default 1000)
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)

View file

@ -50,7 +50,7 @@ about files and directories (but not the data) in memory.
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@ -247,7 +247,7 @@ read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@ -259,7 +259,7 @@ on disk cache file.
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
@ -276,22 +276,22 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
The `--vfs-case-insensitive` VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument refers
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if a file name with exactly the same
name is not found but a name differing only by case exists, rclone will
transparently fixup the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by an underlying mounted file system.
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system mounted by rclone (the source).
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends
@ -416,7 +416,7 @@ rclone serve ftp remote:path [flags]
--passive-port string Passive port range to use (default "30000-32000")
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--public-ip string Public IP address to advertise for passive connections
--read-only Mount read-only
--read-only Only allow read-only access
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication (default "anonymous")

View file

@ -126,7 +126,7 @@ about files and directories (but not the data) in memory.
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@ -323,7 +323,7 @@ read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@ -335,7 +335,7 @@ on disk cache file.
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
@ -352,22 +352,22 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
The `--vfs-case-insensitive` VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument refers
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if a file name with exactly the same
name is not found but a name differing only by case exists, rclone will
transparently fixup the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by an underlying mounted file system.
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system mounted by rclone (the source).
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends
@ -412,7 +412,7 @@ rclone serve http remote:path [flags]
--no-seek Don't allow seeking in files
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--read-only Only allow read-only access
--realm string Realm for authentication
--salt string Password hashing salt (default "dlPL2MqE")
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)

View file

@ -79,7 +79,7 @@ about files and directories (but not the data) in memory.
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@ -276,7 +276,7 @@ read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@ -288,7 +288,7 @@ on disk cache file.
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
@ -305,22 +305,22 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
The `--vfs-case-insensitive` VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument refers
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if a file name with exactly the same
name is not found but a name differing only by case exists, rclone will
transparently fixup the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by an underlying mounted file system.
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system mounted by rclone (the source).
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends
@ -444,7 +444,7 @@ rclone serve sftp remote:path [flags]
--no-seek Don't allow seeking in files
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--read-only Only allow read-only access
--stdio Run an sftp server on stdin/stdout

--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)

View file

@ -130,7 +130,7 @@ about files and directories (but not the data) in memory.
Using the `--dir-cache-time` flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@ -327,7 +327,7 @@ read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@ -339,7 +339,7 @@ on disk cache file.
When using VFS write caching (`--vfs-cache-mode` with value writes or full),
the global flag `--transfers` can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag `--checkers` have no effect on mount).
modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
@ -356,22 +356,22 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The `--vfs-case-insensitive` mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
The `--vfs-case-insensitive` VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument refers
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if a file name with exactly the same
name is not found but a name differing only by case exists, rclone will
transparently fixup the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by an underlying mounted file system.
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system mounted by rclone (the source).
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends
@ -500,7 +500,7 @@ rclone serve webdav remote:path [flags]
--no-seek Don't allow seeking in files
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--read-only Only allow read-only access
--realm string Realm for authentication (default "rclone")
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)

View file

@ -939,22 +939,22 @@ unit prefix appended to the value (e.g. `9.762Ki`), while in more textual output
the full unit is shown (e.g. `9.762 KiB`). For counts the SI standard notation is
used, e.g. prefix `k` for kilo. Used with file counts, `1k` means 1000 files.
The various [list](commands/rclone_ls/) commands output raw numbers by default.
The various [list](/commands/rclone_ls/) commands output raw numbers by default.
Option `--human-readable` will make them output values in human-readable format
instead (with the short unit prefix).
The [about](commands/rclone_about/) command outputs human-readable by default,
The [about](/commands/rclone_about/) command outputs human-readable by default,
with a command-specific option `--full` to output the raw numbers instead.
Command [size](commands/rclone_size/) outputs both human-readable and raw numbers
Command [size](/commands/rclone_size/) outputs both human-readable and raw numbers
in the same output.
The [tree](commands/rclone_tree/) command also considers `--human-readable`, but
The [tree](/commands/rclone_tree/) command also considers `--human-readable`, but
it will not use the exact same notation as the other commands: It rounds to one
decimal, and uses a single-letter suffix, e.g. `K` instead of `Ki`. The reason for
this is that it relies on an external library.
The interactive command [ncdu](commands/rclone_ncdu/) shows human-readable by
The interactive command [ncdu](/commands/rclone_ncdu/) shows human-readable by
default, and responds to key `u` for toggling human-readable format.
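For instance (paths are placeholders), a listing with the short unit prefix, and `about` asked for raw numbers instead:

```
rclone lsl --human-readable remote:path
rclone about --full remote:
```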
### --ignore-case-sync ###
@ -1793,6 +1793,8 @@ of timeouts or bigger if you have lots of bandwidth and a fast remote.
The default is to run 4 file transfers in parallel.
Look at --multi-thread-streams if you would like to control single file transfers.
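As an illustrative example (source and destination are placeholders), raising the number of parallel transfers for a copy might look like:

```
rclone copy source:path dest:path --transfers 8
```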
### -u, --update ###
This forces rclone to skip any files which exist on the destination

View file

@ -157,7 +157,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.58.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.58.1")
-v, --verbose count Print lots more stuff (repeat for more)
```

View file

@ -3,8 +3,7 @@ title: "Akamai Netstorage"
description: "Rclone docs for Akamai NetStorage"
---
{{< icon "fas fa-database" >}} Akamai NetStorage
-------------------------------------------------
# {{< icon "fas fa-database" >}} Akamai NetStorage
Paths are specified as `remote:`
You may put subdirectories in too, e.g. `remote:/path/to/dir`.
@ -19,6 +18,8 @@ See all buckets
rclone lsd remote:
The initial setup for NetStorage involves getting an account and secret. Use `rclone config` to walk you through the setup process.
## Configuration
Here's an example of how to make a remote called `ns1`.
1. To begin the interactive configuration process, enter this command:
@ -110,28 +111,31 @@ y/e/d> y
This remote is called `ns1` and can now be used.
### Example operations
## Example operations
Get started with rclone and NetStorage using these examples. For additional rclone commands, visit https://rclone.org/commands/.
##### See contents of a directory in your project
### See contents of a directory in your project
rclone lsd ns1:/974012/testing/
##### Sync the contents local with remote
### Sync the local contents with the remote
rclone sync . ns1:/974012/testing/
##### Upload local content to remote
### Upload local content to remote
rclone copy notes.txt ns1:/974012/testing/
##### Delete content on remote
### Delete content on remote
rclone delete ns1:/974012/testing/notes.txt
##### Move or copy content between CP codes.
### Move or copy content between CP codes.
Your credentials must have access to two CP codes on the same remote. You can't perform operations between different remotes.
rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/
## Features
### Symlink Support
@ -152,7 +156,7 @@ With NetStorage, directories can exist in one of two forms:
Rclone will intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly issue the mkdir command for each directory in the upload path. This helps with interoperability with other Akamai services such as SFTP and the Content Management Shell (CMShell). Rclone will not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly.
### ListR Feature
### `--fast-list` / ListR support
The NetStorage remote supports the ListR feature by using the "list" NetStorage API action to return a lexicographical list of all objects within the specified CP code, recursing into subdirectories as they're encountered.
@ -164,7 +168,7 @@ There are pros and cons of using the ListR method, refer to [rclone documentatio
**Note**: There is a known limitation that "lsf -R" will display the number of files in the directory and the directory size as -1 when the ListR method is used. The workaround is to pass the "--disable listR" flag if these numbers are important in the output.
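For example, a recursive listing that bypasses ListR, reusing the `ns1` remote and CP code from the examples above:

```
rclone lsf -R --disable listR ns1:/974012/testing/
```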
### Purge Feature
### Purge
The NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use the quick-delete action for the purge command, and if this functionality is disabled it will fall back to a standard delete method.
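A sketch of purging a directory on the `ns1` example remote; if quick-delete is not enabled on the account, rclone falls back to the standard delete method as described above:

```
rclone purge ns1:/974012/testing/
```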
@ -179,7 +183,7 @@ Here are the standard options specific to netstorage (Akamai NetStorage).
Domain+path of NetStorage host to connect to.
Format should be <domain>/<internal folders>
Format should be `<domain>/<internal folders>`
Properties:
@ -271,6 +275,6 @@ You can create a symbolic link in ObjectStore with the symlink action.
The desired path location (including applicable sub-directories) ending in
the object that will be the target of the symlink (for example, /links/mylink).
Include the file extension for the object, if applicable.
rclone backend symlink <src> <path>
`rclone backend symlink <src> <path>`
{{< rem autogenerated options stop >}}

View file

@ -629,7 +629,7 @@ are converted you will no longer need the ignore options above.
It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue
that Sharepoint (not OneDrive or OneDrive for Business) may return "item not
found" errors when users try to replace or delete uploaded files; this seems to
mainly affect Office files (.docx, .xlsx, etc.). As a workaround, you may use
mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a workaround, you may use
the `--backup-dir <BACKUP_DIR>` command line argument so rclone moves the
files to be replaced/deleted into a given backup directory (instead of directly
replacing/deleting them). For example, to instruct rclone to move the files into

View file

@ -127,3 +127,12 @@ Properties:
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
## Limitations
put.io has rate limiting. When you hit a limit, rclone automatically
retries after waiting the amount of time requested by the server.
If you want to avoid ever hitting these limits, you may use the
`--tpslimit` flag with a low number. Note that the imposed limits
may be different for different operations, and may change over time.
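For example, a hedged invocation (paths are placeholders) that caps rclone at roughly two API transactions per second:

```
rclone copy /path/to/local remote:directory --tpslimit 2
```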

View file

@ -11,6 +11,7 @@ The S3 backend can be used with a number of different providers:
{{< provider name="AWS S3" home="https://aws.amazon.com/s3/" config="/s3/#configuration" start="true" >}}
{{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}}
{{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
{{< provider name="China Mobile Ecloud Elastic Object Storage (EOS)" home="https://ecloud.10086.cn/home/product-introduction/eos/" config="/s3/#china-mobile-ecloud-eos" >}}
{{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
{{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
{{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}}
@ -69,7 +70,7 @@ name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, Dreamhost, IBM COS, Minio, and Tencent COS
XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, ChinaMobile, Dreamhost, IBM COS, Minio, and Tencent COS
\ "s3"
[snip]
Storage> s3
@ -566,7 +567,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
### Standard options
Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS).
Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).
#### --s3-provider
@ -585,6 +586,8 @@ Properties:
- Alibaba Cloud Object Storage System (OSS) formerly Aliyun
- "Ceph"
- Ceph Object Storage
- "ChinaMobile"
- China Mobile Ecloud Elastic Object Storage (EOS)
- "DigitalOcean"
- Digital Ocean Spaces
- "Dreamhost"
@ -826,7 +829,7 @@ Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
- Provider: !AWS,Alibaba,RackCorp,Scaleway,Storj,TencentCOS
- Provider: !AWS,Alibaba,ChinaMobile,RackCorp,Scaleway,Storj,TencentCOS
- Type: string
- Required: false
- Examples:
@ -853,6 +856,80 @@ Properties:
#### --s3-endpoint
Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: ChinaMobile
- Type: string
- Required: false
- Examples:
- "eos-wuxi-1.cmecloud.cn"
- The default endpoint - a good choice if you are unsure.
- East China (Suzhou)
- "eos-jinan-1.cmecloud.cn"
- East China (Jinan)
- "eos-ningbo-1.cmecloud.cn"
- East China (Hangzhou)
- "eos-shanghai-1.cmecloud.cn"
- East China (Shanghai-1)
- "eos-zhengzhou-1.cmecloud.cn"
- Central China (Zhengzhou)
- "eos-hunan-1.cmecloud.cn"
- Central China (Changsha-1)
- "eos-zhuzhou-1.cmecloud.cn"
- Central China (Changsha-2)
- "eos-guangzhou-1.cmecloud.cn"
- South China (Guangzhou-2)
- "eos-dongguan-1.cmecloud.cn"
- South China (Guangzhou-3)
- "eos-beijing-1.cmecloud.cn"
- North China (Beijing-1)
- "eos-beijing-2.cmecloud.cn"
- North China (Beijing-2)
- "eos-beijing-4.cmecloud.cn"
- North China (Beijing-3)
- "eos-huhehaote-1.cmecloud.cn"
- North China (Huhehaote)
- "eos-chengdu-1.cmecloud.cn"
- Southwest China (Chengdu)
- "eos-chongqing-1.cmecloud.cn"
- Southwest China (Chongqing)
- "eos-guiyang-1.cmecloud.cn"
- Southwest China (Guiyang)
- "eos-xian-1.cmecloud.cn"
- Northwest China (Xian)
- "eos-yunnan.cmecloud.cn"
- Yunnan China (Kunming)
- "eos-yunnan-2.cmecloud.cn"
- Yunnan China (Kunming-2)
- "eos-tianjin-1.cmecloud.cn"
- Tianjin China (Tianjin)
- "eos-jilin-1.cmecloud.cn"
- Jilin China (Changchun)
- "eos-hubei-1.cmecloud.cn"
- Hubei China (Xiangyan)
- "eos-jiangxi-1.cmecloud.cn"
- Jiangxi China (Nanchang)
- "eos-gansu-1.cmecloud.cn"
- Gansu China (Lanzhou)
- "eos-shanxi-1.cmecloud.cn"
- Shanxi China (Taiyuan)
- "eos-liaoning-1.cmecloud.cn"
- Liaoning China (Shenyang)
- "eos-hebei-1.cmecloud.cn"
- Hebei China (Shijiazhuang)
- "eos-fujian-1.cmecloud.cn"
- Fujian China (Xiamen)
- "eos-guangxi-1.cmecloud.cn"
- Guangxi China (Nanning)
- "eos-anhui-1.cmecloud.cn"
- Anhui China (Huainan)
#### --s3-endpoint
Endpoint for IBM COS S3 API.
Specify if using an IBM COS On Premise.
@ -1220,7 +1297,7 @@ Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider: !AWS,IBMCOS,TencentCOS,Alibaba,Scaleway,StackPath,Storj,RackCorp
- Provider: !AWS,IBMCOS,TencentCOS,Alibaba,ChinaMobile,Scaleway,StackPath,Storj,RackCorp
- Type: string
- Required: false
- Examples:
@ -1318,6 +1395,81 @@ Properties:
#### --s3-location-constraint
Location constraint - must match endpoint.
Used when creating buckets only.
Properties:
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: ChinaMobile
- Type: string
- Required: false
- Examples:
- "wuxi1"
- East China (Suzhou)
- "jinan1"
- East China (Jinan)
- "ningbo1"
- East China (Hangzhou)
- "shanghai1"
- East China (Shanghai-1)
- "zhengzhou1"
- Central China (Zhengzhou)
- "hunan1"
- Central China (Changsha-1)
- "zhuzhou1"
- Central China (Changsha-2)
- "guangzhou1"
- South China (Guangzhou-2)
- "dongguan1"
- South China (Guangzhou-3)
- "beijing1"
- North China (Beijing-1)
- "beijing2"
- North China (Beijing-2)
- "beijing4"
- North China (Beijing-3)
- "huhehaote1"
- North China (Huhehaote)
- "chengdu1"
- Southwest China (Chengdu)
- "chongqing1"
- Southwest China (Chongqing)
- "guiyang1"
- Southwest China (Guiyang)
- "xian1"
- Northwest China (Xian)
- "yunnan"
- Yunnan China (Kunming)
- "yunnan2"
- Yunnan China (Kunming-2)
- "tianjin1"
- Tianjin China (Tianjin)
- "jilin1"
- Jilin China (Changchun)
- "hubei1"
- Hubei China (Xiangyan)
- "jiangxi1"
- Jiangxi China (Nanchang)
- "gansu1"
- Gansu China (Lanzhou)
- "shanxi1"
- Shanxi China (Taiyuan)
- "liaoning1"
- Liaoning China (Shenyang)
- "hebei1"
- Hebei China (Shijiazhuang)
- "fujian1"
- Fujian China (Xiamen)
- "guangxi1"
- Guangxi China (Nanning)
- "anhui1"
- Anhui China (Huainan)
#### --s3-location-constraint
Location constraint - must match endpoint when using IBM Cloud Public.
For on-prem COS, do not make a selection from this list, hit enter.
@ -1457,7 +1609,7 @@ Properties:
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider: !AWS,IBMCOS,Alibaba,RackCorp,Scaleway,StackPath,Storj,TencentCOS
- Provider: !AWS,IBMCOS,Alibaba,ChinaMobile,RackCorp,Scaleway,StackPath,Storj,TencentCOS
- Type: string
- Required: false
@ -1529,7 +1681,7 @@ Properties:
- Config: server_side_encryption
- Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
- Provider: AWS,Ceph,Minio
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
@ -1611,6 +1763,27 @@ Properties:
#### --s3-storage-class
The storage class to use when storing new objects in ChinaMobile.
Properties:
- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
- Provider: ChinaMobile
- Type: string
- Required: false
- Examples:
- ""
- Default
- "STANDARD"
- Standard storage class
- "GLACIER"
- Archive storage mode
- "STANDARD_IA"
- Infrequent access storage mode
#### --s3-storage-class
The storage class to use when storing new objects in Tencent COS.
Properties:
@ -1653,7 +1826,7 @@ Properties:
### Advanced options
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS).
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).
#### --s3-bucket-acl
@ -1705,7 +1878,7 @@ Properties:
- Config: sse_customer_algorithm
- Env Var: RCLONE_S3_SSE_CUSTOMER_ALGORITHM
- Provider: AWS,Ceph,Minio
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
@ -1722,7 +1895,7 @@ Properties:
- Config: sse_customer_key
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY
- Provider: AWS,Ceph,Minio
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
@ -1740,7 +1913,7 @@ Properties:
- Config: sse_customer_key_md5
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5
- Provider: AWS,Ceph,Minio
- Provider: AWS,Ceph,ChinaMobile,Minio
- Type: string
- Required: false
- Examples:
@ -2315,7 +2488,7 @@ Options:
{{< rem autogenerated options stop >}}
### Anonymous access to public buckets ###
### Anonymous access to public buckets
If you want to use rclone to access a public bucket, configure with a
blank `access_key_id` and `secret_access_key`. Your config should end
@ -2521,7 +2694,7 @@ Choose a number from below, or type in your own value
\ "alias"
2 / Amazon Drive
\ "amazon cloud drive"
3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
3 / Amazon S3 Compliant Storage Providers (Dreamhost, Ceph, ChinaMobile, Minio, IBM COS)
\ "s3"
4 / Backblaze B2
\ "b2"
@ -2778,6 +2951,9 @@ server_side_encryption =
storage_class =
```
[C14 Cold Storage](https://www.online.net/en/storage/c14-cold-storage) is the low-cost S3 Glacier alternative from Scaleway, and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
So you can configure your remote with the `storage_class = GLACIER` option to upload directly to C14. Don't forget that in this state you can't read files back; you will need to restore them to the "STANDARD" storage class first before being able to read them (see the "restore" section above).
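As a sketch (the `scaleway` remote name is illustrative), the storage class can also be set per command with the backend flag rather than in the config file:

```
rclone copy notes.txt scaleway:bucket/path --s3-storage-class GLACIER
```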
### Seagate Lyve Cloud {#lyve}
[Seagate Lyve Cloud](https://www.seagate.com/gb/en/services/cloud/storage/) is an S3
@ -2803,7 +2979,7 @@ Choose `s3` backend
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
\ (s3)
[snip]
Storage> s3
@ -2990,7 +3166,7 @@ name> wasabi
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, Minio)
XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, Minio)
\ "s3"
[snip]
Storage> s3
@ -3104,7 +3280,7 @@ Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS
4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS
\ "s3"
[snip]
Storage> s3
@ -3194,6 +3370,256 @@ d) Delete this remote
y/e/d> y
```
### China Mobile Ecloud Elastic Object Storage (EOS) {#china-mobile-ecloud-eos}
Here is an example of making a [China Mobile Ecloud Elastic Object Storage (EOS)](https://ecloud.10086.cn/home/product-introduction/eos/)
configuration. First run:
rclone config
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> ChinaMobile
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
...
5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
\ (s3)
...
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
...
4 / China Mobile Ecloud Elastic Object Storage (EOS)
\ (ChinaMobile)
...
provider> ChinaMobile
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
1 / Enter AWS credentials in the next step.
\ (false)
2 / Get AWS credentials from the environment (env vars or IAM).
\ (true)
env_auth>
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> accesskeyid
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> secretaccesskey
Option endpoint.
Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ The default endpoint - a good choice if you are unsure.
1 | East China (Suzhou)
\ (eos-wuxi-1.cmecloud.cn)
2 / East China (Jinan)
\ (eos-jinan-1.cmecloud.cn)
3 / East China (Hangzhou)
\ (eos-ningbo-1.cmecloud.cn)
4 / East China (Shanghai-1)
\ (eos-shanghai-1.cmecloud.cn)
5 / Central China (Zhengzhou)
\ (eos-zhengzhou-1.cmecloud.cn)
6 / Central China (Changsha-1)
\ (eos-hunan-1.cmecloud.cn)
7 / Central China (Changsha-2)
\ (eos-zhuzhou-1.cmecloud.cn)
8 / South China (Guangzhou-2)
\ (eos-guangzhou-1.cmecloud.cn)
9 / South China (Guangzhou-3)
\ (eos-dongguan-1.cmecloud.cn)
10 / North China (Beijing-1)
\ (eos-beijing-1.cmecloud.cn)
11 / North China (Beijing-2)
\ (eos-beijing-2.cmecloud.cn)
12 / North China (Beijing-3)
\ (eos-beijing-4.cmecloud.cn)
13 / North China (Huhehaote)
\ (eos-huhehaote-1.cmecloud.cn)
14 / Southwest China (Chengdu)
\ (eos-chengdu-1.cmecloud.cn)
15 / Southwest China (Chongqing)
\ (eos-chongqing-1.cmecloud.cn)
16 / Southwest China (Guiyang)
\ (eos-guiyang-1.cmecloud.cn)
17 / Northwest China (Xian)
\ (eos-xian-1.cmecloud.cn)
18 / Yunnan China (Kunming)
\ (eos-yunnan.cmecloud.cn)
19 / Yunnan China (Kunming-2)
\ (eos-yunnan-2.cmecloud.cn)
20 / Tianjin China (Tianjin)
\ (eos-tianjin-1.cmecloud.cn)
21 / Jilin China (Changchun)
\ (eos-jilin-1.cmecloud.cn)
22 / Hubei China (Xiangyan)
\ (eos-hubei-1.cmecloud.cn)
23 / Jiangxi China (Nanchang)
\ (eos-jiangxi-1.cmecloud.cn)
24 / Gansu China (Lanzhou)
\ (eos-gansu-1.cmecloud.cn)
25 / Shanxi China (Taiyuan)
\ (eos-shanxi-1.cmecloud.cn)
26 / Liaoning China (Shenyang)
\ (eos-liaoning-1.cmecloud.cn)
27 / Hebei China (Shijiazhuang)
\ (eos-hebei-1.cmecloud.cn)
28 / Fujian China (Xiamen)
\ (eos-fujian-1.cmecloud.cn)
29 / Guangxi China (Nanning)
\ (eos-guangxi-1.cmecloud.cn)
30 / Anhui China (Huainan)
\ (eos-anhui-1.cmecloud.cn)
endpoint> 1
Option location_constraint.
Location constraint - must match endpoint.
Used when creating buckets only.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / East China (Suzhou)
\ (wuxi1)
2 / East China (Jinan)
\ (jinan1)
3 / East China (Hangzhou)
\ (ningbo1)
4 / East China (Shanghai-1)
\ (shanghai1)
5 / Central China (Zhengzhou)
\ (zhengzhou1)
6 / Central China (Changsha-1)
\ (hunan1)
7 / Central China (Changsha-2)
\ (zhuzhou1)
8 / South China (Guangzhou-2)
\ (guangzhou1)
9 / South China (Guangzhou-3)
\ (dongguan1)
10 / North China (Beijing-1)
\ (beijing1)
11 / North China (Beijing-2)
\ (beijing2)
12 / North China (Beijing-3)
\ (beijing4)
13 / North China (Huhehaote)
\ (huhehaote1)
14 / Southwest China (Chengdu)
\ (chengdu1)
15 / Southwest China (Chongqing)
\ (chongqing1)
16 / Southwest China (Guiyang)
\ (guiyang1)
17 / Northwest China (Xian)
\ (xian1)
18 / Yunnan China (Kunming)
\ (yunnan)
19 / Yunnan China (Kunming-2)
\ (yunnan2)
20 / Tianjin China (Tianjin)
\ (tianjin1)
21 / Jilin China (Changchun)
\ (jilin1)
22 / Hubei China (Xiangyan)
\ (hubei1)
23 / Jiangxi China (Nanchang)
\ (jiangxi1)
24 / Gansu China (Lanzhou)
\ (gansu1)
25 / Shanxi China (Taiyuan)
\ (shanxi1)
26 / Liaoning China (Shenyang)
\ (liaoning1)
27 / Hebei China (Shijiazhuang)
\ (hebei1)
28 / Fujian China (Xiamen)
\ (fujian1)
29 / Guangxi China (Nanning)
\ (guangxi1)
30 / Anhui China (Huainan)
\ (anhui1)
location_constraint> 1
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
/ Owner gets FULL_CONTROL.
1 | No one else has access rights (default).
\ (private)
/ Owner gets FULL_CONTROL.
2 | The AllUsers group gets READ access.
\ (public-read)
/ Owner gets FULL_CONTROL.
3 | The AllUsers group gets READ and WRITE access.
| Granting this on a bucket is generally not recommended.
\ (public-read-write)
/ Owner gets FULL_CONTROL.
4 | The AuthenticatedUsers group gets READ access.
\ (authenticated-read)
/ Object owner gets FULL_CONTROL.
acl> private
Option server_side_encryption.
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / None
\ ()
2 / AES256
\ (AES256)
server_side_encryption>
Option storage_class.
The storage class to use when storing new objects in ChinaMobile.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
1 / Default
\ ()
2 / Standard storage class
\ (STANDARD)
3 / Archive storage mode
\ (GLACIER)
4 / Infrequent access storage mode
\ (STANDARD_IA)
storage_class>
Edit advanced config?
y) Yes
n) No (default)
y/n> n
--------------------
[ChinaMobile]
type = s3
provider = ChinaMobile
access_key_id = accesskeyid
secret_access_key = secretaccesskey
endpoint = eos-wuxi-1.cmecloud.cn
location_constraint = wuxi1
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
### Tencent COS {#tencent-cos}
[Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost.
@ -3227,7 +3653,7 @@ Choose a number from below, or type in your own value
\ "alias"
3 / Amazon Drive
\ "amazon cloud drive"
4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS
4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS
\ "s3"
[snip]
Storage> s3

View file

@ -11,7 +11,6 @@ Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
The SFTP backend can be used with a number of different providers:
{{< provider_list >}}
{{< provider name="C14" home="https://www.online.net/en/storage/c14-cold-storage" config="/sftp/#c14">}}
{{< provider name="rsync.net" home="https://rsync.net/products/rclone.html" config="/sftp/#rsync-net">}}
{{< /provider_list >}}
@ -676,12 +675,6 @@ with it: `--dump-headers`, `--dump-bodies`, `--dump-auth`
Note that `--timeout` and `--contimeout` are both supported.
## C14 {#c14}
C14 is supported through the SFTP backend.
See [C14's documentation](https://www.online.net/en/storage/c14-cold-storage)
## rsync.net {#rsync-net}
rsync.net is supported through the SFTP backend.

View file

@ -1 +1 @@
v1.58.0
v1.58.1

View file

@ -80,7 +80,7 @@ var timeFormats = []string{
func parseDurationDates(age string, epoch time.Time) (t time.Duration, err error) {
var instant time.Time
for _, timeFormat := range timeFormats {
instant, err = time.Parse(timeFormat, age)
instant, err = time.ParseInLocation(timeFormat, age, time.Local)
if err == nil {
return epoch.Sub(instant), nil
}

View file

@ -42,10 +42,12 @@ func TestParseDuration(t *testing.T) {
{"1x", 0, true},
{"off", time.Duration(DurationOff), false},
{"1h2m3s", time.Hour + 2*time.Minute + 3*time.Second, false},
{"2001-02-03", now.Sub(time.Date(2001, 2, 3, 0, 0, 0, 0, time.UTC)), false},
{"2001-02-03 10:11:12", now.Sub(time.Date(2001, 2, 3, 10, 11, 12, 0, time.UTC)), false},
{"2001-02-03T10:11:12", now.Sub(time.Date(2001, 2, 3, 10, 11, 12, 0, time.UTC)), false},
{"2001-02-03", now.Sub(time.Date(2001, 2, 3, 0, 0, 0, 0, time.Local)), false},
{"2001-02-03 10:11:12", now.Sub(time.Date(2001, 2, 3, 10, 11, 12, 0, time.Local)), false},
{"2001-08-03 10:11:12", now.Sub(time.Date(2001, 8, 3, 10, 11, 12, 0, time.Local)), false},
{"2001-02-03T10:11:12", now.Sub(time.Date(2001, 2, 3, 10, 11, 12, 0, time.Local)), false},
{"2001-02-03T10:11:12.123Z", now.Sub(time.Date(2001, 2, 3, 10, 11, 12, 123, time.UTC)), false},
{"2001-02-03T10:11:12.123+00:00", now.Sub(time.Date(2001, 2, 3, 10, 11, 12, 123, time.UTC)), false},
} {
duration, err := parseDurationFromNow(test.in, getNow)
if test.err {

View file

@ -30,7 +30,7 @@ rcValid.then(() => {
// If the output object has an "error" and a "status" then it is an
// error (it would be nice to signal this out of band).
console.log("core/version", rc("core/version", null))
console.log("core/version", rc("rc/noop", {"string":"one",number:2}))
console.log("core/version", rc("operations/mkdir", {"fs":":memory:","remote":"bucket"}))
console.log("core/version", rc("operations/list", {"fs":":memory:","remote":"bucket"}))
console.log("rc/noop", rc("rc/noop", {"string":"one",number:2}))
console.log("operations/mkdir", rc("operations/mkdir", {"fs":":memory:","remote":"bucket"}))
console.log("operations/list", rc("operations/list", {"fs":":memory:","remote":"bucket"}))
})

View file

@ -1,4 +1,4 @@
package fs
// Version of rclone
var Version = "v1.58.0-DEV"
var Version = "v1.58.1-DEV"

4
go.mod
View file

@ -17,7 +17,7 @@ require (
github.com/artyom/mtab v0.0.0-20141107123140-74b6fd01d416
github.com/atotto/clipboard v0.1.4
github.com/aws/aws-sdk-go v1.42.1
github.com/billziss-gh/cgofuse v1.5.0
github.com/winfsp/cgofuse v1.5.0
github.com/buengese/sgzip v0.1.1
github.com/colinmarc/hdfs/v2 v2.2.0
github.com/coreos/go-semver v0.3.0
@ -59,7 +59,7 @@ require (
github.com/yunify/qingstor-sdk-go/v3 v3.2.0
go.etcd.io/bbolt v1.3.6
goftp.io/server v0.4.1
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c

4
go.sum
View file

@ -643,6 +643,8 @@ github.com/viant/toolbox v0.24.0/go.mod h1:OxMCG57V0PXuIP2HNQrtJf2CjqdmbrOx5EkMI
github.com/vivint/infectious v0.0.0-20200605153912-25a574ae18a3 h1:zMsHhfK9+Wdl1F7sIKLyx3wrOFofpb3rWFbA4HgcK5k=
github.com/vivint/infectious v0.0.0-20200605153912-25a574ae18a3/go.mod h1:R0Gbuw7ElaGSLOZUSwBm/GgVwMd30jWxBDdAyMOeTuc=
github.com/willf/bitset v1.1.9/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
github.com/winfsp/cgofuse v1.5.0 h1:MsBP7Mi/LiJf/7/F3O/7HjjR009ds6KCdqXzKpZSWxI=
github.com/winfsp/cgofuse v1.5.0/go.mod h1:h3awhoUOcn2VYVKCwDaYxSLlZwnyK+A8KaDoLUp2lbU=
github.com/xanzy/ssh-agent v0.3.1 h1:AmzO1SSWxw73zxFZPRwaMN1MohDw8UyHnmuxyceTEGo=
github.com/xanzy/ssh-agent v0.3.1/go.mod h1:QIE4lCeL7nkC25x+yA3LBIYfwCc1TFziCtG7cBAac6w=
github.com/youmark/pkcs8 v0.0.0-20201027041543-1326539a0a0a h1:fZHgsYlfvtyqToslyjUt3VOPF4J7aK/3MPcK7xp3PDk=
@ -709,6 +711,8 @@ golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5y
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3 h1:0es+/5331RGQPcXlMfP+WrnIIS6dNnNRe0WB02W0F4M=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29 h1:tkVvjkPTB7pnW3jnid7kNyAMPVWllTNOf/qKDze4p9o=
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=

View file

@ -219,7 +219,7 @@ type GoogleDriveOption interface {
func NewGoogleDrive(opts ...GoogleDriveOption) *GoogleDrive {
c := &GoogleDrive{
minSleep: 10 * time.Millisecond,
burst: 1,
burst: 100,
}
c.Update(opts...)
return c

1113
rclone.1 generated

File diff suppressed because it is too large.

View file

@ -27,7 +27,7 @@ about files and directories (but not the data) in memory.
Using the !--dir-cache-time! flag, you can control how long a
directory should be considered up to date and not refreshed from the
backend. Changes made through the mount will appear immediately or
backend. Changes made through the VFS will appear immediately or
invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@ -224,7 +224,7 @@ read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather
than seeking rclone will wait a short time for the in sequence read or
@ -236,7 +236,7 @@ on disk cache file.
When using VFS write caching (!--vfs-cache-mode! with value writes or full),
the global flag !--transfers! can be set to adjust the number of parallel uploads of
modified files from cache (the related global flag !--checkers! have no effect on mount).
modified files from the cache (the related global flag !--checkers! has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
@ -253,22 +253,22 @@ It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS
file systems case-sensitive but that is not the default.
The !--vfs-case-insensitive! mount flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the mounted
file system as-is. If the flag is "true" (or appears without a value on
The !--vfs-case-insensitive! VFS flag controls how rclone handles these
two cases. If its value is "false", rclone passes file names to the remote
as-is. If the flag is "true" (or appears without a value on the
command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case
different than what is stored on mounted file system. If an argument refers
different than what is stored on the remote. If an argument refers
to an existing file with exactly the same name, then the case of the existing
file on the disk will be used. However, if a file name with exactly the same
name is not found but a name differing only by case exists, rclone will
transparently fixup the name. This fixup happens only when an existing file
is requested. Case sensitivity of file names created anew by rclone is
controlled by an underlying mounted file system.
controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target)
may differ from case sensitivity of a file system mounted by rclone (the source).
may differ from case sensitivity of a file system presented by rclone (the source).
The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends

View file

@ -23,7 +23,7 @@ func AddFlags(flagSet *pflag.FlagSet) {
flags.BoolVarP(flagSet, &Opt.NoSeek, "no-seek", "", Opt.NoSeek, "Don't allow seeking in files")
flags.DurationVarP(flagSet, &Opt.DirCacheTime, "dir-cache-time", "", Opt.DirCacheTime, "Time to cache directory entries for")
flags.DurationVarP(flagSet, &Opt.PollInterval, "poll-interval", "", Opt.PollInterval, "Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable)")
flags.BoolVarP(flagSet, &Opt.ReadOnly, "read-only", "", Opt.ReadOnly, "Mount read-only")
flags.BoolVarP(flagSet, &Opt.ReadOnly, "read-only", "", Opt.ReadOnly, "Only allow read-only access")
flags.FVarP(flagSet, &Opt.CacheMode, "vfs-cache-mode", "", "Cache mode off|minimal|writes|full")
flags.DurationVarP(flagSet, &Opt.CachePollInterval, "vfs-cache-poll-interval", "", Opt.CachePollInterval, "Interval to poll the cache for stale objects")
flags.DurationVarP(flagSet, &Opt.CacheMaxAge, "vfs-cache-max-age", "", Opt.CacheMaxAge, "Max age of objects in the cache")

View file

@ -102,16 +102,15 @@ func RunTests(t *testing.T, useVFS bool, fn mountlib.MountFn) {
// Run holds the remotes for a test run
type Run struct {
os Oser
vfs *vfs.VFS
useVFS bool // set if we are testing a VFS not a mount
mountPath string
fremote fs.Fs
fremoteName string
cleanRemote func()
umountResult <-chan error
umountFn mountlib.UnmountFn
skip bool
os Oser
vfs *vfs.VFS
useVFS bool // set if we are testing a VFS not a mount
mnt *mountlib.MountPoint
mountPath string
fremote fs.Fs
fremoteName string
cleanRemote func()
skip bool
}
// run holds the master Run data
@ -125,8 +124,7 @@ var run *Run
// Finalise() will tidy them away when done.
func newRun(useVFS bool) *Run {
r := &Run{
useVFS: useVFS,
umountResult: make(chan error, 1),
useVFS: useVFS,
}
fstest.Initialise()
@ -173,14 +171,16 @@ func findMountPath() string {
func (r *Run) mount() {
log.Printf("mount %q %q", r.fremote, r.mountPath)
var err error
r.vfs = vfs.New(r.fremote, &vfsflags.Opt)
r.umountResult, r.umountFn, err = mountFn(r.vfs, r.mountPath, &mountlib.Opt)
r.mnt = mountlib.NewMountPoint(mountFn, r.mountPath, r.fremote, &mountlib.Opt, &vfsflags.Opt)
_, err = r.mnt.Mount()
if err != nil {
log.Printf("mount FAILED: %v", err)
r.skip = true
} else {
log.Printf("mount OK")
}
r.vfs = r.mnt.VFS
if r.useVFS {
r.os = vfsOs{r.vfs}
} else {
@ -202,17 +202,17 @@ func (r *Run) umount() {
}
*/
log.Printf("Unmounting %q", r.mountPath)
err := r.umountFn()
err := r.mnt.Unmount()
if err != nil {
log.Printf("signal to umount failed - retrying: %v", err)
time.Sleep(3 * time.Second)
err = r.umountFn()
err = r.mnt.Unmount()
}
if err != nil {
log.Fatalf("signal to umount failed: %v", err)
}
log.Printf("Waiting for umount")
err = <-r.umountResult
err = <-r.mnt.ErrChan
if err != nil {
log.Fatalf("umount failed: %v", err)
}