Compare commits


32 commits

Author SHA1 Message Date
Lucas Messenger
0012b981c1 hdfs: fix permissions for when directory is created 2021-03-12 09:15:47 +00:00
Nick Craig-Wood
707cdaa604 Start v1.54.2-DEV development 2021-03-08 11:04:59 +00:00
Nick Craig-Wood
e2531e08be Version v1.54.1 2021-03-08 10:04:23 +00:00
Nick Craig-Wood
86babc6393 build: fix nfpm install by using the released binary 2021-03-07 17:04:04 +00:00
Ivan Andreev
d45d48cbe5 chunker: fix integration tests after backport commit 6baa4e294
This was a typo I made during the backport: "f.base.NewObject" instead of the correct "f.NewObject". The commit on master was correct.
2021-03-07 13:28:43 +00:00
edwardxml
7ad2c22d5b docs: remove dead link from rc.md (#5038) 2021-03-06 11:51:44 +00:00
Dmitry Chepurovskiy
0cfcc08be1 s3: Fix shared_credentials_file auth
The S3 backend's shared_credentials_file option wasn't working from either the
config file or the command line. This was because
shared_credentials_file_provider works as part of a chain provider, but when
the user hadn't specified access_token and access_key we set the credentials
field to nil, discarding credentials that may have come from the
ChainProvider.

The AWS_SHARED_CREDENTIALS_FILE environment variable worked, as far as I
understood, because the aws_sdk code handles it as one of the default auth
options when no credentials are configured.
2021-03-06 11:51:19 +00:00
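The gist of the fix above can be sketched in Go. This is a minimal illustration, not rclone's actual code; the `Credentials` type and `pickCredentials` function are hypothetical names for this sketch:

```go
package main

import "fmt"

// Credentials is a simplified stand-in for the AWS SDK credentials type
// (hypothetical, for illustration only).
type Credentials struct{ Source string }

// pickCredentials sketches the corrected logic: explicit keys override the
// chain provider, but missing keys must NOT clear credentials the chain
// provider (e.g. a shared credentials file) may already have supplied.
func pickCredentials(accessKey, secretKey string, fromChain *Credentials) *Credentials {
	if accessKey != "" && secretKey != "" {
		return &Credentials{Source: "static"}
	}
	// Before the fix, this branch effectively returned nil, discarding
	// credentials obtained from the ChainProvider.
	return fromChain
}

func main() {
	chain := &Credentials{Source: "shared_credentials_file"}
	fmt.Println(pickCredentials("", "", chain).Source)
	fmt.Println(pickCredentials("AKIA-example", "secret", chain).Source)
}
```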
edwardxml
2c4a25de5b docs: convert bogus example link to code
Convert the bogus example plex URL from an auto-linked URL to code format, which hopefully won't be auto-linked.
2021-03-06 11:50:23 +00:00
edwardxml
f5a95b2ad0 docs: badly formed link
Fix a badly formed link created in an earlier rewrite.
2021-03-06 11:50:08 +00:00
Nick Craig-Wood
f2caa0eabb vfs: document simultaneous usage with the same cache shouldn't be used
Fixes #2227
2021-03-06 11:49:48 +00:00
Miron Veryanskiy
4943a5028c docs: replace #file-caching with #vfs-file-caching
The documentation had dead links pointing to #file-caching. They've been
moved to point to #vfs-file-caching.
2021-03-06 11:49:23 +00:00
Romeo Kienzler
60bebe4b35 docs: fix typo in crypt.md (#5037) 2021-03-06 11:44:51 +00:00
edwardxml
61031cfdea docs: fix broken link in sftp page
A stray line break had crept in, breaking the link markup.
2021-03-06 11:44:38 +00:00
edwardxml
da7e4379fa docs: fix nesting of brackets and backticks in ftp docs 2021-03-06 11:43:35 +00:00
Nick Craig-Wood
7e7a91ce3d rc: sync,copy,move: document createEmptySrcDirs parameter - fixes #4489 2021-03-06 11:43:15 +00:00
Ivan Andreev
6baa4e2947 address stringent ineffectual assignment check in golangci-lint (#5093) 2021-03-05 20:52:38 +03:00
Nick Craig-Wood
3f53283ebf s3: fix Wasabi HEAD requests returning stale data by using only 1 transport
In this commit

fc5b14b620 s3: Added `--s3-disable-http2` to disable http/2

We created our own transport so we could disable HTTP/2. However, the
added function is called twice, meaning that we created two HTTP
transports. This didn't happen with the original code because the
default transport is cached by fshttp.

Rclone normally does a PUT followed by a HEAD request to check an
upload has been successful.

With the two transports, the PUT and the HEAD were being done on
different HTTP transports. This means that it wasn't re-using the same
HTTP connection, so the HEAD request showed the previous object value.
This caused rclone to declare the upload was corrupted, delete the
object and try again.

This patch makes sure we only create one transport and use it for both
PUT and HEAD requests which fixes the problem with Wasabi.

See: https://forum.rclone.org/t/each-time-rclone-is-run-1-3-fails-2-3-succeeds/22545
2021-03-05 15:35:23 +00:00
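The fix described above boils down to constructing the transport exactly once and reusing it. Here is a minimal Go sketch of that idea, not rclone's actual code; the `transport` function name is hypothetical:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"sync"
)

var (
	transportOnce   sync.Once
	cachedTransport *http.Transport
)

// transport returns the same *http.Transport no matter how many times it is
// called, mirroring the fix: PUT and HEAD must share one transport so they
// reuse the same HTTP connection and the HEAD sees the freshly PUT object.
func transport() *http.Transport {
	transportOnce.Do(func() {
		t := http.DefaultTransport.(*http.Transport).Clone()
		// Disable HTTP/2 on the cloned transport, as --s3-disable-http2 does:
		// a non-nil empty TLSNextProto map switches off http2 upgrades.
		t.ForceAttemptHTTP2 = false
		t.TLSNextProto = map[string]func(string, *tls.Conn) http.RoundTripper{}
		cachedTransport = t
	})
	return cachedTransport
}

func main() {
	// Both calls return the identical transport instance.
	fmt.Println(transport() == transport())
}
```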
Nick Craig-Wood
da9dd543e4 s3: fix failed to create file system with folder level permissions policy
Before this change, if a folder-level access permissions policy was in
use, with a trailing `/` marking the folders, then rclone would HEAD the
path without a trailing `/` to work out whether it was a file or a folder.
This returned a permission denied error, which rclone returned to the
user.

    Failed to create file system for "s3:bucket/path/": Forbidden: Forbidden
        status code: 403, request id: XXXX, host id:

Prior to this change

53aa03cc44 s3: complete sse-c implementation

rclone assumed that any error when HEAD-ing the object meant it didn't
exist, and this case would not fail.

This change reverts to that behaviour: any error on HEAD makes rclone
assume the object does not exist and that the path refers to a directory.

Fixes #4990
2021-02-24 20:36:09 +00:00
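The restored behaviour can be sketched in a few lines of Go. This is an illustration of the idea, not rclone's actual code; `isFile` and the simulated HEAD function are hypothetical:

```go
package main

import (
	"errors"
	"fmt"
)

var errForbidden = errors.New("Forbidden: status code 403")

// isFile sketches the restored behaviour: rclone HEADs the path without a
// trailing "/", and *any* error (including a 403 from a folder-level
// permissions policy) is taken to mean the object does not exist, so the
// path is treated as a directory instead of failing outright.
func isFile(head func(path string) error, path string) bool {
	return head(path) == nil
}

func main() {
	// Simulated HEAD that a folder-level policy rejects with 403.
	head := func(path string) error { return errForbidden }
	if !isFile(head, "bucket/path") {
		fmt.Println("treating bucket/path/ as a directory")
	}
}
```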
Ivan Andreev
e3cf4f82eb build: replace go 1.16-rc1 by 1.16.x (#5036) 2021-02-24 20:10:51 +00:00
buengese
406e26c7b7 zoho: fix custom client id's 2021-02-23 11:27:19 +00:00
Nick Craig-Wood
f4214882ab cmount: fix mount dropping on macOS by setting --daemon-timeout 10m
Previously rclone set --daemon-timeout to 15m by default. However,
osxfuse seems to ignore that value since it is above its maximum of
10m. This is conjecture, since the source of osxfuse is no longer
available.

Setting the value to 10m seems to resolve the problem.

See: https://forum.rclone.org/t/rclone-mount-frequently-drops-when-using-plex/22352
2021-02-21 13:00:47 +00:00
Nick Craig-Wood
231ab31d2a union: fix mkdir at root with remote:/
Before this fix, if you specified remote:/ then the union backend
would fail to notice that the root directory existed.

This was fixed by stripping the trailing / from the root.

See: https://forum.rclone.org/t/upgraded-from-1-45-to-1-54-now-cant-create-new-directory-within-union-mount/22284/
2021-02-17 12:12:14 +00:00
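The fix is essentially a one-liner: strip the trailing slash from the root before using it. A minimal Go sketch (the `cleanRoot` name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// cleanRoot sketches the fix: strip the trailing "/" from the root so that
// "remote:/" and "remote:" resolve to the same root directory and mkdir at
// the root succeeds.
func cleanRoot(root string) string {
	return strings.TrimSuffix(root, "/")
}

func main() {
	fmt.Println(cleanRoot("remote:/"))
	fmt.Println(cleanRoot("remote:"))
}
```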
Nick Craig-Wood
f76bc86cc8 accounting: fix --bwlimit when up or down is off - fixes #5019
Before this change, the core bandwidth limit was set to the upload or
download value even when the other value was off.

This fix applies a core bandwidth limit only when both values are set.
2021-02-13 12:45:45 +00:00
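The condition in the fix can be sketched as follows. This is an illustration only, not rclone's accounting code; `applyCoreLimit` and the use of 0 for "off" are assumptions of the sketch:

```go
package main

import "fmt"

const off int64 = 0 // 0 meaning "limit is off" is an assumption of this sketch

// applyCoreLimit sketches the fix: the core (combined) bandwidth limit is
// applied only when both the upload and download limits are set. If either
// is off, only the per-direction limit should apply.
func applyCoreLimit(up, down int64) bool {
	return up != off && down != off
}

func main() {
	fmt.Println(applyCoreLimit(10<<20, off))    // download limit is off
	fmt.Println(applyCoreLimit(10<<20, 5<<20)) // both limits set
}
```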
Nick Craig-Wood
2d11f5672d dropbox: add scopes to oauth request and optionally "members.read"
This change adds the scopes rclone wants during the oauth request.
Previously rclone left these blank to get a default set.

This allows rclone to add the "members.read" scope, which is necessary
for "impersonate" to work, but only when it is in use, as it requires
authorisation from a Team Admin.

See: https://forum.rclone.org/t/dropbox-no-members-read/22223/3
2021-02-13 12:35:45 +00:00
Nick Craig-Wood
cf0563f99e b2: fix failed to create file system with application key limited to a prefix
Before this change, if an application key limited to a prefix was in
use, with a trailing `/` marking the folders, then rclone would HEAD the
path without a trailing `/` to work out whether it was a file or a folder.
This returned a permission denied error, which rclone returned to the
user.

    Failed to create file system for "b2:bucket/path/":
        failed to HEAD for download: Unknown 401  (401 unknown)

With this change, any error on HEAD makes rclone assume the object
does not exist and that the path refers to a directory.

See: https://forum.rclone.org/t/b2-error-on-application-key-limited-to-a-prefix/22159/
2021-02-10 15:27:45 +00:00
Nick Craig-Wood
65f691f4de drive: refer to Shared Drives instead of Team Drives 2021-02-10 15:27:19 +00:00
Nick Craig-Wood
f627d42a51 lsjson: fix unterminated JSON in the presence of errors
See: https://forum.rclone.org/t/rclone-lsjson-invalid-json-produced-no-at-the-end/22046
2021-02-10 15:26:48 +00:00
Nick Craig-Wood
f08e43fb77 b2: automatically raise upload cutoff to avoid spurious error
Before this change, if --b2-chunk-size was raised above 200M then this
error would be produced:

    b2: upload cutoff: 200M is less than chunk size 1G

This change automatically raises --b2-upload-cutoff to the value of
--b2-chunk-size if it is below it, which stops this error being
generated.

Fixes #4475
2021-02-10 15:26:17 +00:00
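The clamp described above is simple to state in code. A minimal Go sketch (the `clampUploadCutoff` name is hypothetical, not rclone's b2 backend code):

```go
package main

import "fmt"

// clampUploadCutoff sketches the fix: if the upload cutoff is below the
// chunk size, raise it to the chunk size instead of returning an error.
func clampUploadCutoff(cutoff, chunkSize int64) int64 {
	if cutoff < chunkSize {
		return chunkSize
	}
	return cutoff
}

func main() {
	const mib = int64(1) << 20
	// 200M cutoff with a 1G chunk size is raised to 1G.
	fmt.Println(clampUploadCutoff(200*mib, 1024*mib) / mib)
}
```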
Nick Craig-Wood
cd7611e7ce s3: add --s3-no-head to reducing costs docs - Fixes #2163 2021-02-10 15:24:46 +00:00
Nick Craig-Wood
42f28f9458 build: update GitHub release tool to use gh and put a link to changelog
Fixes #4994
2021-02-10 15:24:46 +00:00
Alex JOST
92046b457f docs: Changelog: Correct link to digitalis.io 2021-02-10 15:24:46 +00:00
Nick Craig-Wood
098de1cff5 Start v1.54.1-DEV development 2021-02-10 15:24:46 +00:00
43 changed files with 1386 additions and 563 deletions


@@ -87,7 +87,7 @@ jobs:
         - job_name: go1.16
           os: ubuntu-latest
-          go: '1.16.0-rc1'
+          go: '1.16.x'
           quicktest: true
           racequicktest: true

MANUAL.html generated

@@ -17,7 +17,7 @@
 <header id="title-block-header">
 <h1 class="title">rclone(1) User Manual</h1>
 <p class="author">Nick Craig-Wood</p>
-<p class="date">Feb 02, 2021</p>
+<p class="date">Mar 08, 2021</p>
 </header>
 <h1 id="rclone-syncs-your-files-to-cloud-storage">Rclone syncs your files to cloud storage</h1>
 <p><img width="50%" src="https://rclone.org/img/logo_on_light__horizontal_color.svg" alt="rclone logo" style="float:right; padding: 5px;" ></p>
@@ -1471,11 +1471,11 @@ rclone mount remote:path/to/files * --volname \\cloud\remote</code></pre>
 <p>Note that drives created as Administrator are not visible by other accounts (including the account that was elevated as Administrator). So if you start a Windows drive from an Administrative Command Prompt and then try to access the same drive from Explorer (which does not run as Administrator), you will not be able to see the new drive.</p>
 <p>The easiest way around this is to start the drive from a normal command prompt. It is also possible to start a drive from the SYSTEM account (using <a href="https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture">the WinFsp.Launcher infrastructure</a>) which creates drives accessible for everyone on the system or alternatively using <a href="https://nssm.cc/usage">the nssm service manager</a>.</p>
 <h2 id="limitations">Limitations</h2>
-<p>Without the use of <code>--vfs-cache-mode</code> this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without <code>--vfs-cache-mode writes</code> or <code>--vfs-cache-mode full</code>. See the <a href="#file-caching">File Caching</a> section for more info.</p>
+<p>Without the use of <code>--vfs-cache-mode</code> this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without <code>--vfs-cache-mode writes</code> or <code>--vfs-cache-mode full</code>. See the <a href="#vfs-file-caching">VFS File Caching</a> section for more info.</p>
 <p>The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.</p>
 <p>Only supported on Linux, FreeBSD, OS X and Windows at the moment.</p>
 <h2 id="rclone-mount-vs-rclone-synccopy">rclone mount vs rclone sync/copy</h2>
-<p>File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the <a href="#file-caching">file caching</a> for solutions to make mount more reliable.</p>
+<p>File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the <a href="#vfs-file-caching">VFS File Caching</a> for solutions to make mount more reliable.</p>
 <h2 id="attribute-caching">Attribute caching</h2>
 <p>You can use the flag <code>--attr-timeout</code> to set the time the kernel caches the attributes (size, modification time, etc.) for directory entries.</p>
 <p>The default is <code>1s</code> which caches files just long enough to avoid too many callbacks to rclone from the kernel.</p>
@@ -1526,6 +1526,7 @@ rclone mount remote:path/to/files * --volname \\cloud\remote</code></pre>
 <p>The cache has 4 different modes selected by <code>--vfs-cache-mode</code>. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.</p>
 <p>Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back second. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.</p>
 <p>If using <code>--vfs-cache-max-size</code> note that the cache may exceed this size for two reasons. Firstly because it is only checked every <code>--vfs-cache-poll-interval</code>. Secondly because open files cannot be evicted from the cache.</p>
+<p>You <strong>should not</strong> run two copies of rclone using the same VFS cache with the same or overlapping remotes if using <code>--vfs-cache-mode &gt; off</code>. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with <code>--cache-dir</code>. You don't need to worry about this if the remotes in use don't overlap.</p>
 <h3 id="vfs-cache-mode-off">--vfs-cache-mode off</h3>
 <p>In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.</p>
 <p>This will mean some operations are not possible</p>
@@ -1848,6 +1849,7 @@ ffmpeg - | rclone rcat remote:path/to/file</code></pre>
 <p>The cache has 4 different modes selected by <code>--vfs-cache-mode</code>. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.</p>
 <p>Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back second. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.</p>
 <p>If using <code>--vfs-cache-max-size</code> note that the cache may exceed this size for two reasons. Firstly because it is only checked every <code>--vfs-cache-poll-interval</code>. Secondly because open files cannot be evicted from the cache.</p>
+<p>You <strong>should not</strong> run two copies of rclone using the same VFS cache with the same or overlapping remotes if using <code>--vfs-cache-mode &gt; off</code>. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with <code>--cache-dir</code>. You don't need to worry about this if the remotes in use don't overlap.</p>
 <h3 id="vfs-cache-mode-off-1">--vfs-cache-mode off</h3>
 <p>In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.</p>
 <p>This will mean some operations are not possible</p>
@@ -1982,6 +1984,7 @@ ffmpeg - | rclone rcat remote:path/to/file</code></pre>
 <p>The cache has 4 different modes selected by <code>--vfs-cache-mode</code>. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.</p>
 <p>Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back second. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.</p>
 <p>If using <code>--vfs-cache-max-size</code> note that the cache may exceed this size for two reasons. Firstly because it is only checked every <code>--vfs-cache-poll-interval</code>. Secondly because open files cannot be evicted from the cache.</p>
+<p>You <strong>should not</strong> run two copies of rclone using the same VFS cache with the same or overlapping remotes if using <code>--vfs-cache-mode &gt; off</code>. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with <code>--cache-dir</code>. You don't need to worry about this if the remotes in use don't overlap.</p>
 <h3 id="vfs-cache-mode-off-2">--vfs-cache-mode off</h3>
 <p>In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.</p>
 <p>This will mean some operations are not possible</p>
@@ -2246,6 +2249,7 @@ htpasswd -B htpasswd anotherUser</code></pre>
 <p>The cache has 4 different modes selected by <code>--vfs-cache-mode</code>. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.</p>
 <p>Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back second. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.</p>
 <p>If using <code>--vfs-cache-max-size</code> note that the cache may exceed this size for two reasons. Firstly because it is only checked every <code>--vfs-cache-poll-interval</code>. Secondly because open files cannot be evicted from the cache.</p>
+<p>You <strong>should not</strong> run two copies of rclone using the same VFS cache with the same or overlapping remotes if using <code>--vfs-cache-mode &gt; off</code>. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with <code>--cache-dir</code>. You don't need to worry about this if the remotes in use don't overlap.</p>
 <h3 id="vfs-cache-mode-off-3">--vfs-cache-mode off</h3>
 <p>In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.</p>
 <p>This will mean some operations are not possible</p>
@@ -2558,6 +2562,7 @@ htpasswd -B htpasswd anotherUser</code></pre>
 <p>The cache has 4 different modes selected by <code>--vfs-cache-mode</code>. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.</p>
 <p>Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back second. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.</p>
 <p>If using <code>--vfs-cache-max-size</code> note that the cache may exceed this size for two reasons. Firstly because it is only checked every <code>--vfs-cache-poll-interval</code>. Secondly because open files cannot be evicted from the cache.</p>
+<p>You <strong>should not</strong> run two copies of rclone using the same VFS cache with the same or overlapping remotes if using <code>--vfs-cache-mode &gt; off</code>. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with <code>--cache-dir</code>. You don't need to worry about this if the remotes in use don't overlap.</p>
 <h3 id="vfs-cache-mode-off-4">--vfs-cache-mode off</h3>
 <p>In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.</p>
 <p>This will mean some operations are not possible</p>
@@ -2823,6 +2828,7 @@ htpasswd -B htpasswd anotherUser</code></pre>
 <p>The cache has 4 different modes selected by <code>--vfs-cache-mode</code>. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.</p>
 <p>Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back second. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.</p>
 <p>If using <code>--vfs-cache-max-size</code> note that the cache may exceed this size for two reasons. Firstly because it is only checked every <code>--vfs-cache-poll-interval</code>. Secondly because open files cannot be evicted from the cache.</p>
+<p>You <strong>should not</strong> run two copies of rclone using the same VFS cache with the same or overlapping remotes if using <code>--vfs-cache-mode &gt; off</code>. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with <code>--cache-dir</code>. You don't need to worry about this if the remotes in use don't overlap.</p>
 <h3 id="vfs-cache-mode-off-5">--vfs-cache-mode off</h3>
 <p>In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.</p>
 <p>This will mean some operations are not possible</p>
@@ -4085,7 +4091,7 @@ dir1/dir2/dir3/.ignore</code></pre>
 <p>The command <code>rclone ls --exclude-if-present .ignore dir1</code> does not list <code>dir3</code>, <code>file3</code> or <code>.ignore</code>.</p>
 <p><code>--exclude-if-present</code> can only be used once in an rclone command.</p>
 <h2 id="common-pitfalls">Common pitfalls</h2>
-<p>The most frequent filter support issues on the <a href="https://https://forum.rclone.org/">rclone forum</a> are:</p>
+<p>The most frequent filter support issues on the <a href="https://forum.rclone.org/">rclone forum</a> are:</p>
 <ul>
 <li>Not using paths relative to the root of the remote</li>
 <li>Not using <code>/</code> to match from the root of a remote</li>
@@ -4997,6 +5003,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt=&#39;{&quot;Cache
 <ul>
 <li>srcFs - a remote name string e.g. "drive:src" for the source</li>
 <li>dstFs - a remote name string e.g. "drive:dst" for the destination</li>
+<li>createEmptySrcDirs - create empty src directories on destination if set</li>
 </ul>
 <p>See the <a href="https://rclone.org/commands/rclone_copy/">copy command</a> command for more information on the above.</p>
 <p><strong>Authentication is required for this call.</strong></p>
@@ -5005,6 +5012,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt=&#39;{&quot;Cache
 <ul>
 <li>srcFs - a remote name string e.g. "drive:src" for the source</li>
 <li>dstFs - a remote name string e.g. "drive:dst" for the destination</li>
+<li>createEmptySrcDirs - create empty src directories on destination if set</li>
 <li>deleteEmptySrcDirs - delete empty src directories if set</li>
 </ul>
 <p>See the <a href="https://rclone.org/commands/rclone_move/">move command</a> command for more information on the above.</p>
@@ -5014,6 +5022,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt=&#39;{&quot;Cache
 <ul>
 <li>srcFs - a remote name string e.g. "drive:src" for the source</li>
 <li>dstFs - a remote name string e.g. "drive:dst" for the destination</li>
+<li>createEmptySrcDirs - create empty src directories on destination if set</li>
 </ul>
 <p>See the <a href="https://rclone.org/commands/rclone_sync/">sync command</a> command for more information on the above.</p>
 <p><strong>Authentication is required for this call.</strong></p>
@@ -6491,7 +6500,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default &quot;rclone/v1.54.1&quot;)
-v, --verbose count Print lots more stuff (repeat for more)</code></pre>
<h2 id="backend-flags">Backend Flags</h2>
<p>These flags are available for every command. They control the backends and may be set in the config file.</p>
@@ -6616,7 +6625,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--drive-starred-only Only show files that are starred.
--drive-stop-on-download-limit Make download limit errors be fatal
--drive-stop-on-upload-limit Make upload limit errors be fatal
--drive-team-drive string ID of the Shared Drive (Team Drive)
--drive-token string OAuth Access Token as a JSON blob.
--drive-token-url string Token server url.
--drive-trashed-only Only show files that are in the trash.
@@ -6897,8 +6906,13 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--yandex-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-token string OAuth Access Token as a JSON blob.
--yandex-token-url string Token server url.
--zoho-auth-url string Auth server URL.
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-encoding MultiEncoder This sets the encoding for the backend. (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to. You&#39;ll have to use the region your organization is registered in.
--zoho-token string OAuth Access Token as a JSON blob.
--zoho-token-url string Token server url.</code></pre>
<h2 id="fichier">1Fichier</h2>
<p>This is a backend for the <a href="https://1fichier.com">1fichier</a> cloud storage service. Note that a Premium subscription is required to use the API.</p>
<p>Paths are specified as <code>remote:path</code></p>
@@ -7575,6 +7589,10 @@ y/e/d&gt; </code></pre>
<pre><code>rclone copy --min-age 24h --no-traverse /path/to/source s3:bucket</code></pre>
<p>You'd then do a full <code>rclone sync</code> less often.</p>
<p>Note that <code>--fast-list</code> isn't required in the top-up sync.</p>
<h4 id="avoiding-head-requests-after-put">Avoiding HEAD requests after PUT</h4>
<p>By default rclone will HEAD every object it uploads. It does this to check the object got uploaded correctly.</p>
<p>You can disable this with the <a href="#s3-no-head">--s3-no-head</a> option - see there for more details.</p>
<p>Setting this flag increases the chance for undetected upload failures.</p>
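<p>For example, a top-up copy that skips the verification HEAD might look like this sketch, based on the top-up example above:</p>

```shell
# Same top-up copy as in the "reducing costs" section, but with
# --s3-no-head so rclone does not HEAD each object after uploading.
# Fewer transactions, but upload failures may go undetected.
rclone copy --min-age 24h --no-traverse --s3-no-head /path/to/source s3:bucket
```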
<h3 id="hashes">Hashes</h3>
<p>For small objects which weren't uploaded as multipart uploads (objects sized below <code>--s3-upload-cutoff</code> if uploaded with rclone) rclone uses the <code>ETag:</code> header as an MD5 checksum.</p>
<p>However for objects which were uploaded as multipart uploads or with server side encryption (SSE-AWS or SSE-C) the <code>ETag</code> header is no longer the MD5 sum of the data, so rclone adds an additional piece of metadata <code>X-Amz-Meta-Md5chksum</code> which is a base64 encoded MD5 hash (in the same format as is required for <code>Content-MD5</code>).</p>
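<p>The value stored in <code>X-Amz-Meta-Md5chksum</code> can be reproduced with standard tools. This sketch (the file name is illustrative) base64 encodes the raw MD5 digest, which is the same format <code>Content-MD5</code> requires:</p>

```shell
# Base64 encode the binary (not hex) MD5 digest of a file.
printf 'hello' > /tmp/example.txt
openssl md5 -binary /tmp/example.txt | base64
# -> XUFAKrxLKna5cZ2REBfFkg==
```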
@@ -10735,7 +10753,7 @@ chunk_total_size = 10G</code></pre>
<h5 id="certificate-validation">Certificate Validation</h5>
<p>When the Plex server is configured to only accept secure connections, it is possible to use <code>.plex.direct</code> URLs to ensure certificate validation succeeds. These URLs are used by Plex internally to connect to the Plex server securely.</p>
<p>The format for these URLs is the following:</p>
<p><code>https://ip-with-dots-replaced.server-hash.plex.direct:32400/</code></p>
<p>The <code>ip-with-dots-replaced</code> part can be any IPv4 address, where the dots have been replaced with dashes, e.g. <code>127.0.0.1</code> becomes <code>127-0-0-1</code>.</p>
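<p>The dot-to-dash rewrite is purely mechanical, e.g.:</p>

```shell
# Convert an IPv4 address into the dashed form used in .plex.direct URLs.
echo "127.0.0.1" | tr '.' '-'
# -> 127-0-0-1
```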
<p>To get the <code>server-hash</code> part, the easiest way is to visit</p>
<p><code>https://plex.tv/api/resources?includeHttps=1&amp;X-Plex-Token=your-plex-token</code></p>
@@ -11491,7 +11509,7 @@ y/e/d&gt; y</code></pre>
<p>To use <code>crypt</code>, first set up the underlying remote. Follow the <code>rclone config</code> instructions for the specific backend.</p>
<p>Before configuring the crypt remote, check the underlying remote is working. In this example the underlying remote is called <code>remote</code>. We will configure a path <code>path</code> within this remote to contain the encrypted content. Anything inside <code>remote:path</code> will be encrypted and anything outside will not.</p>
<p>Configure <code>crypt</code> using <code>rclone config</code>. In this example the <code>crypt</code> remote is called <code>secret</code>, to differentiate it from the underlying <code>remote</code>.</p>
<p>When you are done you can use the crypt remote named <code>secret</code> just as you would with any other remote, e.g. <code>rclone copy D:\docs secret:\docs</code>, and rclone will encrypt and decrypt as needed on the fly. If you access the wrapped remote <code>remote:path</code> directly you will bypass the encryption, and anything you read will be in encrypted form, and anything you write will be unencrypted. To avoid issues it is best to configure a dedicated path for encrypted content, and access it exclusively through a crypt remote.</p>
<pre><code>No remotes found - make a new one
n) New remote
s) Set configuration password
@@ -12125,6 +12143,9 @@ y/e/d&gt; y</code></pre>
</ul>
<h4 id="dropbox-impersonate">--dropbox-impersonate</h4>
<p>Impersonate this user when using a business account.</p>
<p>Note that if you want to use impersonate, you should make sure this flag is set when running "rclone config" as this will cause rclone to request the "members.read" scope which it won't normally request. This is needed to look up a member's email address and convert it into the internal ID that Dropbox uses in the API.</p>
<p>Using the "members.read" scope will require a Dropbox Team Admin to approve during the OAuth flow.</p>
<p>You will have to use your own App (setting your own client_id and client_secret) to use this option as currently rclone's default set of permissions doesn't include "members.read". This can be added once v1.55 or later is in use everywhere.</p>
<ul>
<li>Config: impersonate</li>
<li>Env Var: RCLONE_DROPBOX_IMPERSONATE</li>
@@ -12428,7 +12449,7 @@ y/e/d&gt; y</code></pre>
<h3 id="example-without-a-config-file">Example without a config file</h3>
<pre><code>rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=`rclone obscure dummy`</code></pre>
<h3 id="implicit-tls">Implicit TLS</h3>
<p>Rclone FTP supports implicit FTP over TLS servers (FTPS). This has to be enabled in the FTP backend config for the remote, or with <a href="#ftp-tls"><code>--ftp-tls</code></a>. The default FTPS port is <code>990</code>, not <code>21</code>, and can be set with <a href="#ftp-port"><code>--ftp-port</code></a>.</p>
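<p>A hypothetical connection to an implicit TLS server, with the host and credentials as placeholders, might look like:</p>

```shell
# Sketch: list an implicit-TLS FTPS server without a config file.
# ftps.example.com, the user and the password are placeholders.
rclone lsf :ftp: --ftp-host=ftps.example.com --ftp-user=me \
    --ftp-pass=$(rclone obscure mypassword) --ftp-tls --ftp-port=990
```
<p>Passing <code>--ftp-port=990</code> is shown for clarity; it is already the default once <code>--ftp-tls</code> is set.</p>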
<h3 id="standard-options-13">Standard Options</h3>
<p>Here are the standard options specific to ftp (FTP Connection).</p>
<h4 id="ftp-host">--ftp-host</h4>
@@ -13132,7 +13153,7 @@ If your browser doesn&#39;t open automatically go to the following link: http://
Log in and authorize rclone for access
Waiting for code...
Got code
Configure this as a Shared Drive (Team Drive)?
y) Yes
n) No
y/n&gt; n
@@ -13233,15 +13254,15 @@ y/n&gt; # Auto config, y
</ul></li>
</ul>
<p>Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using <code>--drive-impersonate</code>, do this instead:</p>
<ul>
<li>in the gdrive web interface, share your root folder with the user/email of the new Service Account you created/selected at step #1</li>
<li>use rclone without specifying the <code>--drive-impersonate</code> option, like this: <code>rclone -v lsf gdrive:backup</code></li>
</ul>
<h3 id="shared-drives-team-drives">Shared drives (team drives)</h3>
<p>If you want to configure the remote to point to a Google Shared Drive (previously known as Team Drives) then answer <code>y</code> to the question <code>Configure this as a Shared Drive (Team Drive)?</code>.</p>
<p>This will fetch the list of Shared Drives from Google and allow you to configure which one you want to use. You can also type in a Shared Drive ID if you prefer.</p>
<p>For example:</p>
<pre><code>Configure this as a Shared Drive (Team Drive)?
y) Yes
n) No
y/n&gt; y
Fetching Shared Drive list...
Choose a number from below, or type in your own value
1 / Rclone Test
\ &quot;xxxxxxxxxxxxxxxxxxxx&quot;
@@ -13249,7 +13270,7 @@ Choose a number from below, or type in your own value
\ &quot;yyyyyyyyyyyyyyyyyyyy&quot;
3 / Rclone Test 3
\ &quot;zzzzzzzzzzzzzzzzzzzz&quot;
Enter a Shared Drive ID&gt; 1
--------------------
[remote]
client_id =
@@ -13659,7 +13680,7 @@ trashed=false and &#39;c&#39; in parents</code></pre>
<li>Default: ""</li>
</ul>
<h4 id="drive-team-drive">--drive-team-drive</h4>
<p>ID of the Shared Drive (Team Drive)</p>
<ul>
<li>Config: team_drive</li>
<li>Env Var: RCLONE_DRIVE_TEAM_DRIVE</li>
@@ -13972,9 +13993,9 @@ rclone backend shortcut drive: source_item -o target=drive2: destination_shortcu
<li>"target": optional target remote for the shortcut destination</li>
</ul>
<h4 id="drives">drives</h4>
<p>List the Shared Drives available to this account</p>
<pre><code>rclone backend drives remote: [options] [&lt;arguments&gt;+]</code></pre>
<p>This command lists the Shared Drives (Team Drives) available to this account.</p>
<p>Usage:</p>
<pre><code>rclone backend drives drive:</code></pre>
<p>This will return a JSON list of objects like this</p>
@@ -18173,7 +18194,7 @@ known_hosts_file = ~/.ssh/known_hosts</code></pre>
<p>SFTP also supports <code>about</code> if the same login has shell access and <code>df</code> is in the remote's PATH. <code>about</code> will return the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote. <code>about</code> will fail if it does not have shell access or if <code>df</code> is not in the remote's PATH.</p>
<p>Note that on some SFTP servers (e.g. Synology) the paths are different for SSH and SFTP, so the hashes can't be calculated properly. For them using <code>disable_hashcheck</code> is a good idea.</p>
<p>The only ssh agent supported under Windows is Putty's pageant.</p>
<p>The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the <code>use_insecure_cipher</code> setting in the configuration file to <code>true</code>. Further details on the insecurity of this cipher can be found <a href="http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf">in this paper</a>.</p>
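<p>In the config file this is a per-remote setting; a sketch of such an entry (the remote name and host are illustrative):</p>

```ini
# Re-enable the insecure aes128-cbc cipher for one remote only.
[legacy-sftp]
type = sftp
host = sftp.example.com
use_insecure_cipher = true
```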
<p>SFTP isn't supported under plan9 until <a href="https://github.com/pkg/sftp/issues/156">this issue</a> is fixed.</p>
<p>Note that since SFTP isn't HTTP based the following flags don't work with it: <code>--dump-headers</code>, <code>--dump-bodies</code>, <code>--dump-auth</code></p>
<p>Note that <code>--timeout</code> isn't supported (but <code>--contimeout</code> is).</p>
@@ -19267,6 +19288,22 @@ y/e/d&gt; </code></pre>
<p>Only control characters and invalid UTF-8 are replaced. In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload.</p>
<h3 id="standard-options-37">Standard Options</h3>
<p>Here are the standard options specific to zoho (Zoho).</p>
<h4 id="zoho-client-id">--zoho-client-id</h4>
<p>OAuth Client Id. Leave blank normally.</p>
<ul>
<li>Config: client_id</li>
<li>Env Var: RCLONE_ZOHO_CLIENT_ID</li>
<li>Type: string</li>
<li>Default: ""</li>
</ul>
<h4 id="zoho-client-secret">--zoho-client-secret</h4>
<p>OAuth Client Secret. Leave blank normally.</p>
<ul>
<li>Config: client_secret</li>
<li>Env Var: RCLONE_ZOHO_CLIENT_SECRET</li>
<li>Type: string</li>
<li>Default: ""</li>
</ul>
<h4 id="zoho-region">--zoho-region</h4>
<p>Zoho region to connect to. You'll have to use the region your organization is registered in.</p>
<ul>
@@ -19296,6 +19333,30 @@ y/e/d&gt; </code></pre>
</ul>
<h3 id="advanced-options-36">Advanced Options</h3>
<p>Here are the advanced options specific to zoho (Zoho).</p>
<h4 id="zoho-token">--zoho-token</h4>
<p>OAuth Access Token as a JSON blob.</p>
<ul>
<li>Config: token</li>
<li>Env Var: RCLONE_ZOHO_TOKEN</li>
<li>Type: string</li>
<li>Default: ""</li>
</ul>
<h4 id="zoho-auth-url">--zoho-auth-url</h4>
<p>Auth server URL. Leave blank to use the provider defaults.</p>
<ul>
<li>Config: auth_url</li>
<li>Env Var: RCLONE_ZOHO_AUTH_URL</li>
<li>Type: string</li>
<li>Default: ""</li>
</ul>
<h4 id="zoho-token-url">--zoho-token-url</h4>
<p>Token server url. Leave blank to use the provider defaults.</p>
<ul>
<li>Config: token_url</li>
<li>Env Var: RCLONE_ZOHO_TOKEN_URL</li>
<li>Type: string</li>
<li>Default: ""</li>
</ul>
<h4 id="zoho-encoding">--zoho-encoding</h4>
<p>This sets the encoding for the backend.</p>
<p>See: the <a href="https://rclone.org/overview/#encoding">encoding section in the overview</a> for more info.</p>
@@ -19824,12 +19885,68 @@ $ tree /tmp/b
<li>"error": return an error based on option value</li>
</ul>
<h1 id="changelog">Changelog</h1>
<h2 id="v1.54.1---2021-03-08">v1.54.1 - 2021-03-08</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.54.0...v1.54.1">See commits</a></p>
<ul>
<li>Bug Fixes
<ul>
<li>accounting: Fix --bwlimit when up or down is off (Nick Craig-Wood)</li>
<li>docs
<ul>
<li>Fix nesting of brackets and backticks in ftp docs (edwardxml)</li>
<li>Fix broken link in sftp page (edwardxml)</li>
<li>Fix typo in crypt.md (Romeo Kienzler)</li>
<li>Changelog: Correct link to digitalis.io (Alex JOST)</li>
<li>Replace #file-caching with #vfs-file-caching (Miron Veryanskiy)</li>
<li>Convert bogus example link to code (edwardxml)</li>
<li>Remove dead link from rc.md (edwardxml)</li>
</ul></li>
<li>rc: Sync,copy,move: document createEmptySrcDirs parameter (Nick Craig-Wood)</li>
<li>lsjson: Fix unterminated JSON in the presence of errors (Nick Craig-Wood)</li>
</ul></li>
<li>Mount
<ul>
<li>Fix mount dropping on macOS by setting --daemon-timeout 10m (Nick Craig-Wood)</li>
</ul></li>
<li>VFS
<ul>
<li>Document simultaneous usage with the same cache shouldn't be used (Nick Craig-Wood)</li>
</ul></li>
<li>B2
<ul>
<li>Automatically raise upload cutoff to avoid spurious error (Nick Craig-Wood)</li>
<li>Fix failed to create file system with application key limited to a prefix (Nick Craig-Wood)</li>
</ul></li>
<li>Drive
<ul>
<li>Refer to Shared Drives instead of Team Drives (Nick Craig-Wood)</li>
</ul></li>
<li>Dropbox
<ul>
<li>Add scopes to oauth request and optionally "members.read" (Nick Craig-Wood)</li>
</ul></li>
<li>S3
<ul>
<li>Fix failed to create file system with folder level permissions policy (Nick Craig-Wood)</li>
<li>Fix Wasabi HEAD requests returning stale data by using only 1 transport (Nick Craig-Wood)</li>
<li>Fix shared_credentials_file auth (Dmitry Chepurovskiy)</li>
<li>Add --s3-no-head to reducing costs docs (Nick Craig-Wood)</li>
</ul></li>
<li>Union
<ul>
<li>Fix mkdir at root with remote:/ (Nick Craig-Wood)</li>
</ul></li>
<li>Zoho
<ul>
<li>Fix custom client id's (buengese)</li>
</ul></li>
</ul>
<h2 id="v1.54.0---2021-02-02">v1.54.0 - 2021-02-02</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.53.0...v1.54.0">See commits</a></p>
<ul>
<li>New backends
<ul>
<li>Compression remote (experimental) (buengese)</li>
<li>Enterprise File Fabric (Nick Craig-Wood)
<ul>
<li>This work was sponsored by <a href="https://storagemadeeasy.com/">Storage Made Easy</a></li>
@@ -19842,8 +19959,8 @@ $ tree /tmp/b
<li>Deglobalise the config (Nick Craig-Wood)
<ul>
<li>Global config now read from the context</li>
<li>This will enable passing of global config via the rc</li>
<li>This work was sponsored by <a href="https://digitalis.io/">Digitalis</a></li>
</ul></li>
<li>Add <code>--bwlimit</code> for upload and download (Nick Craig-Wood)
<ul>
@@ -19851,48 +19968,38 @@ $ tree /tmp/b
</ul></li>
<li>Enhance systemd integration (Hekmon)
<ul>
<li>log level identification, manual activation with flag, automatic systemd launch detection</li>
<li>Don't compile systemd log integration for non unix systems (Benjamin Gustin)</li>
</ul></li>
<li>Add a <code>--download</code> flag to md5sum/sha1sum/hashsum to force rclone to download and hash files locally (lostheli)</li>
<li>Add <code>--progress-terminal-title</code> to print ETA to terminal title (LaSombra)</li>
<li>Make backend env vars show in help as the defaults for backend flags (Nick Craig-Wood)</li>
<li>build
<ul>
<li>Raise minimum go version to go1.12 (Nick Craig-Wood)</li>
</ul></li>
<li>dedupe
<ul>
<li>Add <code>--by-hash</code> to dedupe on content hash not file name (Nick Craig-Wood)</li>
<li>Add <code>--dedupe-mode list</code> to just list dupes, changing nothing (Nick Craig-Wood)</li>
<li>Add warning if used on a remote which can't have duplicate names (Nick Craig-Wood)</li>
</ul></li>
<li>flags: Improve error message when reading environment vars (Nick Craig-Wood)</li>
<li>fs
<ul>
<li>Add Shutdown optional method for backends (Nick Craig-Wood)</li>
<li>When using <code>--files-from</code> check files concurrently (zhucan)</li>
<li>Accumulate stats when using <code>--dry-run</code> (Ingo Weiss)</li>
<li>Always show stats when using <code>--dry-run</code> or <code>--interactive</code> (Nick Craig-Wood)</li>
<li>Add support for flag <code>--no-console</code> on windows to hide the console window (albertony)</li>
</ul></li>
<li>genautocomplete: Add support to output to stdout (Ingo)</li>
<li>ncdu
<ul>
<li>Highlight read errors instead of aborting (Claudio Bantaloukas)</li>
<li>Add sort by average size in directory (Adam Plánský)</li>
<li>Add toggle option for average size in directory - key 'a' (Adam Plánský)</li>
<li>Add empty folder flag into ncdu browser (Adam Plánský)</li>
<li>Add <code>!</code> (error) and <code>.</code> (unreadable) file flags to go with <code>e</code> (empty) (Nick Craig-Wood)</li>
</ul></li>
<li>obscure: Make <code>rclone obscure -</code> ignore newline at end of line (Nick Craig-Wood)</li>
<li>operations
@@ -19919,36 +20026,32 @@ $ tree /tmp/b
</ul></li>
<li>Bug Fixes
<ul>
<li>fs
<ul>
<li>Fix nil pointer on copy &amp; move operations directly to remote (Anagh Kumar Baranwal)</li>
<li>Fix parsing of .. when joining remotes (Nick Craig-Wood)</li>
</ul></li>
<li>log: Fix enabling systemd logging when using <code>--log-file</code> (Nick Craig-Wood)</li>
<li>check
<ul>
<li>Make the error count match up in the log message (Nick Craig-Wood)</li>
</ul></li>
<li>move: Fix data loss when source and destination are the same object (Nick Craig-Wood)</li>
<li>operations <li>operations
<ul> <ul>
<li>Fix --cutof-mode hard not cutting off immediately (Nick Craig-Wood)</li> <li>Fix <code>--cutof-mode</code> hard not cutting off immediately (Nick Craig-Wood)</li>
<li>Fix --immutable error message (Nick Craig-Wood)</li> <li>Fix <code>--immutable</code> error message (Nick Craig-Wood)</li>
</ul></li> </ul></li>
<li>sync <li>sync
<ul> <ul>
<li>Fix --cutoff-mode soft &amp; cautious so it doesn't end the transfer early (Nick Craig-Wood)</li> <li>Fix <code>--cutoff-mode</code> soft &amp; cautious so it doesn't end the transfer early (Nick Craig-Wood)</li>
<li>Fix --immutable errors retrying many times (Nick Craig-Wood)</li> <li>Fix <code>--immutable</code> errors retrying many times (Nick Craig-Wood)</li>
</ul></li> </ul></li>
</ul></li> </ul></li>
<li>Docs
<ul>
<li>Many fixes and a rewrite of the filtering docs (edwardxml)</li>
<li>Many spelling and grammar fixes (Josh Soref)</li>
<li>Doc fixes for commands delete, purge, rmdir, rmdirs and mount (albertony)</li>
<li>And thanks to these people for many doc fixes too numerous to list
<ul>
<li>Update systemd status with cache stats (Hekmon)</li>
<li>Disable bazil/fuse based mount on macOS (Nick Craig-Wood)
<ul>
<li>Make <code>rclone mount</code> actually run <code>rclone cmount</code> under macOS (Nick Craig-Wood)</li>
</ul></li>
<li>Implement mknod to make NFS file creation work (Nick Craig-Wood)</li>
<li>Make sure we don't call umount more than once (Nick Craig-Wood)</li>
<li>More user friendly mounting as network drive on windows (albertony)</li>
<li>Detect if uid or gid are set in same option string: -o uid=123,gid=456 (albertony)</li>
<li>Don't attempt to unmount if fs has been destroyed already (Nick Craig-Wood)</li>
</ul></li>
<ul>
<li>Fix virtual entries causing deleted files to still appear (Nick Craig-Wood)</li>
<li>Fix "file already exists" error for stale cache files (Nick Craig-Wood)</li>
<li>Fix file leaks with <code>--vfs-cache-mode full</code> and <code>--buffer-size 0</code> (Nick Craig-Wood)</li>
<li>Fix invalid cache path on windows when using :backend: as remote (albertony)</li>
</ul></li>
<li>Local
<ul>
<li>Continue listing files/folders when a circular symlink is detected (Manish Gupta)</li>
<li>New flag <code>--local-zero-size-links</code> to fix sync on some virtual filesystems (Riccardo Iaconelli)</li>
</ul></li>
<li>Azure Blob
<ul>
<li>Add support for service principals (James Lim)</li>
<li>Add support for managed identities (Brad Ackerman)</li>
<li>Add examples for access tier (Bob Pusateri)</li>
<li>Utilize the streaming capabilities from the SDK for multipart uploads (Denis Neuling)</li>
<li>Fix setting of mime types (Nick Craig-Wood)</li>
<li>Fix crash when listing outside a SAS URL's root (Nick Craig-Wood)</li>
<li>Delete archive tier blobs before update if <code>--azureblob-archive-tier-delete</code> (Nick Craig-Wood)</li>
<li>Fix crash on startup (Nick Craig-Wood)</li>
<li>Fix memory usage by upgrading the SDK to v0.13.0 and implementing a TransferManager (Nick Craig-Wood)</li>
<li>Require go1.14+ to compile due to SDK changes (Nick Craig-Wood)</li>
</ul></li>
<li>B2
<ul>
<li>Make NewObject use less expensive API calls (Nick Craig-Wood)
<ul>
<li>This will improve <code>--files-from</code> and <code>restic serve</code> in particular</li>
</ul></li>
<li>Fixed crash on an empty file name (lluuaapp)</li>
</ul></li>
<li>Box
<ul>
<li>Chunker
<ul>
<li>Skip long local hashing, hash in-transit (fixes) (Ivan Andreev)</li>
<li>Set Features ReadMimeType to false as Object.MimeType not supported (Nick Craig-Wood)</li>
<li>Fix case-insensitive NewObject, test metadata detection (Ivan Andreev)</li>
</ul></li>
<li>Drive
<ul>
<li>Implement <code>rclone backend copyid</code> command for copying files by ID (Nick Craig-Wood)</li>
<li>Added flag <code>--drive-stop-on-download-limit</code> to stop transfers when the download limit is exceeded (Anagh Kumar Baranwal)</li>
<li>Implement CleanUp workaround for team drives (buengese)</li>
<li>Allow shortcut resolution and creation to be retried (Nick Craig-Wood)</li>
<li>Dropbox
<ul>
<li>Add support for viewing shared files and folders (buengese)</li>
<li>Enable short lived access tokens (Nick Craig-Wood)</li>
<li>Implement IDer on Objects so <code>rclone lsf</code> etc can read the IDs (buengese)</li>
<li>Set Features ReadMimeType to false as Object.MimeType not supported (Nick Craig-Wood)</li>
<li>Make malformed_path errors from too long files not retriable (Nick Craig-Wood)</li>
<li>Test file name length before upload to fix upload loop (Nick Craig-Wood)</li>
</ul></li>
<li>Fichier
<ul>
<li>Set Features ReadMimeType to true as Object.MimeType is supported (Nick Craig-Wood)</li>
</ul></li>
<li>FTP
<ul>
<li>Add <code>--ftp-disable-msld</code> option to ignore MLSD for really old servers (Nick Craig-Wood)</li>
<li>Make <code>--tpslimit</code> apply (Nick Craig-Wood)</li>
</ul></li>
<li>Google Cloud Storage
<ul>
<li>Storage class object header support (Laurens Janssen)</li>
<li>Fix anonymous client to use rclone's HTTP client (Nick Craig-Wood)</li>
<li>Fix <code>Entry doesn't belong in directory "" (same as directory) - ignoring</code> (Nick Craig-Wood)</li>
</ul></li>
<li>Googlephotos
<ul>
<li>New flag <code>--gphotos-include-archived</code> to show archived photos as well (Nicolas Rueff)</li>
</ul></li>
<li>Jottacloud
<ul>
<li>Don't erroneously report support for writing mime types (buengese)</li>
<li>Add support for Telia Cloud (Patrik Nordlén)</li>
</ul></li>
<li>Mailru
<ul>
<li>Fix uploads after recent changes on server (Ivan Andreev)</li>
<li>Fix range requests after June 2020 changes on server (Ivan Andreev)</li>
<li>Fix invalid timestamp on corrupted files (fixes) (Ivan Andreev)</li>
<li>Remove deprecated protocol quirks (Ivan Andreev)</li>
<li>Accept special folders eg camera-upload (Ivan Andreev)</li>
<li>Avoid prehashing of large local files (Ivan Andreev)</li>
</ul></li>
<li>Memory
<ul>
</ul></li>
<li>Onedrive
<ul>
<li>Add support for China region operated by 21vianet and other regional suppliers (NyaMisty)</li>
<li>Warn on gateway timeout errors (Nick Craig-Wood)</li>
<li>Fall back to normal copy if server-side copy unavailable (Alex Chen)</li>
<li>Fix server-side copy completely disabled on OneDrive for Business (Cnly)</li>
<li>(business only) workaround to replace existing file on server-side copy (Alex Chen)</li>
<li>Enhance link creation with expiry, scope, type and password (Nick Craig-Wood)</li>
<li>Remove % and # from the set of encoded characters (Alex Chen)</li>
<li>Support addressing site by server-relative URL (kice)</li>
</ul></li>
<li>Opendrive
<ul>
<li>S3
<ul>
<li>Added <code>--s3-disable-http2</code> to disable http/2 (Anagh Kumar Baranwal)</li>
<li>Complete SSE-C implementation (Nick Craig-Wood)
<ul>
<li>Fix hashes on small files with AWS:KMS and SSE-C (Nick Craig-Wood)</li>
<li>Add MD5 metadata to objects uploaded with SSE-AWS/SSE-C (Nick Craig-Wood)</li>
</ul></li>
<li>Add <code>--s3-no-head</code> parameter to minimise transactions on upload (Nick Craig-Wood)</li>
<li>Update docs with a Reducing Costs section (Nick Craig-Wood)</li>
<li>Added error handling for error code 429 indicating too many requests (Anagh Kumar Baranwal)</li>
<li>Add requester pays option (kelv)</li>
<li>Fix copy multipart with v2 auth failing with 'SignatureDoesNotMatch' (Louis Koo)</li>
</ul></li>
<li>SFTP
<ul>
<li>Remember entered password in AskPass mode (Stephen Harris)</li>
<li>Implement Shutdown method (Nick Craig-Wood)</li>
<li>Implement keyboard interactive authentication (Nick Craig-Wood)</li>
<li>Make <code>--tpslimit</code> apply (Nick Craig-Wood)</li>
<li>Implement <code>--sftp-use-fstat</code> for unusual SFTP servers (Nick Craig-Wood)</li>
</ul></li>
<li>Sugarsync
<ul>
<li>Swift
<ul>
<li>Fix deletion of parts of Static Large Object (SLO) (Nguyễn Hữu Luân)</li>
<li>Ensure partially uploaded large files are uploaded unless <code>--swift-leave-parts-on-error</code> (Nguyễn Hữu Luân)</li>
</ul></li>
<li>Tardigrade
<ul>
</ul></li>
<li>Yandex
<ul>
<li>Set Features WriteMimeType to false as Yandex ignores mime types (Nick Craig-Wood)</li>
</ul></li>
</ul>
<h2 id="v1.53.4---2021-01-20">v1.53.4 - 2021-01-20</h2>

MANUAL.md generated

% rclone(1) User Manual
% Nick Craig-Wood
% Mar 08, 2021

# Rclone syncs your files to cloud storage
sequentially, it can only seek when reading. This means that many
applications won't work with their files on an rclone mount without
`--vfs-cache-mode writes` or `--vfs-cache-mode full`.
See the [VFS File Caching](#vfs-file-caching) section for more info.

The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2,
Hubic) do not support the concept of empty directories, so empty
systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
uploads. Look at the [VFS File Caching](#vfs-file-caching)
for solutions to make mount more reliable.

## Attribute caching
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.

You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
This can potentially cause data corruption if you do. You can work
around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.
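For example, two rclone instances whose remotes overlap can each be given a private cache hierarchy; the remote name and paths below are illustrative:

```shell
# Each instance gets its own cache via --cache-dir, so the two
# VFS caches cannot stomp on each other's cached files.
rclone mount --vfs-cache-mode full --cache-dir /tmp/rclone-cache-a remote: /mnt/a &
rclone mount --vfs-cache-mode full --cache-dir /tmp/rclone-cache-b remote:docs /mnt/b &
```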
### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write
## Common pitfalls

The most frequent filter support issues on
the [rclone forum](https://forum.rclone.org/) are:

* Not using paths relative to the root of the remote
* Not using `/` to match from the root of a remote
- srcFs - a remote name string e.g. "drive:src" for the source
- dstFs - a remote name string e.g. "drive:dst" for the destination
- createEmptySrcDirs - create empty src directories on destination if set

See the [copy command](https://rclone.org/commands/rclone_copy/) for more information on the above.
- srcFs - a remote name string e.g. "drive:src" for the source
- dstFs - a remote name string e.g. "drive:dst" for the destination
- createEmptySrcDirs - create empty src directories on destination if set
- deleteEmptySrcDirs - delete empty src directories if set
- srcFs - a remote name string e.g. "drive:src" for the source
- dstFs - a remote name string e.g. "drive:dst" for the destination
- createEmptySrcDirs - create empty src directories on destination if set

See the [sync command](https://rclone.org/commands/rclone_sync/) for more information on the above.
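As a sketch, these parameters map directly onto an `rclone rc` call against a running `rclone rcd` instance (the remote names are placeholders):

```shell
# Sync drive:src to drive:dst via the remote control API,
# creating empty source directories on the destination too.
rclone rc sync/sync srcFs=drive:src dstFs=drive:dst createEmptySrcDirs=true
```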
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.54.1")
-v, --verbose count Print lots more stuff (repeat for more)
```
--drive-starred-only Only show files that are starred.
--drive-stop-on-download-limit Make download limit errors be fatal
--drive-stop-on-upload-limit Make upload limit errors be fatal
--drive-team-drive string ID of the Shared Drive (Team Drive)
--drive-token string OAuth Access Token as a JSON blob.
--drive-token-url string Token server url.
--drive-trashed-only Only show files that are in the trash.
--yandex-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-token string OAuth Access Token as a JSON blob.
--yandex-token-url string Token server url.
--zoho-auth-url string Auth server URL.
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-encoding MultiEncoder This sets the encoding for the backend. (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to. You'll have to use the region your organization is registered in.
--zoho-token string OAuth Access Token as a JSON blob.
--zoho-token-url string Token server url.
```

1Fichier
Note that `--fast-list` isn't required in the top-up sync.

#### Avoiding HEAD requests after PUT

By default rclone will HEAD every object it uploads. It does this to
check the object got uploaded correctly.

You can disable this with the [--s3-no-head](#s3-no-head) option - see
there for more details.

Setting this flag increases the chance for undetected upload failures.
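For example, a bulk upload that trades the per-object integrity check for fewer transactions (the remote and bucket names are placeholders):

```shell
# Skip the post-upload HEAD request on every object.
rclone copy --s3-no-head /path/to/files s3remote:bucket/prefix
```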
### Hashes ###

For small objects which weren't uploaded as multipart uploads (objects
The format for these URLs is the following:

`https://ip-with-dots-replaced.server-hash.plex.direct:32400/`

The `ip-with-dots-replaced` part can be any IPv4 address, where the dots
have been replaced with dashes, e.g. `127.0.0.1` becomes `127-0-0-1`.
and rclone will encrypt and decrypt as needed on the fly.

If you access the wrapped remote `remote:path` directly you will bypass
the encryption, and anything you read will be in encrypted form, and
anything you write will be unencrypted. To avoid issues it is best to
configure a dedicated path for encrypted content, and access it
exclusively through a crypt remote.
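A sketch of the difference, assuming a crypt remote named `secret:` wrapping `remote:path`:

```shell
rclone copy ~/docs secret:docs   # encrypted transparently on upload
rclone ls remote:path            # bypasses crypt: encrypted names and content
```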
Impersonate this user when using a business account.
Note that if you want to use impersonate, you should make sure this
flag is set when running "rclone config" as this will cause rclone to
request the "members.read" scope which it won't normally request. This is
needed to look up a member's email address into the internal ID that
dropbox uses in the API.

Using the "members.read" scope will require a Dropbox Team Admin
to approve during the OAuth flow.

You will have to use your own App (setting your own client_id and
client_secret) to use this option as currently rclone's default set of
permissions doesn't include "members.read". This can be added once
v1.55 or later is in use everywhere.
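A sketch of how the resulting remote might look in `rclone.conf`, with placeholder credentials:

```
[dropbox]
type = dropbox
client_id = YOUR_APP_KEY
client_secret = YOUR_APP_SECRET
impersonate = user@example.com
```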
- Config: impersonate
- Env Var: RCLONE_DROPBOX_IMPERSONATE
- Type: string

Rclone FTP supports implicit FTP over TLS servers (FTPS). This has to
be enabled in the FTP backend config for the remote, or with
[`--ftp-tls`](#ftp-tls). The default FTPS port is `990`, not `21` and
can be set with [`--ftp-port`](#ftp-port).
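For example, listing the top level of an implicit-TLS server (the remote name `myftps:` is illustrative):

```shell
# --ftp-tls selects implicit FTPS; --ftp-port 990 is the default
# FTPS port, shown here explicitly.
rclone lsd --ftp-tls --ftp-port 990 myftps:
```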
### Standard Options
Log in and authorize rclone for access
Waiting for code...
Got code
Configure this as a Shared Drive (Team Drive)?
y) Yes
n) No
y/n> n
@ -19249,23 +19324,24 @@ Note: in case you configured a specific root folder on gdrive and rclone is unab
`rclone -v foo@example.com lsf gdrive:backup` `rclone -v foo@example.com lsf gdrive:backup`
### Shared drives (team drives) ###
If you want to configure the remote to point to a Google Shared Drive
(previously known as Team Drives) then answer `y` to the question
`Configure this as a Shared Drive (Team Drive)?`.
This will fetch the list of Shared Drives from google and allow you to
configure which one you want to use. You can also type in a Shared
Drive ID if you prefer.
For example:
```
Configure this as a Shared Drive (Team Drive)?
y) Yes
n) No
y/n> y
Fetching Shared Drive list...
Choose a number from below, or type in your own value
 1 / Rclone Test
   \ "xxxxxxxxxxxxxxxxxxxx"
@@ -19273,7 +19349,7 @@ Choose a number from below, or type in your own value
   \ "yyyyyyyyyyyyyyyyyyyy"
 3 / Rclone Test 3
   \ "zzzzzzzzzzzzzzzzzzzz"
Enter a Shared Drive ID> 1
--------------------
[remote]
client_id =
@@ -19644,7 +19720,7 @@ Needed only if you want use SA instead of interactive login.
#### --drive-team-drive
ID of the Shared Drive (Team Drive)
- Config: team_drive
- Env Var: RCLONE_DRIVE_TEAM_DRIVE
@@ -20107,11 +20183,11 @@ Options:
#### drives
List the Shared Drives available to this account
    rclone backend drives remote: [options] [<arguments>+]
This command lists the Shared Drives (Team Drives) available to this
account.
Usage:
@@ -25747,8 +25823,8 @@ The Go SSH library disables the use of the aes128-cbc cipher by
default, due to security concerns. This can be re-enabled on a
per-connection basis by setting the `use_insecure_cipher` setting in
the configuration file to `true`. Further details on the insecurity of
this cipher can be found
[in this paper](http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).
SFTP isn't supported under plan9 until [this
issue](https://github.com/pkg/sftp/issues/156) is fixed.
@@ -27199,6 +27275,26 @@ from filenames during upload.
Here are the standard options specific to zoho (Zoho).
#### --zoho-client-id
OAuth Client Id
Leave blank normally.
- Config: client_id
- Env Var: RCLONE_ZOHO_CLIENT_ID
- Type: string
- Default: ""
#### --zoho-client-secret
OAuth Client Secret
Leave blank normally.
- Config: client_secret
- Env Var: RCLONE_ZOHO_CLIENT_SECRET
- Type: string
- Default: ""
#### --zoho-region
Zoho region to connect to. You'll have to use the region your organization is registered in.
@@ -27221,6 +27317,35 @@ Zoho region to connect to. You'll have to use the region you organization is registered in.
Here are the advanced options specific to zoho (Zoho).
#### --zoho-token
OAuth Access Token as a JSON blob.
- Config: token
- Env Var: RCLONE_ZOHO_TOKEN
- Type: string
- Default: ""
#### --zoho-auth-url
Auth server URL.
Leave blank to use the provider defaults.
- Config: auth_url
- Env Var: RCLONE_ZOHO_AUTH_URL
- Type: string
- Default: ""
#### --zoho-token-url
Token server url.
Leave blank to use the provider defaults.
- Config: token_url
- Env Var: RCLONE_ZOHO_TOKEN_URL
- Type: string
- Default: ""
#### --zoho-encoding
This sets the encoding for the backend.
@@ -27752,12 +27877,49 @@ Options:
# Changelog
## v1.54.1 - 2021-03-08
[See commits](https://github.com/rclone/rclone/compare/v1.54.0...v1.54.1)
* Bug Fixes
* accounting: Fix --bwlimit when up or down is off (Nick Craig-Wood)
* docs
* Fix nesting of brackets and backticks in ftp docs (edwardxml)
* Fix broken link in sftp page (edwardxml)
* Fix typo in crypt.md (Romeo Kienzler)
* Changelog: Correct link to digitalis.io (Alex JOST)
* Replace #file-caching with #vfs-file-caching (Miron Veryanskiy)
* Convert bogus example link to code (edwardxml)
* Remove dead link from rc.md (edwardxml)
* rc: Sync,copy,move: document createEmptySrcDirs parameter (Nick Craig-Wood)
* lsjson: Fix unterminated JSON in the presence of errors (Nick Craig-Wood)
* Mount
* Fix mount dropping on macOS by setting --daemon-timeout 10m (Nick Craig-Wood)
* VFS
* Document simultaneous usage with the same cache shouldn't be used (Nick Craig-Wood)
* B2
* Automatically raise upload cutoff to avoid spurious error (Nick Craig-Wood)
* Fix failed to create file system with application key limited to a prefix (Nick Craig-Wood)
* Drive
* Refer to Shared Drives instead of Team Drives (Nick Craig-Wood)
* Dropbox
* Add scopes to oauth request and optionally "members.read" (Nick Craig-Wood)
* S3
* Fix failed to create file system with folder level permissions policy (Nick Craig-Wood)
* Fix Wasabi HEAD requests returning stale data by using only 1 transport (Nick Craig-Wood)
* Fix shared_credentials_file auth (Dmitry Chepurovskiy)
* Add --s3-no-head to reducing costs docs (Nick Craig-Wood)
* Union
* Fix mkdir at root with remote:/ (Nick Craig-Wood)
* Zoho
* Fix custom client id's (buengese)
## v1.54.0 - 2021-02-02
[See commits](https://github.com/rclone/rclone/compare/v1.53.0...v1.54.0)
* New backends
* Compression remote (experimental) (buengese)
* Enterprise File Fabric (Nick Craig-Wood)
* This work was sponsored by [Storage Made Easy](https://storagemadeeasy.com/)
* HDFS (Hadoop Distributed File System) (Yury Stankevich)
@@ -27765,41 +27927,35 @@ Options:
* New Features
* Deglobalise the config (Nick Craig-Wood)
* Global config now read from the context
* This will enable passing of global config via the rc
* This work was sponsored by [Digitalis](https://digitalis.io/)
* Add `--bwlimit` for upload and download (Nick Craig-Wood)
* Obey bwlimit in http Transport for better limiting
* Enhance systemd integration (Hekmon)
* log level identification, manual activation with flag, automatic systemd launch detection
* Don't compile systemd log integration for non unix systems (Benjamin Gustin)
* Add a `--download` flag to md5sum/sha1sum/hashsum to force rclone to download and hash files locally (lostheli)
* Add `--progress-terminal-title` to print ETA to terminal title (LaSombra)
* Make backend env vars show in help as the defaults for backend flags (Nick Craig-Wood)
* build
* Raise minimum go version to go1.12 (Nick Craig-Wood)
* dedupe
* Add `--by-hash` to dedupe on content hash not file name (Nick Craig-Wood)
* Add `--dedupe-mode list` to just list dupes, changing nothing (Nick Craig-Wood)
* Add warning if used on a remote which can't have duplicate names (Nick Craig-Wood)
* flags: Improve error message when reading environment vars (Nick Craig-Wood)
* fs
* Add Shutdown optional method for backends (Nick Craig-Wood)
* When using `--files-from` check files concurrently (zhucan)
* Accumulate stats when using `--dry-run` (Ingo Weiss)
* Always show stats when using `--dry-run` or `--interactive` (Nick Craig-Wood)
* Add support for flag `--no-console` on windows to hide the console window (albertony)
* genautocomplete: Add support to output to stdout (Ingo)
* ncdu
* Highlight read errors instead of aborting (Claudio Bantaloukas)
* Add sort by average size in directory (Adam Plánský)
* Add toggle option for average size in directory - key 'a' (Adam Plánský)
* Add empty folder flag into ncdu browser (Adam Plánský)
* Add `!` (error) and `.` (unreadable) file flags to go with `e` (empty) (Nick Craig-Wood)
* obscure: Make `rclone obscure -` ignore newline at end of line (Nick Craig-Wood)
* operations
* Add logs when need to upload files to set mod times (Nick Craig-Wood)
@@ -27817,26 +27973,22 @@ Options:
* Prompt user for updating webui if an update is available (Chaitanya Bankanhal)
* Fix plugins initialization (negative0)
* Bug Fixes
* build
* Explicitly set ARM version to fix build (Nick Craig-Wood)
* Don't explicitly set ARM version to fix ARMv5 build (Nick Craig-Wood)
* Fix nfpm install (Nick Craig-Wood)
* Fix docker build by upgrading ilteoood/docker_buildx (Nick Craig-Wood)
* Temporary fix for Windows build errors (Ivan Andreev)
* fs
* Fix nil pointer on copy & move operations directly to remote (Anagh Kumar Baranwal)
* Fix parsing of .. when joining remotes (Nick Craig-Wood)
* log: Fix enabling systemd logging when using `--log-file` (Nick Craig-Wood)
* check
* Make the error count match up in the log message (Nick Craig-Wood)
* move: Fix data loss when source and destination are the same object (Nick Craig-Wood)
* operations
* Fix `--cutoff-mode` hard not cutting off immediately (Nick Craig-Wood)
* Fix `--immutable` error message (Nick Craig-Wood)
* sync
* Fix `--cutoff-mode` soft & cautious so it doesn't end the transfer early (Nick Craig-Wood)
* Fix `--immutable` errors retrying many times (Nick Craig-Wood)
* Docs
* Many fixes and a rewrite of the filtering docs (edwardxml)
* Many spelling and grammar fixes (Josh Soref)
* Doc fixes for commands delete, purge, rmdir, rmdirs and mount (albertony)
* And thanks to these people for many doc fixes too numerous to list
* Ameer Dawood, Antoine GIRARD, Bob Bagwill, Christopher Stewart
@@ -27846,46 +27998,44 @@ Options:
* Mount
* Update systemd status with cache stats (Hekmon)
* Disable bazil/fuse based mount on macOS (Nick Craig-Wood)
* Make `rclone mount` actually run `rclone cmount` under macOS (Nick Craig-Wood)
* Implement mknod to make NFS file creation work (Nick Craig-Wood)
* Make sure we don't call umount more than once (Nick Craig-Wood)
* Don't call host.Umount if a signal has been received (Nick Craig-Wood)
* More user friendly mounting as network drive on windows (albertony)
* Cleanup OS specific option handling and documentation (albertony)
* Detect if uid or gid are set in same option string: -o uid=123,gid=456 (albertony)
* Don't attempt to unmount if fs has been destroyed already (Nick Craig-Wood)
* VFS
* Fix virtual entries causing deleted files to still appear (Nick Craig-Wood)
* Fix "file already exists" error for stale cache files (Nick Craig-Wood)
* Fix file leaks with `--vfs-cache-mode full` and `--buffer-size 0` (Nick Craig-Wood)
* Fix invalid cache path on windows when using :backend: as remote (albertony)
* Local
* Continue listing files/folders when a circular symlink is detected (Manish Gupta)
* New flag `--local-zero-size-links` to fix sync on some virtual filesystems (Riccardo Iaconelli)
* Azure Blob
* Add support for service principals (James Lim)
* Add support for managed identities (Brad Ackerman)
* Add examples for access tier (Bob Pusateri)
* Utilize the streaming capabilities from the SDK for multipart uploads (Denis Neuling)
* Fix setting of mime types (Nick Craig-Wood)
* Fix crash when listing outside a SAS URL's root (Nick Craig-Wood)
* Delete archive tier blobs before update if `--azureblob-archive-tier-delete` (Nick Craig-Wood)
* Fix crash on startup (Nick Craig-Wood)
* Fix memory usage by upgrading the SDK to v0.13.0 and implementing a TransferManager (Nick Craig-Wood)
* Require go1.14+ to compile due to SDK changes (Nick Craig-Wood)
* B2
* Make NewObject use less expensive API calls (Nick Craig-Wood)
* This will improve `--files-from` and `restic serve` in particular
* Fixed crash on an empty file name (lluuaapp)
* Box
* Fix NewObject for files that differ in case (Nick Craig-Wood)
* Fix finding directories in a case insensitive way (Nick Craig-Wood)
* Chunker
* Skip long local hashing, hash in-transit (fixes) (Ivan Andreev)
* Set Features ReadMimeType to false as Object.MimeType not supported (Nick Craig-Wood)
* Fix case-insensitive NewObject, test metadata detection (Ivan Andreev)
* Drive
* Implement `rclone backend copyid` command for copying files by ID (Nick Craig-Wood)
* Added flag `--drive-stop-on-download-limit` to stop transfers when the download limit is exceeded (Anagh Kumar Baranwal)
* Implement CleanUp workaround for team drives (buengese)
* Allow shortcut resolution and creation to be retried (Nick Craig-Wood)
@@ -27893,44 +28043,43 @@ Options:
* Add xdg office icons to xdg desktop files (Pau Rodriguez-Estivill)
* Dropbox
* Add support for viewing shared files and folders (buengese)
* Enable short lived access tokens (Nick Craig-Wood)
* Implement IDer on Objects so `rclone lsf` etc can read the IDs (buengese)
* Set Features ReadMimeType to false as Object.MimeType not supported (Nick Craig-Wood)
* Make malformed_path errors from too long files not retriable (Nick Craig-Wood)
* Test file name length before upload to fix upload loop (Nick Craig-Wood)
* Fichier
* Set Features ReadMimeType to true as Object.MimeType is supported (Nick Craig-Wood)
* FTP
* Add `--ftp-disable-msld` option to ignore MLSD for really old servers (Nick Craig-Wood)
* Make `--tpslimit` apply (Nick Craig-Wood)
* Google Cloud Storage
* Storage class object header support (Laurens Janssen)
* Fix anonymous client to use rclone's HTTP client (Nick Craig-Wood)
* Fix `Entry doesn't belong in directory "" (same as directory) - ignoring` (Nick Craig-Wood)
* Googlephotos
* New flag `--gphotos-include-archived` to show archived photos as well (Nicolas Rueff)
* Jottacloud
* Don't erroneously report support for writing mime types (buengese)
* Add support for Telia Cloud (Patrik Nordlén)
* Mailru
* Fix uploads after recent changes on server (Ivan Andreev)
* Fix range requests after June 2020 changes on server (Ivan Andreev)
* Fix invalid timestamp on corrupted files (fixes) (Ivan Andreev)
* Remove deprecated protocol quirks (Ivan Andreev)
* Accept special folders eg camera-upload (Ivan Andreev)
* Avoid prehashing of large local files (Ivan Andreev)
* Memory
* Fix setting of mime types (Nick Craig-Wood)
* Onedrive
* Add support for China region operated by 21vianet and other regional suppliers (NyaMisty)
* Warn on gateway timeout errors (Nick Craig-Wood)
* Fall back to normal copy if server-side copy unavailable (Alex Chen)
* Fix server-side copy completely disabled on OneDrive for Business (Cnly)
* (business only) workaround to replace existing file on server-side copy (Alex Chen)
* Enhance link creation with expiry, scope, type and password (Nick Craig-Wood)
* Remove % and # from the set of encoded characters (Alex Chen)
* Support addressing site by server-relative URL (kice)
* Opendrive
* Fix finding directories in a case insensitive way (Nick Craig-Wood)
* Pcloud
@@ -27943,13 +28092,13 @@ Options:
* S3
* Added `--s3-disable-http2` to disable http/2 (Anagh Kumar Baranwal)
* Complete SSE-C implementation (Nick Craig-Wood)
* Fix hashes on small files with AWS:KMS and SSE-C (Nick Craig-Wood)
* Add MD5 metadata to objects uploaded with SSE-AWS/SSE-C (Nick Craig-Wood)
* Add `--s3-no-head` parameter to minimise transactions on upload (Nick Craig-Wood)
* Update docs with a Reducing Costs section (Nick Craig-Wood)
* Added error handling for error code 429 indicating too many requests (Anagh Kumar Baranwal)
* Add requester pays option (kelv)
* Fix copy multipart with v2 auth failing with 'SignatureDoesNotMatch' (Louis Koo)
* SFTP
* Allow cert based auth via optional pubkey (Stephen Harris)
* Allow user to optionally check server hosts key to add security (Stephen Harris)
@@ -27957,20 +28106,20 @@ Options:
* Remember entered password in AskPass mode (Stephen Harris)
* Implement Shutdown method (Nick Craig-Wood)
* Implement keyboard interactive authentication (Nick Craig-Wood)
* Make `--tpslimit` apply (Nick Craig-Wood)
* Implement `--sftp-use-fstat` for unusual SFTP servers (Nick Craig-Wood)
* Sugarsync
* Fix NewObject for files that differ in case (Nick Craig-Wood)
* Fix finding directories in a case insensitive way (Nick Craig-Wood)
* Swift
* Fix deletion of parts of Static Large Object (SLO) (Nguyễn Hữu Luân)
* Ensure partially uploaded large files are uploaded unless `--swift-leave-parts-on-error` (Nguyễn Hữu Luân)
* Tardigrade
* Upgrade to uplink v1.4.1 (Caleb Case)
* WebDAV
* Updated docs to show streaming to nextcloud is working (Durval Menezes)
* Yandex
* Set Features WriteMimeType to false as Yandex ignores mime types (Nick Craig-Wood)
## v1.53.4 - 2021-01-20
MANUAL.txt generated
@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
Mar 08, 2021
@@ -2962,8 +2962,8 @@ Limitations
Without the use of --vfs-cache-mode this can only write files
sequentially, it can only seek when reading. This means that many
applications won't work with their files on an rclone mount without
--vfs-cache-mode writes or --vfs-cache-mode full. See the VFS File
Caching section for more info.
The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2,
Hubic) do not support the concept of empty directories, so empty
@@ -2979,7 +2979,7 @@ File systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. The rclone sync/copy commands
cope with this with lots of retries. However rclone mount can't use
retries in the same way without making local copies of the uploads. Look
at the VFS File Caching for solutions to make mount more reliable.
Attribute caching
@@ -3150,6 +3150,12 @@ for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be evicted
from the cache.
You SHOULD NOT run two copies of rclone using the same VFS cache with
the same or overlapping remotes if using --vfs-cache-mode > off. This
can potentially cause data corruption if you do. You can work around
this by giving each rclone its own cache hierarchy with --cache-dir. You
don't need to worry about this if the remotes in use don't overlap.
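For example, two rclone instances sharing a remote could each be given
a private cache hierarchy like this (a sketch only; the mount points
and cache paths are hypothetical):

```
rclone mount remote: /mnt/one --vfs-cache-mode writes --cache-dir ~/.cache/rclone-one
rclone mount remote: /mnt/two --vfs-cache-mode writes --cache-dir ~/.cache/rclone-two
```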
--vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote
@@ -3878,6 +3884,12 @@ for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be evicted
from the cache.
You SHOULD NOT run two copies of rclone using the same VFS cache with
the same or overlapping remotes if using --vfs-cache-mode > off. This
can potentially cause data corruption if you do. You can work around
this by giving each rclone its own cache hierarchy with --cache-dir. You
don't need to worry about this if the remotes in use don't overlap.
--vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote
@@ -4200,6 +4212,12 @@ for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be evicted
from the cache.
You SHOULD NOT run two copies of rclone using the same VFS cache with
the same or overlapping remotes if using --vfs-cache-mode > off. This
can potentially cause data corruption if you do. You can work around
this by giving each rclone its own cache hierarchy with --cache-dir. You
don't need to worry about this if the remotes in use don't overlap.
--vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote
@@ -4697,6 +4715,12 @@ for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be evicted
from the cache.
You SHOULD NOT run two copies of rclone using the same VFS cache with
the same or overlapping remotes if using --vfs-cache-mode > off. This
can potentially cause data corruption if you do. You can work around
this by giving each rclone its own cache hierarchy with --cache-dir. You
don't need to worry about this if the remotes in use don't overlap.
--vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote
@@ -5268,6 +5292,12 @@ for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be evicted
from the cache.
You SHOULD NOT run two copies of rclone using the same VFS cache with
the same or overlapping remotes if using --vfs-cache-mode > off. This
can potentially cause data corruption if you do. You can work around
this by giving each rclone its own cache hierarchy with --cache-dir. You
don't need to worry about this if the remotes in use don't overlap.
--vfs-cache-mode off --vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote In this mode (the default) the cache will read directly from the remote
@ -5771,6 +5801,12 @@ for two reasons. Firstly because it is only checked every
--vfs-cache-poll-interval. Secondly because open files cannot be evicted --vfs-cache-poll-interval. Secondly because open files cannot be evicted
from the cache. from the cache.
You SHOULD NOT run two copies of rclone using the same VFS cache with
the same or overlapping remotes if using --vfs-cache-mode > off. This
can potentially cause data corruption if you do. You can work around
this by giving each rclone its own cache hierarchy with --cache-dir. You
don't need to worry about this if the remotes in use don't overlap.
--vfs-cache-mode off --vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote In this mode (the default) the cache will read directly from the remote
@@ -10156,6 +10192,8 @@ This takes the following parameters
 - srcFs - a remote name string e.g. "drive:src" for the source
 - dstFs - a remote name string e.g. "drive:dst" for the destination
+- createEmptySrcDirs - create empty src directories on destination if
+set
 See the copy command for more information on the above.
@@ -10167,6 +10205,8 @@ This takes the following parameters
 - srcFs - a remote name string e.g. "drive:src" for the source
 - dstFs - a remote name string e.g. "drive:dst" for the destination
+- createEmptySrcDirs - create empty src directories on destination if
+set
 - deleteEmptySrcDirs - delete empty src directories if set
 See the move command for more information on the above.
@@ -10179,6 +10219,8 @@ This takes the following parameters
 - srcFs - a remote name string e.g. "drive:src" for the source
 - dstFs - a remote name string e.g. "drive:dst" for the destination
+- createEmptySrcDirs - create empty src directories on destination if
+set
 See the sync command for more information on the above.
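The createEmptySrcDirs parameter documented above is passed like any other rc parameter. A hedged sketch, assuming an rc server has already been started (e.g. with rclone rcd) and that drive:src / drive:dst are placeholder remotes:

```sh
# Copy, also creating empty source directories on the destination
rclone rc sync/copy srcFs=drive:src dstFs=drive:dst createEmptySrcDirs=true

# Move, additionally deleting the emptied source directories
rclone rc sync/move srcFs=drive:src dstFs=drive:dst createEmptySrcDirs=true deleteEmptySrcDirs=true
```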
@@ -11048,7 +11090,7 @@ These flags are available for every command.
 --use-json-log Use json log format.
 --use-mmap Use mmap allocator (see docs).
 --use-server-modtime Use server modified time instead of object metadata
---user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.54.0")
+--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.54.1")
 -v, --verbose count Print lots more stuff (repeat for more)
@@ -11178,7 +11220,7 @@ and may be set in the config file.
 --drive-starred-only Only show files that are starred.
 --drive-stop-on-download-limit Make download limit errors be fatal
 --drive-stop-on-upload-limit Make upload limit errors be fatal
---drive-team-drive string ID of the Team Drive
+--drive-team-drive string ID of the Shared Drive (Team Drive)
 --drive-token string OAuth Access Token as a JSON blob.
 --drive-token-url string Token server url.
 --drive-trashed-only Only show files that are in the trash.
@@ -11459,8 +11501,13 @@ and may be set in the config file.
 --yandex-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
 --yandex-token string OAuth Access Token as a JSON blob.
 --yandex-token-url string Token server url.
+--zoho-auth-url string Auth server URL.
+--zoho-client-id string OAuth Client Id
+--zoho-client-secret string OAuth Client Secret
 --zoho-encoding MultiEncoder This sets the encoding for the backend. (default Del,Ctl,InvalidUtf8)
 --zoho-region string Zoho region to connect to. You'll have to use the region your organization is registered in.
+--zoho-token string OAuth Access Token as a JSON blob.
+--zoho-token-url string Token server url.
 1Fichier
@@ -12350,6 +12397,16 @@ You'd then do a full rclone sync less often.
 Note that --fast-list isn't required in the top-up sync.
+Avoiding HEAD requests after PUT
+By default rclone will HEAD every object it uploads. It does this to
+check the object got uploaded correctly.
+You can disable this with the --s3-no-head option - see there for more
+details.
+Setting this flag increases the chance for undetected upload failures.
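A minimal sketch of the trade-off described above (the local path and remote name are placeholders):

```sh
# Skip the verifying HEAD request after each PUT to save transactions
rclone copy /path/to/files s3:bucket --s3-no-head
```

As the docs note, this reduces the number of transactions at the cost of a higher chance of undetected upload failures.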
 Hashes
 For small objects which weren't uploaded as multipart uploads (objects
@@ -16882,9 +16939,8 @@ would with any other remote, e.g. rclone copy D:\docs secret:\docs, and
 rclone will encrypt and decrypt as needed on the fly. If you access the
 wrapped remote remote:path directly you will bypass the encryption, and
 anything you read will be in encrypted form, and anything you write will
-be undencrypted. To avoid issues it is best to configure a dedicated
-path for encrypted content, and access it exclusively through a crypt
-remote.
+be unencrypted. To avoid issues it is best to configure a dedicated path
+for encrypted content, and access it exclusively through a crypt remote.
 No remotes found - make a new one
 n) New remote
@@ -17814,6 +17870,20 @@ can be set smaller if you are tight on memory.
 Impersonate this user when using a business account.
+Note that if you want to use impersonate, you should make sure this flag
+is set when running "rclone config" as this will cause rclone to request
+the "members.read" scope which it won't normally. This is needed to
+lookup a members email address into the internal ID that dropbox uses in
+the API.
+Using the "members.read" scope will require a Dropbox Team Admin to
+approve during the OAuth flow.
+You will have to use your own App (setting your own client_id and
+client_secret) to use this option as currently rclone's default set of
+permissions doesn't include "members.read". This can be added once v1.55
+or later is in use everywhere.
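Assuming a dropbox remote configured with your own app credentials as described above, the option can be supplied per invocation; the user address here is a placeholder:

```sh
# List files as the impersonated team member
rclone --dropbox-impersonate user@example.com lsf dropbox:

# The documented environment variable works too
RCLONE_DROPBOX_IMPERSONATE=user@example.com rclone lsf dropbox:
```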
 - Config: impersonate
 - Env Var: RCLONE_DROPBOX_IMPERSONATE
 - Type: string
@@ -18257,9 +18327,8 @@ Example without a config file
 Implicit TLS
-Rclone FTP supports implicit FTP over TLS servers (FTPS). This has to be
-enabled in the FTP backend config for the remote, or with
-[--ftp-tls]{#ftp-tls}. The default FTPS port is 990, not 21 and can be
-set with [--ftp-port]{#ftp-port}.
+Rclone FTP supports implicit FTP over TLS servers (FTPS). This has to be
+enabled in the FTP backend config for the remote, or with --ftp-tls. The
+default FTPS port is 990, not 21 and can be set with --ftp-port.
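A hedged sketch of connecting to an implicit-TLS server with the flags above, using an on-the-fly remote (the host and credentials are placeholders):

```sh
# Implicit FTPS uses port 990 by default; --ftp-port shown for clarity
rclone lsd :ftp: --ftp-host ftps.example.com --ftp-user alice \
  --ftp-pass "$(rclone obscure 'secret')" --ftp-tls --ftp-port 990
```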
 Standard Options
@@ -19009,7 +19078,7 @@ This will guide you through an interactive setup process:
 Log in and authorize rclone for access
 Waiting for code...
 Got code
-Configure this as a team drive?
+Configure this as a Shared Drive (Team Drive)?
 y) Yes
 n) No
 y/n> n
@@ -19212,22 +19281,23 @@ you created/selected at step #1 - use rclone without specifying the
 --drive-impersonate option, like this:
 rclone -v foo@example.com lsf gdrive:backup
-Team drives
+Shared drives (team drives)
-If you want to configure the remote to point to a Google Team Drive then
-answer y to the question Configure this as a team drive?.
+If you want to configure the remote to point to a Google Shared Drive
+(previously known as Team Drives) then answer y to the question
+Configure this as a Shared Drive (Team Drive)?.
-This will fetch the list of Team Drives from google and allow you to
-configure which one you want to use. You can also type in a team drive
-ID if you prefer.
+This will fetch the list of Shared Drives from google and allow you to
+configure which one you want to use. You can also type in a Shared Drive
+ID if you prefer.
 For example:
-Configure this as a team drive?
+Configure this as a Shared Drive (Team Drive)?
 y) Yes
 n) No
 y/n> y
-Fetching team drive list...
+Fetching Shared Drive list...
 Choose a number from below, or type in your own value
 1 / Rclone Test
 \ "xxxxxxxxxxxxxxxxxxxx"
@@ -19235,7 +19305,7 @@ For example:
 \ "yyyyyyyyyyyyyyyyyyyy"
 3 / Rclone Test 3
 \ "zzzzzzzzzzzzzzzzzzzz"
-Enter a Team Drive ID> 1
+Enter a Shared Drive ID> 1
 --------------------
 [remote]
 client_id =
@@ -19635,7 +19705,7 @@ if you want use SA instead of interactive login.
 --drive-team-drive
-ID of the Team Drive
+ID of the Shared Drive (Team Drive)
 - Config: team_drive
 - Env Var: RCLONE_DRIVE_TEAM_DRIVE
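The team_drive value above can also be set without re-running the interactive wizard. A sketch (the remote name and drive ID are placeholders):

```sh
# Point an existing drive remote at a specific Shared Drive
rclone config update remote team_drive 0ABCdefGHIjklUk9PVA

# Or override it for a single invocation via the documented flag
rclone lsd remote: --drive-team-drive 0ABCdefGHIjklUk9PVA
```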
@@ -20093,11 +20163,11 @@ Options:
 drives
-List the shared drives available to this account
+List the Shared Drives available to this account
 rclone backend drives remote: [options] [<arguments>+]
-This command lists the shared drives (teamdrives) available to this
-account.
+This command lists the Shared Drives (Team Drives) available to this
+account.
 Usage:
@@ -25693,7 +25763,7 @@ The Go SSH library disables the use of the aes128-cbc cipher by default,
 due to security concerns. This can be re-enabled on a per-connection
 basis by setting the use_insecure_cipher setting in the configuration
 file to true. Further details on the insecurity of this cipher can be
-found [in this paper] (http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).
+found in this paper.
 SFTP isn't supported under plan9 until this issue is fixed.
@@ -27183,6 +27253,24 @@ Standard Options
 Here are the standard options specific to zoho (Zoho).
+--zoho-client-id
+OAuth Client Id Leave blank normally.
+- Config: client_id
+- Env Var: RCLONE_ZOHO_CLIENT_ID
+- Type: string
+- Default: ""
+--zoho-client-secret
+OAuth Client Secret Leave blank normally.
+- Config: client_secret
+- Env Var: RCLONE_ZOHO_CLIENT_SECRET
+- Type: string
+- Default: ""
 --zoho-region
 Zoho region to connect to. You'll have to use the region your organization is registered in.
@@ -27206,6 +27294,33 @@ Advanced Options
 Here are the advanced options specific to zoho (Zoho).
+--zoho-token
+OAuth Access Token as a JSON blob.
+- Config: token
+- Env Var: RCLONE_ZOHO_TOKEN
+- Type: string
+- Default: ""
+--zoho-auth-url
+Auth server URL. Leave blank to use the provider defaults.
+- Config: auth_url
+- Env Var: RCLONE_ZOHO_AUTH_URL
+- Type: string
+- Default: ""
+--zoho-token-url
+Token server url. Leave blank to use the provider defaults.
+- Config: token_url
+- Env Var: RCLONE_ZOHO_TOKEN_URL
+- Type: string
+- Default: ""
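The options above can equally be supplied through their documented environment variables rather than the config file. A hypothetical sketch (the values are placeholders):

```sh
export RCLONE_ZOHO_CLIENT_ID="1000.EXAMPLEID"
export RCLONE_ZOHO_CLIENT_SECRET="example-secret"
rclone lsd zoho:
```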
 --zoho-encoding
 This sets the encoding for the backend.
@@ -27707,12 +27822,62 @@ Options:
 CHANGELOG
+v1.54.1 - 2021-03-08
+See commits
+- Bug Fixes
+- accounting: Fix --bwlimit when up or down is off (Nick
+Craig-Wood)
+- docs
+- Fix nesting of brackets and backticks in ftp docs
+(edwardxml)
+- Fix broken link in sftp page (edwardxml)
+- Fix typo in crypt.md (Romeo Kienzler)
+- Changelog: Correct link to digitalis.io (Alex JOST)
+- Replace #file-caching with #vfs-file-caching (Miron
+Veryanskiy)
+- Convert bogus example link to code (edwardxml)
+- Remove dead link from rc.md (edwardxml)
+- rc: Sync,copy,move: document createEmptySrcDirs parameter (Nick
+Craig-Wood)
+- lsjson: Fix unterminated JSON in the presence of errors (Nick
+Craig-Wood)
+- Mount
+- Fix mount dropping on macOS by setting --daemon-timeout 10m
+(Nick Craig-Wood)
+- VFS
+- Document simultaneous usage with the same cache shouldn't be
+used (Nick Craig-Wood)
+- B2
+- Automatically raise upload cutoff to avoid spurious error (Nick
+Craig-Wood)
+- Fix failed to create file system with application key limited to
+a prefix (Nick Craig-Wood)
+- Drive
+- Refer to Shared Drives instead of Team Drives (Nick Craig-Wood)
+- Dropbox
+- Add scopes to oauth request and optionally "members.read" (Nick
+Craig-Wood)
+- S3
+- Fix failed to create file system with folder level permissions
+policy (Nick Craig-Wood)
+- Fix Wasabi HEAD requests returning stale data by using only 1
+transport (Nick Craig-Wood)
+- Fix shared_credentials_file auth (Dmitry Chepurovskiy)
+- Add --s3-no-head to reducing costs docs (Nick Craig-Wood)
+- Union
+- Fix mkdir at root with remote:/ (Nick Craig-Wood)
+- Zoho
+- Fix custom client id's (buengese)
 v1.54.0 - 2021-02-02
 See commits
 - New backends
-- Compression remote (experimental)(buengese)
+- Compression remote (experimental) (buengese)
 - Enterprise File Fabric (Nick Craig-Wood)
 - This work was sponsored by Storage Made Easy
 - HDFS (Hadoop Distributed File System) (Yury Stankevich)
@@ -27720,37 +27885,30 @@ See commits
 - New Features
 - Deglobalise the config (Nick Craig-Wood)
 - Global config now read from the context
-- Global config can be passed into the rc
+- This will enable passing of global config via the rc
 - This work was sponsored by Digitalis
 - Add --bwlimit for upload and download (Nick Craig-Wood)
 - Obey bwlimit in http Transport for better limiting
 - Enhance systemd integration (Hekmon)
-- log level identification
-- manual activation with flag
-- automatic systemd launch detection
+- log level identification, manual activation with flag,
+automatic systemd launch detection
 - Don't compile systemd log integration for non unix systems
 (Benjamin Gustin)
-- Add a download flag to hashsum and related commands to force
-rclone to download and hash files locally (lostheli)
+- Add a --download flag to md5sum/sha1sum/hashsum to force rclone
+to download and hash files locally (lostheli)
+- Add --progress-terminal-title to print ETA to terminal title
+(LaSombra)
+- Make backend env vars show in help as the defaults for backend
+flags (Nick Craig-Wood)
 - build
 - Raise minimum go version to go1.12 (Nick Craig-Wood)
-- check
-- Make the error count match up in the log message (Nick
-Craig-Wood)
-- cmd
-- Add --progress-terminal-title to print ETA to terminal title
-(LaSombra)
-- Make backend env vars show in help as the defaults for
-backend flags (Nick Craig-Wood)
 - dedupe
-- Add --by-hash to dedupe on hash not file name (Nick
-Craig-Wood)
+- Add --by-hash to dedupe on content hash not file name (Nick
+Craig-Wood)
 - Add --dedupe-mode list to just list dupes, changing nothing
 (Nick Craig-Wood)
 - Add warning if used on a remote which can't have duplicate
 names (Nick Craig-Wood)
+- flags: Improve error message when reading environment vars (Nick
+Craig-Wood)
 - fs
 - Add Shutdown optional method for backends (Nick Craig-Wood)
 - When using --files-from check files concurrently (zhucan)
@@ -27764,7 +27922,7 @@ See commits
 - Highlight read errors instead of aborting (Claudio
 Bantaloukas)
 - Add sort by average size in directory (Adam Plánský)
 - Add toggle option for average size in directory - key 'a'
 (Adam Plánský)
 - Add empty folder flag into ncdu browser (Adam Plánský)
 - Add ! (error) and . (unreadable) file flags to go with e
@@ -27794,22 +27952,17 @@ See commits
 (Chaitanya Bankanhal)
 - Fix plugins initialization (negative0)
 - Bug Fixes
-- build
-- Explicitly set ARM version to fix build (Nick Craig-Wood)
-- Don't explicitly set ARM version to fix ARMv5 build (Nick
-Craig-Wood)
-- Fix nfpm install (Nick Craig-Wood)
-- Fix docker build by upgrading ilteoood/docker_buildx (Nick
-Craig-Wood)
-- Temporary fix for Windows build errors (Ivan Andreev)
 - fs
 - Fix nil pointer on copy & move operations directly to remote
 (Anagh Kumar Baranwal)
 - Fix parsing of .. when joining remotes (Nick Craig-Wood)
 - log: Fix enabling systemd logging when using --log-file (Nick
 Craig-Wood)
-- move: Fix data loss when moving the same object (Nick
-Craig-Wood)
+- check
+- Make the error count match up in the log message (Nick
+Craig-Wood)
+- move: Fix data loss when source and destination are the same
+object (Nick Craig-Wood)
 - operations
 - Fix --cutoff-mode hard not cutting off immediately (Nick
 Craig-Wood)
@@ -27820,7 +27973,7 @@ See commits
 - Fix --immutable errors retrying many times (Nick Craig-Wood)
 - Docs
 - Many fixes and a rewrite of the filtering docs (edwardxml)
-- Many spelling and grammar problems (Josh Soref)
+- Many spelling and grammar fixes (Josh Soref)
 - Doc fixes for commands delete, purge, rmdir, rmdirs and mount
 (albertony)
 - And thanks to these people for many doc fixes too numerous to
@@ -27834,15 +27987,12 @@ See commits
 - Mount
 - Update systemd status with cache stats (Hekmon)
 - Disable bazil/fuse based mount on macOS (Nick Craig-Wood)
-- Make mount be cmount under macOS (Nick Craig-Wood)
+- Make rclone mount actually run rclone cmount under macOS
+(Nick Craig-Wood)
 - Implement mknod to make NFS file creation work (Nick Craig-Wood)
 - Make sure we don't call umount more than once (Nick Craig-Wood)
-- Don't call host.Umount if a signal has been received (Nick
-Craig-Wood)
 - More user friendly mounting as network drive on windows
 (albertony)
-- Cleanup OS specific option handling and documentation
-(albertony)
 - Detect if uid or gid are set in same option string: -o
 uid=123,gid=456 (albertony)
 - Don't attempt to unmount if fs has been destroyed already (Nick
@@ -27863,37 +28013,37 @@ See commits
 filesystems (Riccardo Iaconelli)
 - Azure Blob
 - Add support for service principals (James Lim)
-- Utilize streaming capabilities (Denis Neuling)
-- Update SDK to v0.13.0 and fix API breakage (Nick Craig-Wood,
-Mitsuo Heijo)
+- Add support for managed identities (Brad Ackerman)
+- Add examples for access tier (Bob Pusateri)
+- Utilize the streaming capabilities from the SDK for multipart
+uploads (Denis Neuling)
 - Fix setting of mime types (Nick Craig-Wood)
 - Fix crash when listing outside a SAS URL's root (Nick
 Craig-Wood)
 - Delete archive tier blobs before update if
 --azureblob-archive-tier-delete (Nick Craig-Wood)
-- Add support for managed identities (Brad Ackerman)
 - Fix crash on startup (Nick Craig-Wood)
-- Add examples for access tier (Bob Pusateri)
-- Fix memory usage by upgrading the SDK and implementing a
-TransferManager (Nick Craig-Wood)
+- Fix memory usage by upgrading the SDK to v0.13.0 and
+implementing a TransferManager (Nick Craig-Wood)
 - Require go1.14+ to compile due to SDK changes (Nick Craig-Wood)
 - B2
 - Make NewObject use less expensive API calls (Nick Craig-Wood)
-- Fixed possible crash when accessing Backblaze b2 remote
-(lluuaapp)
+- This will improve --files-from and restic serve in
+particular
+- Fixed crash on an empty file name (lluuaapp)
 - Box
 - Fix NewObject for files that differ in case (Nick Craig-Wood)
 - Fix finding directories in a case insensitive way (Nick
 Craig-Wood)
 - Chunker
 - Skip long local hashing, hash in-transit (fixes) (Ivan Andreev)
-- Set Features.ReadMimeType=false as Object.MimeType not supported
-(Nick Craig-Wood)
+- Set Features ReadMimeType to false as Object.MimeType not
+supported (Nick Craig-Wood)
 - Fix case-insensitive NewObject, test metadata detection (Ivan
 Andreev)
 - Drive
-- Implement "rclone backend copyid" command for copying files by
-ID (Nick Craig-Wood)
+- Implement rclone backend copyid command for copying files by ID
+(Nick Craig-Wood)
 - Added flag --drive-stop-on-download-limit to stop transfers when
 the download limit is exceeded (Anagh Kumar Baranwal)
 - Implement CleanUp workaround for team drives (buengese)
@@ -27904,18 +28054,18 @@ See commits
 Rodriguez-Estivill)
 - Dropbox
 - Add support for viewing shared files and folders (buengese)
-- Implement IDer (buengese)
-- Set Features.ReadMimeType=false as Object.MimeType not supported
-(Nick Craig-Wood)
-- Tidy repeated error message (Nick Craig-Wood)
+- Enable short lived access tokens (Nick Craig-Wood)
+- Implement IDer on Objects so rclone lsf etc can read the IDs
+(buengese)
+- Set Features ReadMimeType to false as Object.MimeType not
+supported (Nick Craig-Wood)
 - Make malformed_path errors from too long files not retriable
 (Nick Craig-Wood)
 - Test file name length before upload to fix upload loop (Nick
 Craig-Wood)
-- Enable short lived access tokens (Nick Craig-Wood)
 - Fichier
-- Set Features.ReadMimeType=true as Object.MimeType is supported
-(Nick Craig-Wood)
+- Set Features ReadMimeType to true as Object.MimeType is
+supported (Nick Craig-Wood)
 - FTP
 - Add --ftp-disable-msld option to ignore MLSD for really old
 servers (Nick Craig-Wood)
@@ -27924,39 +28074,40 @@ See commits
 - Storage class object header support (Laurens Janssen)
 - Fix anonymous client to use rclone's HTTP client (Nick
 Craig-Wood)
-- Fix Entry doesn't belong in directory "" (same as directory) -
-ignoring (Nick Craig-Wood)
+- Fix
+Entry doesn't belong in directory "" (same as directory) - ignoring
+(Nick Craig-Wood)
 - Googlephotos
-- New flag --gphotos-include-archived (Nicolas Rueff)
+- New flag --gphotos-include-archived to show archived photos as
+well (Nicolas Rueff)
 - Jottacloud
-- Don't erroniously report support for writing mime types
-(buengese)
+- Don't erroneously report support for writing mime types
+(buengese)
-- Add support for Telia Cloud (#4930) (Patrik Nordlén)
+- Add support for Telia Cloud (Patrik Nordlén)
 - Mailru
+- Accept special folders eg camera-upload (Ivan Andreev)
+- Avoid prehashing of large local files (Ivan Andreev)
 - Fix uploads after recent changes on server (Ivan Andreev)
 - Fix range requests after June 2020 changes on server (Ivan
 Andreev)
 - Fix invalid timestamp on corrupted files (fixes) (Ivan Andreev)
 - Remove deprecated protocol quirks (Ivan Andreev)
-- Accept special folders eg camera-upload (Ivan Andreev)
-- Avoid prehashing of large local files (Ivan Andreev)
 - Memory
 - Fix setting of mime types (Nick Craig-Wood)
 - Onedrive
-- Add support for china region operated by 21vianet and other
-regional suppliers (#4963) (NyaMisty)
+- Add support for China region operated by 21vianet and other
+regional suppliers (NyaMisty)
 - Warn on gateway timeout errors (Nick Craig-Wood)
-- Fall back to normal copy if server-side copy unavailable (#4903)
-(Alex Chen)
+- Fall back to normal copy if server-side copy unavailable (Alex
+Chen)
 - Fix server-side copy completely disabled on OneDrive for
 Business (Cnly)
-- (business only) workaround to replace existing file on
-server-side copy (#4904) (Alex Chen)
+- (business only) workaround to replace existing file on
+server-side copy (Alex Chen)
-- Remove % and # from the set of encoded characters (#4909) (Alex
-Chen)
-- Support addressing site by server-relative URL (#4761) (kice)
+- Remove % and # from the set of encoded characters (Alex Chen)
+- Support addressing site by server-relative URL (kice)
 - Opendrive
 - Fix finding directories in a case insensitive way (Nick
 Craig-Wood)
@@ -27972,18 +28123,18 @@ See commits
 - Added --s3-disable-http2 to disable http/2 (Anagh Kumar
 Baranwal)
 - Complete SSE-C implementation (Nick Craig-Wood)
 - Fix hashes on small files with AWS:KMS and SSE-C (Nick
 Craig-Wood)
-- Add MD5 metadata to objects uploaded with SSE-AWS/SSE-C (Nick
-Craig-Wood)
+- Add MD5 metadata to objects uploaded with SSE-AWS/SSE-C
+(Nick Craig-Wood)
+- Add --s3-no-head parameter to minimise transactions on upload
+(Nick Craig-Wood)
 - Update docs with a Reducing Costs section (Nick Craig-Wood)
 - Added error handling for error code 429 indicating too many
 requests (Anagh Kumar Baranwal)
 - Add requester pays option (kelv)
 - Fix copy multipart with v2 auth failing with
 'SignatureDoesNotMatch' (Louis Koo)
-- Add --s3-no-head parameter to minimise transactions on upload
-(Nick Craig-Wood)
 - SFTP
 - Allow cert based auth via optional pubkey (Stephen Harris)
 - Allow user to optionally check server hosts key to add security
@@ -27994,7 +28145,8 @@ See commits
 - Implement Shutdown method (Nick Craig-Wood)
 - Implement keyboard interactive authentication (Nick Craig-Wood)
 - Make --tpslimit apply (Nick Craig-Wood)
-- Implement --sftp-use-fstat (Nick Craig-Wood)
+- Implement --sftp-use-fstat for unusual SFTP servers (Nick
+Craig-Wood)
 - Sugarsync
 - Fix NewObject for files that differ in case (Nick Craig-Wood)
 - Fix finding directories in a case insensitive way (Nick
@@ -28010,7 +28162,7 @@ See commits
 - Updated docs to show streaming to nextcloud is working (Durval
 Menezes)
 - Yandex
-- Set Features.WriteMimeType=false as Yandex ignores mime types
-(Nick Craig-Wood)
+- Set Features WriteMimeType to false as Yandex ignores mime types
+(Nick Craig-Wood)


@@ -93,8 +93,7 @@ build_dep:
 # Get the release dependencies we only install on linux
 release_dep_linux:
-	cd /tmp && go get github.com/goreleaser/nfpm/...
-	cd /tmp && go get github.com/github-release/github-release
+	go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64\.tar\.gz'

 # Get the release dependencies we only install on Windows
 release_dep_windows:


@@ -1 +1 @@
-v1.54.0
+v1.54.2


@@ -403,6 +403,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 	if err != nil {
 		return nil, err
 	}
+	if opt.UploadCutoff < opt.ChunkSize {
+		opt.UploadCutoff = opt.ChunkSize
+		fs.Infof(nil, "b2: raising upload cutoff to chunk size: %v", opt.UploadCutoff)
+	}
 	err = checkUploadCutoff(opt, opt.UploadCutoff)
 	if err != nil {
 		return nil, errors.Wrap(err, "b2: upload cutoff")
@@ -475,12 +479,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 		f.setRoot(newRoot)
 		_, err := f.NewObject(ctx, leaf)
 		if err != nil {
-			if err == fs.ErrorObjectNotFound {
-				// File doesn't exist so return old f
-				f.setRoot(oldRoot)
-				return f, nil
-			}
-			return nil, err
+			// File doesn't exist so return old f
+			f.setRoot(oldRoot)
+			return f, nil
 		}
 		// return an error with an fs which points to the parent
 		return f, fs.ErrorIsFile


@@ -1034,7 +1034,7 @@ func (r *run) updateObjectRemote(t *testing.T, f fs.Fs, remote string, data1 []b
 	objInfo1 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data1)), true, nil, f)
 	objInfo2 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data2)), true, nil, f)
-	obj, err = f.Put(context.Background(), in1, objInfo1)
+	_, err = f.Put(context.Background(), in1, objInfo1)
 	require.NoError(t, err)
 	obj, err = f.NewObject(context.Background(), remote)
 	require.NoError(t, err)


@@ -487,11 +487,11 @@ func testPreventCorruption(t *testing.T, f *Fs) {
 	// accessing chunks in strict mode is prohibited
 	f.opt.FailHard = true
 	billyChunk4Name := billyChunkName(4)
-	billyChunk4, err := f.NewObject(ctx, billyChunk4Name)
+	_, err = f.NewObject(ctx, billyChunk4Name)
 	assertOverlapError(err)

 	f.opt.FailHard = false
-	billyChunk4, err = f.NewObject(ctx, billyChunk4Name)
+	billyChunk4, err := f.NewObject(ctx, billyChunk4Name)
 	assert.NoError(t, err)
 	require.NotNil(t, billyChunk4)


@@ -207,7 +207,7 @@ func init() {
 			}
 			err = configTeamDrive(ctx, opt, m, name)
 			if err != nil {
-				log.Fatalf("Failed to configure team drive: %v", err)
+				log.Fatalf("Failed to configure Shared Drive: %v", err)
 			}
 		},
 		Options: append(driveOAuthOptions(), []fs.Option{{
@@ -247,7 +247,7 @@ a non root folder as its starting point.
 			Advanced: true,
 		}, {
 			Name:     "team_drive",
-			Help:     "ID of the Team Drive",
+			Help:     "ID of the Shared Drive (Team Drive)",
 			Hide:     fs.OptionHideConfigurator,
 			Advanced: true,
 		}, {
@@ -666,7 +666,7 @@ func (f *Fs) shouldRetry(err error) (bool, error) {
 				fs.Errorf(f, "Received download limit error: %v", err)
 				return false, fserrors.FatalError(err)
 			} else if f.opt.StopOnUploadLimit && reason == "teamDriveFileLimitExceeded" {
-				fs.Errorf(f, "Received team drive file limit error: %v", err)
+				fs.Errorf(f, "Received Shared Drive file limit error: %v", err)
 				return false, fserrors.FatalError(err)
 			}
 		}
@@ -955,24 +955,24 @@ func configTeamDrive(ctx context.Context, opt *Options, m configmap.Mapper, name
 		return nil
 	}
 	if opt.TeamDriveID == "" {
-		fmt.Printf("Configure this as a team drive?\n")
+		fmt.Printf("Configure this as a Shared Drive (Team Drive)?\n")
 	} else {
-		fmt.Printf("Change current team drive ID %q?\n", opt.TeamDriveID)
+		fmt.Printf("Change current Shared Drive (Team Drive) ID %q?\n", opt.TeamDriveID)
 	}
 	if !config.Confirm(false) {
 		return nil
 	}
 	f, err := newFs(ctx, name, "", m)
 	if err != nil {
-		return errors.Wrap(err, "failed to make Fs to list teamdrives")
+		return errors.Wrap(err, "failed to make Fs to list Shared Drives")
 	}
-	fmt.Printf("Fetching team drive list...\n")
+	fmt.Printf("Fetching Shared Drive list...\n")
 	teamDrives, err := f.listTeamDrives(ctx)
 	if err != nil {
 		return err
 	}
 	if len(teamDrives) == 0 {
-		fmt.Printf("No team drives found in your account")
+		fmt.Printf("No Shared Drives found in your account")
 		return nil
 	}
 	var driveIDs, driveNames []string
@@ -980,7 +980,7 @@ func configTeamDrive(ctx context.Context, opt *Options, m configmap.Mapper, name
 		driveIDs = append(driveIDs, teamDrive.Id)
 		driveNames = append(driveNames, teamDrive.Name)
 	}
-	driveID := config.Choose("Enter a Team Drive ID", driveIDs, driveNames, true)
+	driveID := config.Choose("Enter a Shared Drive ID", driveIDs, driveNames, true)
 	m.Set("team_drive", driveID)
 	m.Set("root_folder_id", "")
 	opt.TeamDriveID = driveID
@@ -2475,9 +2475,9 @@ func (f *Fs) teamDriveOK(ctx context.Context) (err error) {
 		return f.shouldRetry(err)
 	})
 	if err != nil {
-		return errors.Wrap(err, "failed to get Team/Shared Drive info")
+		return errors.Wrap(err, "failed to get Shared Drive info")
 	}
-	fs.Debugf(f, "read info from team drive %q", td.Name)
+	fs.Debugf(f, "read info from Shared Drive %q", td.Name)
 	return err
 }
@@ -2963,7 +2963,7 @@ func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.TeamDrive, err
 		return defaultFs.shouldRetry(err)
 	})
 	if err != nil {
-		return drives, errors.Wrap(err, "listing team drives failed")
+		return drives, errors.Wrap(err, "listing Team Drives failed")
 	}
 	drives = append(drives, teamDrives.TeamDrives...)
 	if teamDrives.NextPageToken == "" {
@@ -3131,8 +3131,8 @@ authenticated with "drive2:" can't read files from "drive:".
 	},
 }, {
 	Name:  "drives",
-	Short: "List the shared drives available to this account",
-	Long: `This command lists the shared drives (teamdrives) available to this
+	Short: "List the Shared Drives available to this account",
+	Long: `This command lists the Shared Drives (Team Drives) available to this
 account.

 Usage:


@@ -94,7 +94,14 @@ const (
 var (
 	// Description of how to auth for this app
 	dropboxConfig = &oauth2.Config{
-		Scopes: []string{},
+		Scopes: []string{
+			"files.metadata.write",
+			"files.content.write",
+			"files.content.read",
+			"sharing.write",
+			// "file_requests.write",
+			// "members.read", // needed for impersonate - but causes app to need to be approved by Dropbox Team Admin during the flow
+		},
 		// Endpoint: oauth2.Endpoint{
 		//	AuthURL:  "https://www.dropbox.com/1/oauth2/authorize",
 		//	TokenURL: "https://api.dropboxapi.com/1/oauth2/token",
@@ -115,6 +122,19 @@ var (
 	errNotSupportedInSharedMode = fserrors.NoRetryError(errors.New("not supported in shared files mode"))
 )

+// Gets an oauth config with the right scopes
+func getOauthConfig(m configmap.Mapper) *oauth2.Config {
+	// If not impersonating, use standard scopes
+	if impersonate, _ := m.Get("impersonate"); impersonate == "" {
+		return dropboxConfig
+	}
+	// Make a copy of the config
+	config := *dropboxConfig
+	// Make a copy of the scopes with "members.read" appended
+	config.Scopes = append(config.Scopes, "members.read")
+	return &config
+}
+
 // Register with Fs
 func init() {
 	DbHashType = hash.RegisterHash("DropboxHash", 64, dbhash.New)
@@ -129,7 +149,7 @@ func init() {
 				oauth2.SetAuthURLParam("token_access_type", "offline"),
 			},
 		}
-		err := oauthutil.Config(ctx, "dropbox", name, m, dropboxConfig, &opt)
+		err := oauthutil.Config(ctx, "dropbox", name, m, getOauthConfig(m), &opt)
 		if err != nil {
 			log.Fatalf("Failed to configure token: %v", err)
 		}
@@ -147,8 +167,23 @@ memory. It can be set smaller if you are tight on memory.`, maxChunkSize),
 			Default:  defaultChunkSize,
 			Advanced: true,
 		}, {
-			Name: "impersonate",
-			Help: "Impersonate this user when using a business account.",
+			Name: "impersonate",
+			Help: `Impersonate this user when using a business account.

+Note that if you want to use impersonate, you should make sure this
+flag is set when running "rclone config" as this will cause rclone to
+request the "members.read" scope which it won't normally. This is
+needed to look up a member's email address into the internal ID that
+dropbox uses in the API.

+Using the "members.read" scope will require a Dropbox Team Admin
+to approve during the OAuth flow.

+You will have to use your own App (setting your own client_id and
+client_secret) to use this option as currently rclone's default set of
+permissions doesn't include "members.read". This can be added once
+v1.55 or later is in use everywhere.
+`,
 			Default:  "",
 			Advanced: true,
 		}, {
@@ -327,7 +362,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 		}
 	}

-	oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, dropboxConfig)
+	oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, getOauthConfig(m))
 	if err != nil {
 		return nil, errors.Wrap(err, "failed to configure dropbox")
 	}
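The `getOauthConfig` helper in the dropbox diff above relies on a subtle property: it copies the struct by value and then `append`s to the copied `Scopes` slice. This only leaves the shared `dropboxConfig` untouched because the slice built from a literal is at full capacity, so `append` must allocate a new backing array. A minimal sketch of the pattern, using a hypothetical stand-in `Config` type rather than the real `oauth2.Config`:

```go
package main

import "fmt"

// Config is a stand-in for oauth2.Config, with just the field we care about.
type Config struct {
	Scopes []string
}

// base mimics the package-level shared config. A slice literal has
// len == cap, which is what makes the append below safe.
var base = &Config{Scopes: []string{"files.content.read", "files.content.write"}}

// withExtraScope returns a copy of base with one extra scope, leaving
// base itself unmodified.
func withExtraScope(scope string) *Config {
	c := *base                         // shallow copy of the struct
	c.Scopes = append(c.Scopes, scope) // reallocates (len == cap), so base.Scopes is untouched
	return &c
}

func main() {
	extended := withExtraScope("members.read")
	fmt.Println(len(base.Scopes), len(extended.Scopes)) // base keeps 2 scopes, copy has 3
}
```

If `base.Scopes` ever had spare capacity, the `append` would write into the shared backing array; copying the slice explicitly (`make` + `copy`) would be the defensive variant.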


@@ -109,7 +109,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	dirname := path.Dir(realpath)
 	fs.Debugf(o.fs, "update [%s]", realpath)

-	err := o.fs.client.MkdirAll(dirname, 755)
+	err := o.fs.client.MkdirAll(dirname, 0755)
 	if err != nil {
 		return err
 	}


@@ -1462,7 +1462,7 @@ func getClient(ctx context.Context, opt *Options) *http.Client {
 }

 // s3Connection makes a connection to s3
-func s3Connection(ctx context.Context, opt *Options) (*s3.S3, *session.Session, error) {
+func s3Connection(ctx context.Context, opt *Options, client *http.Client) (*s3.S3, *session.Session, error) {
 	// Make the auth
 	v := credentials.Value{
 		AccessKeyID: opt.AccessKeyID,
@@ -1540,7 +1540,7 @@ func s3Connection(ctx context.Context, opt *Options) (*s3.S3, *session.Session,
 	awsConfig := aws.NewConfig().
 		WithMaxRetries(0). // Rely on rclone's retry logic
 		WithCredentials(cred).
-		WithHTTPClient(getClient(ctx, opt)).
+		WithHTTPClient(client).
 		WithS3ForcePathStyle(opt.ForcePathStyle).
 		WithS3UseAccelerate(opt.UseAccelerateEndpoint).
 		WithS3UsEast1RegionalEndpoint(endpoints.RegionalS3UsEast1Endpoint)
@@ -1559,9 +1559,6 @@ func s3Connection(ctx context.Context, opt *Options) (*s3.S3, *session.Session,
 	if opt.EnvAuth && opt.AccessKeyID == "" && opt.SecretAccessKey == "" {
 		// Enable loading config options from ~/.aws/config (selected by AWS_PROFILE env)
 		awsSessionOpts.SharedConfigState = session.SharedConfigEnable
-		// The session constructor (aws/session/mergeConfigSrcs) will only use the user's preferred credential source
-		// (from the shared config file) if the passed-in Options.Config.Credentials is nil.
-		awsSessionOpts.Config.Credentials = nil
 	}
 	ses, err := session.NewSessionWithOptions(awsSessionOpts)
 	if err != nil {
@@ -1647,7 +1644,8 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 		md5sumBinary := md5.Sum([]byte(opt.SSECustomerKey))
 		opt.SSECustomerKeyMD5 = base64.StdEncoding.EncodeToString(md5sumBinary[:])
 	}
-	c, ses, err := s3Connection(ctx, opt)
+	srv := getClient(ctx, opt)
+	c, ses, err := s3Connection(ctx, opt, srv)
 	if err != nil {
 		return nil, err
 	}
@@ -1662,7 +1660,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 		ses:   ses,
 		pacer: fs.NewPacer(ctx, pacer.NewS3(pacer.MinSleep(minSleep))),
 		cache: bucket.NewCache(),
-		srv:   getClient(ctx, opt),
+		srv:   srv,
 		pool: pool.New(
 			time.Duration(opt.MemoryPoolFlushTime),
 			int(opt.ChunkSize),
@@ -1697,12 +1695,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 		f.setRoot(newRoot)
 		_, err := f.NewObject(ctx, leaf)
 		if err != nil {
-			if err == fs.ErrorObjectNotFound || err == fs.ErrorNotAFile {
-				// File doesn't exist or is a directory so return old f
-				f.setRoot(oldRoot)
-				return f, nil
-			}
-			return nil, err
+			// File doesn't exist or is a directory so return old f
+			f.setRoot(oldRoot)
+			return f, nil
 		}
 		// return an error with an fs which points to the parent
 		return f, fs.ErrorIsFile
@@ -1779,7 +1774,7 @@ func (f *Fs) updateRegionForBucket(bucket string) error {
 	// Make a new session with the new region
 	oldRegion := f.opt.Region
 	f.opt.Region = region
-	c, ses, err := s3Connection(f.ctx, &f.opt)
+	c, ses, err := s3Connection(f.ctx, &f.opt, f.srv)
 	if err != nil {
 		return errors.Wrap(err, "creating new session failed")
 	}


@@ -67,7 +67,7 @@ func New(ctx context.Context, remote, root string, cacheTime time.Duration) (*Fs
 		return nil, err
 	}
 	f := &Fs{
-		RootPath:    root,
+		RootPath:    strings.TrimRight(root, "/"),
 		writable:    true,
 		creatable:   true,
 		cacheExpiry: time.Now().Unix(),


@@ -100,7 +100,7 @@ func init() {
 				log.Fatalf("Failed to configure root directory: %v", err)
 			}
 		},
-		Options: []fs.Option{{
+		Options: append(oauthutil.SharedOptions, []fs.Option{{
 			Name: "region",
 			Help: "Zoho region to connect to. You'll have to use the region your organization is registered in.",
 			Examples: []fs.OptionExample{{
@@ -123,7 +123,7 @@ func init() {
 				encoder.EncodeCtl |
 				encoder.EncodeDel |
 				encoder.EncodeInvalidUtf8),
-		}},
+		}}...),
 	})
 }
} }

View file

@@ -2,49 +2,45 @@
 #
 # Upload a release
 #
-# Needs github-release from https://github.com/aktau/github-release
+# Needs the gh tool from https://github.com/cli/cli

 set -e

-REPO="rclone"
+REPO="rclone/rclone"

 if [ "$1" == "" ]; then
     echo "Syntax: $0 Version"
     exit 1
 fi
 VERSION="$1"
-if [ "$GITHUB_USER" == "" ]; then
-    echo 1>&2 "Need GITHUB_USER environment variable"
-    exit 1
-fi
-if [ "$GITHUB_TOKEN" == "" ]; then
-    echo 1>&2 "Need GITHUB_TOKEN environment variable"
-    exit 1
-fi
+ANCHOR=$(grep '^## v' docs/content/changelog.md | head -1 | sed 's/^## //; s/[^A-Za-z0-9-]/-/g; s/--*/-/g')

-echo "Making release ${VERSION}"
-github-release release \
+cat > "/tmp/${VERSION}-release-notes" <<EOF
+This is the ${VERSION} release of rclone.
+
+Full details of the changes can be found in [the changelog](https://rclone.org/changelog/#${ANCHOR}).
+EOF
+
+echo "Making release ${VERSION} anchor ${ANCHOR} to repo ${REPO}"
+gh release create "${VERSION}" \
     --repo ${REPO} \
-    --tag ${VERSION} \
-    --name "rclone" \
-    --description "Rclone - rsync for cloud storage. Sync files to and from many cloud storage providers."
+    --title "rclone ${VERSION}" \
+    --notes-file "/tmp/${VERSION}-release-notes"

-for build in `ls build | grep -v current | grep -v testbuilds`; do
-    echo "Uploading ${build}"
-    base="${build%.*}"
-    parts=(${base//-/ })
-    os=${parts[3]}
-    arch=${parts[4]}
-
-    github-release upload \
+for build in build/*; do
+    case $build in
+        *current*) continue ;;
+        *testbuilds*) continue ;;
+    esac
+    echo "Uploading ${build}"
+    gh release upload "${VERSION}" \
+        --clobber \
         --repo ${REPO} \
-        --tag ${VERSION} \
-        --name "${build}" \
-        --file build/${build}
+        "${build}"
 done

-github-release info \
-    --repo ${REPO} \
-    --tag ${VERSION}
+gh release view "${VERSION}" \
+    --repo ${REPO}

 echo "Done"


@@ -121,14 +121,11 @@ can be processed line by line as each item is written one to a line.
 				}
 				return nil
 			})
-			if err != nil {
-				return err
-			}
 			if !first {
 				fmt.Println()
 			}
 			fmt.Println("]")
-			return nil
+			return err
 		})
 	},
 }


@@ -71,7 +71,7 @@ const (
 func init() {
 	// DaemonTimeout defaults to non zero for macOS
 	if runtime.GOOS == "darwin" {
-		DefaultOpt.DaemonTimeout = 15 * time.Minute
+		DefaultOpt.DaemonTimeout = 10 * time.Minute
 	}
 }
@@ -348,7 +348,7 @@ Without the use of |--vfs-cache-mode| this can only write files
 sequentially, it can only seek when reading. This means that many
 applications won't work with their files on an rclone mount without
 |--vfs-cache-mode writes| or |--vfs-cache-mode full|.

-See the [File Caching](#file-caching) section for more info.
+See the [VFS File Caching](#vfs-file-caching) section for more info.

 The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2,
 Hubic) do not support the concept of empty directories, so empty
@@ -363,7 +363,7 @@ File systems expect things to be 100% reliable, whereas cloud storage
 systems are a long way from 100% reliable. The rclone sync/copy
 commands cope with this with lots of retries. However rclone @
 can't use retries in the same way without making local copies of the
-uploads. Look at the [file caching](#file-caching)
+uploads. Look at the [VFS File Caching](#vfs-file-caching)
 for solutions to make @ more reliable.

 ### Attribute caching


@@ -205,7 +205,7 @@ These URLs are used by Plex internally to connect to the Plex server securely.

 The format for these URLs is the following:

-https://ip-with-dots-replaced.server-hash.plex.direct:32400/
+`https://ip-with-dots-replaced.server-hash.plex.direct:32400/`

 The `ip-with-dots-replaced` part can be any IPv4 address, where the dots
 have been replaced with dashes, e.g. `127.0.0.1` becomes `127-0-0-1`.


@@ -5,6 +5,43 @@ description: "Rclone Changelog"

 # Changelog

+## v1.54.1 - 2021-03-08
+
+[See commits](https://github.com/rclone/rclone/compare/v1.54.0...v1.54.1)
+
+* Bug Fixes
+    * accounting: Fix --bwlimit when up or down is off (Nick Craig-Wood)
+    * docs
+        * Fix nesting of brackets and backticks in ftp docs (edwardxml)
+        * Fix broken link in sftp page (edwardxml)
+        * Fix typo in crypt.md (Romeo Kienzler)
+        * Changelog: Correct link to digitalis.io (Alex JOST)
+        * Replace #file-caching with #vfs-file-caching (Miron Veryanskiy)
+        * Convert bogus example link to code (edwardxml)
+        * Remove dead link from rc.md (edwardxml)
+    * rc: Sync,copy,move: document createEmptySrcDirs parameter (Nick Craig-Wood)
+    * lsjson: Fix unterminated JSON in the presence of errors (Nick Craig-Wood)
+* Mount
+    * Fix mount dropping on macOS by setting --daemon-timeout 10m (Nick Craig-Wood)
+* VFS
+    * Document simultaneous usage with the same cache shouldn't be used (Nick Craig-Wood)
+* B2
+    * Automatically raise upload cutoff to avoid spurious error (Nick Craig-Wood)
+    * Fix failed to create file system with application key limited to a prefix (Nick Craig-Wood)
+* Drive
+    * Refer to Shared Drives instead of Team Drives (Nick Craig-Wood)
+* Dropbox
+    * Add scopes to oauth request and optionally "members.read" (Nick Craig-Wood)
+* S3
+    * Fix failed to create file system with folder level permissions policy (Nick Craig-Wood)
+    * Fix Wasabi HEAD requests returning stale data by using only 1 transport (Nick Craig-Wood)
+    * Fix shared_credentials_file auth (Dmitry Chepurovskiy)
+    * Add --s3-no-head to reducing costs docs (Nick Craig-Wood)
+* Union
+    * Fix mkdir at root with remote:/ (Nick Craig-Wood)
+* Zoho
+    * Fix custom client id's (buengese)
+
 ## v1.54.0 - 2021-02-02

 [See commits](https://github.com/rclone/rclone/compare/v1.53.0...v1.54.0)
@@ -19,7 +56,7 @@ description: "Rclone Changelog"
 * Deglobalise the config (Nick Craig-Wood)
     * Global config now read from the context
     * This will enable passing of global config via the rc
-    * This work was sponsored by [Digitalis](digitalis.io)
+    * This work was sponsored by [Digitalis](https://digitalis.io/)
 * Add `--bwlimit` for upload and download (Nick Craig-Wood)
     * Obey bwlimit in http Transport for better limiting
 * Enhance systemd integration (Hekmon)


@@ -198,7 +198,7 @@ Without the use of `--vfs-cache-mode` this can only write files
 sequentially, it can only seek when reading. This means that many
 applications won't work with their files on an rclone mount without
 `--vfs-cache-mode writes` or `--vfs-cache-mode full`.

-See the [File Caching](#file-caching) section for more info.
+See the [VFS File Caching](#vfs-file-caching) section for more info.

 The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2,
 Hubic) do not support the concept of empty directories, so empty
@@ -213,7 +213,7 @@ File systems expect things to be 100% reliable, whereas cloud storage
 systems are a long way from 100% reliable. The rclone sync/copy
 commands cope with this with lots of retries. However rclone mount
 can't use retries in the same way without making local copies of the
-uploads. Look at the [file caching](#file-caching)
+uploads. Look at the [VFS File Caching](#vfs-file-caching)
 for solutions to make mount more reliable.

 ## Attribute caching
@@ -378,6 +378,13 @@ for two reasons. Firstly because it is only checked every
 `--vfs-cache-poll-interval`. Secondly because open files cannot be
 evicted from the cache.

+You **should not** run two copies of rclone using the same VFS cache
+with the same or overlapping remotes if using `--vfs-cache-mode > off`.
+This can potentially cause data corruption if you do. You can work
+around this by giving each rclone its own cache hierarchy with
+`--cache-dir`. You don't need to worry about this if the remotes in
+use don't overlap.
+
 ### --vfs-cache-mode off

 In this mode (the default) the cache will read directly from the remote and write


@@ -134,6 +134,13 @@ for two reasons. Firstly because it is only checked every
 `--vfs-cache-poll-interval`. Secondly because open files cannot be
 evicted from the cache.

+You **should not** run two copies of rclone using the same VFS cache
+with the same or overlapping remotes if using `--vfs-cache-mode > off`.
+This can potentially cause data corruption if you do. You can work
+around this by giving each rclone its own cache hierarchy with
+`--cache-dir`. You don't need to worry about this if the remotes in
+use don't overlap.
+
 ### --vfs-cache-mode off

 In this mode (the default) the cache will read directly from the remote and write


@@ -133,6 +133,13 @@ for two reasons. Firstly because it is only checked every
 `--vfs-cache-poll-interval`. Secondly because open files cannot be
 evicted from the cache.

+You **should not** run two copies of rclone using the same VFS cache
+with the same or overlapping remotes if using `--vfs-cache-mode > off`.
+This can potentially cause data corruption if you do. You can work
+around this by giving each rclone its own cache hierarchy with
+`--cache-dir`. You don't need to worry about this if the remotes in
+use don't overlap.
+
 ### --vfs-cache-mode off

 In this mode (the default) the cache will read directly from the remote and write


@@ -205,6 +205,13 @@ for two reasons. Firstly because it is only checked every
 `--vfs-cache-poll-interval`. Secondly because open files cannot be
 evicted from the cache.

+You **should not** run two copies of rclone using the same VFS cache
+with the same or overlapping remotes if using `--vfs-cache-mode > off`.
+This can potentially cause data corruption if you do. You can work
+around this by giving each rclone its own cache hierarchy with
+`--cache-dir`. You don't need to worry about this if the remotes in
+use don't overlap.
+
 ### --vfs-cache-mode off

 In this mode (the default) the cache will read directly from the remote and write


@@ -144,6 +144,13 @@ for two reasons. Firstly because it is only checked every
 `--vfs-cache-poll-interval`. Secondly because open files cannot be
 evicted from the cache.

+You **should not** run two copies of rclone using the same VFS cache
+with the same or overlapping remotes if using `--vfs-cache-mode > off`.
+This can potentially cause data corruption if you do. You can work
+around this by giving each rclone its own cache hierarchy with
+`--cache-dir`. You don't need to worry about this if the remotes in
+use don't overlap.
+
 ### --vfs-cache-mode off

 In this mode (the default) the cache will read directly from the remote and write


@@ -213,6 +213,13 @@ for two reasons. Firstly because it is only checked every
 `--vfs-cache-poll-interval`. Secondly because open files cannot be
 evicted from the cache.

+You **should not** run two copies of rclone using the same VFS cache
+with the same or overlapping remotes if using `--vfs-cache-mode > off`.
+This can potentially cause data corruption if you do. You can work
+around this by giving each rclone its own cache hierarchy with
+`--cache-dir`. You don't need to worry about this if the remotes in
+use don't overlap.
+
 ### --vfs-cache-mode off

 In this mode (the default) the cache will read directly from the remote and write


@@ -82,7 +82,7 @@ as you would with any other remote, e.g. `rclone copy D:\docs secret:\docs`,
 and rclone will encrypt and decrypt as needed on the fly.

 If you access the wrapped remote `remote:path` directly you will bypass
 the encryption, and anything you read will be in encrypted form, and
-anything you write will be undencrypted. To avoid issues it is best to
+anything you write will be unencrypted. To avoid issues it is best to
 configure a dedicated path for encrypted content, and access it
 exclusively through a crypt remote.

View file

@@ -72,7 +72,7 @@ If your browser doesn't open automatically go to the following link: http://127.
 Log in and authorize rclone for access
 Waiting for code...
 Got code
-Configure this as a team drive?
+Configure this as a Shared Drive (Team Drive)?
 y) Yes
 n) No
 y/n> n
@@ -279,23 +279,24 @@ Note: in case you configured a specific root folder on gdrive and rclone is unab
 `rclone -v foo@example.com lsf gdrive:backup`
-### Team drives ###
+### Shared drives (team drives) ###
-If you want to configure the remote to point to a Google Team Drive
-then answer `y` to the question `Configure this as a team drive?`.
+If you want to configure the remote to point to a Google Shared Drive
+(previously known as Team Drives) then answer `y` to the question
+`Configure this as a Shared Drive (Team Drive)?`.
-This will fetch the list of Team Drives from google and allow you to
-configure which one you want to use. You can also type in a team
-drive ID if you prefer.
+This will fetch the list of Shared Drives from google and allow you to
+configure which one you want to use. You can also type in a Shared
+Drive ID if you prefer.
 For example:
 ```
-Configure this as a team drive?
+Configure this as a Shared Drive (Team Drive)?
 y) Yes
 n) No
 y/n> y
-Fetching team drive list...
+Fetching Shared Drive list...
 Choose a number from below, or type in your own value
 1 / Rclone Test
   \ "xxxxxxxxxxxxxxxxxxxx"
@@ -303,7 +304,7 @@ Choose a number from below, or type in your own value
   \ "yyyyyyyyyyyyyyyyyyyy"
 3 / Rclone Test 3
   \ "zzzzzzzzzzzzzzzzzzzz"
-Enter a Team Drive ID> 1
+Enter a Shared Drive ID> 1
 --------------------
 [remote]
 client_id =
@@ -674,7 +675,7 @@ Needed only if you want use SA instead of interactive login.
 #### --drive-team-drive
-ID of the Team Drive
+ID of the Shared Drive (Team Drive)
 - Config: team_drive
 - Env Var: RCLONE_DRIVE_TEAM_DRIVE
@@ -1137,11 +1138,11 @@ Options:
 #### drives
-List the shared drives available to this account
+List the Shared Drives available to this account
     rclone backend drives remote: [options] [<arguments>+]
-This command lists the shared drives (teamdrives) available to this
+This command lists the Shared Drives (Team Drives) available to this
 account.
 Usage:

@@ -197,6 +197,21 @@ memory. It can be set smaller if you are tight on memory.
 Impersonate this user when using a business account.
+Note that if you want to use impersonate, you should make sure this
+flag is set when running "rclone config" as this will cause rclone to
+request the "members.read" scope which it won't normally. This is
+needed to lookup a members email address into the internal ID that
+dropbox uses in the API.
+Using the "members.read" scope will require a Dropbox Team Admin
+to approve during the OAuth flow.
+You will have to use your own App (setting your own client_id and
+client_secret) to use this option as currently rclone's default set of
+permissions doesn't include "members.read". This can be added once
+v1.55 or later is in use everywhere.
 - Config: impersonate
 - Env Var: RCLONE_DROPBOX_IMPERSONATE
 - Type: string

@@ -643,7 +643,7 @@ not list `dir3`, `file3` or `.ignore`.
 ## Common pitfalls
 The most frequent filter support issues on
-the [rclone forum](https://https://forum.rclone.org/) are:
+the [rclone forum](https://forum.rclone.org/) are:
 * Not using paths relative to the root of the remote
 * Not using `/` to match from the root of a remote

@@ -150,7 +150,7 @@ These flags are available for every command.
       --use-json-log Use json log format.
       --use-mmap Use mmap allocator (see docs).
       --use-server-modtime Use server modified time instead of object metadata
-      --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.54.0")
+      --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.54.1")
   -v, --verbose count Print lots more stuff (repeat for more)
 ```
@@ -281,7 +281,7 @@ and may be set in the config file.
       --drive-starred-only Only show files that are starred.
       --drive-stop-on-download-limit Make download limit errors be fatal
       --drive-stop-on-upload-limit Make upload limit errors be fatal
-      --drive-team-drive string ID of the Team Drive
+      --drive-team-drive string ID of the Shared Drive (Team Drive)
       --drive-token string OAuth Access Token as a JSON blob.
       --drive-token-url string Token server url.
       --drive-trashed-only Only show files that are in the trash.
@@ -562,6 +562,11 @@ and may be set in the config file.
       --yandex-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
       --yandex-token string OAuth Access Token as a JSON blob.
       --yandex-token-url string Token server url.
+      --zoho-auth-url string Auth server URL.
+      --zoho-client-id string OAuth Client Id
+      --zoho-client-secret string OAuth Client Secret
       --zoho-encoding MultiEncoder This sets the encoding for the backend. (default Del,Ctl,InvalidUtf8)
       --zoho-region string Zoho region to connect to. You'll have to use the region you organization is registered in.
+      --zoho-token string OAuth Access Token as a JSON blob.
+      --zoho-token-url string Token server url.
 ```

@@ -109,8 +109,8 @@ excess files in the directory.
 Rlone FTP supports implicit FTP over TLS servers (FTPS). This has to
 be enabled in the FTP backend config for the remote, or with
-`[--ftp-tls]{#ftp-tls}`. The default FTPS port is `990`, not `21` and
-can be set with `[--ftp-port]{#ftp-port}`.
+[`--ftp-tls`](#ftp-tls). The default FTPS port is `990`, not `21` and
+can be set with [`--ftp-port`](#ftp-port).
 {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/ftp/ftp.go then run make backenddocs" >}}
 ### Standard Options
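As a hedged illustration of the flags the corrected links point at (the host is a placeholder; credentials and port are for example only):

```sh
# Implicit FTPS: enable TLS and use port 990 rather than the default 21
rclone lsf :ftp: --ftp-host=ftps.example.com --ftp-user=anonymous \
    --ftp-pass="$(rclone obscure dummy)" --ftp-tls --ftp-port 990
```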

@@ -1288,6 +1288,7 @@ This takes the following parameters
 - srcFs - a remote name string e.g. "drive:src" for the source
 - dstFs - a remote name string e.g. "drive:dst" for the destination
+- createEmptySrcDirs - create empty src directories on destination if set
 See the [copy command](/commands/rclone_copy/) command for more information on the above.
@@ -1300,6 +1301,7 @@ This takes the following parameters
 - srcFs - a remote name string e.g. "drive:src" for the source
 - dstFs - a remote name string e.g. "drive:dst" for the destination
+- createEmptySrcDirs - create empty src directories on destination if set
 - deleteEmptySrcDirs - delete empty src directories if set
@@ -1313,6 +1315,7 @@ This takes the following parameters
 - srcFs - a remote name string e.g. "drive:src" for the source
 - dstFs - a remote name string e.g. "drive:dst" for the destination
+- createEmptySrcDirs - create empty src directories on destination if set
 See the [sync command](/commands/rclone_sync/) command for more information on the above.
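A sketch of how the `createEmptySrcDirs` parameter documented above might be passed, assuming an rc server is already running (`rclone rcd`; remote names are illustrative):

```sh
# Pass the new boolean parameter like any other rc argument
rclone rc sync/copy srcFs=drive:src dstFs=drive:dst createEmptySrcDirs=true
```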

@@ -342,6 +342,16 @@ You'd then do a full `rclone sync` less often.
 Note that `--fast-list` isn't required in the top-up sync.
+#### Avoiding HEAD requests after PUT
+By default rclone will HEAD every object it uploads. It does this to
+check the object got uploaded correctly.
+You can disable this with the [--s3-no-head](#s3-no-head) option - see
+there for more details.
+Setting this flag increases the chance for undetected upload failures.
 ### Hashes ###
 For small objects which weren't uploaded as multipart uploads (objects
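The trade-off described in the new section could be exercised like this (bucket and path are placeholders):

```sh
# Skip the post-upload HEAD check: fewer transactions, but upload
# failures are more likely to go undetected
rclone copy /path/to/files s3:my-bucket/prefix --s3-no-head
```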

@@ -526,8 +526,8 @@ The Go SSH library disables the use of the aes128-cbc cipher by
 default, due to security concerns. This can be re-enabled on a
 per-connection basis by setting the `use_insecure_cipher` setting in
 the configuration file to `true`. Further details on the insecurity of
-this cipher can be found [in this paper]
-(http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).
+this cipher can be found
+[in this paper](http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).
 SFTP isn't supported under plan9 until [this
 issue](https://github.com/pkg/sftp/issues/156) is fixed.

@@ -127,6 +127,26 @@ from filenames during upload.
 Here are the standard options specific to zoho (Zoho).
+#### --zoho-client-id
+OAuth Client Id
+Leave blank normally.
+- Config: client_id
+- Env Var: RCLONE_ZOHO_CLIENT_ID
+- Type: string
+- Default: ""
+#### --zoho-client-secret
+OAuth Client Secret
+Leave blank normally.
+- Config: client_secret
+- Env Var: RCLONE_ZOHO_CLIENT_SECRET
+- Type: string
+- Default: ""
 #### --zoho-region
 Zoho region to connect to. You'll have to use the region you organization is registered in.
@@ -149,6 +169,35 @@ Zoho region to connect to. You'll have to use the region you organization is reg
 Here are the advanced options specific to zoho (Zoho).
+#### --zoho-token
+OAuth Access Token as a JSON blob.
+- Config: token
+- Env Var: RCLONE_ZOHO_TOKEN
+- Type: string
+- Default: ""
+#### --zoho-auth-url
+Auth server URL.
+Leave blank to use the provider defaults.
+- Config: auth_url
+- Env Var: RCLONE_ZOHO_AUTH_URL
+- Type: string
+- Default: ""
+#### --zoho-token-url
+Token server url.
+Leave blank to use the provider defaults.
+- Config: token_url
+- Env Var: RCLONE_ZOHO_TOKEN_URL
+- Type: string
+- Default: ""
 #### --zoho-encoding
 This sets the encoding for the backend.

@@ -1 +1 @@
-v1.54.0
+v1.54.2

@@ -68,7 +68,8 @@ func newTokenBucket(bandwidth fs.BwPair) (tbs buckets) {
 			bandwidthAccounting = bandwidth.Rx
 		}
 	}
-	if bandwidthAccounting > 0 {
+	// Limit core bandwidth to max of Rx and Tx if both are limited
+	if bandwidth.Tx > 0 && bandwidth.Rx > 0 {
 		tbs[TokenBucketSlotAccounting] = rate.NewLimiter(rate.Limit(bandwidthAccounting), maxBurstSize)
 	}
 	for _, tb := range tbs {

@@ -24,6 +24,7 @@ func init() {
 - srcFs - a remote name string e.g. "drive:src" for the source
 - dstFs - a remote name string e.g. "drive:dst" for the destination
+- createEmptySrcDirs - create empty src directories on destination if set
 ` + moveHelp + `
 See the [` + name + ` command](/commands/rclone_` + name + `/) command for more information on the above.`,

@@ -1,4 +1,4 @@
 package fs
 // Version of rclone
-var Version = "v1.54.0-DEV"
+var Version = "v1.54.2-DEV"

@@ -639,6 +639,7 @@ func (s *authServer) Init() error {
 		http.Error(w, "State did not match - please try again", http.StatusForbidden)
 		return
 	}
+	fs.Debugf(nil, "Redirecting browser to: %s", s.authURL)
 	http.Redirect(w, req, s.authURL, http.StatusTemporaryRedirect)
 	return
 })

rclone.1 (generated)

@@ -1,7 +1,7 @@
 .\"t
 .\" Automatically generated by Pandoc 2.5
 .\"
-.TH "rclone" "1" "Feb 02, 2021" "User Manual" ""
+.TH "rclone" "1" "Mar 08, 2021" "User Manual" ""
 .hy
 .SH Rclone syncs your files to cloud storage
 .PP
@@ -3663,7 +3663,7 @@ files sequentially, it can only seek when reading.
 This means that many applications won\[aq]t work with their files on an
 rclone mount without \f[C]\-\-vfs\-cache\-mode writes\f[R] or
 \f[C]\-\-vfs\-cache\-mode full\f[R].
-See the File Caching section for more info.
+See the VFS File Caching section for more info.
 .PP
 The bucket based remotes (e.g.
 Swift, S3, Google Compute Storage, B2, Hubic) do not support the concept
@@ -3678,7 +3678,7 @@ systems are a long way from 100% reliable.
 The rclone sync/copy commands cope with this with lots of retries.
 However rclone mount can\[aq]t use retries in the same way without
 making local copies of the uploads.
-Look at the file caching for solutions to make mount more reliable.
+Look at the VFS File Caching for solutions to make mount more reliable.
 .SS Attribute caching
 .PP
 You can use the flag \f[C]\-\-attr\-timeout\f[R] to set the time the
@@ -3875,6 +3875,15 @@ exceed this size for two reasons.
 Firstly because it is only checked every
 \f[C]\-\-vfs\-cache\-poll\-interval\f[R].
 Secondly because open files cannot be evicted from the cache.
+.PP
+You \f[B]should not\f[R] run two copies of rclone using the same VFS
+cache with the same or overlapping remotes if using
+\f[C]\-\-vfs\-cache\-mode > off\f[R].
+This can potentially cause data corruption if you do.
+You can work around this by giving each rclone its own cache hierarchy
+with \f[C]\-\-cache\-dir\f[R].
+You don\[aq]t need to worry about this if the remotes in use don\[aq]t
+overlap.
 .SS \-\-vfs\-cache\-mode off
 .PP
 In this mode (the default) the cache will read directly from the remote
@@ -4730,6 +4739,15 @@ exceed this size for two reasons.
 Firstly because it is only checked every
 \f[C]\-\-vfs\-cache\-poll\-interval\f[R].
 Secondly because open files cannot be evicted from the cache.
+.PP
+You \f[B]should not\f[R] run two copies of rclone using the same VFS
+cache with the same or overlapping remotes if using
+\f[C]\-\-vfs\-cache\-mode > off\f[R].
+This can potentially cause data corruption if you do.
+You can work around this by giving each rclone its own cache hierarchy
+with \f[C]\-\-cache\-dir\f[R].
+You don\[aq]t need to worry about this if the remotes in use don\[aq]t
+overlap.
 .SS \-\-vfs\-cache\-mode off
 .PP
 In this mode (the default) the cache will read directly from the remote
@@ -5102,6 +5120,15 @@ exceed this size for two reasons.
 Firstly because it is only checked every
 \f[C]\-\-vfs\-cache\-poll\-interval\f[R].
 Secondly because open files cannot be evicted from the cache.
+.PP
+You \f[B]should not\f[R] run two copies of rclone using the same VFS
+cache with the same or overlapping remotes if using
+\f[C]\-\-vfs\-cache\-mode > off\f[R].
+This can potentially cause data corruption if you do.
+You can work around this by giving each rclone its own cache hierarchy
+with \f[C]\-\-cache\-dir\f[R].
+You don\[aq]t need to worry about this if the remotes in use don\[aq]t
+overlap.
 .SS \-\-vfs\-cache\-mode off
 .PP
 In this mode (the default) the cache will read directly from the remote
@@ -5726,6 +5753,15 @@ exceed this size for two reasons.
 Firstly because it is only checked every
 \f[C]\-\-vfs\-cache\-poll\-interval\f[R].
 Secondly because open files cannot be evicted from the cache.
+.PP
+You \f[B]should not\f[R] run two copies of rclone using the same VFS
+cache with the same or overlapping remotes if using
+\f[C]\-\-vfs\-cache\-mode > off\f[R].
+This can potentially cause data corruption if you do.
+You can work around this by giving each rclone its own cache hierarchy
+with \f[C]\-\-cache\-dir\f[R].
+You don\[aq]t need to worry about this if the remotes in use don\[aq]t
+overlap.
 .SS \-\-vfs\-cache\-mode off
 .PP
 In this mode (the default) the cache will read directly from the remote
@@ -6418,6 +6454,15 @@ exceed this size for two reasons.
 Firstly because it is only checked every
 \f[C]\-\-vfs\-cache\-poll\-interval\f[R].
 Secondly because open files cannot be evicted from the cache.
+.PP
+You \f[B]should not\f[R] run two copies of rclone using the same VFS
+cache with the same or overlapping remotes if using
+\f[C]\-\-vfs\-cache\-mode > off\f[R].
+This can potentially cause data corruption if you do.
+You can work around this by giving each rclone its own cache hierarchy
+with \f[C]\-\-cache\-dir\f[R].
+You don\[aq]t need to worry about this if the remotes in use don\[aq]t
+overlap.
 .SS \-\-vfs\-cache\-mode off
 .PP
 In this mode (the default) the cache will read directly from the remote
@@ -7044,6 +7089,15 @@ exceed this size for two reasons.
 Firstly because it is only checked every
 \f[C]\-\-vfs\-cache\-poll\-interval\f[R].
 Secondly because open files cannot be evicted from the cache.
+.PP
+You \f[B]should not\f[R] run two copies of rclone using the same VFS
+cache with the same or overlapping remotes if using
+\f[C]\-\-vfs\-cache\-mode > off\f[R].
+This can potentially cause data corruption if you do.
+You can work around this by giving each rclone its own cache hierarchy
+with \f[C]\-\-cache\-dir\f[R].
+You don\[aq]t need to worry about this if the remotes in use don\[aq]t
+overlap.
 .SS \-\-vfs\-cache\-mode off
 .PP
 In this mode (the default) the cache will read directly from the remote
@@ -10623,7 +10677,7 @@ command.
 .SS Common pitfalls
 .PP
 The most frequent filter support issues on the rclone
-forum (https://https://forum.rclone.org/) are:
+forum (https://forum.rclone.org/) are:
 .IP \[bu] 2
 Not using paths relative to the root of the remote
 .IP \[bu] 2
@@ -12274,6 +12328,8 @@ srcFs \- a remote name string e.g.
 .IP \[bu] 2
 dstFs \- a remote name string e.g.
 \[dq]drive:dst\[dq] for the destination
+.IP \[bu] 2
+createEmptySrcDirs \- create empty src directories on destination if set
 .PP
 See the copy command (https://rclone.org/commands/rclone_copy/) command
 for more information on the above.
@@ -12289,6 +12345,8 @@ srcFs \- a remote name string e.g.
 dstFs \- a remote name string e.g.
 \[dq]drive:dst\[dq] for the destination
 .IP \[bu] 2
+createEmptySrcDirs \- create empty src directories on destination if set
+.IP \[bu] 2
 deleteEmptySrcDirs \- delete empty src directories if set
 .PP
 See the move command (https://rclone.org/commands/rclone_move/) command
@@ -12304,6 +12362,8 @@ srcFs \- a remote name string e.g.
 .IP \[bu] 2
 dstFs \- a remote name string e.g.
 \[dq]drive:dst\[dq] for the destination
+.IP \[bu] 2
+createEmptySrcDirs \- create empty src directories on destination if set
 .PP
 See the sync command (https://rclone.org/commands/rclone_sync/) command
 for more information on the above.
@@ -14886,7 +14946,7 @@ These flags are available for every command.
       \-\-use\-json\-log Use json log format.
       \-\-use\-mmap Use mmap allocator (see docs).
       \-\-use\-server\-modtime Use server modified time instead of object metadata
-      \-\-user\-agent string Set the user\-agent to a specified string. The default is rclone/ version (default \[dq]rclone/v1.54.0\[dq])
+      \-\-user\-agent string Set the user\-agent to a specified string. The default is rclone/ version (default \[dq]rclone/v1.54.1\[dq])
   \-v, \-\-verbose count Print lots more stuff (repeat for more)
 \f[R]
 .fi
@@ -15018,7 +15078,7 @@ They control the backends and may be set in the config file.
       \-\-drive\-starred\-only Only show files that are starred.
       \-\-drive\-stop\-on\-download\-limit Make download limit errors be fatal
       \-\-drive\-stop\-on\-upload\-limit Make upload limit errors be fatal
-      \-\-drive\-team\-drive string ID of the Team Drive
+      \-\-drive\-team\-drive string ID of the Shared Drive (Team Drive)
       \-\-drive\-token string OAuth Access Token as a JSON blob.
       \-\-drive\-token\-url string Token server url.
       \-\-drive\-trashed\-only Only show files that are in the trash.
@@ -15299,8 +15359,13 @@ They control the backends and may be set in the config file.
       \-\-yandex\-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
       \-\-yandex\-token string OAuth Access Token as a JSON blob.
       \-\-yandex\-token\-url string Token server url.
+      \-\-zoho\-auth\-url string Auth server URL.
+      \-\-zoho\-client\-id string OAuth Client Id
+      \-\-zoho\-client\-secret string OAuth Client Secret
       \-\-zoho\-encoding MultiEncoder This sets the encoding for the backend. (default Del,Ctl,InvalidUtf8)
       \-\-zoho\-region string Zoho region to connect to. You\[aq]ll have to use the region you organization is registered in.
+      \-\-zoho\-token string OAuth Access Token as a JSON blob.
+      \-\-zoho\-token\-url string Token server url.
 \f[R]
 .fi
 .SS 1Fichier
@@ -16452,6 +16517,15 @@ You\[aq]d then do a full \f[C]rclone sync\f[R] less often.
 .PP
 Note that \f[C]\-\-fast\-list\f[R] isn\[aq]t required in the top\-up
 sync.
+.SS Avoiding HEAD requests after PUT
+.PP
+By default rclone will HEAD every object it uploads.
+It does this to check the object got uploaded correctly.
+.PP
+You can disable this with the \-\-s3\-no\-head option \- see there for
+more details.
+.PP
+Setting this flag increases the chance for undetected upload failures.
 .SS Hashes
 .PP
 For small objects which weren\[aq]t uploaded as multipart uploads
@@ -21620,7 +21694,7 @@ securely.
 .PP
 The format for these URLs is the following:
 .PP
-https://ip\-with\-dots\-replaced.server\-hash.plex.direct:32400/
+\f[C]https://ip\-with\-dots\-replaced.server\-hash.plex.direct:32400/\f[R]
 .PP
 The \f[C]ip\-with\-dots\-replaced\f[R] part can be any IPv4 address,
 where the dots have been replaced with dashes, e.g.
@@ -23160,7 +23234,7 @@ just as you would with any other remote, e.g.
 encrypt and decrypt as needed on the fly.
 If you access the wrapped remote \f[C]remote:path\f[R] directly you will
 bypass the encryption, and anything you read will be in encrypted form,
-and anything you write will be undencrypted.
+and anything you write will be unencrypted.
 To avoid issues it is best to configure a dedicated path for encrypted
 content, and access it exclusively through a crypt remote.
 .IP
@@ -24321,6 +24395,20 @@ Default: 48M
 .SS \-\-dropbox\-impersonate
 .PP
 Impersonate this user when using a business account.
+.PP
+Note that if you want to use impersonate, you should make sure this flag
+is set when running \[dq]rclone config\[dq] as this will cause rclone to
+request the \[dq]members.read\[dq] scope which it won\[aq]t normally.
+This is needed to lookup a members email address into the internal ID
+that dropbox uses in the API.
+.PP
+Using the \[dq]members.read\[dq] scope will require a Dropbox Team Admin
+to approve during the OAuth flow.
+.PP
+You will have to use your own App (setting your own client_id and
+client_secret) to use this option as currently rclone\[aq]s default set
+of permissions doesn\[aq]t include \[dq]members.read\[dq].
+This can be added once v1.55 or later is in use everywhere.
 .IP \[bu] 2
 Config: impersonate
 .IP \[bu] 2
@@ -24866,9 +24954,9 @@ rclone lsf :ftp: \-\-ftp\-host=speedtest.tele2.net \-\-ftp\-user=anonymous \-\-f
 .PP
 Rlone FTP supports implicit FTP over TLS servers (FTPS).
 This has to be enabled in the FTP backend config for the remote, or with
-\f[C][\-\-ftp\-tls]{#ftp\-tls}\f[R].
+\f[C]\-\-ftp\-tls\f[R].
 The default FTPS port is \f[C]990\f[R], not \f[C]21\f[R] and can be set
-with \f[C][\-\-ftp\-port]{#ftp\-port}\f[R].
+with \f[C]\-\-ftp\-port\f[R].
 .SS Standard Options
 .PP
 Here are the standard options specific to ftp (FTP Connection).
@ -25961,7 +26049,7 @@ If your browser doesn\[aq]t open automatically go to the following link: http://
Log in and authorize rclone for access Log in and authorize rclone for access
Waiting for code... Waiting for code...
Got code Got code
Configure this as a team drive? Configure this as a Shared Drive (Team Drive)?
y) Yes y) Yes
n) No n) No
y/n> n y/n> n
Account you created/selected at step #1 \- use rclone without specifying
the \f[C]\-\-drive\-impersonate\f[R] option, like this:
\f[C]rclone \-v foo\[at]example.com lsf gdrive:backup\f[R]
.SS Shared drives (team drives)
.PP
If you want to configure the remote to point to a Google Shared Drive
(previously known as Team Drives) then answer \f[C]y\f[R] to the
question \f[C]Configure this as a Shared Drive (Team Drive)?\f[R].
.PP
This will fetch the list of Shared Drives from Google and allow you to
configure which one you want to use.
You can also type in a Shared Drive ID if you prefer.
.PP
For example:
.IP
.nf
\f[C]
Configure this as a Shared Drive (Team Drive)?
y) Yes
n) No
y/n> y
Fetching Shared Drive list...
Choose a number from below, or type in your own value
 1 / Rclone Test
   \[rs] \[dq]xxxxxxxxxxxxxxxxxxxx\[dq]
 2 / Rclone Test 2
   \[rs] \[dq]yyyyyyyyyyyyyyyyyyyy\[dq]
 3 / Rclone Test 3
   \[rs] \[dq]zzzzzzzzzzzzzzzzzzzz\[dq]
Enter a Shared Drive ID> 1
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
client_id =
Type: string
Default: \[dq]\[dq]
.SS \-\-drive\-team\-drive
.PP
ID of the Shared Drive (Team Drive)
.IP \[bu] 2
Config: team_drive
.IP \[bu] 2
\[dq]target\[dq]: optional target remote for the shortcut destination
.SS drives
.PP
List the Shared Drives available to this account
.IP
.nf
\f[C]
rclone backend drives remote: [options] [<arguments>+]
\f[R]
.fi
.PP
This command lists the Shared Drives (Team Drives) available to this
account.
.PP
Usage:
default, due to security concerns.
This can be re\-enabled on a per\-connection basis by setting the
\f[C]use_insecure_cipher\f[R] setting in the configuration file to
\f[C]true\f[R].
Further details on the insecurity of this cipher can be found in this
paper (http://www.isg.rhul.ac.uk/\[ti]kp/SandPfinal.pdf).
.PP
SFTP isn\[aq]t supported under plan9 until this
issue (https://github.com/pkg/sftp/issues/156) is fixed.
and will be removed from filenames during upload.
.SS Standard Options
.PP
Here are the standard options specific to zoho (Zoho).
.SS \-\-zoho\-client\-id
.PP
OAuth Client Id.
Leave blank normally.
.IP \[bu] 2
Config: client_id
.IP \[bu] 2
Env Var: RCLONE_ZOHO_CLIENT_ID
.IP \[bu] 2
Type: string
.IP \[bu] 2
Default: \[dq]\[dq]
.SS \-\-zoho\-client\-secret
.PP
OAuth Client Secret.
Leave blank normally.
.IP \[bu] 2
Config: client_secret
.IP \[bu] 2
Env Var: RCLONE_ZOHO_CLIENT_SECRET
.IP \[bu] 2
Type: string
.IP \[bu] 2
Default: \[dq]\[dq]
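Putting the two options above together, a custom OAuth client for Zoho can be supplied through the environment variables listed above (a sketch with placeholder values; the equivalent config keys are \f[C]client_id\f[R] and \f[C]client_secret\f[R]):

```shell
# Placeholder credentials for a custom Zoho OAuth client; the remote
# name "zoho:" is assumed to be configured already.
export RCLONE_ZOHO_CLIENT_ID="1000.XXXXXXXXXX"
export RCLONE_ZOHO_CLIENT_SECRET="xxxxxxxxxxxxxxxx"
rclone lsd zoho:
```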
.SS \-\-zoho\-region
.PP
Zoho region to connect to.
.SS Advanced Options
.PP
Here are the advanced options specific to zoho (Zoho).
.SS \-\-zoho\-token
.PP
OAuth Access Token as a JSON blob.
.IP \[bu] 2
Config: token
.IP \[bu] 2
Env Var: RCLONE_ZOHO_TOKEN
.IP \[bu] 2
Type: string
.IP \[bu] 2
Default: \[dq]\[dq]
.SS \-\-zoho\-auth\-url
.PP
Auth server URL.
Leave blank to use the provider defaults.
.IP \[bu] 2
Config: auth_url
.IP \[bu] 2
Env Var: RCLONE_ZOHO_AUTH_URL
.IP \[bu] 2
Type: string
.IP \[bu] 2
Default: \[dq]\[dq]
.SS \-\-zoho\-token\-url
.PP
Token server URL.
Leave blank to use the provider defaults.
.IP \[bu] 2
Config: token_url
.IP \[bu] 2
Env Var: RCLONE_ZOHO_TOKEN_URL
.IP \[bu] 2
Type: string
.IP \[bu] 2
Default: \[dq]\[dq]
.SS \-\-zoho\-encoding
.PP
This sets the encoding for the backend.
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
.SS v1.54.1 \- 2021\-03\-08
.PP
See commits (https://github.com/rclone/rclone/compare/v1.54.0...v1.54.1)
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
accounting: Fix \-\-bwlimit when up or down is off (Nick Craig\-Wood)
.IP \[bu] 2
docs
.RS 2
.IP \[bu] 2
Fix nesting of brackets and backticks in ftp docs (edwardxml)
.IP \[bu] 2
Fix broken link in sftp page (edwardxml)
.IP \[bu] 2
Fix typo in crypt.md (Romeo Kienzler)
.IP \[bu] 2
Changelog: Correct link to digitalis.io (Alex JOST)
.IP \[bu] 2
Replace #file\-caching with #vfs\-file\-caching (Miron Veryanskiy)
.IP \[bu] 2
Convert bogus example link to code (edwardxml)
.IP \[bu] 2
Remove dead link from rc.md (edwardxml)
.RE
.IP \[bu] 2
rc: Sync,copy,move: document createEmptySrcDirs parameter (Nick
Craig\-Wood)
.IP \[bu] 2
lsjson: Fix unterminated JSON in the presence of errors (Nick
Craig\-Wood)
.RE
.IP \[bu] 2
Mount
.RS 2
.IP \[bu] 2
Fix mount dropping on macOS by setting \-\-daemon\-timeout 10m (Nick
Craig\-Wood)
.RE
.IP \[bu] 2
VFS
.RS 2
.IP \[bu] 2
Document simultaneous usage with the same cache shouldn\[aq]t be used
(Nick Craig\-Wood)
.RE
.IP \[bu] 2
B2
.RS 2
.IP \[bu] 2
Automatically raise upload cutoff to avoid spurious error (Nick
Craig\-Wood)
.IP \[bu] 2
Fix failed to create file system with application key limited to a
prefix (Nick Craig\-Wood)
.RE
.IP \[bu] 2
Drive
.RS 2
.IP \[bu] 2
Refer to Shared Drives instead of Team Drives (Nick Craig\-Wood)
.RE
.IP \[bu] 2
Dropbox
.RS 2
.IP \[bu] 2
Add scopes to oauth request and optionally \[dq]members.read\[dq] (Nick
Craig\-Wood)
.RE
.IP \[bu] 2
S3
.RS 2
.IP \[bu] 2
Fix failed to create file system with folder level permissions policy
(Nick Craig\-Wood)
.IP \[bu] 2
Fix Wasabi HEAD requests returning stale data by using only 1 transport
(Nick Craig\-Wood)
.IP \[bu] 2
Fix shared_credentials_file auth (Dmitry Chepurovskiy)
.IP \[bu] 2
Add \-\-s3\-no\-head to reducing costs docs (Nick Craig\-Wood)
.RE
.IP \[bu] 2
Union
.RS 2
.IP \[bu] 2
Fix mkdir at root with remote:/ (Nick Craig\-Wood)
.RE
.IP \[bu] 2
Zoho
.RS 2
.IP \[bu] 2
Fix custom client id\[aq]s (buengese)
.RE
.SS v1.54.0 \- 2021\-02\-02
.PP
See commits (https://github.com/rclone/rclone/compare/v1.53.0...v1.54.0)
.IP \[bu] 2
New backends
.RS 2
.IP \[bu] 2
Compression remote (experimental) (buengese)
.IP \[bu] 2
Enterprise File Fabric (Nick Craig\-Wood)
.RE
.IP \[bu] 2
New Features
.RS 2
.IP \[bu] 2
Deglobalise the config (Nick Craig\-Wood)
.RS 2
.IP \[bu] 2
Global config now read from the context
.IP \[bu] 2
This will enable passing of global config via the rc
.IP \[bu] 2
This work was sponsored by Digitalis (https://digitalis.io/)
.RE
.IP \[bu] 2
Add \f[C]\-\-bwlimit\f[R] for upload and download (Nick Craig\-Wood)
.RS 2
.IP \[bu] 2
Obey bwlimit in http Transport for better limiting
.RE
.IP \[bu] 2
Enhance systemd integration (Hekmon)
.RS 2
.IP \[bu] 2
log level identification, manual activation with flag, automatic systemd
launch detection
.IP \[bu] 2
Don\[aq]t compile systemd log integration for non unix systems (Benjamin
Gustin)
.RE
.IP \[bu] 2
Add a \f[C]\-\-download\f[R] flag to md5sum/sha1sum/hashsum to force
rclone to download and hash files locally (lostheli)
.IP \[bu] 2
Add \f[C]\-\-progress\-terminal\-title\f[R] to print ETA to terminal
title (LaSombra)
.IP \[bu] 2
Make backend env vars show in help as the defaults for backend flags
(Nick Craig\-Wood)
.IP \[bu] 2
build
.RS 2
.IP \[bu] 2
Raise minimum go version to go1.12 (Nick Craig\-Wood)
.RE
.IP \[bu] 2
dedupe
.RS 2
.IP \[bu] 2
Add \f[C]\-\-by\-hash\f[R] to dedupe on content hash not file name (Nick
Craig\-Wood)
.IP \[bu] 2
Add \f[C]\-\-dedupe\-mode list\f[R] to just list dupes, changing nothing
(Nick Craig\-Wood)
.IP \[bu] 2
Add warning if used on a remote which can\[aq]t have duplicate names
(Nick Craig\-Wood)
.RE
.IP \[bu] 2
flags: Improve error message when reading environment vars (Nick
Craig\-Wood)
.IP \[bu] 2
fs
.RS 2
.IP \[bu] 2
Add Shutdown optional method for backends (Nick Craig\-Wood)
.IP \[bu] 2
When using \f[C]\-\-files\-from\f[R] check files concurrently (zhucan)
.IP \[bu] 2
Accumulate stats when using \f[C]\-\-dry\-run\f[R] (Ingo Weiss)
.IP \[bu] 2
Always show stats when using \f[C]\-\-dry\-run\f[R] or
\f[C]\-\-interactive\f[R] (Nick Craig\-Wood)
.IP \[bu] 2
Add support for flag \f[C]\-\-no\-console\f[R] on windows to hide the
console window (albertony)
.RE
.IP \[bu] 2
genautocomplete: Add support to output to stdout (Ingo)
.IP \[bu] 2
ncdu
.RS 2
.IP \[bu] 2
Highlight read errors instead of aborting (Claudio Bantaloukas)
.IP \[bu] 2
Add sort by average size in directory (Adam Pl\['a]nsk\['y])
.IP \[bu] 2
Add toggle option for average size in directory \- key \[aq]a\[aq]
(Adam Pl\['a]nsk\['y])
.IP \[bu] 2
Add empty folder flag into ncdu browser (Adam Pl\['a]nsk\['y])
.IP \[bu] 2
Add \f[C]!\f[R] (error) and \f[C].\f[R] (unreadable) file flags to go
with \f[C]e\f[R] (empty) (Nick Craig\-Wood)
.RE
.IP \[bu] 2
obscure: Make \f[C]rclone obscure \-\f[R] ignore newline at end of line
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
fs
.RS 2
.IP \[bu] 2
Fix parsing of ..
when joining remotes (Nick Craig\-Wood)
.RE
.IP \[bu] 2
log: Fix enabling systemd logging when using \f[C]\-\-log\-file\f[R]
(Nick Craig\-Wood)
.IP \[bu] 2
check
.RS 2
.IP \[bu] 2
Make the error count match up in the log message (Nick Craig\-Wood)
.RE
.IP \[bu] 2
move: Fix data loss when source and destination are the same object
(Nick Craig\-Wood)
.IP \[bu] 2
operations
.RS 2
.IP \[bu] 2
Fix \f[C]\-\-cutoff\-mode\f[R] hard not cutting off immediately (Nick
Craig\-Wood)
.IP \[bu] 2
Fix \f[C]\-\-immutable\f[R] error message (Nick Craig\-Wood)
.RE
.IP \[bu] 2
sync
.RS 2
.IP \[bu] 2
Fix \f[C]\-\-cutoff\-mode\f[R] soft & cautious so it doesn\[aq]t end the
transfer early (Nick Craig\-Wood)
.IP \[bu] 2
Fix \f[C]\-\-immutable\f[R] errors retrying many times (Nick
Craig\-Wood)
.RE
.RE
.IP \[bu] 2
Docs
.RS 2
.IP \[bu] 2
Many fixes and a rewrite of the filtering docs (edwardxml)
.IP \[bu] 2
Many spelling and grammar fixes (Josh Soref)
.IP \[bu] 2
Doc fixes for commands delete, purge, rmdir, rmdirs and mount
(albertony)
Mount
.RS 2
.IP \[bu] 2
Update systemd status with cache stats (Hekmon)
.IP \[bu] 2
Disable bazil/fuse based mount on macOS (Nick Craig\-Wood)
.RS 2
.IP \[bu] 2
Make \f[C]rclone mount\f[R] actually run \f[C]rclone cmount\f[R] under
macOS (Nick Craig\-Wood)
.RE
.IP \[bu] 2
Implement mknod to make NFS file creation work (Nick Craig\-Wood)
.IP \[bu] 2
Make sure we don\[aq]t call umount more than once (Nick Craig\-Wood)
.IP \[bu] 2
More user friendly mounting as network drive on windows (albertony)
.IP \[bu] 2
Detect if uid or gid are set in same option string: \-o uid=123,gid=456
(albertony)
.RE
.IP \[bu] 2
VFS
.RS 2
.IP \[bu] 2
Fix \[dq]file already exists\[dq] error for stale cache files (Nick
Craig\-Wood)
.IP \[bu] 2
Fix file leaks with \f[C]\-\-vfs\-cache\-mode\f[R] full and
\f[C]\-\-buffer\-size 0\f[R] (Nick Craig\-Wood)
.IP \[bu] 2
Fix invalid cache path on windows when using :backend: as remote
(albertony)
.RE
.IP \[bu] 2
Local
.RS 2
.IP \[bu] 2
Continue listing files/folders when a circular symlink is detected
(Manish Gupta)
.IP \[bu] 2
New flag \f[C]\-\-local\-zero\-size\-links\f[R] to fix sync on some
virtual filesystems (Riccardo Iaconelli)
.RE
.IP \[bu] 2
Azure Blob
.RS 2
.IP \[bu] 2
Add support for service principals (James Lim)
.IP \[bu] 2
Add support for managed identities (Brad Ackerman)
.IP \[bu] 2
Add examples for access tier (Bob Pusateri)
.IP \[bu] 2
Utilize the streaming capabilities from the SDK for multipart uploads
(Denis Neuling)
.IP \[bu] 2
Fix setting of mime types (Nick Craig\-Wood)
.IP \[bu] 2
Fix crash when listing outside a SAS URL\[aq]s root (Nick Craig\-Wood)
.IP \[bu] 2
Delete archive tier blobs before update if
\f[C]\-\-azureblob\-archive\-tier\-delete\f[R] (Nick Craig\-Wood)
.IP \[bu] 2
Fix crash on startup (Nick Craig\-Wood)
.IP \[bu] 2
Fix memory usage by upgrading the SDK to v0.13.0 and implementing a
TransferManager (Nick Craig\-Wood)
.IP \[bu] 2
Require go1.14+ to compile due to SDK changes (Nick Craig\-Wood)
.RE
.IP \[bu] 2
B2
.RS 2
.IP \[bu] 2
Make NewObject use less expensive API calls (Nick Craig\-Wood)
.RS 2
.IP \[bu] 2
This will improve \f[C]\-\-files\-from\f[R] and \f[C]restic serve\f[R]
in particular
.RE
.IP \[bu] 2
Fixed crash on an empty file name (lluuaapp)
.RE
.IP \[bu] 2
Box
.IP \[bu] 2
Chunker
.RS 2
.IP \[bu] 2
Skip long local hashing, hash in\-transit (fixes) (Ivan Andreev)
.IP \[bu] 2
Set Features ReadMimeType to false as Object.MimeType not supported
(Nick Craig\-Wood)
.IP \[bu] 2
Fix case\-insensitive NewObject, test metadata detection (Ivan Andreev)
.RE
.IP \[bu] 2
Drive
.RS 2
.IP \[bu] 2
Implement \f[C]rclone backend copyid\f[R] command for copying files by
ID (Nick Craig\-Wood)
.IP \[bu] 2
Added flag \f[C]\-\-drive\-stop\-on\-download\-limit\f[R] to stop
.IP \[bu] 2
Dropbox
.RS 2
.IP \[bu] 2
Add support for viewing shared files and folders (buengese)
.IP \[bu] 2
Enable short lived access tokens (Nick Craig\-Wood)
.IP \[bu] 2
Implement IDer on Objects so \f[C]rclone lsf\f[R] etc can read the IDs
(buengese)
.IP \[bu] 2
Set Features ReadMimeType to false as Object.MimeType not supported
(Nick Craig\-Wood)
.IP \[bu] 2
Make malformed_path errors from too long files not retriable (Nick
Craig\-Wood)
.IP \[bu] 2
Test file name length before upload to fix upload loop (Nick
Craig\-Wood)
.RE
.IP \[bu] 2
Fichier
.RS 2
.IP \[bu] 2
Set Features ReadMimeType to true as Object.MimeType is supported (Nick
Craig\-Wood)
.RE
.IP \[bu] 2
FTP
.RS 2
.IP \[bu] 2
Add \f[C]\-\-ftp\-disable\-msld\f[R] option to ignore MLSD for really
old servers (Nick Craig\-Wood)
.IP \[bu] 2
Make \f[C]\-\-tpslimit\f[R] apply (Nick Craig\-Wood)
.RE
.IP \[bu] 2
Google Cloud Storage
.RS 2
.IP \[bu] 2
Storage class object header support (Laurens Janssen)
.IP \[bu] 2
Fix anonymous client to use rclone\[aq]s HTTP client (Nick Craig\-Wood)
.IP \[bu] 2
Fix
\f[C]Entry doesn\[aq]t belong in directory \[dq]\[dq] (same as directory) \- ignoring\f[R]
(Nick Craig\-Wood)
.RE
.IP \[bu] 2
Googlephotos
.RS 2
.IP \[bu] 2
New flag \f[C]\-\-gphotos\-include\-archived\f[R] to show archived
photos as well (Nicolas Rueff)
.RE
.IP \[bu] 2
Jottacloud
.RS 2
.IP \[bu] 2
Don\[aq]t erroneously report support for writing mime types (buengese)
.IP \[bu] 2
Add support for Telia Cloud (Patrik Nordl\['e]n)
.RE
.IP \[bu] 2
Mailru
.RS 2
.IP \[bu] 2
Accept special folders eg camera\-upload (Ivan Andreev)
.IP \[bu] 2
Avoid prehashing of large local files (Ivan Andreev)
.IP \[bu] 2
Fix uploads after recent changes on server (Ivan Andreev)
.IP \[bu] 2
Fix range requests after June 2020 changes on server (Ivan Andreev)
.IP \[bu] 2
Fix invalid timestamp on corrupted files (fixes) (Ivan Andreev)
.IP \[bu] 2
Remove deprecated protocol quirks (Ivan Andreev)
.RE
.IP \[bu] 2
Memory
.RS 2
.IP \[bu] 2
Fix setting of mime types (Nick Craig\-Wood)
.RE
.IP \[bu] 2
Onedrive
.RS 2
.IP \[bu] 2
Add support for China region operated by 21vianet and other regional
suppliers (NyaMisty)
.IP \[bu] 2
Warn on gateway timeout errors (Nick Craig\-Wood)
.IP \[bu] 2
Fall back to normal copy if server\-side copy unavailable (Alex Chen)
.IP \[bu] 2
Fix server\-side copy completely disabled on OneDrive for Business
(Cnly)
.IP \[bu] 2
(business only) workaround to replace existing file on server\-side copy
(Alex Chen)
.IP \[bu] 2
Enhance link creation with expiry, scope, type and password (Nick
Craig\-Wood)
.IP \[bu] 2
Remove % and # from the set of encoded characters (Alex Chen)
.IP \[bu] 2
Support addressing site by server\-relative URL (kice)
.RE
.IP \[bu] 2
Opendrive
.IP \[bu] 2
S3
.RS 2
.IP \[bu] 2
Added \f[C]\-\-s3\-disable\-http2\f[R] to disable http/2 (Anagh Kumar
Baranwal)
.IP \[bu] 2
Complete SSE\-C implementation (Nick Craig\-Wood)
.RS 2
.IP \[bu] 2
Fix hashes on small files with AWS:KMS and SSE\-C (Nick Craig\-Wood)
.IP \[bu] 2
Add MD5 metadata to objects uploaded with SSE\-AWS/SSE\-C (Nick
Craig\-Wood)
.RE
.IP \[bu] 2
Add \f[C]\-\-s3\-no\-head\f[R] parameter to minimise transactions on
upload (Nick Craig\-Wood)
.IP \[bu] 2
Update docs with a Reducing Costs section (Nick Craig\-Wood)
.IP \[bu] 2
Add requester pays option (kelv)
.IP \[bu] 2
Fix copy multipart with v2 auth failing with
\[aq]SignatureDoesNotMatch\[aq] (Louis Koo)
.RE
.IP \[bu] 2
SFTP
.RS 2
.IP \[bu] 2
Implement Shutdown method (Nick Craig\-Wood)
.IP \[bu] 2
Implement keyboard interactive authentication (Nick Craig\-Wood)
.IP \[bu] 2
Make \f[C]\-\-tpslimit\f[R] apply (Nick Craig\-Wood)
.IP \[bu] 2
Implement \f[C]\-\-sftp\-use\-fstat\f[R] for unusual SFTP servers (Nick
Craig\-Wood)
.RE
.IP \[bu] 2
Sugarsync
.IP \[bu] 2
Swift
.RS 2
.IP \[bu] 2
Fix deletion of parts of Static Large Object (SLO) (Nguy\[u1EC5]n
H\[u1EEF]u Lu\[^a]n)
.IP \[bu] 2
Ensure partially uploaded large files are uploaded unless
\f[C]\-\-swift\-leave\-parts\-on\-error\f[R] (Nguy\[u1EC5]n H\[u1EEF]u
Lu\[^a]n)
.RE
.IP \[bu] 2
Tardigrade
.IP \[bu] 2
WebDAV
.RS 2
.IP \[bu] 2
Updated docs to show streaming to nextcloud is working (Durval Menezes)
.RE
.IP \[bu] 2
Yandex
.RS 2
.IP \[bu] 2
Set Features WriteMimeType to false as Yandex ignores mime types (Nick
Craig\-Wood)
.RE
.SS v1.53.4 \- 2021\-01\-20


for two reasons. Firstly because it is only checked every
!--vfs-cache-poll-interval!. Secondly because open files cannot be
evicted from the cache.
You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using !--vfs-cache-mode > off!.
This can potentially cause data corruption if you do. You can work
around this by giving each rclone its own cache hierarchy with
!--cache-dir!. You don't need to worry about this if the remotes in
use don't overlap.
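For example (mount points, remote name and cache paths here are made up), two simultaneous mounts of the same remote can each be given a private cache hierarchy with !--cache-dir!:

```shell
# Two rclone mounts over the same remote: give each instance its own
# VFS cache directory so they never share cache state.
rclone mount remote: /mnt/a --vfs-cache-mode writes --cache-dir ~/.cache/rclone-a &
rclone mount remote: /mnt/b --vfs-cache-mode writes --cache-dir ~/.cache/rclone-b &
```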
#### --vfs-cache-mode off

In this mode (the default) the cache will read directly from the remote and write