diff --git a/MANUAL.html b/MANUAL.html
index f95ca6dd8..b74934154 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -1,19 +1,24 @@
+Apr 13, 2019
+Rclone is a command line program to sync files and directories to and from:
@@ -34,6 +39,7 @@
curl https://rclone.org/install.sh | sudo bash
For beta installation, run:
curl https://rclone.org/install.sh | sudo bash -s beta
-Note that this script checks the version of rclone installed first and won't re-download if not needed.
+Note that this script checks the version of rclone installed first and won’t re-download if not needed.
Fetch and unpack
curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
@@ -132,9 +137,9 @@ go build
go get -u -v github.com/ncw/rclone
and this will build the binary in $GOPATH/bin
(~/go/bin/rclone
by default) after downloading the source to $GOPATH/src/github.com/ncw/rclone
(~/go/src/github.com/ncw/rclone
by default).
Installation with Ansible
-This can be done with Stefan Weichinger's ansible role.
+This can be done with Stefan Weichinger’s ansible role.
Instructions
-
+
git clone https://github.com/stefangweichinger/ansible-rclone.git
into your local roles-directory
- add the role to the hosts you want rclone installed to:
@@ -142,7 +147,7 @@ go build
roles:
- rclone
First, you'll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file. (See the --config
entry for how to find the config file and choose its location.)
First, you’ll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file. (See the --config
entry for how to find the config file and choose its location.)
The easiest way to make the config is to run rclone with the config option:
rclone config
See the following for detailed instructions for
@@ -162,6 +167,7 @@ go build
Rclone syncs a directory tree from one storage system to another.
Its syntax is like this
Syntax: [options] subcommand <parameters> <parameters...>
-Source and destination paths are specified by the name you gave the storage system in the config file then the sub path, eg "drive:myfolder" to look at "myfolder" in Google drive.
+Source and destination paths are specified by the name you gave the storage system in the config file then the sub path, eg “drive:myfolder” to look at “myfolder” in Google drive.
You can define as many storage paths as you like in the config file.
rclone uses a system of subcommands. For example
rclone ls remote:path # lists a remote
rclone copy /local/path remote:path # copies /local/path to the remote
rclone sync /local/path remote:path # syncs /local/path to the remote
rclone config
@@ -196,12 +202,12 @@ rclone sync /local/path remote:path # syncs /local/path to the remote
Copy files from source to dest, skipping already copied
Synopsis
-Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Doesn't delete files from the destination.
-Note that it is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents.
-If dest:path doesn't exist, it is created and the source:path contents go there.
+Copy the source to the destination. Doesn’t transfer unchanged files, testing by size and modification time or MD5SUM. Doesn’t delete files from the destination.
+Note that it is always the contents of the directory that is synced, not the directory so when source:path is a directory, it’s the contents of source:path that are copied, not the directory name and contents.
+If dest:path doesn’t exist, it is created and the source:path contents go there.
For example
-rclone copy source:sourcepath dest:destpath
Let's say there are two files in sourcepath
+Let’s say there are two files in sourcepath
sourcepath/one.txt sourcepath/two.txt
This copies them to
@@ -210,39 +216,42 @@ destpath/two.txt
Not to
destpath/sourcepath/one.txt
destpath/sourcepath/two.txt
-If you are familiar with rsync
, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination.
See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when copying a small number of files into a large destination can speed transfers up greatly.
+If you are familiar with rsync
, rclone always works as if you had written a trailing / - meaning “copy the contents of this directory”. This applies to all commands and whether you are talking about the source or destination.
See the –no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when copying a small number of files into a large destination can speed transfers up greatly.
For example, if you have many files in /path/to/src but only a few of them change every day, you can copy all the files which have changed recently very efficiently like this:
rclone copy --max-age 24h --no-traverse /path/to/src remote:
Note: Use the -P
/--progress
flag to view real-time transfer statistics
rclone copy source:path dest:path [flags]
-h, --help help for copy
+ --create-empty-src-dirs Create empty source dirs on destination after copy
+ -h, --help help for copy
Make source and dest identical, modifying destination only.
Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary.
+Sync the source to the destination, changing the destination only. Doesn’t transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary.
Important: Since this can cause data loss, test first with the --dry-run
flag to see exactly what would be copied and deleted.
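For example, a preview run with --dry-run might look like this (a sketch using the same placeholder paths as the examples above):
rclone sync --dry-run /local/path remote:path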
Note that files in the destination won't be deleted if there were any errors at any point.
-It is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy
command above if unsure.
If dest:path doesn't exist, it is created and the source:path contents go there.
+Note that files in the destination won’t be deleted if there were any errors at any point.
+It is always the contents of the directory that is synced, not the directory so when source:path is a directory, it’s the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy
command above if unsure.
If dest:path doesn’t exist, it is created and the source:path contents go there.
Note: Use the -P
/--progress
flag to view real-time transfer statistics
rclone sync source:path dest:path [flags]
-h, --help help for sync
+ --create-empty-src-dirs Create empty source dirs on destination after sync
+ -h, --help help for sync
Move files from source to dest.
Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server side directory move operation.
If no filters are in use and if possible this will server side move source:path
into dest:path
. After this source:path
will no longer exist.
Otherwise for each file in source:path
selected by the filters (if any) this will move it into dest:path
. If possible a server side move will be used, otherwise it will copy it (server side if possible) into dest:path
then delete the original (if no errors on copy) in source:path
.
If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.
-See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly.
-Important: Since this can cause data loss, test first with the --dry-run flag.
+If you want to delete empty source directories after move, use the –delete-empty-src-dirs flag.
+See the –no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly.
+Important: Since this can cause data loss, test first with the –dry-run flag.
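For example, a minimal sketch that moves everything and then removes the now-empty source directories (placeholder paths as above):
rclone move --delete-empty-src-dirs /local/path remote:path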
Note: Use the -P
/--progress
flag to view real-time transfer statistics.
rclone move source:path dest:path [flags]
--delete-empty-src-dirs Delete empty source dirs after move
+ --create-empty-src-dirs Create empty source dirs on destination after move
+ --delete-empty-src-dirs Delete empty source dirs after move
-h, --help help for move
rclone delete
Remove the contents of path.
@@ -255,7 +264,7 @@ destpath/sourcepath/two.txt
rclone --dry-run --min-size 100M delete remote:path
Then delete
rclone --min-size 100M delete remote:path
-That reads "delete everything with a minimum size of 100 MB", hence delete all files bigger than 100MBytes.
+That reads “delete everything with a minimum size of 100 MB”, hence delete all files bigger than 100MBytes.
rclone delete remote:path [flags]
-h, --help help for delete
@@ -267,26 +276,26 @@ rclone --dry-run --min-size 100M delete remote:path
-h, --help help for purge
Make the path if it doesn't already exist.
+Make the path if it doesn’t already exist.
Make the path if it doesn't already exist.
+Make the path if it doesn’t already exist.
rclone mkdir remote:path [flags]
-h, --help help for mkdir
Remove the path if empty.
Remove the path. Note that you can't remove a path with objects in it, use purge for that.
+Remove the path. Note that you can’t remove a path with objects in it, use purge for that.
rclone rmdir remote:path [flags]
-h, --help help for rmdir
Checks the files in the source and destination match.
Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don't match. It doesn't alter the source or destination.
-If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.
-If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
-If you supply the --one-way flag, it will only check that files in source match the files in destination, not the other way around. Meaning extra files in destination that are not in the source will not trigger an error.
+Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don’t match. It doesn’t alter the source or destination.
+If you supply the –size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.
+If you supply the –download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don’t support hashes or if you really want to check all the data.
+If you supply the –one-way flag, it will only check that files in source match the files in destination, not the other way around. Meaning extra files in destination that are not in the source will not trigger an error.
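For example, a quick size-only comparison of the two sides could look like this:
rclone check --size-only source:path dest:path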
rclone check source:path dest:path [flags]
--download Check by downloading rather than with hash.
@@ -312,9 +321,9 @@ rclone --dry-run --min-size 100M delete remote:path
lsjson
to list objects and directories in JSON format
ls
,lsl
,lsd
are designed to be human readable. lsf
is designed to be human and machine readable. lsjson
is designed to be machine readable.
Note that ls
and lsl
recurse by default - use "--max-depth 1" to stop the recursion.
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use "-R" to make them recurse.
Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
+Note that ls
and lsl
recurse by default - use “–max-depth 1” to stop the recursion.
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use “-R” to make them recurse.
Listing a non existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
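For example, to stop ls recursing, or to make lsd recurse (remote:path is the usual placeholder):
rclone ls --max-depth 1 remote:path
rclone lsd -R remote:path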
rclone ls remote:path [flags]
-h, --help help for ls
@@ -331,7 +340,7 @@ rclone --dry-run --min-size 100M delete remote:path
-1 2016-10-17 17:41:53 -1 1000files
-1 2017-01-03 14:40:54 -1 2500files
-1 2017-07-08 14:39:28 -1 4000files
-If you just want the directory names use "rclone lsf --dirs-only".
+If you just want the directory names use “rclone lsf –dirs-only”.
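For example, a sketch of that command with the usual placeholder path:
rclone lsf --dirs-only remote:path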
Any of the filtering options can be applied to this command.
There are several related list commands
lsjson
to list objects and directories in JSON format
ls
,lsl
,lsd
are designed to be human readable. lsf
is designed to be human and machine readable. lsjson
is designed to be machine readable.
Note that ls
and lsl
recurse by default - use "--max-depth 1" to stop the recursion.
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use "-R" to make them recurse.
Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
+Note that ls
and lsl
recurse by default - use “–max-depth 1” to stop the recursion.
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use “-R” to make them recurse.
Listing a non existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
rclone lsd remote:path [flags]
-h, --help help for lsd
@@ -369,9 +378,9 @@ rclone --dry-run --min-size 100M delete remote:path
lsjson
to list objects and directories in JSON format
ls
,lsl
,lsd
are designed to be human readable. lsf
is designed to be human and machine readable. lsjson
is designed to be machine readable.
Note that ls
and lsl
recurse by default - use "--max-depth 1" to stop the recursion.
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use "-R" to make them recurse.
Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
+Note that ls
and lsl
recurse by default - use “–max-depth 1” to stop the recursion.
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use “-R” to make them recurse.
Listing a non existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
rclone lsl remote:path [flags]
-h, --help help for lsl
@@ -406,7 +415,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone v1.41
- os/arch: linux/amd64
- go version: go1.10
-If you supply the --check flag, then it will do an online check to compare your version with the latest release and the latest beta.
+If you supply the –check flag, then it will do an online check to compare your version with the latest release and the latest beta.
$ rclone version --check
yours: 1.42.0.6
latest: 1.42 (released 2018-06-16)
@@ -515,13 +524,13 @@ Other: 8.241G
Note that not all the backends provide all the fields - they will be missing if they are not known for that backend. Where it is known that the value is unlimited the value will also be omitted.
-Use the --full flag to see the numbers written out in full, eg
+Use the –full flag to see the numbers written out in full, eg
Total: 18253611008
Used: 7993453766
Free: 1411001220
Trashed: 104857602
Other: 8849156022
-Use the --json flag for a computer readable output, eg
+Use the –json flag for a computer readable output, eg
{
"total": 18253611008,
"used": 7993453766,
@@ -558,7 +567,7 @@ Other: 8849156022
rclone cat remote:path/to/dir
Or like this to output any .txt files in dir or subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
-Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1.
+Use the –head flag to print characters only at the start, –tail for the end and –offset and –count to print a section in the middle. Note that if offset is negative it will count from the end, so –offset -1 –count 1 is equivalent to –tail 1.
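For example, to peek at only the start or end of a single file (remote:path/to/file is an illustrative path):
rclone cat --head 128 remote:path/to/file
rclone cat --tail 64 remote:path/to/file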
rclone cat remote:path [flags]
--count int Only print N characters. (default -1)
@@ -610,7 +619,7 @@ Other: 8849156022
Update password in an existing remote.
Update an existing remote's password. The password should be passed in in pairs of
Update an existing remote’s password. The password should be passed in in pairs of
For example to set password of a remote of name myremote you would do:
rclone config password myremote fieldname mypassword
rclone config password <name> [<key> <value>]+ [flags]
@@ -633,10 +642,10 @@ Other: 8849156022
Update options in an existing remote.
Update an existing remote's options. The options should be passed in in pairs of
Update an existing remote’s options. The options should be passed in in pairs of
For example to update the env_auth field of a remote of name myremote you would do:
rclone config update myremote swift env_auth true
-If the remote uses oauth the token will be updated, if you don't require this add an extra parameter thus:
+If the remote uses oauth the token will be updated, if you don’t require this add an extra parameter thus:
rclone config update myremote swift env_auth true config_refresh_token false
rclone config update <name> [<key> <value>]+ [flags]
This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.
+This doesn’t transfer unchanged files, testing by size and modification time or MD5SUM. It doesn’t delete files from the destination.
Note: Use the -P
/--progress
flag to view real-time transfer statistics
rclone copyto source:path dest:path [flags]
You can use it like this also, but that will involve downloading all the files in remote:path.
rclone cryptcheck remote:path encryptedremote:path
After it has run it will log the status of the encryptedremote:.
-If you supply the --one-way flag, it will only check that files in source match the files in destination, not the other way around. Meaning extra files in destination that are not in the source will not trigger an error.
+If you supply the –one-way flag, it will only check that files in source match the files in destination, not the other way around. Meaning extra files in destination that are not in the source will not trigger an error.
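For example, to check only that everything in remote:path is present on the encrypted remote:
rclone cryptcheck --one-way remote:path encryptedremote:path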
rclone cryptcheck remote:path cryptedremote:path [flags]
-h, --help help for cryptcheck
@@ -687,7 +696,7 @@ if src is directory
Cryptdecode returns unencrypted file names.
Synopsis
rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.
-If you supply the --reverse flag, it will return encrypted file names.
+If you supply the –reverse flag, it will return encrypted file names.
use it like this
rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
@@ -706,14 +715,14 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
rclone deletefile
Remove a single file from remote.
Synopsis
-Remove a single file from remote. Unlike delete
it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.
+Remove a single file from remote. Unlike delete
it cannot be used to remove a directory and it doesn’t obey include/exclude filters - if the specified file exists, it will always be removed.
rclone deletefile remote:path [flags]
Options
-h, --help help for deletefile
rclone genautocomplete
Output completion script for a given shell.
Synopsis
-Generates a shell completion script for rclone. Run with --help to list the supported shells.
+Generates a shell completion script for rclone. Run with –help to list the supported shells.
Options
-h, --help help for genautocomplete
rclone genautocomplete bash
@@ -769,7 +778,7 @@ Supported hashes are:
rclone link will create or retrieve a public link to the given file or folder.
rclone link remote:path/to/file
rclone link remote:path/to/folder/
-If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always be created with the least constraints – e.g. no expiry, no password protection, accessible without account.
+If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always be created with the least constraints – e.g. no expiry, no password protection, accessible without account.
rclone link remote:path [flags]
Options
-h, --help help for link
@@ -793,14 +802,16 @@ canole
diwogej7
ferejej3gux/
fubuwic
-Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:
+Use the –format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:
p - path
s - size
t - modification time
h - hash
-i - ID of object if known
-m - MimeType of object if known
-So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last.
+i - ID of object
+o - Original ID of underlying object
+m - MimeType of object if known
+e - encrypted name
+So if you wanted the path, size and modification time, you would use –format “pst”, or maybe –format “tsp” to put the path last.
Eg
$ rclone lsf --format "tsp" swift:bucket
2016-06-25 18:55:41;60295;bevajer5jef
@@ -808,7 +819,7 @@ m - MimeType of object if known
2016-06-25 18:55:43;94467;diwogej7
2018-04-26 08:50:45;0;ferejej3gux/
2016-06-25 18:55:40;37600;fubuwic
-If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type.
+If you specify “h” in the format you will get the MD5 hash by default, use the “–hash” flag to change which hash you want. Note that this can be returned as an empty string if it isn’t available on the object (and for directories), “ERROR” if there was an error reading it from the object and “UNSUPPORTED” if that object does not support that hash type.
For example to emulate the md5sum command you can use
rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
Eg
@@ -818,8 +829,8 @@
cd65ac234e6fea5925974a51cdd865cc canole
03b5341b4f234b9d984d03ad076bae91 diwogej7
8fd37c3810dd660778137ac3a66cc06d fubuwic
99713e14a4c4ff553acaf1930fad985b gixacuh7ku
-(Though "rclone md5sum ." is an easier way of typing this.)
-By default the separator is ";" this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy.
+(Though “rclone md5sum .” is an easier way of typing this.)
+By default the separator is “;” this can be changed with the –separator flag. Note that separators aren’t escaped in the path so putting it last is a good strategy.
Eg
$ rclone lsf --separator "," --format "tshp" swift:bucket
2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
@@ -833,7 +844,7 @@ cd65ac234e6fea5925974a51cdd865cc canole
test.log,22355
test.sh,449
"this file contains a comma, in the file name.txt",6
-Note that the --absolute parameter is useful for making lists of files to pass to an rclone copy with the --files-from flag.
+Note that the –absolute parameter is useful for making lists of files to pass to an rclone copy with the –files-from flag.
For example to find all the files modified within one day and copy those only (without traversing the whole directory structure):
rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
rclone copy --files-from new_files /path/to/local remote:path
@@ -847,9 +858,9 @@ rclone copy --files-from new_files /path/to/local remote:path
lsjson
to list objects and directories in JSON format
ls
,lsl
,lsd
are designed to be human readable. lsf
is designed to be human and machine readable. lsjson
is designed to be machine readable.
Note that ls
and lsl
recurse by default - use "--max-depth 1" to stop the recursion.
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use "-R" to make them recurse.
Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
+Note that ls
and lsl
recurse by default - use “–max-depth 1” to stop the recursion.
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use “-R” to make them recurse.
Listing a non existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
rclone lsf remote:path [flags]
--absolute Put a leading / in front of path names.
@@ -867,12 +878,14 @@ rclone copy --files-from new_files /path/to/local remote:path
List directories and objects in the path in JSON format.
The output is an array of Items, where each Item looks like this
-{ "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "ID": "y2djkhiujf83u33", "OrigID": "UYOJVTUW00Q1RzTDA", "IsDir" : false, "MimeType" : "application/octet-stream", "ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", "Path" : "full/path/goes/here/file.txt", "Size" : 6 }
-If --hash is not specified the Hashes property won't be emitted.
-If --no-modtime is specified then ModTime will be blank.
-If --encrypted is not specified the Encrypted won't be emitted.
-The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name.
-The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision that the remote can hold the times, so if times are accurate to the nearest millisecond (eg Google Drive) then 3 digits will always be shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav etc) no digits will be shown ("2017-05-31T16:15:57+01:00").
+{ “Hashes” : { “SHA-1” : “f572d396fae9206628714fb2ce00f72e94f2258f”, “MD5” : “b1946ac92492d2347c6235b4d2611184”, “DropboxHash” : “ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc” }, “ID”: “y2djkhiujf83u33”, “OrigID”: “UYOJVTUW00Q1RzTDA”, “IsDir” : false, “MimeType” : “application/octet-stream”, “ModTime” : “2017-05-31T16:15:57.034468261+01:00”, “Name” : “file.txt”, “Encrypted” : “v0qpsdq8anpci8n929v3uu9338”, “Path” : “full/path/goes/here/file.txt”, “Size” : 6 }
+If –hash is not specified the Hashes property won’t be emitted.
+If –no-modtime is specified then ModTime will be blank.
+If –encrypted is not specified the Encrypted won’t be emitted.
+If –dirs-only is not specified files in addition to directories are returned
+If –files-only is not specified directories in addition to the files will be returned.
+The Path field will only show folders below the remote path being listed. If “remote:path” contains the file “subfolder/file.txt”, the Path for “file.txt” will be “subfolder/file.txt”, not “remote:path/subfolder/file.txt”. When used without –recursive the Path will always be the same as Name.
+The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision that the remote can hold the times, so if times are accurate to the nearest millisecond (eg Google Drive) then 3 digits will always be shown (“2017-05-31T16:15:57.034+01:00”) whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav etc) no digits will be shown (“2017-05-31T16:15:57+01:00”).
The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one to a line.
Any of the filtering options can be applied to this command.
There are several related list commands
@@ -884,12 +897,14 @@ rclone copy --files-from new_files /path/to/local remote:path
lsjson
to list objects and directories in JSON format
ls
,lsl
,lsd
are designed to be human readable. lsf
is designed to be human and machine readable. lsjson
is designed to be machine readable.
Note that ls
and lsl
recurse by default - use "--max-depth 1" to stop the recursion.
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use "-R" to make them recurse.
Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
+Note that ls
and lsl
recurse by default - use “–max-depth 1” to stop the recursion.
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use “-R” to make them recurse.
Listing a non existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
rclone lsjson remote:path [flags]
-M, --encrypted Show the encrypted names.
+ --dirs-only Show only directories in the listing.
+ -M, --encrypted Show the encrypted names.
+ --files-only Show only files in the listing.
--hash Include hashes in the output (may take longer).
-h, --help help for lsjson
--no-modtime Don't read the modification time (can speed things up).
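For example, a machine-readable listing of just the files, skipping modification times (a sketch with the usual placeholder path):
rclone lsjson --files-only --no-modtime remote:path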
@@ -898,14 +913,14 @@ rclone copy --files-from new_files /path/to/local remote:path
rclone mount
Mount the remote as file system on a mountpoint.
Synopsis
-rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
+rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone’s cloud storage systems as a file system with FUSE.
First set up your remote using rclone config
. Check it works with rclone ls
etc.
Start the mount like this
rclone mount remote:path/to/files /path/to/local/mount
Or on Windows like this where X: is an unused drive letter
rclone mount remote:path/to/files X:
When the program ends, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount is automatically stopped.
-The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually with
+The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user’s responsibility to stop the mount manually with
# Linux
fusermount -u /path/to/local/mount
# OS X
@@ -917,28 +932,28 @@ umount /path/to/local/mount
Note that drives created as Administrator are not visible by other accounts (including the account that was elevated as Administrator). So if you start a Windows drive from an Administrative Command Prompt and then try to access the same drive from Explorer (which does not run as Administrator), you will not be able to see the new drive.
The easiest way around this is to start the drive from a normal command prompt. It is also possible to start a drive from the SYSTEM account (using the WinFsp.Launcher infrastructure) which creates drives accessible for everyone on the system or alternatively using the nssm service manager.
Limitations
-Without the use of "--vfs-cache-mode" this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without "--vfs-cache-mode writes" or "--vfs-cache-mode full". See the File Caching section for more info.
-The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, Hubic) won't work from the root - you will need to specify a bucket, or a path within the bucket. So swift:
won't work whereas swift:bucket
will as will swift:bucket/path
. None of these support the concept of directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
+Without the use of “–vfs-cache-mode” this can only write files sequentially, it can only seek when reading. This means that many applications won’t work with their files on an rclone mount without “–vfs-cache-mode writes” or “–vfs-cache-mode full”. See the File Caching section for more info.
+The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, Hubic) won’t work from the root - you will need to specify a bucket, or a path within the bucket. So swift:
won’t work whereas swift:bucket
will as will swift:bucket/path
. None of these support the concept of directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
Only supported on Linux, FreeBSD, OS X and Windows at the moment.
rclone mount vs rclone sync/copy
-File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the file caching for solutions to make mount more reliable.
+File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can’t use retries in the same way without making local copies of the uploads. Look at the file caching for solutions to make mount more reliable.
Attribute caching
-You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries.
-The default is "1s" which caches files just long enough to avoid too many callbacks to rclone from the kernel.
+You can use the flag –attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries.
+The default is “1s” which caches files just long enough to avoid too many callbacks to rclone from the kernel.
In theory 0s should be the correct value for filesystems which can change outside the control of the kernel. However this causes quite a few problems such as rclone using too much memory, rclone not serving files to samba and excessive time listing directories.
-The kernel can cache the info about a file for the time given by "--attr-timeout". You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With "--attr-timeout 1s" this is very unlikely but not impossible. The higher you set "--attr-timeout" the more likely it is. The default setting of "1s" is the lowest setting which mitigates the problems above.
-If you set it higher ('10s' or '1m' say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue above.
-If files don't change on the remote outside of the control of rclone then there is no chance of corruption.
+The kernel can cache the info about a file for the time given by “–attr-timeout”. You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With “–attr-timeout 1s” this is very unlikely but not impossible. The higher you set “–attr-timeout” the more likely it is. The default setting of “1s” is the lowest setting which mitigates the problems above.
+If you set it higher (‘10s’ or ‘1m’ say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue above.
+If files don’t change on the remote outside of the control of rclone then there is no chance of corruption.
This is the same as setting the attr_timeout option in mount.fuse.
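For example, trading a little consistency for fewer kernel callbacks might look like this (paths as in the mount example above):
rclone mount --attr-timeout 10s remote:path/to/files /path/to/local/mount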
Filters
Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.
systemd
When running rclone mount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount service specified as a requirement will see all files and folders immediately in this mode.
chunked reading
---vfs-read-chunk-size will enable reading the source objects in parts. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read at the cost of an increased number of requests.
-When --vfs-read-chunk-size-limit is also specified and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled for each chunk read, until the specified value is reached. A value of -1 will disable the limit and the chunk size will grow indefinitely.
-With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
-Chunked reading will only work with --vfs-cache-mode < full, as the file will always be copied to the vfs cache before opening with --vfs-cache-mode full.
+–vfs-read-chunk-size will enable reading the source objects in parts. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read at the cost of an increased number of requests.
+When –vfs-read-chunk-size-limit is also specified and greater than –vfs-read-chunk-size, the chunk size for each open file will get doubled for each chunk read, until the specified value is reached. A value of -1 will disable the limit and the chunk size will grow indefinitely.
+With –vfs-read-chunk-size 100M and –vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When –vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
+Chunked reading will only work with –vfs-cache-mode < full, as the file will always be copied to the vfs cache before opening with –vfs-cache-mode full.
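A sketch combining the two flags with the values discussed above:
rclone mount --vfs-read-chunk-size 100M --vfs-read-chunk-size-limit 500M remote:path/to/files /path/to/local/mount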
Directory Cache
Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.
Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
@@ -949,11 +964,11 @@ umount /path/to/local/mount
rclone rc vfs/forget file=path/to/file dir=path/to/dir
The --buffer-size
flag determines the amount of memory, that will be used to buffer data in advance.
Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.
+Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.
This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files
.
These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
-You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
+You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
@@ -962,39 +977,39 @@ umount /path/to/local/mount
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.
-If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
-Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won’t get written back to the remote. However they will still be in the on disk cache.
+If using –vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every –vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
+In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible
This is very similar to "off" except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
+This is very similar to “off” except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
These operations are not possible
In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.
This mode should support all normal file system operations.
-If an upload fails it will be retried up to --low-level-retries times.
-If an upload fails it will be retried up to –low-level-retries times.
+In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
This mode should support all normal file system operations.
-If an upload or download fails it will be retried up to --low-level-retries times.
+If an upload or download fails it will be retried up to –low-level-retries times.
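For example, a mount with write caching enabled (one of the modes described above) might be started like this:
rclone mount --vfs-cache-mode writes remote:path/to/files /path/to/local/mount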
rclone mount remote:path /path/to/mountpoint [flags]
--allow-non-empty Allow mounting over a non-empty directory.
@@ -1042,8 +1057,8 @@ umount /path/to/local/mount
if src is directory
move it to dst, overwriting existing files if they exist
see move command for full details
-This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.
-Important: Since this can cause data loss, test first with the --dry-run flag.
+This doesn’t transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.
+Important: Since this can cause data loss, test first with the –dry-run flag.
Note: Use the -P
/--progress
flag to view real-time transfer statistics.
rclone moveto source:path dest:path [flags]
Explore a remote with a text based user interface.
This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".
+This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - “What is using all my disk space?”.
To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.
-Here are the keys - press '?' to toggle the help on and off
+Here are the keys - press ‘?’ to toggle the help on and off
↑,↓ or k,j to Move
→,l to enter
←,h to return
@@ -1066,7 +1081,7 @@ if src is directory
? to toggle help on and off
q/ESC/c-C to quit
This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment but is useful as it stands.
-Note that it might take some time to delete big files/folders. The UI won't respond in the meantime since the deletion is done synchronously.
+Note that it might take some time to delete big files/folders. The UI won’t respond in the meantime since the deletion is done synchronously.
rclone ncdu remote:path [flags]
-h, --help help for ncdu
@@ -1080,13 +1095,13 @@ if src is directory
Run a command against a running rclone.
This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port"
-A username and password can be passed in with --user and --pass.
-Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, --user, --pass.
+This runs a command against a running rclone. Use the –url flag to specify a non-default URL to connect on. This can be either a “:port” which is taken to mean “http://localhost:port” or a “host:port” which is taken to mean “http://host:port”
+A username and password can be passed in with –user and –pass.
+Note that –rc-addr, –rc-user, –rc-pass will be read also for –url, –user, –pass.
Arguments should be passed in as parameter=value.
The result will be returned as a JSON object by default.
-The --json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values.
-Use "rclone rc" to see a list of all possible commands.
+The –json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values.
+Use “rclone rc” to see a list of all possible commands.
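For example, assuming the remote control is listening on its usual default port 5572, the directory cache flush shown earlier could be invoked explicitly like this:
rclone rc --url :5572 vfs/forget file=path/to/file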
rclone rc commands parameter [flags]
-h, --help help for rc
@@ -1103,7 +1118,7 @@ if src is directory
ffmpeg - | rclone rcat remote:path/to/file
If the remote file already exists, it will be overwritten.
rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff
. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote, please see there. Generally speaking, setting this cutoff too high will decrease your performance.
Note that the upload can also not be retried because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching locally and then rclone move
it to the destination.
Note that the upload can also not be retried because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you’re better off caching locally and then rclone move
it to the destination.
rclone rcat remote:path [flags]
-h, --help help for rcat
@@ -1121,7 +1136,7 @@ ffmpeg - | rclone rcat remote:path/to/file
Remove empty directories under the path.
This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in.
-If you supply the --leave-root flag, it will not remove the root directory.
+If you supply the –leave-root flag, it will not remove the root directory.
This is useful for tidying up remotes that rclone has left a lot of empty directories in.
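For example, to prune empty directories while keeping the root itself:
rclone rmdirs --leave-root remote:path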
rclone rmdirs remote:path [flags]
rclone serve dlna is a DLNA media server for media stored in a rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.
Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.
Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs.
+Use –addr to specify which IP address and port the server should listen on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs.
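For example, a sketch that serves the remote on port 8080 on all interfaces:
rclone serve dlna --addr :8080 remote:path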
Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.
Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
The --buffer-size
flag determines the amount of memory, that will be used to buffer data in advance.
Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.
+Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.
This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files
.
These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
-You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
+You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
@@ -1166,39 +1181,39 @@ ffmpeg - | rclone rcat remote:path/to/file
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.
-If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
-Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won’t get written back to the remote. However they will still be in the on disk cache.
+If using –vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every –vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
+In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible
This is very similar to "off" except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
+This is very similar to “off” except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
These operations are not possible
In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.
This mode should support all normal file system operations.
-If an upload fails it will be retried up to --low-level-retries times.
-If an upload fails it will be retried up to –low-level-retries times.
+In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
This mode should support all normal file system operations.
-If an upload or download fails it will be retried up to --low-level-retries times.
+If an upload or download fails it will be retried up to –low-level-retries times.
rclone serve dlna remote:path [flags]
--addr string ip:port or :port to bind the DLNA http server to. (default ":7879")
@@ -1225,11 +1240,11 @@ ffmpeg - | rclone rcat remote:path/to/file
rclone serve ftp implements a basic ftp server to serve the remote over FTP protocol. This can be viewed with a ftp client or you can make a remote of type ftp to read and write it.
Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
By default this will serve files without needing a login.
You can set a single username and password with the --user and --pass flags.
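As a rough example (the username and password here are placeholders, not recommendations):
rclone serve ftp remote:path --addr :2121 --user sync --pass secret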
Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.
Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.
Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.
+Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.
This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
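To illustrate the arithmetic (the numbers are purely an example): a mount started with the flag below and 10 files open at once could use up to 10 × 32M = 320M for read-ahead buffers.
rclone mount remote:path /mnt/remote --buffer-size 32M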
These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
-You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
+You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
@@ -1253,39 +1268,39 @@ ffmpeg - | rclone rcat remote:path/to/file
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.
If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
+In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible
This is very similar to "off" except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
+This is very similar to “off” except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
These operations are not possible
In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.
This mode should support all normal file system operations.
If an upload fails it will be retried up to --low-level-retries times.
+In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age.
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to --low-level-retries times.
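For example, to serve with writes buffered to disk as described above (a sketch; any of the cache modes can be substituted):
rclone serve ftp remote:path --vfs-cache-mode writes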
rclone serve ftp remote:path [flags]
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121")
@@ -1314,27 +1329,27 @@ ffmpeg - | rclone rcat remote:path/to/file
Serve the remote over HTTP.
rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.
You can use the filter flags (eg --include, --exclude) to control what is served.
The server will log errors. Use -v to see access logs.
--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
To create an htpasswd file:
touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser
The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
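A hedged example of serving over https, assuming you already have a certificate and key file in the current directory:
rclone serve http remote:path --cert server.crt --key server.key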
Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.
Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.
Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.
+Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.
This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
-You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
+You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
@@ -1358,39 +1373,39 @@ htpasswd -B htpasswd anotherUser
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.
If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
+In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible
This is very similar to "off" except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
+This is very similar to “off” except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
These operations are not possible
In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.
This mode should support all normal file system operations.
If an upload fails it will be retried up to --low-level-retries times.
+In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age.
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to --low-level-retries times.
rclone serve http remote:path [flags]
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
@@ -1423,24 +1438,24 @@ htpasswd -B htpasswd anotherUser
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
Serve the remote for restic's REST API.
+Serve the remote for restic’s REST API.
rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.
+rclone serve restic implements restic’s REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.
Restic is a command line program for doing backups.
The server will log errors. Use -v to see access logs.
--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
First set up a remote for your chosen cloud provider.
-Once you have set up the remote, check it is working with, for example "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions.
+Once you have set up the remote, check it is working with, for example “rclone lsd remote:”. You may have called the remote something other than “remote:” - just substitute whatever you called it in the following instructions.
Now start the rclone restic server
rclone serve restic -v remote:backup
-Where you can replace "backup" in the above by whatever path in the remote you wish to use.
-By default this will serve on "localhost:8080" you can change this with use of the "--addr" flag.
+Where you can replace “backup” in the above by whatever path in the remote you wish to use.
+By default this will serve on “localhost:8080”; you can change this with the “--addr” flag.
You might wish to start this server on boot.
Now you can follow the restic instructions on setting up restic.
Note that you will need restic 0.8.2 or later to interoperate with rclone.
-For the example above you will want to use "http://localhost:8080/" as the URL for the REST server.
+For the example above you will want to use “http://localhost:8080/” as the URL for the REST server.
For example:
$ export RESTIC_REPOSITORY=rest:http://localhost:8080/
$ export RESTIC_PASSWORD=yourpassword
@@ -1463,23 +1478,23 @@ snapshot 45c8fdd8 saved
$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
# backup user2 stuff
Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
To create an htpasswd file:
touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser
The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
rclone serve restic remote:path [flags]
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
@@ -1501,28 +1516,28 @@ htpasswd -B htpasswd anotherUser
rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client or you can make a remote of type webdav to read and write it.
This controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object.
-If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1".
-Use "rclone hashsum" to see the full list.
+If this flag is set to “auto” then rclone will choose the first supported hash on the backend or you can use a named hash such as “MD5” or “SHA-1”.
+Use “rclone hashsum” to see the full list.
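For example, to base ETags on MD5 (a sketch; the flag name --etag-hash is an assumption here, and the backend must support that hash):
rclone serve webdav remote:path --etag-hash MD5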
Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
To create an htpasswd file:
touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser
The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.
Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.
Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.
+Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.
This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
-You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
+You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
@@ -1546,39 +1561,39 @@ htpasswd -B htpasswd anotherUser
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.
If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
+In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible
This is very similar to "off" except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
+This is very similar to “off” except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
These operations are not possible
In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.
This mode should support all normal file system operations.
If an upload fails it will be retried up to --low-level-retries times.
+In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age.
This mode should support all normal file system operations.
If an upload or download fails it will be retried up to --low-level-retries times.
rclone serve webdav remote:path [flags]
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
@@ -1649,8 +1664,8 @@ htpasswd -B htpasswd anotherUser
└── file5
1 directories, 5 files
You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list.
-The tree command has many options for controlling the listing which are compatible with the tree command. Note that not all of them have short options as they conflict with rclone's short options.
+The tree command has many options for controlling the listing which are compatible with the tree command. Note that not all of them have short options as they conflict with rclone’s short options.
rclone tree remote:path [flags]
-a, --all All files are listed (list . files too).
@@ -1675,7 +1690,7 @@ htpasswd -B htpasswd anotherUser
-U, --unsorted Leave files unsorted.
--version Sort files alphanumerically by version.
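For example, to include hidden files in the listing (a simple illustration using the -a flag above):
rclone tree -a remote:path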
rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory if it isn't.
For example, suppose you have a remote with a file in it called test.jpg, then you could copy just that file like this
rclone copy remote:test.jpg /tmp/download
The file test.jpg will be placed inside /tmp/download.
This refers to the local file system.
On Windows only \ may be used instead of / in local paths only, non local paths must use /.
These paths needn't start with a leading / - if they don't then they will be relative to the current directory.
This refers to a directory path/to/dir on remote: as defined in the config file (configured with rclone config).
On most backends this refers to the same directory as remote:path/to/dir and that format should be preferred. On a very small number of remotes (FTP, SFTP, Dropbox for business) this will refer to a different directory. On these, paths without a leading / will refer to your "home" directory and paths with a leading / will refer to the root.
This is an advanced form for creating remotes on the fly. backend should be the name or prefix of a backend (the type in the config file) and all the configuration for the backend should be provided on the command line (or in environment variables).
-Eg
+Here are some examples:
rclone lsd --http-url https://pub.rclone.org :http:
-Which lists all the directories in pub.rclone.org.
+To list all the directories in the root of https://pub.rclone.org/.
rclone lsf --http-url https://example.com :http:path/to/dir
+To list files and directories in https://example.com/path/to/dir/
rclone copy --http-url https://example.com :http:path/to/dir /tmp/dir
+To copy files and directories in https://example.com/path/to/dir to /tmp/dir.
rclone copy --sftp-host example.com :sftp:path/to/dir /tmp/dir
+To copy files and directories from example.com in the relative directory path/to/dir to /tmp/dir using sftp.
When you are typing commands to your computer you are using something called the command line shell. This interprets various characters in an OS specific way.
Here are some gotchas which may help users unfamiliar with the shell rules
@@ -1707,11 +1728,11 @@ htpasswd -B htpasswd anotherUser
rclone copy 'Important files?' remote:backup
If you want to send a ' you will need to use ", eg
rclone copy "O'Reilly Reviews" remote:backup
-The rules for quoting metacharacters are complicated and if you want the full details you'll have to consult the manual page for your shell.
+The rules for quoting metacharacters are complicated and if you want the full details you’ll have to consult the manual page for your shell.
If your names have spaces in you need to put them in ", eg
rclone copy "E:\folder name\folder name\folder name" remote:backup
-If you are using the root directory on its own then don't quote it (see #464 for why), eg
+If you are using the root directory on its own then don’t quote it (see #464 for why), eg
rclone copy E:\ remote:backup
Copying files or directories with : in the names
rclone uses : to mark a remote name. This is, however, a valid filename component in non-Windows OSes. The remote name parser will only search for a : up to the first / so if you need to act on a file or directory like this then use the full path starting with a /, or use ./ as a current directory prefix.
Most remotes (but not all - see the overview) support server side copy.
-This means if you want to copy one folder to another then rclone won't download all the files and re-upload them; it will instruct the server to copy them in place.
+This means if you want to copy one folder to another then rclone won’t download all the files and re-upload them; it will instruct the server to copy them in place.
Eg
rclone copy s3:oldbucket s3:newbucket
Will copy the contents of oldbucket to newbucket without downloading and re-uploading.
Remotes which don't support server side copy will download and re-upload in this case.
Server side copies are used with sync and copy and will be identified in the log when using the -v flag. The move command may also use them if the remote doesn't support server side move directly. This is done by issuing a server side copy then a delete which is much quicker than a download and re-upload.
Server side copies will only be attempted if the remote names are the same.
This can be used when scripting to make aged backups efficiently, eg
rclone sync remote:current-backup remote:previous-backup
@@ -1734,64 +1755,64 @@ rclone sync /path/to/files remote:current-backup
Rclone has a number of options to control its behaviour.
Options that take parameters can have the values passed in two ways, --option=value or --option value. However boolean (true/false) options behave slightly differently to the other options in that --boolean sets the option to true and the absence of the flag sets it to false. It is also possible to specify --boolean=false or --boolean=true. Note that --boolean false is not valid - this is parsed as --boolean and the false is parsed as an extra command line argument for rclone.
Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as “300ms”, “-1.5h” or “2h45m”. Valid time units are “ns”, “us” (or “µs”), “ms”, “s”, “m”, “h”.
Options which use SIZE use kByte by default. However, a suffix of b for bytes, k for kBytes, M for MBytes, G for GBytes, T for TBytes and P for PBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
When using sync, copy or move any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.
If --suffix is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.
The remote in use must support server side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory.
For example
rclone sync /path/to/local remote:current --backup-dir remote:old
will sync /path/to/local to remote:current, but any files which would have been updated or deleted will be stored in remote:old.
If running rclone from a script you might want to use today's date as the directory name passed to --backup-dir to store the old files, or you might want to pass --suffix with today's date.
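For example, a nightly script might pass a dated directory like this (a sketch; the paths and date format are only placeholders):
rclone sync /path/to/local remote:current --backup-dir remote:old/$(date +%Y-%m-%d)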
Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn't resolve or resolves to more than one IP address it will give an error.
+This option controls the bandwidth limit. Limits can be specified in two ways: As a single limit, or as a timetable.
Single limits last for the duration of the session. To use a single limit, specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. The default is 0
which means to not limit bandwidth.
For example, to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M
It is also possible to specify a "timetable" of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as "WEEKDAY-HH:MM,BANDWIDTH WEEKDAY-HH:MM,BANDWIDTH..." where: WEEKDAY is optional element. It could be writen as whole world or only using 3 first characters. HH:MM is an hour from 00:00 to 23:59.
+It is also possible to specify a “timetable” of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as “WEEKDAY-HH:MM,BANDWIDTH WEEKDAY-HH:MM,BANDWIDTH…” where: WEEKDAY is an optional element. It can be written as the whole word or using only its first 3 characters. HH:MM is an hour from 00:00 to 23:59.
An example of a typical timetable to avoid link saturation during daytime working hours could be:
--bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"
In this example, the transfer bandwidth will be every day set to 512kBytes/sec at 8am. At noon, it will raise to 10Mbytes/s, and drop back to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.
An example of timetable with WEEKDAY could be:
--bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"
It mean that, the transfer bandwidh will be set to 512kBytes/sec on Monday. It will raise to 10Mbytes/s before the end of Friday. At 10:00 on Sunday it will be set to 1Mbyte/s. From 20:00 at Sunday will be unlimited.
+This means that the transfer bandwidth will be set to 512kBytes/sec on Monday. It will rise to 10MBytes/s before the end of Friday. At 10:00 on Saturday it will be set to 1MByte/s. From 20:00 on Sunday it will be unlimited.
Timeslots without weekday are extended to whole week. So this one example:
--bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"
Is equal to this:
--bwlimit "Mon-00:00,512Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"
Bandwidth limits only apply to the data transfer. They don't apply to the bandwidth of the directory listings etc.
Note that the units are Bytes/s, not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example, let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M parameter for rclone.
On Unix systems (Linux, MacOS, …) the bandwidth limiter can be toggled by sending a SIGUSR2 signal to rclone. This allows you to remove the limit from a long running rclone transfer and to restore it back to the value specified with --bwlimit quickly when needed. Assuming there is only one rclone instance running, you can toggle the limiter like this:
kill -SIGUSR2 $(pidof rclone)
If you configure rclone with a remote control then you can change the bwlimit dynamically:
rclone rc core/bwlimit rate=1M
Use this sized buffer to speed up file transfers. Each --transfer will use this much memory for buffering.
When using mount or cmount each open file descriptor will use this much memory for buffering. See the mount documentation for more details.
Set to 0 to disable the buffering for the minimum memory usage.
Note that the memory allocation of the buffers is influenced by the --use-mmap flag.
+The number of checkers to run in parallel. Checkers do the equality checking of files during a sync. For some storage systems (eg S3, Swift, Dropbox) this can take a significant amount of time so they are run in parallel.
The default is to run 8 checkers in parallel.
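For example, to double the parallelism of the checking phase (an illustrative value, not a recommendation):
rclone sync remote:src remote:dst --checkers 16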
-Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal.
-This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size.
+This is useful when the remote doesn’t support setting modified time and a more accurate sync is desired than just checking the file size.
This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the overview section.
Eg rclone --checksum sync s3:/bucket swift:/bucket
would run much quicker than without the --checksum
flag.
When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally.
+Specify the location of the rclone config file.
Normally the config file is in your home directory as a file called .config/rclone/rclone.conf (or .rclone.conf if created with an older version). If $XDG_CONFIG_HOME is set it will be at $XDG_CONFIG_HOME/rclone/rclone.conf.
If you run rclone config file you will see where the default location is for you.
Use this flag to override the config location, eg rclone --config=".myconfig" .config.
Set the connection timeout. This should be in go time format which looks like 5s for 5 seconds, 10m for 10 minutes, or 3h30m.
The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is 1m by default.
Mode to run dedupe command in. One of interactive, skip, first, newest, oldest, rename. The default is interactive. See the dedupe command for more information as to what these options mean.
This disables a comma separated list of optional features. For example to disable server side move and server side copy use:
--disable move,copy
The features can be put in in any case.
@@ -1799,140 +1820,148 @@ rclone sync /path/to/files remote:current-backup--disable help
See the overview features and optional features to get an idea of which feature does what.
This flag can be useful for debugging and in exceptional circumstances (eg Google Drive limiting the total volume of Server Side Copies to 100GB/day).
Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. Useful when setting up the sync command which deletes files in the destination.
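For instance, to preview what a destructive sync would do (paths are placeholders):
rclone sync /path/to/local remote:backup --dry-run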
Normally rclone will check that the checksums of transferred files match, and give an error "corrupted on transfer" if they don't.
-You can use this option to skip that check. You should only use it if you have had the "corrupted on transfer" error message and you are sure you might want to transfer potentially corrupted data.
-Normally rclone will check that the checksums of transferred files match, and give an error “corrupted on transfer” if they don’t.
+You can use this option to skip that check. You should only use it if you have had the “corrupted on transfer” error message and you are sure you might want to transfer potentially corrupted data.
+Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.
-While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.
-While this isn’t a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.
+Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If --checksum is set then it only checks the checksum.
It will also cause rclone to skip verifying the sizes are the same after transfer.
This can be useful for transferring files to and from OneDrive which occasionally misreports the size of image files (see #399 for more info).
-Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination.
Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using --checksum).
Treat source and destination files as immutable and disallow modification.
With this option set, files will be created and deleted as requested, but existing files will never be updated. If an existing file does not match between the source and destination, rclone will give the error Source and destination exist but do not match: immutable file modified.
Note that only commands which transfer files (e.g. sync, copy, move) are affected by this behavior, and only modification is disallowed. Files may still be deleted explicitly (e.g. delete, purge) or implicitly (e.g. sync, move). Use copy --immutable if it is desired to avoid deletion as well as modification.
This can be useful as an additional layer of protection for immutable or append-only data sets (notably backup archives), where modification implies corruption and should not be propagated.
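For example, an append-only copy of backup archives might look like this (the paths are placeholders):
rclone copy --immutable /path/to/backups remote:archive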
During rmdirs it will not remove the root directory, even if it's empty.
Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. See the Logging section for more info.
Note that if you are using the logrotate program to manage rclone's logs, then you should use the copytruncate option as rclone doesn't have a signal to rotate logs.
Comma separated list of log format options. date, time, microseconds, longfile, shortfile, UTC. The default is "date,time".
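For example, to keep a UTC-timestamped log of a sync (a sketch; the log path is an assumption):
rclone sync /path/to/local remote:backup -v --log-file=/var/log/rclone.log --log-format date,time,UTC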
This sets the log level for rclone. The default log level is NOTICE.
DEBUG is equivalent to -vv. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.
INFO is equivalent to -v. It outputs information about each transfer and prints stats once a minute by default.
NOTICE is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events.
ERROR is equivalent to -q. It only outputs error messages.
This controls the number of low level retries rclone does.
A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the -v flag.
This shouldn't need to be changed from the default in normal operations. However, if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the --retries flag) quicker.
Disable low level retries with --low-level-retries 1.
This is the maximum allowable backlog of files in a sync/copy/move queued for being checked or transferred.
This can be set arbitrarily large. It will only use memory when the queue is in use. Note that it will use in the order of N kB of memory when the backlog is in use.
Setting this large allows rclone to calculate how many files are pending more accurately and give a more accurate estimated finish time.
Setting this small will make rclone more synchronous to the listings of the remote which may be desirable.
-This tells rclone not to delete more than N files. If that limit is exceeded then a fatal error will be generated and rclone will stop the operation in progress.
-This modifies the recursion depth for all the commands except purge.
So if you do rclone --max-depth 1 ls remote:path you will see only the files in the top level directory. Using --max-depth 2 means you will see all the files in the first two directory levels and so on.
For historical reasons the lsd command defaults to using a --max-depth of 1 - you can override this with the command line flag.
You can use this command to disable recursion (with --max-depth 1).
Note that if you use this with sync and --delete-excluded the files not recursed through are considered excluded and will be deleted on the destination. Test first with --dry-run if you are not sure what will happen.
Rclone will stop transferring when it has reached the size specified. Defaults to off.
When the limit is reached all transfers will stop immediately.
Rclone will exit with exit code 8 if the transfer limit is reached.
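For example, to cap a run at roughly 10 GBytes of transferred data (a sketch; the flag name --max-transfer is an assumption, as the flag heading is not shown above):
rclone copy /path/to/local remote:backup --max-transfer 10G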
When checking whether a file has been modified, this is the maximum allowed time difference that a file can have and still be considered equivalent.
The default is 1ns unless this is overridden by a remote. For example OS X only stores modification times to the nearest second so if you are reading and writing to an OS X filing system this will be 1s by default.
This command line flag allows you to override that computed default.
Don't set Accept-Encoding: gzip. This means that rclone won't ask the server for compressed files automatically. Useful if you've set the server to return files with Content-Encoding: gzip but you uploaded compressed files.
There is no need to set this in normal operation, and doing so will decrease the network transfer efficiency of rclone.
-When using this flag, rclone won't update modification times of remote files if they are incorrect as it would normally.
+The --no-traverse flag controls whether the destination file system is traversed when using the copy or move commands. --no-traverse is not compatible with sync and will be ignored if you supply it with sync.
+If you are only copying a small number of files (or are filtering most of the files) and/or have a large number of files on the destination then --no-traverse will stop rclone listing the destination and save time.
+However, if you are copying a large number of files, especially if you are doing a copy where lots of the files under consideration haven’t changed and won’t need copying then you shouldn’t use --no-traverse.
See rclone copy for an example of how to use it.
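A typical use is copying a handful of new files into a large destination (paths are placeholders):
rclone copy --no-traverse /path/to/new-files remote:big-destination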
+When using this flag, rclone won’t update modification times of remote files if they are incorrect as it would normally.
This can be used if the remote is being synced with another tool also (eg the Google Drive client).
-This flag makes rclone update the stats in a static block in the terminal providing a realtime overview of the transfer.
Any log messages will scroll above the static block. Log messages will push the static block down to the bottom of the terminal where it will stay.
Normally this is updated every 500mS but this period can be overridden with the --stats flag.
This can be used with the --stats-one-line flag for a simpler display.
Note: On Windows until this bug is fixed all non-ASCII characters will be replaced with . when --progress is in use.
Normally rclone outputs stats and a completion message. If you set this flag it will make as little output as possible.
Retry the entire sync if it fails this many times (default 3).
-Some remotes can be unreliable and a few retries help pick up the files which didn't get transferred because of errors.
+Some remotes can be unreliable and a few retries help pick up the files which didn’t get transferred because of errors.
Disable retries with --retries 1.
This sets the interval between each retry specified by --retries. The default is 0. Use 0 to disable.
-Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the size.
-This can be useful transferring files from Dropbox which have been modified by the desktop sync client which doesn't set checksums of modification times in the same way as rclone.
This can be useful when transferring files from Dropbox which have been modified by the desktop sync client which doesn’t set checksums or modification times in the same way as rclone.
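For example (paths illustrative):
rclone sync ~/Dropbox remote:dropbox-backup --size-only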
+Commands which transfer data (sync
, copy
, copyto
, move
, moveto
) will print data transfer stats at regular intervals to show their progress.
This sets the interval.
The default is 1m
. Use 0 to disable.
If you set the stats interval then all commands can show stats. This can be useful when running other commands, check
or mount
for example.
Stats are logged at INFO
level by default which means they won't show at default log level NOTICE
. Use --stats-log-level NOTICE
or -v
to make them show. See the Logging section for more info on log levels.
Stats are logged at INFO
level by default which means they won’t show at default log level NOTICE
. Use --stats-log-level NOTICE
or -v
to make them show. See the Logging section for more info on log levels.
Note that on macOS you can send a SIGINFO (which is normally ctrl-T in the terminal) to make the stats print immediately.
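For example, to print a stats line every 10 seconds during a long transfer (paths illustrative):
rclone copy /path/to/local remote:backup --stats 10s -v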
-By default, the --stats
output will truncate file names and paths longer than 40 characters. This is equivalent to providing --stats-file-name-length 40
. Use --stats-file-name-length 0
to disable any truncation of file names printed by stats.
Log level to show --stats
output at. This can be DEBUG
, INFO
, NOTICE
, or ERROR
. The default is INFO
. This means at the default level of logging which is NOTICE
the stats won't show - if you want them to then use --stats-log-level NOTICE
. See the Logging section for more info on log levels.
Log level to show --stats
output at. This can be DEBUG
, INFO
, NOTICE
, or ERROR
. The default is INFO
. This means at the default level of logging which is NOTICE
the stats won’t show - if you want them to then use --stats-log-level NOTICE
. See the Logging section for more info on log levels.
When this is specified, rclone condenses the stats into a single line showing the most important stats only.
-By default, data transfer rates will be printed in bytes/second.
This option allows the data rate to be printed in bits/second.
Data transfer volume will still be reported in bytes.
The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s.
The default is bytes
.
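A sketch combining the two for a compact bits/s display (paths illustrative):
rclone copy /path/to/local remote:backup --progress --stats-one-line --stats-unit bits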
This is for use with --backup-dir
only. If this isn't set then --backup-dir
will move files with their original name. If it is set then the files will have SUFFIX added on to them.
This is for use with --backup-dir
only. If this isn’t set then --backup-dir
will move files with their original name. If it is set then the files will have SUFFIX added on to them.
See --backup-dir
for more info.
When using --suffix
, setting this causes rclone to put the SUFFIX before the extension of the files that it backs up rather than after.
So let’s say we had --suffix -2019-01-01
, without the flag file.txt
would be backed up to file.txt-2019-01-01
and with the flag it would be backed up to file-2019-01-01.txt
. This can be helpful to make sure the suffixed files can still be opened.
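Putting the pieces together, the example above might be run like this (the remote names and date are illustrative):
rclone sync /path/to/local remote:current --backup-dir remote:old --suffix=-2019-01-01 --suffix-keep-extension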
On capable OSes (not Windows or Plan9) send all log output to syslog.
This can be useful for running rclone in a script or rclone mount
.
If using --syslog
this sets the syslog facility (eg KERN
, USER
). See man syslog
for a list of possible facilities. The default facility is DAEMON
.
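For example, a scripted sync that logs to the USER facility instead of the terminal (paths illustrative):
rclone sync /path/to/local remote:backup --syslog --syslog-facility USER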
Limit HTTP transactions per second to this. Default is 0 which is used to mean unlimited transactions per second.
For example to limit rclone to 10 HTTP transactions per second use --tpslimit 10
, or to 1 transaction every 2 seconds use --tpslimit 0.5
.
Use this when the number of transactions per second from rclone is causing a problem with the cloud storage provider (eg getting you banned or rate limited).
This can be very useful for rclone mount
to control the behaviour of applications using it.
See also --tpslimit-burst
.
Max burst of transactions for --tpslimit
. (default 1)
Normally --tpslimit
will do exactly the number of transactions per second specified. However if you supply --tpslimit-burst
then rclone can save up some transactions from when it was idle giving a burst of up to the parameter supplied.
For example if you provide --tpslimit-burst 10
then if rclone has been idle for more than 10*--tpslimit
then it can do 10 transactions very quickly before they are limited again.
This may be used to increase performance of --tpslimit
without changing the long term average number of transactions per second.
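A sketch combining the two flags (values illustrative):
rclone copy /path/to/local remote:backup --tpslimit 10 --tpslimit-burst 20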
By default, rclone doesn't keep track of renamed files, so if you rename a file locally then sync it to a remote, rclone will delete the old file on the remote and upload a new copy.
+By default, rclone doesn’t keep track of renamed files, so if you rename a file locally then sync it to a remote, rclone will delete the old file on the remote and upload a new copy.
If you use this flag, and the remote supports server side copy or server side move, and the source and destination have a compatible hash, then this will track renames during sync
operations and perform renaming server-side.
Files will be matched by size and hash - if both match then a rename will be considered.
If the destination does not support server-side copy or move, rclone will fall back to the default behaviour and log an error level message to the console. Note: Encrypted destinations are not supported by --track-renames
.
Note that --track-renames
is incompatible with --no-traverse
and that it uses extra memory to keep track of all the rename candidates.
Note also that --track-renames
is incompatible with --delete-before
and will select --delete-after
instead of --delete-during
.
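For example (paths illustrative; the destination must support server side copy or move and share a hash type with the source):
rclone sync /path/to/local remote:backup --track-renames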
This option allows you to specify when files on your destination are deleted when you sync folders.
Specifying the value --delete-before
will delete all files present on the destination, but not on the source before starting the transfer of any new or updated files. This uses two passes through the file systems, one for the deletions and one for the copies.
Specifying --delete-during
will delete files while checking and uploading files. This is the fastest option and uses the least memory.
Specifying --delete-after
(the default value) will delay deletion of files until all new/updated files have been successfully transferred. The files to be deleted are collected in the copy pass then deleted after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message not deleting files as there were IO errors
.
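For example, to use the fastest, lowest memory option (paths illustrative):
rclone sync /path/to/local remote:backup --delete-during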
When doing anything which involves a directory listing (eg sync
, copy
, ls
- in fact nearly every command), rclone normally lists a directory and processes it before using more directory lists to process any subdirectories. This can be parallelised and works very quickly using the least amount of memory.
However, some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to be the bucket based remotes (eg S3, B2, GCS, Swift, Hubic).
If you use the --fast-list
flag then rclone will use this method for listing directories. This will have the following consequences for the listing:
rclone should always give identical results with and without --fast-list
.
If you pay for transactions and can fit your entire sync listing into memory then --fast-list
is recommended. If you have a very big sync to do then don't use --fast-list
otherwise you will run out of memory.
If you use --fast-list
on a remote which doesn't support it, then rclone will just ignore it.
If you pay for transactions and can fit your entire sync listing into memory then --fast-list
is recommended. If you have a very big sync to do then don’t use --fast-list
otherwise you will run out of memory.
If you use --fast-list
on a remote which doesn’t support it, then rclone will just ignore it.
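For example, on a bucket based remote (the remote name is illustrative):
rclone sync /path/to/local s3:mybucket --fast-list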
This sets the IO idle timeout. If a transfer has started but then becomes idle for this long it is considered broken and disconnected.
The default is 5m
. Set to 0 to disable.
The number of file transfers to run in parallel. It can sometimes be useful to set this to a smaller number if the remote is giving a lot of timeouts or bigger if you have lots of bandwidth and a fast remote.
The default is to run 4 file transfers in parallel.
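For example, to raise parallelism for a fast remote and relax the idle timeout (values illustrative):
rclone copy /path/to/local remote:backup --transfers 8 --timeout 10m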
-This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.
-If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different.
-On remotes which don't support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.
-This can be useful when transferring to a remote which doesn't support mod times directly as it is more accurate than a --size-only
check and faster than using --checksum
.
If an existing destination file has a modification time equal (within the computed modify window precision) to the source file’s, it will be updated if the sizes are different.
+On remotes which don’t support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.
+This can be useful when transferring to a remote which doesn’t support mod times directly as it is more accurate than a --size-only
check and faster than using --checksum
.
If this flag is set then rclone will use anonymous memory allocated by mmap on Unix based platforms and VirtualAlloc on Windows for its transfer buffers (size controlled by --buffer-size
). Memory allocated like this does not go on the Go heap and can be returned to the OS immediately when it is finished with.
If this flag is not set then rclone will allocate and free the buffers using the Go memory allocator which may use more memory as memory pages are returned less aggressively to the OS.
It is possible this does not work well on all platforms so it is disabled by default; in the future it may be enabled by default.
-Some object-store backends (e.g, Swift, S3) do not preserve file modification times (modtime). On these backends, rclone stores the original modtime as additional metadata on the object. By default it will make an API call to retrieve the metadata when the modtime is needed by an operation.
-Use this flag to disable the extra API call and rely instead on the server's modified time. In cases such as a local to remote sync, knowing the local file is newer than the time it was last uploaded to the remote is sufficient. In those cases, this flag can speed up the process and reduce the number of API calls necessary.
-Use this flag to disable the extra API call and rely instead on the server’s modified time. In cases such as a local to remote sync, knowing the local file is newer than the time it was last uploaded to the remote is sufficient. In those cases, this flag can speed up the process and reduce the number of API calls necessary.
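A sketch of that pattern (the remote name is illustrative):
rclone sync /path/to/local s3:mybucket --update --use-server-modtime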
+With -v
rclone will tell you about each file that is transferred and a small number of significant events.
With -vv
rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.
Prints the version number
+The outgoing SSL/TLS connections rclone makes can be controlled with these options. For example this can be very useful with the HTTP or WebDAV backends. Rclone HTTP servers have their own set of configuration for SSL/TLS which you can find in their documentation.
+This loads the PEM encoded certificate authority certificate and uses it to verify the certificates of the servers rclone connects to.
+If you have generated certificates signed with a local CA then you will need this flag to connect to servers using those certificates.
+This loads the PEM encoded client side certificate.
+This is used for mutual TLS authentication.
+The --client-key
flag is required too when using this.
This loads the PEM encoded client side private key used for mutual TLS authentication. Used in conjunction with --client-cert
.
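A sketch of connecting to a server that uses a locally signed certificate and mutual TLS (the file paths and remote name are illustrative):
rclone ls mywebdav: --ca-cert /etc/ssl/local-ca.pem --client-cert /etc/ssl/client.pem --client-key /etc/ssl/client.key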
--no-check-certificate
controls whether a client verifies the server’s certificate chain and host name. If --no-check-certificate
is true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks.
This option defaults to false
.
This should be used only for testing.
Your configuration file contains information for logging in to your cloud services. This means that you should keep your .rclone.conf
file in a secure location.
If you are in an environment where that isn't possible, you can add a password to your configuration. This means that you will have to enter the password every time you start rclone.
+If you are in an environment where that isn’t possible, you can add a password to your configuration. This means that you will have to enter the password every time you start rclone.
To add a password to your rclone configuration, execute rclone config
.
>rclone config
Current remotes:
@@ -2009,42 +2053,33 @@ c/u/q>
read -s RCLONE_CONFIG_PASS
export RCLONE_CONFIG_PASS
Then source the file when you want to use it. From the shell you would do source set-rclone-password
. It will then ask you for the password and set it in the environment variable.
If you are running rclone inside a script, you might want to disable password prompts. To do that, pass the parameter --ask-password=false
to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS
doesn't contain a valid password.
If you are running rclone inside a script, you might want to disable password prompts. To do that, pass the parameter --ask-password=false
to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS
doesn’t contain a valid password.
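For example, in a non-interactive script (the variable holding the password is illustrative):
RCLONE_CONFIG_PASS="$MY_RCLONE_PASSWORD" rclone sync /path/to/local remote:backup --ask-password=false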
These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with remote name eg --drive-test-option
- see the docs for the remote in question.
These options are useful when developing or debugging rclone. There are also some more remote specific options which aren’t documented here which are used for testing. These start with the remote name eg --drive-test-option
- see the docs for the remote in question.
Write CPU profile to file. This can be analysed with go tool pprof
.
The --dump
flag takes a comma separated list of flags to dump info about. These are:
Dump HTTP headers with Authorization:
lines removed. May still contain sensitive info. Can be very verbose. Useful for debugging only.
Use --dump auth
if you do want the Authorization:
headers.
Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only.
-Note that the bodies are buffered in memory so don't use this for enormous files.
-Note that the bodies are buffered in memory so don’t use this for enormous files.
+Like --dump bodies
but dumps the request bodies and the response headers. Useful for debugging download problems.
Like --dump bodies
but dumps the response bodies and the request headers. Useful for debugging upload problems.
Dump HTTP headers - will contain sensitive info such as Authorization:
headers - use --dump headers
to dump without Authorization:
headers. Can be very verbose. Useful for debugging only.
Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.
-This dumps a list of the running go-routines at the end of the command to standard output.
-This dumps a list of the open files at the end of the command. It uses the lsof
command to do that so you'll need that installed to use it.
This dumps a list of the open files at the end of the command. It uses the lsof
command to do that so you’ll need that installed to use it.
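For example, to debug a transfer by dumping sanitised headers and the active filters (paths illustrative):
rclone copy /path/to/local remote:backup --dump headers,filters -vv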
Write memory profile to file. This can be analysed with go tool pprof
.
--no-check-certificate
controls whether a client verifies the server's certificate chain and host name. If --no-check-certificate
is true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks.
This option defaults to false
.
This should be used only for testing.
-The --no-traverse
flag controls whether the destination file system is traversed when using the copy
or move
commands. --no-traverse
is not compatible with sync
and will be ignored if you supply it with sync
.
If you are only copying a small number of files (or are filtering most of the files) and/or have a large number of files on the destination then --no-traverse
will stop rclone listing the destination and save time.
However, if you are copying a large number of files, especially if you are doing a copy where lots of the files under consideration haven't changed and won't need copying then you shouldn't use --no-traverse
.
See rclone copy for an example of how to use it.
For the filtering options
4 - File not found
5 - Temporary error (one that more retries might fix) (Retry errors)
6 - Less serious errors (like 461 errors from dropbox) (NoRetry errors)
7 - Fatal error (one that more retries won’t fix, like account suspended) (Fatal errors)
8 - Transfer exceeded - limit set by --max-transfer reached
Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.
@@ -2123,7 +2158,7 @@ mys3:The filters are applied for the copy
, sync
, move
, ls
, lsl
, md5sum
, sha1sum
, size
, delete
and check
operations. Note that purge
does not obey the filters.
Each path as it passes through rclone is matched against the include and exclude rules like --include
, --exclude
, --include-from
, --exclude-from
, --filter
, or --filter-from
. The simplest way to try them out is using the ls
command, or --dry-run
together with -v
.
The patterns used to match files for inclusion or exclusion are based on "file globs" as used by the unix shell.
-If the pattern starts with a /
then it only matches at the top level of the directory tree, relative to the root of the remote (not necessarily the root of the local drive). If it doesn't start with /
then it is matched starting at the end of the path, but it will only match a complete path element:
The patterns used to match files for inclusion or exclusion are based on “file globs” as used by the unix shell.
+If the pattern starts with a /
then it only matches at the top level of the directory tree, relative to the root of the remote (not necessarily the root of the local drive). If it doesn’t start with /
then it is matched starting at the end of the path, but it will only match a complete path element:
file.jpg - matches "file.jpg"
- matches "directory/file.jpg"
- doesn't match "afile.jpg"
@@ -2203,7 +2238,7 @@ Configuration file is stored at:
l?ss - matches "less"
- matches "lass"
- doesn't match "floss"
-A [
and ]
together make a a character class, such as [a-z]
or [aeiou]
or [[:alpha:]]
. See the go regexp docs for more info on these.
+A [
and ]
together make a character class, such as [a-z]
or [aeiou]
or [[:alpha:]]
. See the go regexp docs for more info on these.
h[ae]llo - matches "hello"
- matches "hallo"
- doesn't match "hullo"
@@ -2223,7 +2258,7 @@ Configuration file is stored at:
With --ignore-case
potato - matches "potato"
- matches "POTATO"
-Note also that rclone filter globs can only be used in one of the filter command line flags, not in the specification of the remote, so rclone copy "remote:dir*.jpg" /path/to/dir
won't work - what is required is rclone --include "*.jpg" copy remote:dir /path/to/dir
+Note also that rclone filter globs can only be used in one of the filter command line flags, not in the specification of the remote, so rclone copy "remote:dir*.jpg" /path/to/dir
won’t work - what is required is rclone --include "*.jpg" copy remote:dir /path/to/dir
Directories
Rclone keeps track of directories that could match any file patterns.
Eg if you add the include rule
@@ -2231,9 +2266,9 @@ Configuration file is stored at:
Rclone will synthesize the directory include rule
/a/
If you put any rules which end in /
then it will only match directories.
-Directory matches are only used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won't optimise anything on bucket based remotes (eg s3, swift, google compute storage, b2) which don't have a concept of directory.
+Directory matches are only used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won’t optimise anything on bucket based remotes (eg s3, swift, google compute storage, b2) which don’t have a concept of directory.
Differences between rsync and rclone patterns
-Rclone implements bash style {a,b,c}
glob matching which rsync doesn't.
+Rclone implements bash style {a,b,c}
glob matching which rsync doesn’t.
Rclone always does a wildcard match so \
must always escape a \
.
How the rules are used
Rclone maintains a combined list of include rules and exclude rules.
@@ -2290,7 +2325,7 @@ file2.jpg
Add a single include rule with --include
.
This flag can be repeated. See above for the order the flags are processed in.
Eg --include *.{png,jpg}
to include all png
and jpg
files in the backup and no others.
This adds an implicit --exclude *
at the very end of the filter list. This means you can mix --include
and --include-from
with the other filters (eg --exclude
) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from
.
This adds an implicit --exclude *
at the very end of the filter list. This means you can mix --include
and --include-from
with the other filters (eg --exclude
) but you must include all the files you want in the include statement. If this doesn’t provide enough flexibility then you must use --filter-from
.
--include-from
- Read include patterns from fileAdd include rules from a file.
This flag can be repeated. See above for the order the flags are processed in.
@@ -2301,7 +2336,7 @@ file2.jpg file2.aviThen use as --include-from include-file.txt
. This will sync all jpg
, png
files and file2.avi
.
This is useful if you have a lot of rules.
-This adds an implicit --exclude *
at the very end of the filter list. This means you can mix --include
and --include-from
with the other filters (eg --exclude
) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from
.
This adds an implicit --exclude *
at the very end of the filter list. This means you can mix --include
and --include-from
with the other filters (eg --exclude
) but you must include all the files you want in the include statement. If this doesn’t provide enough flexibility then you must use --filter-from
.
--filter
- Add a file-filtering ruleThis can be used to add a single include or exclude rule. Include rules start with +
and exclude rules start with -
. A special rule called !
can be used to clear the existing rules.
This flag can be repeated. See above for the order the flags are processed in.
@@ -2323,7 +2358,8 @@ file2.aviThis example will include all jpg
and png
files, exclude any files matching secret*.jpg
and include file2.avi
. It will also include everything in the directory dir
at the root of the sync, except dir/Trash
which it will exclude. Everything else will be excluded from the sync.
--files-from
- Read list of source-file namesThis reads a list of file names from the file passed in and only these files are transferred. The filtering rules are ignored completely if you use this option.
-Rclone will not scan any directories if you use --files-from
it will just look at the files specified. Rclone will not error if any of the files are missing from the source.
Rclone will traverse the file system if you use --files-from
, effectively using the files in --files-from
as a set of filters. Rclone will not error if any of the files are missing.
If you use --no-traverse
as well as --files-from
then rclone will not traverse the destination file system, it will find each file individually using approximately 1 API call. This can be more efficient for small lists of files.
This option can be repeated to read from more than one file. These are read in the order that they are placed on the command line.
Paths within the --files-from
file will be interpreted as starting with the root specified in the command. Leading /
characters are ignored.
For example, suppose you had files-from.txt
with this content:
This will transfer these files only (if they exist)
/home/me/pics/file1.jpg → remote:pics/file1.jpg
/home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg
-To take a more complicated example, let's say you had a few files you want to back up regularly with these absolute paths:
+To take a more complicated example, let’s say you had a few files you want to back up regularly with these absolute paths:
/home/user1/important
/home/user1/dir/file
/home/user2/stuff
-To copy these you'd find a common subdirectory - in this case /home
and put the remaining files in files-from.txt
with or without leading /
, eg
To copy these you’d find a common subdirectory - in this case /home
and put the remaining files in files-from.txt
with or without leading /
, eg
user1/important
user1/dir/file
user2/stuff
@@ -2359,13 +2395,13 @@ user2/stuff
/home/user1/important → remote:home/backup/user1/important
/home/user1/dir/file → remote:home/backup/user1/dir/file
/home/user2/stuff → remote:home/backup/stuff
---min-size
- Don't transfer any file smaller than this--min-size
- Don’t transfer any file smaller than thisThis option controls the minimum size file which will be transferred. This defaults to kBytes
but a suffix of k
, M
, or G
can be used.
For example --min-size 50k
means no files smaller than 50kByte will be transferred.
--max-size
- Don't transfer any file larger than this--max-size
- Don’t transfer any file larger than thisThis option controls the maximum size file which will be transferred. This defaults to kBytes
but a suffix of k
, M
, or G
can be used.
For example --max-size 1G
means no files larger than 1GByte will be transferred.
--max-age
- Don't transfer any file older than this--max-age
- Don’t transfer any file older than thisThis option controls the maximum age of files to transfer. Give in seconds or with a suffix of:
ms
- Millisecondsy
- YearsFor example --max-age 2d
means no files older than 2 days will be transferred.
--min-age
- Don't transfer any file younger than this--min-age
- Don’t transfer any file younger than thisThis option controls the minimum age of files to transfer. Give in seconds or with a suffix (see --max-age
for list of suffixes)
For example --min-age 2d
means no files younger than 2 days will be transferred.
--delete-excluded
- Delete files on dest excluded from syncIf you just want to run a remote control then see the rcd command.
NB this is experimental and everything here is subject to change!
Flag to start the http server listen on remote requests
-IPaddress:Port or :Port to bind server to. (default "localhost:5572")
-IPaddress:Port or :Port to bind server to. (default “localhost:5572”)
+SSL PEM key (concatenation of certificate and CA certificate)
-Client certificate authority to verify clients with
-htpasswd file - if not provided no authentication is done
-SSL PEM Private key
-Maximum size of request header (default 4096)
-User name for authentication.
-Password for authentication.
-Realm for authentication (default "rclone")
-Realm for authentication (default “rclone”)
+Timeout for server reading data (default 1h0m0s)
-Timeout for server writing data (default 1h0m0s)
-Enable the serving of remote objects via the HTTP interface. This means objects will be accessible at http://127.0.0.1:5572/ by default, so you can browse to http://127.0.0.1:5572/ or http://127.0.0.1:5572/* to see a listing of the remotes. Objects may be requested from remotes using this syntax http://127.0.0.1:5572/[remote:path]/path/to/object
Default Off.
-Path to local files to serve on the HTTP server.
If this is set then rclone will serve the files in that directory. It will also open the root in the web browser if specified. This is for implementing browser based GUIs for rclone functions.
If --rc-user
or --rc-pass
is set then the URL that is opened will have the authorization in the URL in the http://user:pass@localhost/
style.
Default Off.
-By default rclone will require authorisation to have been set up on the rc interface in order to use any methods which access any rclone remotes. Eg operations/list
is denied as it involved creating a remote as is sync/copy
.
If this is set then no authorisation will be required on the server to use these methods. The alternative is to use --rc-user
and --rc-pass
and use these credentials in the request.
Default Off.
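A sketch of starting the remote control daemon with authentication and object serving enabled (the credentials are illustrative):
rclone rcd --rc-addr localhost:5572 --rc-user admin --rc-pass mysecret --rc-serve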
@@ -2533,9 +2569,9 @@ rclone rc cache/expire remote=/ withData=trueEnsure the specified file chunks are cached on disk.
The chunks= parameter specifies the file chunks to check. It takes a comma separated list of array slice indices. The slice indices are similar to Python slices: start[:end]
-start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is 0 based chunk number from the beginning of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value "-5:" represents the last 5 chunks of a file.
-Some valid examples are: ":5,-5:" -> the first and last five chunks "0,-2" -> the first and the second last chunk "0:10" -> the first ten chunks
-Any parameter with a key that starts with "file" can be used to specify files to fetch, eg
+start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is 0 based chunk number from the beginning of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value “-5:” represents the last 5 chunks of a file.
+Some valid examples are: “:5,-5:” -> the first and last five chunks “0,-2” -> the first and the second last chunk “0:10” -> the first ten chunks
+Any parameter with a key that starts with “file” can be used to specify files to fetch, eg
rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye
File names will automatically be encrypted when a crypt remote is used on top of the cache.
Eg
rclone rc core/bwlimit rate=1M
rclone rc core/bwlimit rate=off
-The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified.
+The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified.
This tells the go runtime to do a garbage collection run. It isn't necessary to call this normally, but it can be useful for debugging memory problems.
+This tells the go runtime to do a garbage collection run. It isn’t necessary to call this normally, but it can be useful for debugging memory problems.
This returns the memory statistics of the running program. What the values mean are explained in the go docs: https://golang.org/pkg/runtime/#MemStats
The most interesting values for most people are:
Pass a clear string and rclone will obscure it for the config file: - clear - string
@@ -2638,44 +2676,44 @@ rclone rc core/bwlimit rate=off "checking": an array of names of currently active file checks [] } -Values for "transferring", "checking" and "lastError" are only assigned if data is available. The value for "eta" is null if an eta cannot be determined.
+Values for “transferring”, “checking” and “lastError” are only assigned if data is available. The value for “eta” is null if an eta cannot be determined.
This shows the current version of go and the go runtime - version - rclone version, eg "v1.44" - decomposed - version number as [major, minor, patch, subpatch] - note patch and subpatch will be 999 for a git compiled version - isGit - boolean - true if this was compiled from the git version - os - OS in use as according to Go - arch - cpu architecture in use according to Go - goVersion - version of Go runtime in use
+This shows the current version of go and the go runtime
version - rclone version, eg “v1.44”
decomposed - version number as [major, minor, patch, subpatch] - note patch and subpatch will be 999 for a git compiled version
isGit - boolean - true if this was compiled from the git version
os - OS in use as according to Go
arch - cpu architecture in use according to Go
goVersion - version of Go runtime in use
Parameters - None
Results - jobids - array of integer job ids
Parameters - jobid - id of the job (integer)
-Results - finished - boolean - duration - time in seconds that the job ran for - endTime - time the job finished (eg "2018-10-26T18:50:20.528746884+01:00") - error - error from the job or empty string for no error - finished - boolean whether the job has finished or not - id - as passed in above - startTime - time the job started (eg "2018-10-26T18:50:20.528336039+01:00") - success - boolean - true for success false otherwise - output - output of the job as would have been returned if called synchronously
+Results
finished - boolean
duration - time in seconds that the job ran for
endTime - time the job finished (eg “2018-10-26T18:50:20.528746884+01:00”)
error - error from the job or empty string for no error
finished - boolean whether the job has finished or not
id - as passed in above
startTime - time the job started (eg “2018-10-26T18:50:20.528336039+01:00”)
success - boolean - true for success false otherwise
output - output of the job as would have been returned if called synchronously
This takes the following parameters
The result is as returned from rclone about --json
+The result is as returned from rclone about --json
Authentication is required for this call.
This takes the following parameters
See the cleanup command command for more information on the above.
Authentication is required for this call.
This takes the following parameters
Authentication is required for this call.
This takes the following parameters
See the copyurl command command for more information on the above.
@@ -2683,23 +2721,23 @@ rclone rc core/bwlimit rate=offThis takes the following parameters
See the delete command command for more information on the above.
Authentication is required for this call.
This takes the following parameters
See the deletefile command command for more information on the above.
Authentication is required for this call.
This takes the following parameters
This takes the following parameters
See the mkdir command command for more information on the above.
Authentication is required for this call.
This takes the following parameters
Authentication is required for this call.
+This takes the following parameters
+Returns
+See the link command command for more information on the above.
+Authentication is required for this call.
This takes the following parameters
See the purge command command for more information on the above.
Authentication is required for this call.
This takes the following parameters
See the rmdir command command for more information on the above.
Authentication is required for this call.
This takes the following parameters
See the rmdirs command command for more information on the above.
@@ -2763,7 +2813,7 @@ rclone rc core/bwlimit rate=offThis takes the following parameters
Returns
Parameters
Repeated as often as required.
Only supply the options you wish to change. If an option is unknown it will be silently ignored. Not all options will have an effect when changed like this.
@@ -2804,16 +2856,16 @@ rclone rc core/bwlimit rate=offThis takes the following parameters
See the copy command command for more information on the above.
Authentication is required for this call.
This takes the following parameters
See the move command command for more information on the above.
@@ -2821,8 +2873,8 @@ rclone rc core/bwlimit rate=offThis takes the following parameters
See the sync command command for more information on the above.
Authentication is required for this call.
@@ -2845,13 +2897,13 @@ rclone rc core/bwlimit rate=offrclone rc vfs/refresh
Otherwise pass directories in as dir=path. Any parameter key starting with dir will refresh that directory, eg
rclone rc vfs/refresh dir=home/junk dir2=data/misc
-If the parameter recursive=true is given the whole directory tree will get refreshed. This refresh will use --fast-list if enabled.
+If the parameter recursive=true is given the whole directory tree will get refreshed. This refresh will use --fast-list if enabled.
Rclone implements a simple HTTP based protocol.
Each endpoint takes an JSON object and returns a JSON object or an error. The JSON objects are essentially a map of string names to values.
All calls must made using POST.
-The input objects can be supplied using URL parameters, POST parameters or by supplying "Content-Type: application/json" and a JSON blob in the body. There are examples of these below using curl
.
The input objects can be supplied using URL parameters, POST parameters or by supplying “Content-Type: application/json” and a JSON blob in the body. There are examples of these below using curl
.
The response will be a JSON blob in the body of the response. This is formatted to be reasonably human readable.
If an error occurs then there will be an HTTP error status (eg 500) and the body of the response will contain a JSON encoded error object, eg
@@ -2866,7 +2918,7 @@ rclone rc core/bwlimit rate=off }The keys in the error response are - error - error string - input - the input parameters to the call - status - the HTTP status code - path - the path of the call
The sever implements basic CORS support and allows all origins for that. The response to a preflight OPTIONS request will echo the requested "Access-Control-Request-Headers" back.
+The server implements basic CORS support and allows all origins for that. The response to a preflight OPTIONS request will echo the requested “Access-Control-Request-Headers” back.
curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'
Response
@@ -2883,7 +2935,7 @@ rclone rc core/bwlimit rate=off "sausage": "2" } } -Note that curl doesn't return errors to the shell unless you use the -f
option
Note that curl doesn’t return errors to the shell unless you use the -f
option
$ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
curl: (22) The requested URL returned error: 400 Bad Request
$ echo $?
@@ -2922,7 +2974,7 @@ $ echo $?
If you use the --rc
flag this will also enable the use of the go profiling tools on the same port.
To use these, first install go.
Debugging memory use
-To profile rclone's memory use you can run:
+To profile rclone’s memory use you can run:
go tool pprof -web http://localhost:5572/debug/pprof/heap
This should open a page in your browser showing what is using what memory.
You can also use the -text
flag to produce a textual summary
@@ -2955,7 +3007,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- 30-second CPU profile:
go tool pprof http://localhost:5572/debug/pprof/profile
- 5-second execution trace:
wget http://localhost:5572/debug/pprof/trace?seconds=5
See the net/http/pprof docs for more info on how to use the profiling and for a general overview see the Go team's blog post on profiling go programs.
+See the net/http/pprof docs for more info on how to use the profiling and for a general overview see the Go team’s blog post on profiling go programs.
The profiling hook is zero overhead unless it is used.
Each cloud storage system is slightly different. Rclone attempts to provide a unified interface to them, but some underlying differences show through.
@@ -2965,189 +3017,197 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB totalThe cloud storage system supports various hash types of the objects. The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum
flag in syncs and in the check
command.
To verify checksums when transferring between cloud storage systems, they must support a common hash type.
† Note that Dropbox supports its own custom hash. This is an SHA256 sum of all the 4MB block SHA256s.
-‡ SFTP supports checksums if the same login has shell access and md5sum
or sha1sum
as well as echo
are in the remote's PATH.
‡ SFTP supports checksums if the same login has shell access and md5sum
or sha1sum
as well as echo
are in the remote’s PATH.
†† WebDAV supports hashes when used with Owncloud and Nextcloud only.
††† WebDAV supports modtimes when used with Owncloud and Nextcloud only.
-‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for business and SharePoint server support Microsoft's own QuickXorHash.
+‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for business and SharePoint server support Microsoft’s own QuickXorHash.
The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the --checksum
flag.
All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.
If a cloud storage systems is case sensitive then it is possible to have two files which differ only in case, eg file.txt
and FILE.txt
. If a cloud storage system is case insensitive then that isn't possible.
If a cloud storage system is case sensitive then it is possible to have two files which differ only in case, eg file.txt
and FILE.txt
. If a cloud storage system is case insensitive then that isn’t possible.
This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully.
The local filesystem and SFTP may or may not be case sensitive depending on OS.
Most of the time this doesn't cause any problems as people tend to avoid files whose name differs only by case even on case sensitive systems.
+Most of the time this doesn’t cause any problems as people tend to avoid files whose name differs only by case even on case sensitive systems.
If a cloud storage system allows duplicate files then it can have two objects with the same name.
This confuses rclone greatly when syncing - use the rclone dedupe
command to rename or remove duplicates.
This deletes a directory quicker than just deleting all the files in the directory.
-† Note Swift and Hubic implement this in order to delete directory markers but they don't actually have a quicker way of deleting files other than deleting them individually.
+† Note Swift and Hubic implement this in order to delete directory markers but they don’t actually have a quicker way of deleting files other than deleting them individually.
‡ StreamUpload is not supported with Nextcloud
Used when copying an object to and from the same remote. This known as a server side copy so you can copy a file without downloading it and uploading it again. It is used if you use rclone copy
or rclone move
if the remote doesn't support Move
directly.
If the server doesn't support Copy
directly then for copy operations the file is downloaded then re-uploaded.
Used when copying an object to and from the same remote. This is known as a server side copy so you can copy a file without downloading it and uploading it again. It is used if you use rclone copy
or rclone move
if the remote doesn’t support Move
directly.
If the server doesn’t support Copy
directly then for copy operations the file is downloaded then re-uploaded.
Used when moving/renaming an object on the same remote. This is known as a server side move of a file. This is used in rclone move
if the server doesn't support DirMove
.
If the server isn't capable of Move
then rclone simulates it with Copy
then delete. If the server doesn't support Copy
then rclone will download the file and re-upload it.
Used when moving/renaming an object on the same remote. This is known as a server side move of a file. This is used in rclone move
if the server doesn’t support DirMove
.
If the server isn’t capable of Move
then rclone simulates it with Copy
then delete. If the server doesn’t support Copy
then rclone will download the file and re-upload it.
This is used to implement rclone move
to move a directory if possible. If it isn't then it will use Move
on each file (which falls back to Copy
then download and upload - see Move
section).
This is used to implement rclone move
to move a directory if possible. If it isn’t then it will use Move
on each file (which falls back to Copy
then download and upload - see Move
section).
This is used for emptying the trash for a remote by rclone cleanup
.
If the server can't do CleanUp
then rclone cleanup
will return an error.
If the server can’t do CleanUp
then rclone cleanup
will return an error.
The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the --fast-list
flag to work. See the rclone docs for more details.
Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. rclone rcat
.
Sets the necessary permissions on a file or folder and prints a link that allows others to access them, even if they don't have an account on the particular cloud provider.
+Sets the necessary permissions on a file or folder and prints a link that allows others to access them, even if they don’t have an account on the particular cloud provider.
This is used to fetch quota information from the remote, like bytes used/free/quota and bytes used in the trash.
This is also used to return the space used, available for rclone mount
.
If the server can't do About
then rclone about
will return an error.
If the server can’t do About
then rclone about
will return an error.
The alias
remote provides a new name for another remote.
Paths may be as deep as required or a local path, eg remote:directory/subdirectory
or /directory/subdirectory
.
During the initial setup with rclone config
you will specify the target remote. The target remote can either be a local path or another remote.
Subfolders can be used in target remote. Asume a alias remote named backup
with the target mydrive:private/backup
. Invoking rclone mkdir backup:desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop
.
Subfolders can be used in the target remote. Assume an alias remote named backup
with the target mydrive:private/backup
. Invoking rclone mkdir backup:desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop
.
There will be no special handling of paths containing ..
segments. Invoking rclone mkdir backup:../desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop
. The empty path is not allowed as a remote. To alias the current directory use .
instead.
Here is an example of how to make an alias called remote
for a local folder. First run:
rclone config
@@ -3587,8 +3647,8 @@ e/n/d/r/c/s/q> q
Here are the standard options specific to alias (Alias for an existing remote).
-Remote or path to alias. Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
+Remote or path to alias. Can be “myremote:path/to/dir”, “myremote:bucket”, “myremote:” or “/local/path”.
Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage service run by Amazon for consumers.
Important: rclone supports Amazon Drive only if you have your own set of API keys. Unfortunately the Amazon Drive developer program is now closed to new entries so if you don't already have your own set of keys you will not be able to use rclone with Amazon Drive.
+Important: rclone supports Amazon Drive only if you have your own set of API keys. Unfortunately the Amazon Drive developer program is now closed to new entries so if you don’t already have your own set of keys you will not be able to use rclone with Amazon Drive.
For the history on why rclone no longer has a set of Amazon Drive API keys see the forum.
If you happen to know anyone who works at Amazon then please ask them to re-instate rclone into the Amazon Drive developer program - thanks!
The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config
walks you through it.
The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.
-Since rclone doesn't currently have its own Amazon Drive credentials so you will either need to have your own client_id
and client_secret
with Amazon Drive, or use a a third party ouath proxy in which case you will need to enter client_id
, client_secret
, auth_url
and token_url
.
Note also if you are not using Amazon's auth_url
and token_url
, (ie you filled in something for those) then if setting up on a remote machine you can only use the copying the config method of configuration - rclone authorize
will not work.
The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google’s very secure App Engine environment and doesn’t store any credentials which pass through it.
+Since rclone doesn’t currently have its own Amazon Drive credentials, you will either need to have your own client_id
and client_secret
with Amazon Drive, or use a third party oauth proxy in which case you will need to enter client_id
, client_secret
, auth_url
and token_url
.
Note also if you are not using Amazon’s auth_url
and token_url
, (ie you filled in something for those) then if setting up on a remote machine you can only use the copying the config method of configuration - rclone authorize
will not work.
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -3691,16 +3751,16 @@ y/e/d> yTo copy a local directory to an Amazon Drive directory called backup
rclone copy /home/source remote:backup
Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.
+Amazon Drive doesn’t allow modification times to be changed via the API so these won’t be accurate or used for syncing.
It does store MD5SUMs so for a more accurate sync, you can use the --checksum
flag.
Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.
+Any files you delete with rclone will end up in the trash. Amazon don’t provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon’s apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.
.com
Amazon accountsLet's say you usually use amazon.co.uk
. When you authenticate with rclone it will take you to an amazon.com
page to log in. Your amazon.co.uk
email and password should work here just fine.
Let’s say you usually use amazon.co.uk
. When you authenticate with rclone it will take you to an amazon.com
page to log in. Your amazon.co.uk
email and password should work here just fine.
Here are the standard options specific to amazon cloud drive (Amazon Drive).
-Amazon Application Client ID.
Amazon Application Client Secret.
Here are the advanced options specific to amazon cloud drive (Amazon Drive).
-Auth server URL. Leave blank to use Amazon's.
+Auth server URL. Leave blank to use Amazon’s.
Token server url. leave blank to use Amazon's.
+Token server url. leave blank to use Amazon’s.
Checkpoint for internal polling (debug).
Additional time per GB to wait after a failed complete upload to see if it appears.
Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1GB in size and nearly every time for files bigger than 10GB. This parameter controls the time rclone waits for the file to appear.
The default value for this parameter is 3 minutes per GB, so by default it will wait 3 minutes for every GB uploaded to see if the file appears.
You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually.
These values were determined empirically by observing lots of uploads of big files for a range of file sizes.
-Upload with the "-v" flag to see more info about what rclone is doing in this situation.
+Upload with the “-v” flag to see more info about what rclone is doing in this situation.
Files >= this size will be downloaded via their tempLink.
-Files this size or more will be downloaded via their "tempLink". This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.
-To download files above this threshold, rclone requests a "tempLink" which downloads the file through a temporary URL directly from the underlying S3 storage.
+Files this size or more will be downloaded via their “tempLink”. This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn’t need to be changed.
+To download files above this threshold, rclone requests a “tempLink” which downloads the file through a temporary URL directly from the underlying S3 storage.
Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
+Note that Amazon Drive is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries
flag) which should hopefully work around this problem.
Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.
At the time of writing (Jan 2016) is in the area of 50GB per file. This means that larger files are likely to fail.
@@ -3968,6 +4028,8 @@ Choose a number from below, or type in your own value \ "ONEZONE_IA" 6 / Glacier storage class \ "GLACIER" + 7 / Glacier Deep Archive storage class + \ "DEEP_ARCHIVE" storage_class> 1 Remote config -------------------- @@ -3988,31 +4050,34 @@ y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> -This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.
-For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update
along with --use-server-modtime
, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is “dirty”. By using --update
along with --use-server-modtime
, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
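For example (the remote name s3:bucket is a placeholder), a sync that relies on the upload time rather than the stored modification time might look like:
rclone sync --update --use-server-modtime /local/path s3:bucket/path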
The modified time is stored as metadata on the object as X-Amz-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
rclone supports multipart uploads with S3 which means that it can upload files bigger than 5GB.
Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.
-Rclone switches from single part uploads to multipart uploads at the point specified by --s3-upload-cutoff
. This can be a maximum of 5GB and a minimum of 0 (ie always upload mulipart files).
rclone switches from single part uploads to multipart uploads at the point specified by --s3-upload-cutoff
. This can be a maximum of 5GB and a minimum of 0 (ie always upload multipart files).
The chunk sizes used in the multipart upload are specified by --s3-chunk-size
and the number of chunks uploaded concurrently is specified by --s3-upload-concurrency
.
Multipart uploads will use --transfers
* --s3-upload-concurrency
* --s3-chunk-size
extra memory. Single part uploads do not use extra memory.
Single part transfers can be faster than multipart transfers or slower depending on your latency from S3 - the more latency, the more likely single part transfers will be faster.
-Increasing --s3-upload-concurrency
will increase throughput (8 would be a sensible value) and increasing --s3-chunk-size
also increases througput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
Increasing --s3-upload-concurrency
will increase throughput (8 would be a sensible value) and increasing --s3-chunk-size
also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
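As a rough sketch (the bucket name is illustrative), a tuned upload along these lines would buffer up to --transfers × --s3-upload-concurrency × --s3-chunk-size of memory, here 4 × 8 × 16M = 512M:
rclone copy --transfers 4 --s3-upload-concurrency 8 --s3-chunk-size 16M /local/bigfile s3:bucket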
With Amazon S3 you can list buckets (rclone lsd
) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, incorrect region, the bucket is not in 'XXX' region
.
There are a number of ways to supply rclone
with a set of AWS credentials, with and without using the environment.
The different authentication methods are tried in this order:
env_auth = false
in the config file):env_auth = false
in the config file):
+access_key_id
and secret_access_key
are required.session_token
can be optionally set when using AWS STS.env_auth = true
in the config file):env_auth = true
in the config file):
+rclone
:
AWS_ACCESS_KEY_ID
or AWS_ACCESS_KEY
~/.aws/credentials
on unix based systems) file and the "default" profile, to change set these environment variables:
+~/.aws/credentials
on unix based systems) file and the “default” profile; to change this, set these environment variables:
AWS_SHARED_CREDENTIALS_FILE
to control which file.AWS_PROFILE
to control which profile to use.rclone
in an ECS task with an IAM role (AWS only).rclone
on an EC2 instance with an IAM role (AWS only).If none of these option actually end up providing rclone
with AWS credentials then S3 interaction will be non-authenticated (see below).
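For instance, a minimal sketch of the environment variable method (the key values shown are placeholders, and env_auth = true is assumed in the config):
export AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY
export AWS_SECRET_ACCESS_KEY=examplesecretaccesskey
rclone lsd s3: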
Notes on above:
-USER_NAME
has been created.For reference, here's an Ansible script that will generate one or more buckets that will work with rclone sync
.
For reference, here’s an Ansible script that will generate one or more buckets that will work with rclone sync
.
If you are using server side encryption with KMS then you will find you can't transfer small objects. As a work-around you can use the --ignore-checksum
flag.
If you are using server side encryption with KMS then you will find you can’t transfer small objects. As a work-around you can use the --ignore-checksum
flag.
A proper fix is being worked on in issue #1824.
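For example (the bucket name is illustrative):
rclone sync --ignore-checksum /local/path s3:my-kms-bucket/path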
-You can upload objects using the glacier storage class or transition them to glacier using a lifecycle policy. The bucket can still be synced or copied into normally, but if rclone tries to access data from the glacier storage class you will see an error like below.
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
In this case you need to restore the object(s) in question before using rclone.
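rclone cannot trigger the restore itself; one way to do it, sketched here with the AWS CLI rather than rclone (bucket and key are placeholders), is:
aws s3api restore-object --bucket mybucket --key path/to/file --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'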
Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).
-Choose your S3 provider.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
AWS Access Key ID. Leave blank for anonymous access or runtime credentials.
AWS Secret Access Key (password) Leave blank for anonymous access or runtime credentials.
Region to connect to.
Region to connect to. Leave blank if you are using an S3 clone and you don't have a region.
+Region to connect to. Leave blank if you are using an S3 clone and you don’t have a region.
Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region.
Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise.
Endpoint for OSS API.
Endpoint for S3 API. Required when using an S3 clone.
Location constraint - must be set to match the Region. Used when creating buckets only.
Location constraint - must match endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter
Location constraint - must be set to match the Region. Leave blank if not sure. Used when creating buckets only.
Canned ACL used when creating buckets and storing or copying objects.
-This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
+This ACL is used for creating objects and if bucket_acl isn’t set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
-Note that this ACL is applied when server side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one.
+Note that this ACL is applied when server side copying objects as S3 doesn’t copy the ACL from the source but rather writes a fresh one.
The server-side encryption algorithm used when storing this object in S3.
If using KMS ID you must provide the ARN of Key.
The storage class to use when storing new objects in S3.
The storage class to use when storing new objects in OSS.
Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).
-Canned ACL used when creating buckets.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
-Note that this ACL is applied when only when creating buckets. If it isn't set then "acl" is used instead.
+Note that this ACL is applied only when creating buckets. If it isn’t set then “acl” is used instead.
Cutoff for switching to chunked upload
Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB.
Chunk size to use for uploading.
When uploading files larger than upload_cutoff they will be uploaded as multipart uploads using this chunk size.
-Note that "--s3-upload-concurrency" chunks of this size are buffered in memory per transfer.
+Note that “–s3-upload-concurrency” chunks of this size are buffered in memory per transfer.
If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers.
Don't store MD5 checksum with object metadata
+Don’t store MD5 checksum with object metadata
An AWS session token
Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded concurrently.
If you are uploading small numbers of large files over a high speed link and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
@@ -5006,7 +5076,7 @@ y/e/d>
If true use path style access if false use virtual hosted style.
If this is true (the default) then rclone will use path style access, if false then rclone will use virtual path style. See the AWS S3 docs for more info.
Some providers (eg Aliyun OSS or Netease COS) require this set to false.
@@ -5016,10 +5086,10 @@ y/e/d>
If true use v2 authentication.
If this is false (the default) then rclone will use v4 authentication. If it is set then rclone will use v2 authentication.
-Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
+Use this only if v4 signatures don’t work, eg pre Jewel/v10 CEPH.
If you are using an older version of CEPH, eg 10.2.x Jewel, then you may need to supply the parameter --s3-upload-cutoff 0
or put this in the config file as upload_cutoff 0
to work around a bug which causes uploading of small files to fail.
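The corresponding config section might look like this sketch (remote name, endpoint and keys are placeholders):
[ceph]
type = s3
provider = Ceph
endpoint = http://ceph-gw.example.com:7480
access_key_id = XXX
secret_access_key = YYY
upload_cutoff = 0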
Note also that Ceph sometimes puts /
in the passwords it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the /
escaped as \/
. Make sure you only write /
in the secret access key.
Eg the dump from Ceph looks something like this (irrelevant keys removed).
{
@@ -5090,8 +5161,8 @@ server_side_encryption =
storage_class =
Spaces is an S3-interoperable object storage service from cloud provider DigitalOcean.
-To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "Applications & API" page of the DigitalOcean control panel. They will be needed when promted by rclone config
for your access_key_id
and secret_access_key
.
When prompted for a region
or location_constraint
, press enter to use the default value. The region must be included in the endpoint
setting (e.g. nyc3.digitaloceanspaces.com
). The defualt values can be used for other settings.
To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the “Applications & API” page of the DigitalOcean control panel. They will be needed when prompted by rclone config
for your access_key_id
and secret_access_key
.
When prompted for a region
or location_constraint
, press enter to use the default value. The region must be included in the endpoint
setting (e.g. nyc3.digitaloceanspaces.com
). The default values can be used for other settings.
Going through the whole process of creating a new remote by running rclone config
, each prompt should be answered as shown below:
Storage> s3
env_auth> 1
@@ -5121,142 +5192,160 @@ rclone copy /path/to/files spaces:my-new-space
Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: (http://www.ibm.com/cloud/object-storage)
To configure access to IBM COS S3, follow the steps below:
-Run rclone config and select n for a new remote.
-2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
-No remotes found - make a new one
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
Enter the name for the configuration
-name> <YOUR NAME>
Select "s3" storage.
+ 2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
+ No remotes found - make a new one
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> <YOUR NAME>
+Choose a number from below, or type in your own value
-1 / Alias for a existing remote
-\ "alias"
-2 / Amazon Drive
-\ "amazon cloud drive"
-3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
-\ "s3"
-4 / Backblaze B2
-\ "b2"
+ 1 / Alias for a existing remote
+ \ "alias"
+ 2 / Amazon Drive
+ \ "amazon cloud drive"
+ 3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
+ \ "s3"
+ 4 / Backblaze B2
+ \ "b2"
[snip]
-23 / http Connection
-\ "http"
-Storage> 3
Select IBM COS as the S3 Storage Provider.
+ 23 / http Connection
+ \ "http"
+Storage> 3
+Choose the S3 provider.
Choose a number from below, or type in your own value
- 1 / Choose this option to configure Storage to AWS S3
- \ "AWS"
- 2 / Choose this option to configure Storage to Ceph Systems
- \ "Ceph"
- 3 / Choose this option to configure Storage to Dreamhost
- \ "Dreamhost"
+ 1 / Choose this option to configure Storage to AWS S3
+ \ "AWS"
+ 2 / Choose this option to configure Storage to Ceph Systems
+ \ "Ceph"
+ 3 / Choose this option to configure Storage to Dreamhost
+ \ "Dreamhost"
4 / Choose this option to the configure Storage to IBM COS S3
- \ "IBMCOS"
- 5 / Choose this option to the configure Storage to Minio
- \ "Minio"
- Provider>4
Enter the Access Key and Secret.
-AWS Access Key ID - leave blank for anonymous access or runtime credentials.
-access_key_id> <>
-AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
-secret_access_key> <>
Specify the endpoint for IBM COS. For Public IBM COS, choose from the options below. For On Premise IBM COS, enter an endpoint address.
-Endpoint for IBM COS S3 API.
-Specify if using an IBM COS On Premise.
-Choose a number from below, or type in your own value
- 1 / US Cross Region Endpoint
- \ "s3-api.us-geo.objectstorage.softlayer.net"
- 2 / US Cross Region Dallas Endpoint
- \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
- 3 / US Cross Region Washington DC Endpoint
- \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
- 4 / US Cross Region San Jose Endpoint
- \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
- 5 / US Cross Region Private Endpoint
- \ "s3-api.us-geo.objectstorage.service.networklayer.com"
- 6 / US Cross Region Dallas Private Endpoint
- \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
- 7 / US Cross Region Washington DC Private Endpoint
- \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
- 8 / US Cross Region San Jose Private Endpoint
- \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
- 9 / US Region East Endpoint
- \ "s3.us-east.objectstorage.softlayer.net"
-10 / US Region East Private Endpoint
- \ "s3.us-east.objectstorage.service.networklayer.com"
-11 / US Region South Endpoint
+ \ "IBMCOS"
+ 5 / Choose this option to the configure Storage to Minio
+ \ "Minio"
+ Provider>4
+ AWS Access Key ID - leave blank for anonymous access or runtime credentials.
+ access_key_id> <>
+ AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
+ secret_access_key> <>
+ Endpoint for IBM COS S3 API.
+ Specify if using an IBM COS On Premise.
+ Choose a number from below, or type in your own value
+ 1 / US Cross Region Endpoint
+ \ "s3-api.us-geo.objectstorage.softlayer.net"
+ 2 / US Cross Region Dallas Endpoint
+ \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
+ 3 / US Cross Region Washington DC Endpoint
+ \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
+ 4 / US Cross Region San Jose Endpoint
+ \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
+ 5 / US Cross Region Private Endpoint
+ \ "s3-api.us-geo.objectstorage.service.networklayer.com"
+ 6 / US Cross Region Dallas Private Endpoint
+ \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
+ 7 / US Cross Region Washington DC Private Endpoint
+ \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
+ 8 / US Cross Region San Jose Private Endpoint
+ \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
+ 9 / US Region East Endpoint
+ \ "s3.us-east.objectstorage.softlayer.net"
+ 10 / US Region East Private Endpoint
+ \ "s3.us-east.objectstorage.service.networklayer.com"
+ 11 / US Region South Endpoint
[snip]
-34 / Toronto Single Site Private Endpoint
- \ "s3.tor01.objectstorage.service.networklayer.com"
-endpoint>1
Specify an IBM COS Location Constraint. The location constraint must match the endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter
- 1 / US Cross Region Standard
- \ "us-standard"
- 2 / US Cross Region Vault
- \ "us-vault"
- 3 / US Cross Region Cold
- \ "us-cold"
- 4 / US Cross Region Flex
- \ "us-flex"
- 5 / US East Region Standard
- \ "us-east-standard"
- 6 / US East Region Vault
- \ "us-east-vault"
- 7 / US East Region Cold
- \ "us-east-cold"
- 8 / US East Region Flex
- \ "us-east-flex"
- 9 / US South Region Standard
- \ "us-south-standard"
-10 / US South Region Vault
- \ "us-south-vault"
+ 34 / Toronto Single Site Private Endpoint
+ \ "s3.tor01.objectstorage.service.networklayer.com"
+ endpoint>1
+ 1 / US Cross Region Standard
+ \ "us-standard"
+ 2 / US Cross Region Vault
+ \ "us-vault"
+ 3 / US Cross Region Cold
+ \ "us-cold"
+ 4 / US Cross Region Flex
+ \ "us-flex"
+ 5 / US East Region Standard
+ \ "us-east-standard"
+ 6 / US East Region Vault
+ \ "us-east-vault"
+ 7 / US East Region Cold
+ \ "us-east-cold"
+ 8 / US East Region Flex
+ \ "us-east-flex"
+ 9 / US South Region Standard
+ \ "us-south-standard"
+ 10 / US South Region Vault
+ \ "us-south-vault"
[snip]
-32 / Toronto Flex
- \ "tor01-flex"
-location_constraint>1
Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.
+ 32 / Toronto Flex
+ \ "tor01-flex"
+location_constraint>1
+Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
- 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
- \ "private"
- 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
- \ "public-read"
- 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
- \ "public-read-write"
- 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
- \ "authenticated-read"
-acl> 1
Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this
-[xxx]
-type = s3
-Provider = IBMCOS
-access_key_id = xxx
-secret_access_key = yyy
-endpoint = s3-api.us-geo.objectstorage.softlayer.net
-location_constraint = us-standard
-acl = private
Execute rclone commands
-1) Create a bucket.
- rclone mkdir IBM-COS-XREGION:newbucket
-2) List available buckets.
- rclone lsd IBM-COS-XREGION:
- -1 2017-11-08 21:16:22 -1 test
- -1 2018-02-14 20:16:39 -1 newbucket
-3) List contents of a bucket.
- rclone ls IBM-COS-XREGION:newbucket
- 18685952 test.exe
-4) Copy a file from local to remote.
- rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
-5) Copy a file from remote to local.
- rclone copy IBM-COS-XREGION:newbucket/file.txt .
-6) Delete a file on remote.
- rclone delete IBM-COS-XREGION:newbucket/file.txt
[xxx]
+ type = s3
+ Provider = IBMCOS
+ access_key_id = xxx
+ secret_access_key = yyy
+ endpoint = s3-api.us-geo.objectstorage.softlayer.net
+ location_constraint = us-standard
+ acl = private
+ 1) Create a bucket.
+ rclone mkdir IBM-COS-XREGION:newbucket
+ 2) List available buckets.
+ rclone lsd IBM-COS-XREGION:
+ -1 2017-11-08 21:16:22 -1 test
+ -1 2018-02-14 20:16:39 -1 newbucket
+ 3) List contents of a bucket.
+ rclone ls IBM-COS-XREGION:newbucket
+ 18685952 test.exe
+ 4) Copy a file from local to remote.
+ rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
+ 5) Copy a file from remote to local.
+ rclone copy IBM-COS-XREGION:newbucket/file.txt .
+ 6) Delete a file on remote.
+ rclone delete IBM-COS-XREGION:newbucket/file.txt
Minio is an object storage server built for cloud application developers and devops.
It is very easy to install and provides an S3 compatible server which can be used by rclone.
@@ -5523,7 +5612,7 @@ y/e/d> y
For Netease NOS configure as per the configurator rclone config
setting the provider Netease
. This will automatically set force_path_style = false
which is necessary for it to run properly.
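The resulting config section might look something like this sketch (remote name, endpoint and keys are placeholders):
[nos]
type = s3
provider = Netease
endpoint = nos.example.com
access_key_id = XXX
secret_access_key = YYY
force_path_style = false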
B2 is Backblaze's cloud storage system.
+B2 is Backblaze’s cloud storage system.
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, eg remote:bucket/path/to/dir
.
Here is an example of making a b2 configuration. First run
rclone config
@@ -5591,24 +5680,24 @@ y/e/d> y
B2 supports multiple Application Keys for different access permission to B2 Buckets.
You can use these with rclone too; you will need to use rclone version 1.43 or later.
-Follow Backblaze's docs to create an Application Key with the required permission and add the applicationKeyId
as the account
and the Application Key
itself as the key
.
Note that you must put the applicationKeyId as the account
– you can't use the master Account ID. If you try then B2 will return 401 errors.
Follow Backblaze’s docs to create an Application Key with the required permission and add the applicationKeyId
as the account
and the Application Key
itself as the key
.
Note that you must put the applicationKeyId as the account
– you can’t use the master Account ID. If you try then B2 will return 401 errors.
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
The modified time is stored as metadata on the object as X-Bz-Info-src_last_modified_millis
as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use this as a modified time.
Modified times are used in syncing and are fully supported except in the case of updating a modification time on an existing object. In this case the object will be uploaded again as B2 doesn't have an API method to set the modification time independent of doing an upload.
+Modified times are used in syncing and are fully supported except in the case of updating a modification time on an existing object. In this case the object will be uploaded again as B2 doesn’t have an API method to set the modification time independent of doing an upload.
The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process.
Large files (bigger than the limit in --b2-upload-cutoff
) which are uploaded in chunks will store their SHA1 on the object as X-Bz-Info-large_file_sha1
as recommended by Backblaze.
For a large file to be uploaded with an SHA1 checksum, the source needs to support SHA1 checksums. The local disk supports SHA1 checksums so large file transfers from local disk will have an SHA1. See the overview for exactly which remotes support SHA1.
-Sources which don't support SHA1, in particular crypt
will upload large files without SHA1 checksums. This may be fixed in the future (see #1767).
Sources which don’t support SHA1, in particular crypt
will upload large files without SHA1 checksums. This may be fixed in the future (see #1767).
Files sizes below --b2-upload-cutoff
will always have an SHA1 regardless of the source.
Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about --transfers 32
though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of --transfers 4
is definitely too low for Backblaze B2 though.
Note that uploading big files (bigger than 200 MB by default) will use a 96 MB RAM buffer by default. There can be at most --transfers
of these in use at any moment, so this sets the upper limit on the memory used.
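For example (the bucket name is illustrative), a high concurrency sync to B2 could be run as:
rclone sync --transfers 32 /local/path b2:mybucket/path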
When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a "hard delete" of files with the --b2-hard-delete
flag which would permanently remove the file instead of hiding it.
When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a “hard delete” of files with the --b2-hard-delete
flag which would permanently remove the file instead of hiding it.
Old versions of files, where available, are visible using the --b2-versions
flag.
If you wish to remove all the old versions then you can use the rclone cleanup remote:bucket
command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, eg rclone cleanup remote:bucket/path/to/stuff
.
Note that cleanup
will remove partially uploaded files from the bucket if they are more than a day old.
Clean up all the old versions and show that they've gone.
+Clean up all the old versions and show that they’ve gone.
$ rclone -q cleanup b2:cleanup-test
$ rclone -q ls b2:cleanup-test
@@ -5654,7 +5743,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test
/b2api/v1/b2_upload_part/
/b2api/v1/b2_finish_large_file
Versions can be viewd with the --b2-versions
flag. When it is set rclone will show and act on older versions of files. For example
Versions can be viewed with the --b2-versions
flag. When it is set rclone will show and act on older versions of files. For example
Listing without --b2-versions
$ rclone -q ls b2:cleanup-test
9 one.txt
@@ -5665,11 +5754,11 @@ $ rclone -q --b2-versions ls b2:cleanup-test
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt
Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them.
-Note that when using --b2-versions
no file write operations are permitted, so you can't upload files or delete them.
Note that when using --b2-versions
no file write operations are permitted, so you can’t upload files or delete them.
Here are the standard options specific to b2 (Backblaze B2).
-Account ID or Application Key ID
Application Key
Permanently delete files on remote removal, otherwise hide files.
Here are the advanced options specific to b2 (Backblaze B2).
-Endpoint for the service. Leave blank normally.
A flag string for X-Bz-Test-Mode header for debugging.
This is for debugging purposes only. Setting it to one of the strings below will cause b2 to return specific errors:
These will be set in the "X-Bz-Test-Mode" header which is documented in the b2 integrations checklist.
+These will be set in the “X-Bz-Test-Mode” header which is documented in the b2 integrations checklist.
Include old versions in directory listings. Note that when using this no file write operations are permitted, so you can't upload files or delete them.
+Include old versions in directory listings. Note that when using this no file write operations are permitted, so you can’t upload files or delete them.
Cutoff for switching to chunked upload.
-Files above this size will be uploaded in chunks of "--b2-chunk-size".
+Files above this size will be uploaded in chunks of “–b2-chunk-size”.
This value should be set no larger than 4.657GiB (== 5GB).
Upload chunk size. Must fit in memory.
-When uploading large files, chunk the file into this size. Note that these chunks are buffered in memory and there might a maximum of "--transfers" chunks in progress at once. 5,000,000 Bytes is the minimum size.
+When uploading large files, chunk the file into this size. Note that these chunks are buffered in memory and there might be a maximum of “–transfers” chunks in progress at once. 5,000,000 Bytes is the minimum size.
Disable checksums for large (> upload cutoff) files
Custom endpoint for downloads.
+This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. Leave blank if you want to use the endpoint provided by Backblaze.
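As a sketch, assuming the corresponding config key is download_url and using a placeholder CDN hostname:
[b2]
type = b2
account = XXX
key = YYY
download_url = https://b2-cdn.example.com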
+Paths are specified as remote:path
To copy a local directory to an Box directory called backup
rclone copy /home/source remote:backup
If you have an "Enterprise" account type with Box with single sign on (SSO), you need to create a password to use Box with rclone. This can be done at your Enterprise Box account by going to Settings, "Account" Tab, and then set the password in the "Authentication" field.
+If you have an “Enterprise” account type with Box with single sign on (SSO), you need to create a password to use Box with rclone. This can be done at your Enterprise Box account by going to Settings, “Account” Tab, and then set the password in the “Authentication” field.
Once you have done this, you can setup your Enterprise Box account using the same procedure detailed above in the, using the password you have just set.
According to the box docs:
@@ -5846,7 +5944,7 @@ y/e/d> y
This means that if you
Here are the standard options specific to box (Box).
-Box App Client Id. Leave blank normally.
Box App Client Secret Leave blank normally.
Here are the advanced options specific to box (Box).
-Cutoff for switching to multipart upload (>= 50MB).
Max number of times to try committing a multipart file.
Note that Box is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
-Box file names can't have the \
character in. rclone maps this to and from an identical looking unicode equivalent \
.
Note that Box is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
+Box file names can’t have the \
character in. rclone maps this to and from an identical looking unicode equivalent \
.
Box only supports filenames up to 255 characters in length.
The cache
remote wraps another existing remote and stores file structure and its data for long running tasks like rclone mount
.
In an effort to make writing through cache more reliable, the backend now supports this feature which can be activated by specifying a cache-tmp-upload-path
.
A file goes through these states when using this feature:
-cache-tmp-wait-time
passes and the file is next in line, rclone move
is used to move the file to the cloud providercache
when it's actually deleted from the temporary path then cache
will simply swap the source to the cloud provider without interrupting the reading (small blip can happen though)cache
when it’s actually deleted from the temporary path then cache
will simply swap the source to the cloud provider without interrupting the reading (small blip can happen though)Files are uploaded in sequence and only one file is uploaded at a time. Uploads will be stored in a queue and be processed based on the order they were added. The queue and the temporary storage is persistent across restarts but can be cleared on startup with the --cache-db-purge
flag.
How to enable? Run rclone config
and add all the Plex options (endpoint, username and password) in your remote and it will be automatically enabled.
Affected settings: - cache-workers
: Configured value during confirmed playback or 1 at all other times
When the Plex server is configured to only accept secure connections, it is possible to use .plex.direct
URL's to ensure certificate validation succeeds. These URL's are used by Plex internally to connect to the Plex server securely.
The format for this URL's is the following:
+When the Plex server is configured to only accept secure connections, it is possible to use .plex.direct
URLs to ensure certificate validation succeeds. These URLs are used by Plex internally to connect to the Plex server securely.
The format of these URLs is the following:
https://ip-with-dots-replaced.server-hash.plex.direct:32400/
The ip-with-dots-replaced
part can be any IPv4 address, where the dots have been replaced with dashes, e.g. 127.0.0.1
becomes 127-0-0-1
.
To get the server-hash
part, the easiest way is to visit
https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token
This page will list all the available Plex servers for your account with at least one .plex.direct
link for each. Copy one URL and replace the IP address with the desired address. This can be used as the plex_url
value.
--dir-cache-time controls the first layer of directory caching which works at the mount layer. Being an independent caching mechanism from the cache
backend, it will manage its own entries based on the configured time.
–dir-cache-time controls the first layer of directory caching which works at the mount layer. Being an independent caching mechanism from the cache
backend, it will manage its own entries based on the configured time.
To avoid getting in a scenario where dir cache has obsolete data and cache would have the correct one, try to set --dir-cache-time
to a lower time than --cache-info-age
. Default values are already configured in this way.
There are a couple of issues with Windows mount
functionality that still require some investigations. It should be considered as experimental thus far as fixes come in for this OS.
Future iterations of the cache backend will make use of the pooling functionality of the cloud provider to synchronize and at the same time make writing through it more tolerant to failures.
There are a couple of enhancements in track to add these but in the meantime there is a valid concern that the expiring cache listings can lead to cloud provider throttles or bans due to repeated queries on it for very large mounts.
-Some recommendations: - don't use a very small interval for entry informations (--cache-info-age
) - while writes aren't yet optimised, you can still write through cache
which gives you the advantage of adding the file in the cache at the same time if configured to do so.
Some recommendations: - don’t use a very small interval for entry information (--cache-info-age
) - while writes aren’t yet optimised, you can still write through cache
which gives you the advantage of adding the file in the cache at the same time if configured to do so.
Future enhancements:
One common scenario is to keep your data encrypted in the cloud provider using the crypt
remote. crypt
uses a similar technique to wrap around an existing remote and handles this translation in a seamless way.
There is an issue with wrapping the remotes in this order: cloud remote -> crypt -> cache
-During testing, I experienced a lot of bans with the remotes in this order. I suspect it might be related to how crypt opens files on the cloud provider which makes it think we're downloading the full file instead of small chunks. Organizing the remotes in this order yelds better results: cloud remote -> cache -> crypt
+During testing, I experienced a lot of bans with the remotes in this order. I suspect it might be related to how crypt opens files on the cloud provider which makes it think we’re downloading the full file instead of small chunks. Organizing the remotes in this order yields better results: cloud remote -> cache -> crypt
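A config sketch of the recommended ordering (remote names are placeholders and the drive and crypt sections are abbreviated):
[gdrive]
type = drive

[gcache]
type = cache
remote = gdrive:encrypted

[gcrypt]
type = crypt
remote = gcache: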
cache
can not differentiate between relative and absolute paths for the wrapped remote. Any path given in the remote
config setting and on the command line will be passed to the wrapped remote as is, but for storing the chunks on disk the path will be made relative by removing any leading /
character.
This behavior is irrelevant for most backend types, but there are backends where a leading /
changes the effective directory, e.g. in the sftp
backend paths starting with a /
are relative to the root of the SSH server and paths without are relative to the user home directory. As a result sftp:bin
and sftp:/bin
will share the same cache folder, even if they represent a different directory on the SSH server.
This behavior is irrelevant for most backend types, but there are backends where a leading /
changes the effective directory, e.g. in the sftp
backend paths starting with a /
are relative to the root of the SSH server and paths without are relative to the user home directory. As a result sftp:bin
and sftp:/bin
will share the same cache folder, even if they represent a different directory on the SSH server.
Cache supports the new --rc
mode in rclone and can be remote controlled through the following end points: By default, the listener is disabled if you do not add the flag.
Purge a remote from the cache backend. Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt.
@@ -6118,15 +6216,15 @@ chunk_total_size = 10G
Here are the standard options specific to cache (Cache a remote).
-Remote to cache. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).
+Remote to cache. Normally should contain a ‘:’ and a path, eg “myremote:path/to/dir”, “myremote:bucket” or maybe “myremote:” (not recommended).
The URL of the Plex server
The username of the Plex user
The password of the Plex user
The size of a chunk (partial file data).
Use lower numbers for slower connections. If the chunk size is changed, any downloaded chunks will be invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur.
How long to cache file structure information (directory listings, file size, times etc). If all write operations are done through the cache then you can safely make this value very large as the cache store will also be updated in real time.
The total size that the chunks can take up on the local disk.
If the cache exceeds this value then it will start to delete the oldest chunks until it goes under this value.
Here are the advanced options specific to cache (Cache a remote).
-The plex token for authentication - auto set normally
Skip all certificate verifications when connecting to the Plex server
Directory to store file structure metadata DB. The remote name is used as the DB file name.
Directory to cache chunk files.
Path to where partial file data (chunks) are stored locally. The remote name is appended to the final path.
-This config follows the "--cache-db-path". If you specify a custom location for "--cache-db-path" and don't specify one for "--cache-chunk-path" then "--cache-chunk-path" will use the same path as "--cache-db-path".
+This config follows the “–cache-db-path”. If you specify a custom location for “–cache-db-path” and don’t specify one for “–cache-chunk-path” then “–cache-chunk-path” will use the same path as “–cache-db-path”.
Clear all the cached data for this remote on start.
How often should the cache perform cleanups of the chunk storage. The default value should be ok for most people. If you find that the cache goes over "cache-chunk-total-size" too often then try to lower this value to force it to perform cleanups more often.
+How often should the cache perform cleanups of the chunk storage. The default value should be ok for most people. If you find that the cache goes over “cache-chunk-total-size” too often then try to lower this value to force it to perform cleanups more often.
How many times to retry a read from a cache storage.
-Since reading from a cache stream is independent from downloading file data, readers can get to a point where there's no more data in the cache. Most of the times this can indicate a connectivity issue if cache isn't able to provide file data anymore.
+Since reading from a cache stream is independent from downloading file data, readers can get to a point where there’s no more data in the cache. Most of the times this can indicate a connectivity issue if cache isn’t able to provide file data anymore.
For really slow connections, increase this to a point where the stream is able to provide data but your experience will be very stuttering.
How many workers should run in parallel to download chunks.
Higher values will mean more parallel processing (better CPU needed) and more concurrent requests on the cloud provider. This impacts several aspects like the cloud provider API limits, more stress on the hardware that rclone runs on but it also means that streams will be more fluid and data will be available much faster to readers.
Note: If the optional Plex integration is enabled then this setting will adapt to the type of reading performed and the value specified here will be used as a maximum number of workers to use.
@@ -6293,10 +6391,10 @@ chunk_total_size = 10G
Disable the in-memory cache for storing chunks during streaming.
By default, cache will keep file data during streaming in RAM as well to provide it to readers as fast as possible.
-This transient data is evicted as soon as it is read and the number of chunks stored doesn't exceed the number of workers. However, depending on other settings like "cache-chunk-size" and "cache-workers" this footprint can increase if there are parallel streams too (multiple files being read at the same time).
+This transient data is evicted as soon as it is read and the number of chunks stored doesn’t exceed the number of workers. However, depending on other settings like “cache-chunk-size” and “cache-workers” this footprint can increase if there are parallel streams too (multiple files being read at the same time).
If the hardware permits it, use this feature to provide an overall better performance during streaming but it can also be disabled if RAM is not available on the local machine.
Limits the number of requests per second to the source FS (-1 to disable)
This setting places a hard limit on the number of requests per second that cache will be doing to the cloud provider remote and try to respect that value by setting waits between reads.
-If you find that you're getting banned or limited on the cloud provider through cache and know that a smaller number of requests per second will allow you to work with it then you can use this setting for that.
+If you find that you’re getting banned or limited on the cloud provider through cache and know that a smaller number of requests per second will allow you to work with it then you can use this setting for that.
A good balance of all the other settings should make this setting useless but it is available to set for more special cases.
NOTE: This will limit the number of requests during streams but other API calls to the cloud provider like directory listings will still pass.
Cache file data on writes through the FS
If you need to read files immediately after you upload them through cache you can enable this flag to have their data stored in the cache store at the same time during upload.
Directory to keep temporary files until they are uploaded.
This is the path where cache will use as a temporary storage for new files that need to be uploaded to the cloud provider.
Specifying a value will enable this feature. Without it, it is completely disabled and files will be uploaded directly to the cloud provider
@@ -6335,7 +6433,7 @@ chunk_total_size = 10G
How long should files be stored in local cache before being uploaded
This is the duration that a file must wait in the temporary location cache-tmp-upload-path before it is selected for upload.
Note that only one file is uploaded at a time and it can take longer to start the upload if a queue formed for this purpose.
@@ -6345,7 +6443,7 @@ chunk_total_size = 10G
How long to wait for the DB to be available - 0 is unlimited
Only one process can have the DB open at any one time, so rclone waits for this duration for the DB to become available before it gives an error.
If you set it to 0 then it will wait forever.
@@ -6359,7 +6457,7 @@ chunk_total_size = 10G
The crypt
remote encrypts and decrypts another remote.
To use it first set up the underlying remote following the config instructions for that remote. You can also use a local pathname instead of a remote which will encrypt and decrypt from that directory which might be useful for encrypting onto a USB stick for example.
-First check your chosen remote is working - we'll call it remote:path
in these docs. Note that anything inside remote:path
will be encrypted and anything outside won't. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote s3:bucket
. If you just use s3:
then rclone will make encrypted bucket names too (if using file name encryption) which may or may not be what you want.
First check your chosen remote is working - we’ll call it remote:path
in these docs. Note that anything inside remote:path
will be encrypted and anything outside won’t. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote s3:bucket
. If you just use s3:
then rclone will make encrypted bucket names too (if using file name encryption) which may or may not be what you want.
Now configure crypt
using rclone config
. We will call this one secret
to differentiate it from the remote
.
No remotes found - make a new one
n) New remote
@@ -6452,7 +6550,7 @@ y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
-Important The password is stored in the config file is lightly obscured so it isn't immediately obvious what it is. It is in no way secure unless you use config file encryption.
+Important The password stored in the config file is lightly obscured so it isn’t immediately obvious what it is. It is in no way secure unless you use config file encryption.
A long passphrase is recommended, or you can use a random one. Note that if you reconfigure rclone with the same passwords/passphrases elsewhere it will be compatible - all the secrets used are derived from those two passwords/passphrases.
Note that rclone does not encrypt
In normal use, make sure the remote has a :
in. If you specify the remote without a :
then rclone will use a local directory of that name. So if you use a remote of /path/to/secret/files
then rclone will encrypt stuff to that directory. If you use a remote of name
then rclone will put files in a directory called name
in the current directory.
If you specify the remote as remote:path/to/dir
then rclone will store encrypted files in path/to/dir
on the remote. If you are using file name encryption, then when you save files to secret:subdir/subfile
this will store them in the unencrypted path path/to/dir
but the subdir/subpath
bit will be encrypted.
Note that unless you want encrypted bucket names (which are difficult to manage because you won't know what directory they represent in web interfaces etc), you should probably specify a bucket, eg remote:secretbucket
when using bucket based remotes such as S3, Swift, Hubic, B2, GCS.
Note that unless you want encrypted bucket names (which are difficult to manage because you won’t know what directory they represent in web interfaces etc), you should probably specify a bucket, eg remote:secretbucket
when using bucket based remotes such as S3, Swift, Hubic, B2, GCS.
To test I made a little directory of files using "standard" file name encryption.
+To test I made a little directory of files using “standard” file name encryption.
plaintext/
├── file0.txt
├── file1.txt
@@ -6493,7 +6591,7 @@ $ rclone -q ls secret:
8 file2.txt
9 file3.txt
10 subsubdir/file4.txt
-If don't use file name encryption then the remote will look like this - note the .bin
extensions added to prevent the cloud provider attempting to interpret the data.
If you don’t use file name encryption then the remote will look like this - note the .bin
extensions added to prevent the cloud provider attempting to interpret the data.
$ rclone -q ls remote:path
54 file0.txt.bin
57 subdir/file3.txt.bin
@@ -6504,22 +6602,22 @@ $ rclone -q ls secret:
Here are some of the features of the file name encryption modes
Off
-- doesn't hide file names or directory structure
+- doesn’t hide file names or directory structure
- allows for longer file names (~246 characters)
- can use sub paths and copy single files
Standard
- file names encrypted
-- file names can't be as long (~143 characters)
+- file names can’t be as long (~143 characters)
- can use sub paths and copy single files
- directory structure visible
- identical files names will have identical uploaded names
- can use shortcuts to shorten the directory recursion
Obfuscation
-This is a simple "rotate" of the filename, with each file having a rot distance based on the filename. We store the distance at the beginning of the filename. So a file called "hello" may become "53.jgnnq"
-This is not a strong encryption of filenames, but it may stop automated scanning tools from picking up on filename patterns. As such it's an intermediate between "off" and "standard". The advantage is that it allows for longer path segment names.
+This is a simple “rotate” of the filename, with each file having a rot distance based on the filename. We store the distance at the beginning of the filename. So a file called “hello” may become “53.jgnnq”
+This is not a strong encryption of filenames, but it may stop automated scanning tools from picking up on filename patterns. As such it’s an intermediate between “off” and “standard”. The advantage is that it allows for longer path segment names.
There is a possibility with some unicode based filenames that the obfuscation is weak and may map lower case characters to upper case equivalents. You can not rely on this for strong protection.
- file names very lightly obfuscated
@@ -6528,7 +6626,7 @@ $ rclone -q ls secret:
- directory structure visible
- identical files names will have identical uploaded names
-Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using "Standard" file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.
+Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using “Standard” file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.
There may be an even more secure file name encryption mode in the future which will address the long file name problem.
Directory name encryption
Crypt offers the option of encrypting dir names or leaving them intact. There are two options:
@@ -6539,42 +6637,42 @@ $ rclone -q ls secret:
Modified time and hashes
Crypt stores modification times using the underlying remote so support depends on that.
Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.
-Note that you should use the rclone cryptcheck
command to check the integrity of a crypted remote instead of rclone check
which can't check the checksums properly.
+Note that you should use the rclone cryptcheck
command to check the integrity of a crypted remote instead of rclone check
which can’t check the checksums properly.
Standard Options
Here are the standard options specific to crypt (Encrypt/Decrypt a remote).
---crypt-remote
-Remote to encrypt/decrypt. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).
+–crypt-remote
+Remote to encrypt/decrypt. Normally should contain a ‘:’ and a path, eg “myremote:path/to/dir”, “myremote:bucket” or maybe “myremote:” (not recommended).
- Config: remote
- Env Var: RCLONE_CRYPT_REMOTE
- Type: string
- Default: ""
---crypt-filename-encryption
+–crypt-filename-encryption
How to encrypt the filenames.
- Config: filename_encryption
- Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION
- Type: string
-- Default: "standard"
+- Default: “standard”
- Examples:
-- "off"
+
- “off”
-- Don't encrypt the file names. Adds a ".bin" extension only.
+- Don’t encrypt the file names. Adds a “.bin” extension only.
-- "standard"
+
- “standard”
- Encrypt the filenames see the docs for the details.
-- "obfuscate"
+
- “obfuscate”
- Very simple filename obfuscation.
---crypt-directory-name-encryption
+–crypt-directory-name-encryption
Option to either encrypt directory names or leave them intact.
- Config: directory_name_encryption
@@ -6583,17 +6681,17 @@ $ rclone -q ls secret:
- Default: true
- Examples:
-- "true"
+
- “true”
- Encrypt directory names.
-- "false"
+
- “false”
-- Don't encrypt directory names, leave them intact.
+- Don’t encrypt directory names, leave them intact.
---crypt-password
+–crypt-password
Password or pass phrase for encryption.
- Config: password
@@ -6601,7 +6699,7 @@ $ rclone -q ls secret:
- Type: string
- Default: ""
---crypt-password2
+–crypt-password2
Password or pass phrase for salt. Optional but recommended. Should be different to the previous password.
- Config: password2
@@ -6611,7 +6709,7 @@ $ rclone -q ls secret:
Advanced Options
Here are the advanced options specific to crypt (Encrypt/Decrypt a remote).
---crypt-show-mapping
+–crypt-show-mapping
For all files listed show how the names encrypt.
If this flag is set then for each file that the remote is asked to list, it will log (at level INFO) a line stating the decrypted file name and the encrypted file name.
This is so you can work out which encrypted names are which decrypted names just in case you need to do something with the encrypted file names, or for debugging purposes.
@@ -6628,9 +6726,9 @@ $ rclone -q ls secret:
rclone sync
will check the checksums while copying
- you can use
rclone check
between the encrypted remotes
-- you don't decrypt and encrypt unnecessarily
+- you don’t decrypt and encrypt unnecessarily
-For example, let's say you have your original remote at remote:
with the encrypted version at eremote:
with path remote:crypt
. You would then set up the new remote remote2:
and then the encrypted version eremote2:
with path remote2:crypt
using the same passwords as eremote:
.
+For example, let’s say you have your original remote at remote:
with the encrypted version at eremote:
with path remote:crypt
. You would then set up the new remote remote2:
and then the encrypted version eremote2:
with path remote2:crypt
using the same passwords as eremote:
.
To sync the two remotes you would do
rclone sync remote:crypt remote2:crypt
And to check the integrity you would do
@@ -6651,7 +6749,7 @@ $ rclone -q ls secret:
- 16 Bytes of Poly1305 authenticator
- 1 - 65536 bytes XSalsa20 encrypted data
-64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops off due to cache effects above this). Note that these chunks are buffered in memory so they can't be too big.
+64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops off due to cache effects above this). Note that these chunks are buffered in memory so they can’t be too big.
This uses a 32 byte (256 bit key) key derived from the user password.
Examples
1 byte file will encrypt to
@@ -6669,12 +6767,12 @@ $ rclone -q ls secret:
Name encryption
File names are encrypted segment by segment - the path is broken up into /
separated strings and these are encrypted individually.
File segments are padded using PKCS#7 to a multiple of 16 bytes before encryption.
-They are then encrypted with EME using AES with 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.
-This makes for deterministic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can't find it on the cloud storage system.
+They are then encrypted with EME using AES with 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper “A Parallelizable Enciphering Mode” by Halevi and Rogaway.
+This makes for deterministic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can’t find it on the cloud storage system.
This means that
- filenames with the same name will encrypt the same
-- filenames which start the same won't have a common prefix
+- filenames which start the same won’t have a common prefix
This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of which are derived from the user password.
After encryption they are written out using a modified version of standard base32
encoding as described in RFC4648. The standard encoding is modified in two ways:
@@ -6684,7 +6782,7 @@ $ rclone -q ls secret:
base32
is used rather than the more efficient base64
so rclone can be used on case insensitive remotes (eg Windows, Amazon Drive).
Key derivation
-Rclone uses scrypt
with parameters N=16384, r=8, p=1
with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.
+Rclone uses scrypt
with parameters N=16384, r=8, p=1
with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn’t supply a salt then rclone uses an internal one.
scrypt
makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.
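A minimal sketch of that derivation in Python (the password and salt values are made up, and rclone's real implementation is in Go and handles these values differently):
import hashlib

password = b"correct horse battery staple"  # placeholder password
salt = b"password2-salt"                    # optional user supplied salt (password2)

# 80 bytes of key material, used as the 32 byte data key, 32 byte name key
# and 16 byte name IV described above
key_material = hashlib.scrypt(password, salt=salt, n=16384, r=8, p=1, dklen=80)
data_key, name_key, name_iv = key_material[:32], key_material[32:64], key_material[64:]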
Dropbox
Paths are specified as remote:path
@@ -6760,12 +6858,12 @@ y/e/d> y
A leading /
for a Dropbox personal account will do nothing, but it will take an extra HTTP transaction so it should be avoided.
Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.
-This means that if you uploaded your data with an older version of rclone which didn't support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don't want this to happen use --size-only
or --checksum
flag to stop it.
This means that if you uploaded your data with an older version of rclone which didn’t support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don’t want this to happen use --size-only
or --checksum
flag to stop it.
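For example (paths are illustrative), either of these would avoid the mass re-upload:
rclone sync --size-only /local/path dropbox:backup
rclone sync --checksum /local/path dropbox:backup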
Dropbox supports its own hash type which is checked for all transfers.
Here are the standard options specific to dropbox (Dropbox).
-Dropbox App Client Id Leave blank normally.
Dropbox App Client Secret Leave blank normally.
Here are the advanced options specific to dropbox (Dropbox).
-Upload chunk size. (< 150M).
Any files larger than this will be uploaded in chunks of this size.
Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128MB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory.
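As an example (the value is illustrative and must stay under the 150M limit), the chunk size can be raised for a single transfer like so:
rclone copy --dropbox-chunk-size 100M /local/bigfiles dropbox:bigfiles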
@@ -6793,7 +6891,7 @@ y/e/d> y
Impersonate this user when using a business account.
Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
-There are some file names such as thumbs.db
which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading
if it attempts to upload one of those file names, but the sync won't fail.
Note that Dropbox is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
+There are some file names such as thumbs.db
which Dropbox can’t store. There is a full list of them in the “Ignored Files” section of this document. Rclone will issue an error message File name disallowed - not uploading
if it attempts to upload one of those file names, but the sync won’t fail.
If you have more than 10,000 files in a directory then rclone purge dropbox:dir
will return the error Failed to purge: There are too many files involved in this operation
. As a work-around do an rclone delete dropbox:dir
followed by an rclone rmdir dropbox:dir
.
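Spelled out, the work-around is simply (directory name as in the example above):
rclone delete dropbox:dir
rclone rmdir dropbox:dir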
FTP is the File Transfer Protocol. FTP support is provided using the github.com/jlaffaye/ftp package.
@@ -6895,7 +6993,7 @@ y/e/d> y
Here are the standard options specific to ftp (FTP Connection).
-FTP host to connect to
FTP username, leave blank for current username, ncw
FTP port, leave blank to use default (21)
FTP password
Here are the advanced options specific to ftp (FTP Connection).
+Maximum number of FTP simultaneous connections, 0 for unlimited
-Note that since FTP isn't HTTP based the following flags don't work with it: --dump-headers
, --dump-bodies
, --dump-auth
Note that --timeout
isn't supported (but --contimeout
is).
Note that --bind
isn't supported.
FTP could support server side move but doesn't yet.
+Note that since FTP isn’t HTTP based the following flags don’t work with it: --dump-headers
, --dump-bodies
, --dump-auth
Note that --timeout
isn’t supported (but --contimeout
is).
Note that --bind
isn’t supported.
FTP could support server side move but doesn’t yet.
Note that the ftp backend does not support the ftp_proxy
environment variable yet.
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, eg remote:bucket/path/to/dir
.
Sync /home/local/directory
to the remote bucket, deleting any excess files in the bucket.
rclone sync /home/local/directory remote:bucket
You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.
-To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User
permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.
To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file
prompt and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials
with the actual contents of the file instead, or set the equivalent environment variable.
You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don’t have actively logged-in users, for example build machines.
+To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User
permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account’s credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.
To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file
prompt and rclone won’t use the browser based authentication flow. If you’d rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials
with the actual contents of the file instead, or set the equivalent environment variable.
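A sketch of what the relevant part of the config file might look like once a service account is in use (the remote name, project number and file path are placeholders):
[gcs]
type = google cloud storage
project_number = 12345678
service_account_file = /path/to/service-account.json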
If no other source of credentials is provided, rclone will fall back to Application Default Credentials. This is useful both when you have already configured authentication for your developer account and in production when running on a google compute host. Note that if running in docker, you may need to run additional commands on your google compute machine - see this page.
+Note that when application default credentials are used, there is no need to explicitly configure a project number.
+This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
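For example (bucket name illustrative):
rclone ls --fast-list remote:bucket
The flag only changes how listings are performed, so it can be added to sync, copy, ls and similar commands.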
Google google cloud storage stores md5sums natively and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1ns.
+Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the “mtime” key in RFC3339 format accurate to 1ns.
Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
-Google Application Client Id Leave blank normally.
Google Application Client Secret Leave blank normally.
Project number. Optional - needed only for list/create/delete buckets - see your developer console.
Service Account Credentials JSON file path Leave blank normally. Needed only if you want to use SA instead of interactive login.
Service Account Credentials JSON blob Leave blank normally. Needed only if you want to use SA instead of interactive login.
Access Control List for new objects.
Access Control List for new buckets.
Access checks should use bucket-level IAM policies.
+If you want to upload objects to a bucket with Bucket Policy Only set then you will need to set this.
+When it is set, rclone:
+Docs: https://cloud.google.com/storage/docs/bucket-policy-only
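If I read this correctly, that means adding something like the following line to the remote's section of the config file (or passing the equivalent --gcs-bucket-policy-only flag):
bucket_policy_only = true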
+Location for the newly created buckets.
The storage class to use when storing objects in Google Cloud Storage.
The scopes are
This is the default scope and allows full access to all files, except for the Application Data Folder (see below).
-Choose this one if you aren't sure.
+Choose this one if you aren’t sure.
This allows read only access to all files. Files may be listed and downloaded but not uploaded, renamed or deleted.
This can be useful if you are using rclone to backup data and you want to be sure confidential data on your drive is not visible to rclone.
Files created with this scope are visible in the web interface.
This gives rclone its own private area to store files. Rclone will not be able to see any other files on your drive and you won't be able to see rclone's files from the web interface either.
+This gives rclone its own private area to store files. Rclone will not be able to see any other files on your drive and you won’t be able to see rclone’s files from the web interface either.
This allows read only access to file names only. It does not allow rclone to download or upload data, or rename or delete files or directories.
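For instance, a backup-only remote as described above could be configured with the read only scope (remote name illustrative):
[gdrive-backup]
type = drive
scope = drive.readonly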
You can set the root_folder_id
for rclone. This is the directory (identified by its Folder ID
) that rclone considers to be a the root of your drive.
You can set the root_folder_id
for rclone. This is the directory (identified by its Folder ID
) that rclone considers to be the root of your drive.
Normally you will leave this blank and rclone will determine the correct root to use itself.
-However you can set this to restrict rclone to a specific folder hierarchy or to access data within the "Computers" tab on the drive web interface (where files from Google's Backup and Sync desktop program go).
+However you can set this to restrict rclone to a specific folder hierarchy or to access data within the “Computers” tab on the drive web interface (where files from Google’s Backup and Sync desktop program go).
In order to do this you will have to find the Folder ID
of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the drive web interface.
So if the folder you want rclone to use has a URL which looks like https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh
in the browser, then you use 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh
as the root_folder_id
in the config.
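Using the example ID above, the relevant config entries would look something like this (remote name illustrative):
[gdrive]
type = drive
root_folder_id = 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh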
NB folders under the "Computers" tab seem to be read only (drive gives a 500 error) when using rclone.
-There doesn't appear to be an API to discover the folder IDs of the "Computers" tab - please contact us if you know otherwise!
-Note also that rclone can't access any data under the "Backups" tab on the google drive web interface yet.
+NB folders under the “Computers” tab seem to be read only (drive gives a 500 error) when using rclone.
+There doesn’t appear to be an API to discover the folder IDs of the “Computers” tab - please contact us if you know otherwise!
+Note also that rclone can’t access any data under the “Backups” tab on the google drive web interface yet.
You can set up rclone with Google Drive in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.
-To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file
prompt during rclone config
and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials
with the actual contents of the file instead, or set the equivalent environment variable.
You can set up rclone with Google Drive in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don’t have actively logged-in users, for example build machines.
+To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file
prompt during rclone config
and rclone won’t use the browser based authentication flow. If you’d rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials
with the actual contents of the file instead, or set the equivalent environment variable.
Let's say that you are the administrator of a Google Apps (old) or G-suite account. The goal is to store data on an individual's Drive account, who IS a member of the domain. We'll call the domain example.com, and the user foo@example.com.
-There's a few steps we need to go through to accomplish this:
+Let’s say that you are the administrator of a Google Apps (old) or G-suite account. The goal is to store data on an individual’s Drive account, who IS a member of the domain. We’ll call the domain example.com, and the user foo@example.com.
+There are a few steps we need to go through to accomplish this:
https://www.googleapis.com/auth/drive
to grant access to Google Drive specifically.https://www.googleapis.com/auth/drive
to grant access to Google Drive specifically.rclone config
@@ -7489,7 +7616,7 @@ root_folder_id> # Can be left blank
service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
y/n> # Auto config, y
-rclone -v --drive-impersonate foo@example.com lsf gdrive:backup
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
It does this by combining multiple list
calls into a single API request.
This works by combining many '%s' in parents
filters into one expression. To list the contents of directories a, b and c, the the following requests will be send by the regular List
function:
This works by combining many '%s' in parents
filters into one expression. To list the contents of directories a, b and c, the following requests will be sent by the regular List
function:
trashed=false and 'a' in parents
trashed=false and 'b' in parents
trashed=false and 'c' in parents
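Presumably the combined request then looks something like this single filter (a sketch of the idea, not necessarily rclone's exact query string):
trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)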
@@ -7568,11 +7695,11 @@ trashed=false and 'c' in parents
Google documents can be exported from and uploaded to Google Drive.
When rclone downloads a Google doc it chooses a format to download depending upon the --drive-export-formats
setting. By default the export formats are docx,xlsx,pptx,svg
which are a sensible default for an editable document.
When choosing a format, rclone runs down the list provided in order and chooses the first file format the doc can be exported as from the list. If the file can't be exported to a format on the formats list, then rclone will choose a format from the default list.
+When choosing a format, rclone runs down the list provided in order and chooses the first file format the doc can be exported as from the list. If the file can’t be exported to a format on the formats list, then rclone will choose a format from the default list.
If you prefer an archive copy then you might use --drive-export-formats pdf
, or if you prefer openoffice/libreoffice formats you might use --drive-export-formats ods,odt,odp
.
Note that rclone adds the extension to the google doc, so if it is calles My Spreadsheet
on google docs, it will be exported as My Spreadsheet.xlsx
or My Spreadsheet.pdf
etc.
Note that rclone adds the extension to the google doc, so if it is called My Spreadsheet
on google docs, it will be exported as My Spreadsheet.xlsx
or My Spreadsheet.pdf
etc.
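For example, to pull down archive copies of Google docs as PDFs (remote and paths illustrative):
rclone copy --drive-export-formats pdf gdrive:documents /local/archive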
When importing files into Google Drive, rclone will convert all files with an extension in --drive-import-formats
to their associated document type. rclone will not convert any files by default, since the conversion is a lossy process.
The conversion must result in a file with the same extension when the --drive-export-formats
rules are applied to the uploded document.
The conversion must result in a file with the same extension when the --drive-export-formats
rules are applied to the uploaded document.
Here are some examples for allowed and prohibited conversions.
This limitation can be disabled by specifying --drive-allow-import-name-change
. When using this flag, rclone can convert multiple files types resulting in the same document type at once, eg with --drive-import-formats docx,odt,txt
, all files having these extension would result in a doument represented as a docx file. This brings the additional risk of overwriting a document, if multiple files have the same stem. Many rclone operations will not handle this name change in any way. They assume an equal name when copying files and might copy the file again or delete them when the name changes.
This limitation can be disabled by specifying --drive-allow-import-name-change
. When using this flag, rclone can convert multiple file types resulting in the same document type at once, eg with --drive-import-formats docx,odt,txt
, all files having these extensions would result in a document represented as a docx file. This brings the additional risk of overwriting a document, if multiple files have the same stem. Many rclone operations will not handle this name change in any way. They assume an equal name when copying files and might copy the file again or delete them when the name changes.
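For example (remote and paths illustrative), to convert local docx, odt and txt files to Google documents on upload, accepting the name change:
rclone copy --drive-import-formats docx,odt,txt --drive-allow-import-name-change /local/docs gdrive:imported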
Here are the possible export extensions with their corresponding mime types. Most of these can also be used for importing, but there are more that are not listed here. Some of these additional ones might only be available when the operating system provides the correct MIME type entries.
-This list can be changed by Google Drive at any time and might not represent the currently available converions.
-Google douments can also be exported as link files. These files will open a browser window for the Google Docs website of that dument when opened. The link file extension has to be specified as a --drive-export-formats
parameter. They will match all available Google Documents.
Google documents can also be exported as link files. These files will open a browser window for the Google Docs website of that document when opened. The link file extension has to be specified as a --drive-export-formats
parameter. They will match all available Google Documents.