diff --git a/MANUAL.html b/MANUAL.html
index 51a73d999..be1581709 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -6,81 +6,82 @@
Nov 01, 2021
+Mar 18, 2022
Rclone is a command line program to manage files on cloud storage. It is a feature rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.
+Rclone is a command-line program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.
Rclone has powerful cloud equivalents to the unix commands rsync, cp, mv, mount, ls, ncdu, tree, rm, and cat. Rclone's familiar syntax includes shell pipeline support, and --dry-run
protection. It is used at the command line, in scripts or via its API.
Users call rclone "The Swiss army knife of cloud storage", and "Technology indistinguishable from magic".
Rclone really looks after your data. It preserves timestamps and verifies checksums at all times. Transfers over limited bandwidth, over intermittent connections, or subject to quota can be restarted from the last good file transferred. You can check the integrity of your files. Where possible, rclone employs server-side transfers to minimise local bandwidth use and transfers from one provider to another without using local disk.
Virtual backends wrap local and cloud file systems to apply encryption, compression, chunking, hashing and joining.
Rclone mounts any local, cloud or virtual filesystem as a disk on Windows, macOS, linux and FreeBSD, and also serves these over SFTP, HTTP, WebDAV, FTP and DLNA.
-Rclone is mature, open source software originally inspired by rsync and written in Go. The friendly support community are familiar with varied use cases. Official Ubuntu, Debian, Fedora, Brew and Chocolatey repos. include rclone. For the latest version downloading from rclone.org is recommended.
-Rclone is widely used on Linux, Windows and Mac. Third party developers create innovative backup, restore, GUI and business process solutions using the rclone command line or API.
+Rclone is mature, open-source software originally inspired by rsync and written in Go. The friendly support community is familiar with varied use cases. Official Ubuntu, Debian, Fedora, Brew and Chocolatey repos include rclone. For the latest version, downloading from rclone.org is recommended.
+Rclone is widely used on Linux, Windows and Mac. Third-party developers create innovative backup, restore, GUI and business process solutions using the rclone command line or API.
Rclone does the heavy lifting of communicating with cloud storage.
Rclone helps you:
@@ -109,7 +110,7 @@
(There are many others, built on standard protocols such as WebDAV or S3, that work out of the box.)
Install manpage
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
-sudo mandb
+sudo mandb
Run rclone config
to set it up. See rclone config docs for more details.
rclone config
You need to mount the host rclone config dir at /config/rclone
into the Docker container. Because rclone updates tokens inside its config file, and the update process involves a file rename, you need to mount the whole host rclone config dir, not just the single host rclone config file.
You need to mount a host data dir at /data
into the Docker container.
By default, the rclone binary inside a Docker container runs with UID=0 (root). As a result, all files created in a run will have UID=0. If your config and data files reside on the host with a non-root UID:GID, you need to pass these on the container start command line.
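As a minimal, hedged sketch of such a start command (assuming the official rclone/rclone image; the host paths ~/.config/rclone and ~/data are only placeholders for your own config and data directories):
docker run --rm -it \
    --user "$(id -u):$(id -g)" \
    -v ~/.config/rclone:/config/rclone \
    -v ~/data:/data \
    rclone/rclone listremotes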
--rc-addr
to :5572
in order to connect to it from outside the container. An explanation about why this is necessary is present here.
+If you want to access the RC interface (either via the API or the Web UI), you need to set --rc-addr
to :5572
in order to connect to it from outside the container. An explanation of why this is necessary can be found here.
host
should probably set it to listen to localhost only, with 127.0.0.1:5572
as the value for --rc-addr
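For illustration, a hedged example of publishing the RC port from a container (the image name, host port mapping and config path are assumptions; adjust to your setup):
docker run --rm -d \
    -p 5572:5572 \
    -v ~/.config/rclone:/config/rclone \
    rclone/rclone rcd --rc-addr :5572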
Make sure you have at least Go go1.14 installed. Download go if necessary. The latest release is recommended. Then
-git clone https://github.com/rclone/rclone.git
-cd rclone
-go build
-# If on macOS and mount is wanted, instead run: make GOTAGS=cmount
-./rclone version
Make sure you have at least Go go1.15 installed. Download go if necessary. The latest release is recommended. Then
+git clone https://github.com/rclone/rclone.git
+cd rclone
+go build
+# If on macOS and mount is wanted, instead run: make GOTAGS=cmount
+./rclone version
This will leave you with a checked-out version of rclone you can modify and send pull requests with. If you use make
instead of go build
then the rclone build will have the correct version information in it.
You can also build the latest stable rclone with:
go get github.com/rclone/rclone
@@ -350,12 +354,12 @@ kill %1
For running rclone at system startup, you can create a Windows service that executes your rclone command, as an alternative to a scheduled task configured to run at startup.
For mount commands, Rclone has a built-in Windows service integration via the third party WinFsp library it uses. Registering as a regular Windows service easy, as you just have to execute the built-in PowerShell command New-Service
(requires administrative privileges).
For mount commands, Rclone has a built-in Windows service integration via the third-party WinFsp library it uses. Registering as a regular Windows service is easy, as you just have to execute the built-in PowerShell command New-Service
(requires administrative privileges).
Example of a PowerShell command that creates a Windows service for mounting some remote:/files
as drive letter X:
, for all users (service will be running as the local system account):
New-Service -Name Rclone -BinaryPathName 'c:\rclone\rclone.exe mount remote:/files X: --config c:\rclone\config\rclone.conf --log-file c:\rclone\logs\mount.txt'
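Once registered, the service can be controlled with the standard PowerShell cmdlets; a brief sketch, assuming the service name Rclone from the example above:
Start-Service Rclone          # start the mount
Stop-Service Rclone           # stop it again
sc.exe delete Rclone          # remove the service (Remove-Service needs PowerShell 6+)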
The WinFsp service infrastructure supports incorporating services for file system implementations, such as rclone, into its own launcher service, as a kind of "child service". This has the additional advantage that it also implements a network provider that integrates into Windows standard methods for managing network drives. This is currently not officially supported by Rclone, but with WinFsp version 2019.3 B2 / v1.5B2 or later it should be possible through path rewriting as described here.
-To Windows service running any rclone command, the excellent third party utility NSSM, the "Non-Sucking Service Manager", can be used. It includes some advanced features such as adjusting process periority, defining process environment variables, redirect to file anything written to stdout, and customized response to different exit codes, with a GUI to configure everything from (although it can also be used from command line ).
+To run any rclone command as a Windows service, the excellent third-party utility NSSM, the "Non-Sucking Service Manager", can be used. It includes some advanced features such as adjusting process priority, defining process environment variables, redirecting anything written to stdout to a file, and customized responses to different exit codes, with a GUI to configure everything from (although it can also be used from the command line).
There are also several other alternatives. To mention one more, WinSW, "Windows Service Wrapper", is worth checking out. It requires .NET Framework, but this is preinstalled on newer versions of Windows, and it also provides alternative standalone distributions which include the necessary runtime (.NET 5). WinSW is a command-line only utility, where you have to manually create an XML file with the service configuration. This may be a drawback for some, but it can also be an advantage as it is easy to back up and re-use the configuration settings, without having to go through manual steps in a GUI. One thing to note is that by default it does not restart the service on error; you have to explicitly enable this in the configuration file (via the "onfailure" parameter).
See the following for detailed instructions for
Checks the files in the source and destination match.
Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don't match. It doesn't alter the source or destination.
+Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files that don't match. It doesn't alter the source or destination.
If you supply the --size-only
flag, it will only compare the sizes not the hashes as well. Use this for a quick check.
If you supply the --download
flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
If you supply the --checkfile HASH
flag with a valid hash name, the source:path
must point to a text file in the SUM format.
The --combined
flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files.
= path means path was found in source and destination and was identical
- path means path was missing on the source, so only in the destination
+ path means path was missing on the destination, so only in the source
* path means path was present in source and destination but different.
! path means there was an error reading or hashing the source or dest.
rclone check source:path dest:path [flags]
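As a hedged illustration of combining these flags (the remote and local paths are placeholders):
rclone check remote:backup /local/backup --size-only
rclone check remote:backup /local/backup --combined changes.txt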
@@ -636,10 +642,10 @@ rclone --dry-run --min-size 100M delete remote:path
lsf to list objects and directories in easy to parse format
lsjson to list objects and directories in JSON format
-ls, lsl, lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.
+ls, lsl, lsd are designed to be human-readable. lsf is designed to be human and machine-readable. lsjson is designed to be machine-readable.
Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion.
The other list commands lsd, lsf, lsjson do not recurse by default - use -R to make them recurse.
Listing a non existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket based remotes).
+Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
rclone ls remote:path [flags]
-h, --help help for ls
@@ -671,10 +677,10 @@ rclone --dry-run --min-size 100M delete remote:path
lsf to list objects and directories in easy to parse format
lsjson to list objects and directories in JSON format
-ls, lsl, lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.
+ls, lsl, lsd are designed to be human-readable. lsf is designed to be human and machine-readable. lsjson is designed to be machine-readable.
Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion.
The other list commands lsd, lsf, lsjson do not recurse by default - use -R to make them recurse.
Listing a non existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket based remotes).
+Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
rclone lsd remote:path [flags]
-h, --help help for lsd
@@ -703,10 +709,10 @@ rclone --dry-run --min-size 100M delete remote:path
lsf to list objects and directories in easy to parse format
lsjson to list objects and directories in JSON format
-ls, lsl, lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.
+ls, lsl, lsd are designed to be human-readable. lsf is designed to be human and machine-readable. lsjson is designed to be machine-readable.
Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion.
The other list commands lsd, lsf, lsjson do not recurse by default - use -R to make them recurse.
Listing a non existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket based remotes).
+Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
rclone lsl remote:path [flags]
-h, --help help for lsl
@@ -720,6 +726,7 @@ rclone --dry-run --min-size 100M delete remote:path
Produces an md5sum file for all the objects in the path. This is in the same format as the standard md5sum tool produces.
By default, the hash is requested from the remote. If MD5 is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling MD5 for any remote.
+This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).
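For example, a brief sketch of the two stdin forms described above (archive.tar is just a placeholder file):
cat archive.tar | rclone md5sum
cat archive.tar | rclone md5sum -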
rclone md5sum remote:path [flags]
--base64 Output base64 encoded hashsum
@@ -737,6 +744,8 @@ rclone --dry-run --min-size 100M delete remote:path
Produces an sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces.
By default, the hash is requested from the remote. If SHA-1 is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling SHA-1 for any remote.
+This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).
rclone sha1sum remote:path [flags]
--base64 Output base64 encoded hashsum
@@ -815,9 +824,9 @@ beta: 1.42.0.5 (released 2018-06-17)
Deduping by name is only useful with a small group of backends (e.g. Google Drive, Opendrive) that can have duplicate file names. It can be run on wrapping backends (e.g. crypt) if they wrap a backend which supports duplicate file names.
However, if --by-hash is passed in, then dedupe will find files with duplicate hashes instead, which will work on any backend that supports at least one hash. This can be used to find files with duplicate content. This is known as deduping by hash.
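A minimal, hedged example of deduping by hash rather than by name (remote:path is a placeholder):
rclone dedupe --by-hash remote:path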
If deduping by name, first rclone will merge directories with the same name. It will do this iteratively until all the identically named directories have been merged.
-Next, if deduping by name, for every group of duplicate file names / hashes, it will delete all but one identical files it finds without confirmation. This means that for most duplicated files the dedupe
command will not be interactive.
+Next, if deduping by name, for every group of duplicate file names / hashes, it will delete all but one identical file it finds without confirmation. This means that for most duplicated files the dedupe
command will not be interactive.
dedupe
considers files to be identical if they have the same file path and the same hash. If the backend does not support hashes (e.g. crypt wrapping Google Drive) then they will never be found to be identical. If you use the --size-only
flag then files will be considered identical if they have the same size (any hash will be ignored). This can be useful on crypt backends which do not support hashes.
-Next rclone will resolve the remaining duplicates. Exactly which action is taken depends on the dedupe mode. By default rclone will interactively query the user for each one.
+Next rclone will resolve the remaining duplicates. Exactly which action is taken depends on the dedupe mode. By default, rclone will interactively query the user for each one.
Important: Since this can cause data loss, test first with the --dry-run
or the --interactive
/-i
flag.
Here is an example run.
Before - with duplicates
@@ -873,13 +882,13 @@ two-3.txt: renamed from: two.txt
--dedupe-mode rename - removes identical files then renames the rest to be different.
--dedupe-mode list - lists duplicate dirs and files only and changes nothing.
For example to rename all the identically named photos in your Google Photos directory, do
+For example, to rename all the identically named photos in your Google Photos directory, do
rclone dedupe --dedupe-mode rename "drive:Google Photos"
Or
rclone dedupe rename "drive:Google Photos"
rclone dedupe [mode] remote:path [flags]
--by-hash Find indentical hashes rather than names
+ --by-hash Find identical hashes rather than names
--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename (default "interactive")
-h, --help help for dedupe
See the global flags page for global options not listed here.
@@ -913,7 +922,7 @@ Used: 7993453766
Free: 1411001220
Trashed: 104857602
Other: 8849156022
-A --json
flag generates conveniently computer readable output, e.g.
A --json
flag generates conveniently machine-readable output, e.g.
{
"total": 18253611008,
"used": 7993453766,
@@ -948,19 +957,19 @@ Other: 8849156022
Run a backend specific command.
+Run a backend-specific command.
This runs a backend specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions.
+This runs a backend-specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions.
You can discover what commands a backend implements by using
rclone backend help remote:
rclone backend help <backendname>
-You can also discover information about the backend using (see operations/fsinfo in the remote control docs for more info).
+You can also discover information about the backend using the following command (see operations/fsinfo in the remote control docs for more info).
rclone backend features remote:
Pass options to the backend command with -o. This should be key=value or key, e.g.:
rclone backend stats remote:path stats -o format=json -o long
Pass arguments to the backend by placing them on the end of the line
rclone backend cleanup remote:path file1 file2 file3
-Note to run these commands on a running backend then see backend/command in the rc docs.
+Note: to run these commands on a running backend, see backend/command in the rc docs.
rclone backend <command> remote:path [opts] <args> [flags]
-h, --help help for backend
@@ -971,9 +980,33 @@ rclone backend help <backendname>
Perform bidirectonal synchronization between two paths.
+Perform bidirectional synchronization between two paths.
+Bisync provides a bidirectional cloud sync solution in rclone. It retains the Path1 and Path2 filesystem listings from the prior run. On each successive run it will:
+- list files on Path1 and Path2, and check for changes on each side. Changes include New, Newer, Older, and Deleted files.
+- Propagate changes on Path1 to Path2, and vice-versa.
See full bisync description for details.
+rclone bisync remote1:path1 remote2:path2 [flags]
+ --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort.
+ --check-filename string Filename for --check-access (default: RCLONE_TEST)
+ --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true")
+ --filters-file string Read filtering patterns from a file
+ --force Bypass --max-delete safety check and run the sync. Consider using with --verbose
+ -h, --help help for bisync
+ --localtime Use local time in listings (default: UTC)
+ --no-cleanup Retain working files (useful for troubleshooting and testing).
+ --remove-empty-dirs Remove empty directories at the final cleanup step.
+ -1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first.
+ --workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync)
+See the global flags page for global options not listed here.
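As a hedged sketch of a typical first-time workflow using the flags above (remote and path names are placeholders): preview the initial resync, run it for real, then run plain bisync on subsequent occasions.
rclone bisync remote1:path1 remote2:path2 --resync --dry-run
rclone bisync remote1:path1 remote2:path2 --resync
rclone bisync remote1:path1 remote2:path2 --verbose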
+Concatenates any files and sends them to stdout.
-rclone cat sends any files to standard output.
You can use it like this to output a single file
rclone cat remote:path/to/file
@@ -983,7 +1016,7 @@ rclone backend help <backendname>
rclone --include "*.txt" cat remote:path/to/dir
Use the --head
flag to print characters only at the start, --tail
for the end and --offset
and --count
to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1
is equivalent to --tail 1
.
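For example (a hedged sketch; remote:logs/app.log is a placeholder):
rclone cat remote:logs/app.log --head 200
rclone cat remote:logs/app.log --tail 200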
rclone cat remote:path [flags]
- --count int Only print N characters (default -1)
--discard Discard the output instead of printing
--head int Only print the first N characters
@@ -991,13 +1024,13 @@ rclone backend help <backendname>
--offset int Start printing at offset N (or from end if -ve)
--tail int Only print the last N characters
See the global flags page for global options not listed here.
-Checks the files in the source against a SUM file.
-Checks that hashsums of source files match the SUM file. It compares hashes (MD5, SHA1, etc) and logs a report of files which don't match. It doesn't alter the file system.
If you supply the --download
flag, it will download the data from remote and calculate the contents hash on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
Note that hash values in the SUM file are treated as case insensitive.
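A minimal, hedged example, assuming a local SHA1SUMS file in the standard SUM format (the file and remote names are placeholders):
rclone checksum sha1 SHA1SUMS remote:path
rclone checksum sha1 SHA1SUMS remote:path --download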
@@ -1006,13 +1039,13 @@ rclone backend help <backendname>
The --combined
flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files.
= path means path was found in source and destination and was identical
- path means path was missing on the source, so only in the destination
+ path means path was missing on the destination, so only in the source
* path means path was present in source and destination but different.
! path means there was an error reading or hashing the source or dest.
rclone checksum <hash> sumfile src:path [flags]
- --combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
--download Check by hashing the contents
@@ -1023,18 +1056,18 @@ rclone backend help <backendname>
--missing-on-src string Report all files missing from the source to this file
--one-way Check one way only, source files must exist on remote
See the global flags page for global options not listed here.
-generate the autocompletion script for the specified shell
-Generate the autocompletion script for rclone for the specified shell. See each sub-command's help for details on how to use the generated script.
- -h, --help help for completion
See the global flags page for global options not listed here.
-generate the autocompletion script for bash
-Generate the autocompletion script for the bash shell.
This script depends on the 'bash-completion' package. If it is not installed already, you can install it via your OS's package manager.
To load completions in your current shell session: $ source <(rclone completion bash)
To load completions for every new session, execute once: Linux: $ rclone completion bash > /etc/bash_completion.d/rclone MacOS: $ rclone completion bash > /usr/local/etc/bash_completion.d/rclone
You will need to start a new shell for this setup to take effect.
rclone completion bash
- -h, --help help for bash
- --no-descriptions disable completion descriptions
-See the global flags page for global options not listed here.
-generate the autocompletion script for fish
-Generate the autocompletion script for the fish shell.
-To load completions in your current shell session: $ rclone completion fish | source
-To load completions for every new session, execute once: $ rclone completion fish > ~/.config/fish/completions/rclone.fish
-You will need to start a new shell for this setup to take effect.
-rclone completion fish [flags]
-h, --help help for fish
+ -h, --help help for bash
--no-descriptions disable completion descriptions
See the global flags page for global options not listed here.
SEE ALSO
- rclone completion - generate the autocompletion script for the specified shell
-rclone completion powershell
-generate the autocompletion script for powershell
+rclone completion fish
+generate the autocompletion script for fish
Synopsis
-Generate the autocompletion script for powershell.
-To load completions in your current shell session: PS C:> rclone completion powershell | Out-String | Invoke-Expression
-To load completions for every new session, add the output of the above command to your powershell profile.
-rclone completion powershell [flags]
+Generate the autocompletion script for the fish shell.
+To load completions in your current shell session: $ rclone completion fish | source
+To load completions for every new session, execute once: $ rclone completion fish > ~/.config/fish/completions/rclone.fish
+You will need to start a new shell for this setup to take effect.
+rclone completion fish [flags]
Options
- -h, --help help for powershell
+ -h, --help help for fish
--no-descriptions disable completion descriptions
See the global flags page for global options not listed here.
SEE ALSO
- rclone completion - generate the autocompletion script for the specified shell
-rclone completion zsh
-generate the autocompletion script for zsh
+rclone completion powershell
+generate the autocompletion script for powershell
Synopsis
-Generate the autocompletion script for the zsh shell.
-If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once:
-$ echo "autoload -U compinit; compinit" >> ~/.zshrc
-To load completions for every new session, execute once: # Linux: $ rclone completion zsh > "${fpath[1]}/_rclone" # macOS: $ rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone
-You will need to start a new shell for this setup to take effect.
-rclone completion zsh [flags]
+Generate the autocompletion script for powershell.
+To load completions in your current shell session: PS C:> rclone completion powershell | Out-String | Invoke-Expression
+To load completions for every new session, add the output of the above command to your powershell profile.
+rclone completion powershell [flags]
Options
- -h, --help help for zsh
+ -h, --help help for powershell
--no-descriptions disable completion descriptions
See the global flags page for global options not listed here.
SEE ALSO
- rclone completion - generate the autocompletion script for the specified shell
+rclone completion zsh
+generate the autocompletion script for zsh
+Synopsis
+Generate the autocompletion script for the zsh shell.
+If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once:
+$ echo "autoload -U compinit; compinit" >> ~/.zshrc
+To load completions for every new session, execute once: # Linux: $ rclone completion zsh > "${fpath[1]}/_rclone" # macOS: $ rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone
+You will need to start a new shell for this setup to take effect.
+rclone completion zsh [flags]
+Options
+ -h, --help help for zsh
+ --no-descriptions disable completion descriptions
+See the global flags page for global options not listed here.
+SEE ALSO
+
+- rclone completion - generate the autocompletion script for the specified shell
+
rclone config create
Create a new remote with name, type and options.
-Synopsis
+Synopsis
Create a new remote of name
with type
and options. The options should be passed in pairs of key
value
or as key=value
.
-For example to make a swift remote of name myremote using auto config you would do:
+For example, to make a swift remote of name myremote using auto config you would do:
rclone config create myremote swift env_auth true
rclone config create myremote swift env_auth=true
So for example if you wanted to configure a Google Drive remote but using remote authorization you would do this:
@@ -1164,7 +1197,7 @@ rclone config create myremote swift env_auth=true
If --all
is passed then rclone will ask all the config questions, not just the post config questions. Any parameters are used as defaults for questions as usual.
Note that bin/config.py
in the rclone source implements this protocol as a readable demonstration.
rclone config create name type [key value]* [flags]
-Options
+Options
--all Ask the full set of config questions
--continue Continue the configuration process with an answer
-h, --help help for create
@@ -1174,141 +1207,141 @@ rclone config create myremote swift env_auth=true
--result string Result - use with --continue
--state string State - use with --continue
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone config - Enter an interactive configuration session.
rclone config delete
Delete an existing remote.
rclone config delete name [flags]
-Options
+Options
-h, --help help for delete
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone config - Enter an interactive configuration session.
rclone config disconnect
Disconnects user from remote
-Synopsis
+Synopsis
This disconnects the remote: passed in to the cloud storage system.
This normally means revoking the oauth token.
To reconnect use "rclone config reconnect".
rclone config disconnect remote: [flags]
-Options
+Options
-h, --help help for disconnect
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone config - Enter an interactive configuration session.
rclone config dump
Dump the config file as JSON.
rclone config dump [flags]
-Options
+Options
-h, --help help for dump
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone config - Enter an interactive configuration session.
rclone config edit
Enter an interactive configuration session.
-Synopsis
+Synopsis
Enter an interactive configuration session where you can set up new remotes and manage existing ones. You may also set or remove a password to protect your configuration.
rclone config edit [flags]
-Options
+Options
-h, --help help for edit
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone config - Enter an interactive configuration session.
rclone config file
Show path of configuration file in use.
rclone config file [flags]
-Options
+Options
-h, --help help for file
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone config - Enter an interactive configuration session.
rclone config password
Update password in an existing remote.
-Synopsis
+Synopsis
Update an existing remote's password. The password should be passed in pairs of key
password
or as key=password
. The password
should be passed in clear text (unobscured).
-For example to set password of a remote of name myremote you would do:
+For example, to set password of a remote of name myremote you would do:
rclone config password myremote fieldname mypassword
rclone config password myremote fieldname=mypassword
This command is obsolete now that "config update" and "config create" both support obscuring passwords directly.
rclone config password name [key value]+ [flags]
-Options
+Options
-h, --help help for password
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone config - Enter an interactive configuration session.
rclone config paths
Show paths used for configuration, cache, temp etc.
rclone config paths [flags]
-Options
+Options
-h, --help help for paths
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone config - Enter an interactive configuration session.
rclone config providers
List in JSON format all the providers and options.
rclone config providers [flags]
-Options
+Options
-h, --help help for providers
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone config - Enter an interactive configuration session.
rclone config reconnect
Re-authenticates user with remote.
-Synopsis
+Synopsis
This reconnects remote: passed in to the cloud storage system.
To disconnect the remote use "rclone config disconnect".
This normally means going through the interactive oauth flow again.
rclone config reconnect remote: [flags]
-Options
+Options
-h, --help help for reconnect
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone config - Enter an interactive configuration session.
rclone config show
Print (decrypted) config file, or the config for a single remote.
rclone config show [<remote>] [flags]
-Options
+Options
-h, --help help for show
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone config - Enter an interactive configuration session.
rclone config touch
Ensure configuration file exists.
rclone config touch [flags]
-Options
+Options
-h, --help help for touch
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone config - Enter an interactive configuration session.
rclone config update
Update options in an existing remote.
-Synopsis
+Synopsis
Update an existing remote's options. The options should be passed in pairs of key
value
or as key=value
.
-For example to update the env_auth field of a remote of name myremote you would do:
+For example, to update the env_auth field of a remote of name myremote you would do:
rclone config update myremote env_auth true
rclone config update myremote env_auth=true
If the remote uses OAuth the token will be updated; if you don't require this, add an extra parameter thus:
@@ -1361,7 +1394,7 @@ rclone config update myremote env_auth=true
If --all
is passed then rclone will ask all the config questions, not just the post config questions. Any parameters are used as defaults for questions as usual.
Note that bin/config.py
in the rclone source implements this protocol as a readable demonstration.
rclone config update name [key value]+ [flags]
- --all Ask the full set of config questions
--continue Continue the configuration process with an answer
-h, --help help for update
@@ -1371,26 +1404,26 @@ rclone config update myremote env_auth=true
--result string Result - use with --continue
--state string State - use with --continue
See the global flags page for global options not listed here.
-Prints info about logged in user of remote.
-This prints the details of the person logged in to the cloud storage system.
rclone config userinfo remote: [flags]
- -h, --help help for userinfo
--json Format output as JSON
See the global flags page for global options not listed here.
-Copy files from source to dest, skipping identical files.
-If source:path is a file or directory then it copies it to a file or directory named dest:path.
This can be used to upload single files under a name other than their current one. If the source is a directory then it acts exactly like the copy command.
So
@@ -1405,35 +1438,35 @@ if src is directory
This doesn't transfer files that are identical on src and dst, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.
Note: Use the -P
/--progress
flag to view real-time transfer statistics
rclone copyto source:path dest:path [flags]
- -h, --help help for copyto
See the global flags page for global options not listed here.
-Copy url content to dest.
-Download a URL's content and copy it to the destination without saving it in temporary storage.
Setting --auto-filename
will cause the file name to be retrieved from the URL (after any redirections) and used in the destination path. With --print-filename
in addition, the resulting file name will be printed.
Setting --no-clobber
will prevent overwriting a file on the destination if there is one with the same name.
Setting --stdout
or making the output file name -
will cause the output to be written to standard output.
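For illustration, a hedged example of these flags together (the URL and destination are placeholders):
rclone copyurl --auto-filename --print-filename https://example.com/file.zip remote:downloads
rclone copyurl --stdout https://example.com/file.zip | sha1sum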
rclone copyurl https://example.com dest:path [flags]
- -a, --auto-filename Get the file name from the URL and use it for destination file path
-h, --help help for copyurl
--no-clobber Prevent overwriting file with same name
-p, --print-filename Print the resulting name from --auto-filename
--stdout Write the output to stdout rather than a file
See the global flags page for global options not listed here.
-Cryptcheck checks the integrity of a crypted remote.
-rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote.
For it to work the underlying remote of the cryptedremote must support some kind of checksum.
It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.
@@ -1447,13 +1480,13 @@ if src is directory
The --combined
flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files.
= path means path was found in source and destination and was identical
- path means path was missing on the source, so only in the destination
+ path means path was missing on the destination, so only in the source
* path means path was present in source and destination but different.
! path means there was an error reading or hashing the source or dest.
rclone cryptcheck remote:path cryptedremote:path [flags]
- --combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
--error string Report all files with errors (hashing or reading) to this file
@@ -1463,13 +1496,13 @@ if src is directory
--missing-on-src string Report all files missing from the source to this file
--one-way Check one way only, source files must exist on remote
See the global flags page for global options not listed here.
-Cryptdecode returns unencrypted file names.
-rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.
If you supply the --reverse flag, it will return encrypted file names.
use it like this
@@ -1478,43 +1511,43 @@ if src is directory
rclone cryptdecode --reverse encryptedremote: filename1 filename2
Another way to accomplish this is by using the rclone backend encode (or decode) command. See the documentation on the crypt overlay for more info.
rclone cryptdecode encryptedremote: encryptedfilename [flags]
- -h, --help help for cryptdecode
--reverse Reverse cryptdecode, encrypts filenames
See the global flags page for global options not listed here.
-Remove a single file from remote.
-Remove a single file from remote. Unlike delete
it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.
rclone deletefile remote:path [flags]
- -h, --help help for deletefile
-See the global flags page for global options not listed here.
Output completion script for a given shell.
+Remove a single file from remote.
Generates a shell completion script for rclone. Run with --help to list the supported shells.
+Remove a single file from remote. Unlike delete
it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.
rclone deletefile remote:path [flags]
-h, --help help for genautocomplete
+ -h, --help help for deletefile
See the global flags page for global options not listed here.
Output completion script for a given shell.
+Generates a shell completion script for rclone. Run with --help to list the supported shells.
+ -h, --help help for genautocomplete
+See the global flags page for global options not listed here.
+Output bash completion script for rclone.
-Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete bash
@@ -1523,16 +1556,16 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
If you supply a command line argument the script will be written there.
If output_file is "-", then the output will be written to stdout.
rclone genautocomplete bash [output_file] [flags]
- -h, --help help for bash
See the global flags page for global options not listed here.
-Output fish completion script for rclone.
-Generates a fish autocompletion script for rclone.
This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete fish
@@ -1541,16 +1574,16 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
If you supply a command line argument the script will be written there.
If output_file is "-", then the output will be written to stdout.
rclone genautocomplete fish [output_file] [flags]
- -h, --help help for fish
See the global flags page for global options not listed here.
-Output zsh completion script for rclone.
-Generates a zsh autocompletion script for rclone.
This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete zsh
@@ -1559,30 +1592,31 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
If you supply a command line argument the script will be written there.
If output_file is "-", then the output will be written to stdout.
rclone genautocomplete zsh [output_file] [flags]
- -h, --help help for zsh
See the global flags page for global options not listed here.
-Output markdown docs for rclone to the directory supplied.
-This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.
rclone gendocs output_directory [flags]
- -h, --help help for gendocs
See the global flags page for global options not listed here.
-Produces a hashsum file for all the objects in the path.
-Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.
By default, the hash is requested from the remote. If the hash is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling any hash for any remote.
+This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).
Run without a hash to see the list of all supported hashes, e.g.
$ rclone hashsum
Supported hashes are:
@@ -1598,20 +1632,20 @@ Supported hashes are:
$ rclone hashsum MD5 remote:path
Note that hash names are case insensitive and values are output in lower case.
rclone hashsum <hash> remote:path [flags]
-Options
+Options
--base64 Output base64 encoded hashsum
-C, --checkfile string Validate hashes against a given SUM file instead of printing them
--download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
-h, --help help for hashsum
--output-file string Output hashsums to a file rather than the terminal
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone link
Generate public link to file/folder.
-Synopsis
+Synopsis
rclone link will create, retrieve or remove a public link to the given file or folder.
rclone link remote:path/to/file
rclone link remote:path/to/folder/
@@ -1621,32 +1655,32 @@ rclone link --expire 1d remote:path/to/file
Use the --unlink flag to remove existing public links to the file or folder. Note that not all backends support the "--unlink" flag - those that don't will just ignore it.
If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always by default be created with the least constraints – e.g. no expiry, no password protection, accessible without account.
rclone link remote:path [flags]
-Options
+Options
--expire Duration The amount of time that the link will be valid (default off)
-h, --help help for link
--unlink Remove existing public link to file/folder
See the global flags page for global options not listed here.
-SEE ALSO
-
-- rclone - Show help for rclone commands, flags and backends.
-
-rclone listremotes
-List all the remotes in the config file.
-Synopsis
-rclone listremotes lists all the available remotes from the config file.
-When uses with the -l flag it lists the types too.
-rclone listremotes [flags]
-Options
- -h, --help help for listremotes
- --long Show the type as well as names
-See the global flags page for global options not listed here.
SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
+rclone listremotes
+List all the remotes in the config file.
+Synopsis
+rclone listremotes lists all the available remotes from the config file.
+When used with the -l flag it lists the types too.
+rclone listremotes [flags]
+Options
+ -h, --help help for listremotes
+ --long Show the type as well as names
+See the global flags page for global options not listed here.
+SEE ALSO
+
+- rclone - Show help for rclone commands, flags and backends.
+
rclone lsf
List directories and objects in remote:path formatted for parsing.
-Synopsis
+Synopsis
List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix.
Eg
$ rclone lsf swift:bucket
@@ -1674,10 +1708,10 @@ T - tier of storage if known, e.g. "Hot" or "Cool"
If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type.
-For example to emulate the md5sum command you can use
+For example, to emulate the md5sum command you can use
rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
Eg
-$ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket
+$ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket
7908e352297f0f530b84a756f188baa3 bevajer5jef
cd65ac234e6fea5925974a51cdd865cc canole
03b5341b4f234b9d984d03ad076bae91 diwogej7
@@ -1699,7 +1733,7 @@ test.log,22355
test.sh,449
"this file contains a comma, in the file name.txt",6
Note that the --absolute parameter is useful for making lists of files to pass to an rclone copy with the --files-from-raw flag.
-For example to find all the files modified within one day and copy those only (without traversing the whole directory structure):
+For example, to find all the files modified within one day and copy those only (without traversing the whole directory structure):
rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
rclone copy --files-from-raw new_files /path/to/local remote:path
Any of the filtering options can be applied to this command.
@@ -1711,12 +1745,12 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
lsf to list objects and directories in easy to parse format
lsjson to list objects and directories in JSON format
-ls, lsl, lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.
+ls, lsl, lsd are designed to be human-readable. lsf is designed to be human and machine-readable. lsjson is designed to be machine-readable.
Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion.
The other list commands lsd, lsf, lsjson do not recurse by default - use -R to make them recurse.
-Listing a non existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket based remotes).
+Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
rclone lsf remote:path [flags]
-Options
+Options
--absolute Put a leading / in front of path names
--csv Output in CSV format
-d, --dir-slash Append a slash to directory names (default true)
@@ -1728,13 +1762,13 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
-R, --recursive Recurse into the listing
-s, --separator string Separator for the items in the format (default ";")
See the global flags page for global options not listed here.
-List directories and objects in the path in JSON format.
-List directories and objects in the path in JSON format.
The output is an array of Items, where each Item looks like this
{ "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "ID": "y2djkhiujf83u33", "OrigID": "UYOJVTUW00Q1RzTDA", "IsBucket" : false, "IsDir" : false, "MimeType" : "application/octet-stream", "ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338", "Path" : "full/path/goes/here/file.txt", "Size" : 6, "Tier" : "hot", }
@@ -1746,7 +1780,7 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
If --files-only is not specified, directories in addition to the files will be returned.
If --stat is set then a single JSON blob will be returned about the item pointed to. This will return an error if the item isn't found. However, on bucket-based backends (like s3, gcs, b2, azureblob etc) if the item isn't found it will return an empty directory, as it isn't possible to tell empty directories from missing directories there.
The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name.
-If the directory is a bucket in a bucket based backend, then "IsBucket" will be set to true. This key won't be present unless it is "true".
+If the directory is a bucket in a bucket-based backend, then "IsBucket" will be set to true. This key won't be present unless it is "true".
The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision that the remote can hold the times, so if times are accurate to the nearest millisecond (e.g. Google Drive) then 3 digits will always be shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav, etc.) no digits will be shown ("2017-05-31T16:15:57+01:00").
The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one per line.
Any of the filtering options can be applied to this command.
@@ -1758,12 +1792,12 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
lsf to list objects and directories in easy to parse format
lsjson to list objects and directories in JSON format
-ls, lsl, lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.
+ls, lsl, lsd are designed to be human-readable. lsf is designed to be human and machine-readable. lsjson is designed to be machine-readable.
Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion.
The other list commands lsd, lsf, lsjson do not recurse by default - use -R to make them recurse.
Listing a non existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket based remotes).
+Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
rclone lsjson remote:path [flags]
- --dirs-only Show only directories in the listing
-M, --encrypted Show the encrypted names
--files-only Show only files in the listing
@@ -1776,13 +1810,13 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
-R, --recursive Recurse into the listing
--stat Just return the info for the pointed to file
See the global flags page for global options not listed here.
-Mount the remote as file system on a mountpoint.
-rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
First set up your remote using rclone config
. Check it works with rclone ls
etc.
On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode. Mount runs in foreground mode by default. Use the --daemon
flag to force background mode. On Windows you can run mount in foreground only, the flag is ignored.
The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the rclone about command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not support the about feature at all, then 1 PiB is set as both the total and the free size.
To run rclone mount on Windows, you will need to download and install WinFsp.
-WinFsp is an open source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which rclone uses combination with cgofuse. Both of these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone mount for Windows.
+WinFsp is an open-source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which rclone uses in combination with cgofuse. Both of these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone mount for Windows.
Unlike other operating systems, Microsoft Windows provides a different filesystem type for network and fixed drives. It optimises access on the assumption that fixed disk drives are fast and reliable, while network drives have relatively high latency and less reliability. Some settings can also be differentiated between the two types, for example that Windows Explorer should just display icons and not create preview thumbnails for image and video files on network drives.
In most cases, rclone will mount the remote as a normal, fixed disk drive by default. However, you can also choose to mount it as a remote network drive, often described as a network share. If you mount an rclone remote using the default, fixed drive mode and experience unexpected program errors, freezes or other issues, consider mounting as a network drive instead.
@@ -1843,7 +1877,7 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
Note that mapping to a directory path, instead of a drive letter, does not suffer from the same limitations.
Without the use of --vfs-cache-mode
this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without --vfs-cache-mode writes
or --vfs-cache-mode full
. See the VFS File Caching section for more info.
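For example (a hedged sketch with placeholder paths):
rclone mount remote:media /mnt/media --vfs-cache-mode writes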
The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
+The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
When rclone mount
is invoked on Unix with --daemon
flag, the main rclone program will wait for the background mount to become ready or until the timeout specified by the --daemon-wait
flag. On Linux it can check mount status using ProcFS, so the flag in fact sets the maximum time to wait, while the real wait can be less. On macOS / BSD the time to wait is constant and the check is performed only at the end, so you should set a reasonable wait time on macOS.
Only supported on Linux, FreeBSD, OS X and Windows at the moment.
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
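For example (remote and mountpoint are placeholders):
rclone mount remote:path /path/to/mountpoint --vfs-cache-mode full --vfs-used-is-size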
rclone mount remote:path /path/to/mountpoint [flags]
- --allow-non-empty Allow mounting over a non-empty directory (not supported on Windows)
--allow-other Allow access to other users (not supported on Windows)
--allow-root Allow access to root user (not supported on Windows)
@@ -2008,6 +2042,7 @@ WantedBy=multi-user.target
--daemon-wait duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
--debug-fuse Debug the FUSE internals - needs -v
--default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows)
+ --devname string Set the device name - default is remote:path
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
@@ -2041,13 +2076,13 @@ WantedBy=multi-user.target
--volname string Set the volume name (supported on Windows and OSX only)
--write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
See the global flags page for global options not listed here.
-Move file or directory from source to dest.
-If source:path is a file or directory then it moves it to a file or directory named dest:path.
This can be used to rename files or upload single files under a name other than their existing one. If the source is a directory then it acts exactly like the move command.
So
@@ -2063,16 +2098,16 @@ if src is directory
Important: Since this can cause data loss, test first with the --dry-run
or the --interactive
/-i
flag.
Note: Use the -P
/--progress
flag to view real-time transfer statistics.
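For example, to rename a single file in place (the paths are placeholders):
rclone moveto remote:backup/report-old.txt remote:backup/report-new.txt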
rclone moveto source:path dest:path [flags]
- -h, --help help for moveto
See the global flags page for global options not listed here.
-Explore a remote with a text based user interface.
-This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".
To build the user interface, rclone first scans the entire remote and constructs an in-memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.
Here are the keys - press '?' to toggle the help on and off
@@ -2093,33 +2128,33 @@ if src is directory
This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment but is useful as it stands.
Note that it might take some time to delete big files/folders. The UI won't respond in the meantime since the deletion is done synchronously.
rclone ncdu remote:path [flags]
- -h, --help help for ncdu
See the global flags page for global options not listed here.
-Obscure password for use in the rclone config file.
-In the rclone config file, human readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" - namely someone seeing a password in the rclone config file by accident.
+In the rclone config file, human-readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" - namely someone seeing a password in the rclone config file by accident.
Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 character hex token.
This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline.
echo "secretpassword" | rclone obscure -
If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself.
If you want to encrypt the config file then please use config file encryption - see rclone config for more info.
rclone obscure password [flags]
- -h, --help help for obscure
See the global flags page for global options not listed here.
-Run a command against a running rclone.
-This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port"
A username and password can be passed in with --user and --pass.
Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, --user, --pass.
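For example, to call a running rclone on another machine with authentication (the host, username and password are placeholders):
rclone rc --url host:5572 --user myuser --pass mypassword core/stats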
@@ -2138,7 +2173,7 @@ if src is directory
rclone rc --loopback operations/about fs=/
Use "rclone rc" to see a list of all possible commands.
rclone rc commands parameter [flags]
- -a, --arg stringArray Argument placed in the "arg" array
-h, --help help for rc
--json string Input JSON - use instead of key=value args
@@ -2149,13 +2184,13 @@ if src is directory
--url string URL to connect to rclone remote control (default "http://localhost:5572/")
--user string Username to use to rclone remote control
See the global flags page for global options not listed here.
-Copies standard input to file on remote.
-rclone rcat reads from standard input (stdin) and copies it to a single remote file.
echo "hello world" | rclone rcat remote:path/to/file
ffmpeg - | rclone rcat remote:path/to/file
@@ -2165,48 +2200,48 @@ ffmpeg - | rclone rcat remote:path/to/file
--size should be the exact size of the input stream in bytes. If the size of the stream differs from the --size passed in then the transfer will likely fail.
Note that the upload also cannot be retried because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching locally and then rclone move
it to the destination.
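For example, a sketch of the local-caching alternative (the program name and paths are placeholders):
some_program > /tmp/output.dat
rclone move /tmp/output.dat remote:path/to/dir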
rclone rcat remote:path [flags]
- -h, --help help for rcat
--size int File size hint to preallocate (default -1)
See the global flags page for global options not listed here.
-Run rclone listening to remote control commands only.
-This runs rclone so that it only listens to remote control commands.
-This is useful if you are controlling rclone via the rc API.
-If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run.
-See the rc documentation for more info on the rc flags.
-rclone rcd <path to files to serve>* [flags]
- -h, --help help for rcd
-See the global flags page for global options not listed here.
Remove empty directories under the path.
+Run rclone listening to remote control commands only.
This recursively removes any empty directories (including directories that only contain empty directories), that it finds under the path. The root path itself will also be removed if it is empty, unless you supply the --leave-root
flag.
Use command rmdir
to delete just the empty directory given by path, not recurse.
This is useful for tidying up remotes that rclone has left a lot of empty directories in. For example the delete
command will delete files but leave the directory structure (unless used with option --rmdirs
).
To delete a path and any objects in it, use purge
command.
rclone rmdirs remote:path [flags]
+This runs rclone so that it only listens to remote control commands.
+This is useful if you are controlling rclone via the rc API.
+If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run.
+See the rc documentation for more info on the rc flags.
+rclone rcd <path to files to serve>* [flags]
-h, --help help for rmdirs
- --leave-root Do not remove root directory if empty
+ -h, --help help for rcd
See the global flags page for global options not listed here.
Remove empty directories under the path.
+This recursively removes any empty directories (including directories that only contain empty directories), that it finds under the path. The root path itself will also be removed if it is empty, unless you supply the --leave-root
flag.
Use command rmdir
to delete just the empty directory given by path, not recurse.
This is useful for tidying up remotes that rclone has left a lot of empty directories in. For example the delete
command will delete files but leave the directory structure (unless used with option --rmdirs
).
To delete a path and any objects in it, use purge
command.
rclone rmdirs remote:path [flags]
+ -h, --help help for rmdirs
+ --leave-root Do not remove root directory if empty
+See the global flags page for global options not listed here.
+Update the rclone binary.
-This command downloads the latest release of rclone and replaces the currently running binary. The download is verified with a hashsum and cryptographically signed signature.
If used without flags (or with implied --stable
flag), this command will install the latest stable release. However, some issues may be fixed (or features added) only in the latest beta release. In such cases you should run the command with the --beta
flag, i.e. rclone selfupdate --beta
. You can check in advance what version would be installed by adding the --check
flag, then repeat the command without it when you are satisfied.
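For example, to check what the latest stable release is and then install the latest beta:
rclone selfupdate --check
rclone selfupdate --beta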
Sometimes the rclone team may recommend you a concrete beta or stable rclone release to troubleshoot your issue or add a bleeding edge feature. The --version VER
flag, if given, will update to the concrete version instead of the latest one. If you omit micro version from VER
(for example 1.53
), the latest matching micro version will be used.
Note: Windows forbids deletion of a currently running executable so this command will rename the old executable to 'rclone.old.exe' upon success.
Please note that this command was not available before rclone version 1.55. If it fails for you with the message unknown command "selfupdate"
then you will need to update manually following the install instructions located at https://rclone.org/install/
rclone selfupdate [flags]
- --beta Install beta release
--check Check for latest release, do not download
-h, --help help for selfupdate
@@ -2225,21 +2260,21 @@ ffmpeg - | rclone rcat remote:path/to/file
--stable Install stable release (this is the default)
--version string Install the given rclone version (default: latest)
See the global flags page for global options not listed here.
-Serve a remote over a protocol.
-rclone serve is used to serve a remote over a given protocol. This command requires the use of a subcommand to specify the protocol, e.g.
rclone serve http remote:
Each subcommand has its own options which you can see in their help.
rclone serve <protocol> [opts] <remote> [flags]
- -h, --help help for serve
See the global flags page for global options not listed here.
-Serve remote:path over DLNA
-rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.
Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.
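For example, to serve a media folder on the default port (the remote path is a placeholder):
rclone serve dlna remote:media --addr :7879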
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
rclone serve dlna remote:path [flags]
- --addr string The ip:port or :port to bind the DLNA http server to (default ":7879")
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
@@ -2388,13 +2423,13 @@ ffmpeg - | rclone rcat remote:path/to/file
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
See the global flags page for global options not listed here.
-Serve any remote on docker's volume plugin API.
-This command implements the Docker volume plugin API allowing docker to use rclone as a data storage mechanism for various cloud providers. rclone provides docker volume plugin based on it.
To create a docker plugin, one must create a Unix or TCP socket that Docker will look for when you use the plugin and then it listens for commands from docker daemon and runs the corresponding code when necessary. Docker plugins can run as a managed plugin under control of the docker daemon or as an independent native service. For testing, you can just run it directly from the command line, for example:
sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
@@ -2502,7 +2537,7 @@ ffmpeg - | rclone rcat remote:path/to/file
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
rclone serve docker [flags]
- --allow-non-empty Allow mounting over a non-empty directory (not supported on Windows)
--allow-other Allow access to other users (not supported on Windows)
--allow-root Allow access to root user (not supported on Windows)
@@ -2514,6 +2549,7 @@ ffmpeg - | rclone rcat remote:path/to/file
--daemon-wait duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
--debug-fuse Debug the FUSE internals - needs -v
--default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows)
+ --devname string Set the device name - default is remote:path
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
@@ -2551,13 +2587,13 @@ ffmpeg - | rclone rcat remote:path/to/file
--volname string Set the volume name (supported on Windows and OSX only)
--write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
See the global flags page for global options not listed here.
-Serve remote:path over FTP.
-rclone serve ftp implements a basic ftp server to serve the remote over FTP protocol. This can be viewed with a ftp client or you can make a remote of type ftp to read and write it.
Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
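For example, a sketch that listens on all interfaces on port 2121 with simple authentication (the username and password are placeholders):
rclone serve ftp remote:path --addr :2121 --user myuser --pass mypassword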
@@ -2695,7 +2731,7 @@ ffmpeg - | rclone rcat remote:path/to/file
Note that an internal cache is keyed on user
so only use that for configuration, don't use pass
or public_key
. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve ftp remote:path [flags]
- --addr string IPaddress:Port or :Port to bind server to (default "localhost:2121")
--auth-proxy string A program to use to create the backend from the auth
--cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -2729,13 +2765,13 @@ ffmpeg - | rclone rcat remote:path/to/file
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
See the global flags page for global options not listed here.
-Serve the remote over HTTP.
-rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.
You can use the filter flags (e.g. --include, --exclude) to control what is served.
The server will log errors. Use -v to see access logs.
@@ -2748,7 +2784,7 @@ ffmpeg - | rclone rcat remote:path/to/file
--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
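For example, a sketch of serving over https (the port, certificate and key file names are placeholders):
rclone serve http remote:path --addr :8443 --cert server.crt --key server.key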
---cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate. --template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to server pages:
+--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
+--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to serve pages:
+| Description | Pattern | Matches | Does not match |
+|---|---|---|---|
+| Wildcard | *.jpg | /file.jpg | /file.png |
+| | | /dir/file.jpg | /dir/file.png |
+| Rooted | /*.jpg | /file.jpg | /file.png |
+| | | /file2.jpg | /dir/file.jpg |
+| Alternates | *.{jpg,png} | /file.jpg | /file.gif |
+| | | /dir/file.png | /dir/file.gif |
+| Path Wildcard | dir/** | /dir/anyfile | file.png |
+| | | /subdir/dir/subsubdir/anyfile | /subdir/file.png |
+| Any Char | *.t?t | /file.txt | /file.qxt |
+| | | /dir/file.tzt | /dir/file.png |
+| Range | *.[a-z] | /file.a | /file.0 |
+| | | /dir/file.b | /dir/file.1 |
+| Escape | *.\?\?\? | /file.??? | /file.abc |
+| | | /dir/file.??? | /dir/file.def |
+| Class | *.\d\d\d | /file.012 | /file.abc |
+| | | /dir/file.345 | /dir/file.def |
+| Regexp | *.{{jpe?g}} | /file.jpeg | /file.png |
+| | | /dir/file.jpg | /dir/file.jpeeg |
+| Rooted Regexp | /{{.*\.jpe?g}} | /file.jpeg | /file.png |
+| | | /file.jpg | /dir/file.jpg |
Rclone path/file name filters are made up of one or more of the following flags:
--files-from-raw
- Read list of source-file names without any processing
This flag is the same as --files-from
except that input is read in a raw manner. Lines with leading / trailing whitespace, and lines starting with ;
or #
are read without any processing. rclone lsf has a compatible format that can be used to export file lists from remotes for input to --files-from-raw
.
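For example, a sketch that exports a file list with rclone lsf and feeds it back in unmodified (the remote and paths are placeholders):
rclone lsf --files-only -R remote:source > files.txt
rclone copy --files-from-raw files.txt remote:source /local/destination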
--ignore-case
- make searches case insensitive
By default, rclone filter patterns are case sensitive. The --ignore-case
flag makes all of the filter patterns on the command line case insensitive.
E.g. --include "zaphod.txt"
does not match a file Zaphod.txt
. With --ignore-case
a match is made.
Rclone commands with filter patterns containing shell metacharacters may not work as expected in your shell and may require quoting.
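For example, quoting stops the shell from expanding the braces and asterisk before rclone sees them (the remote name is a placeholder):
rclone ls remote: --include "*.{jpg,png}"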
@@ -5005,7 +5201,7 @@ dir1/dir2/dir3/.ignore
When you run the rclone rcd --rc-web-gui
this is what happens
login_token
so it can log straight in.
If using rclone rc
this could be passed as
rclone rc operations/sync ... _config='{"CheckSum": true}'
Any config parameters you don't set will inherit the global defaults which were set with command line flags or environment variables.
-Note that it is possible to set some values as strings or integers - see data types for more info. Here is an example setting the equivalent of --buffer-size
in string or integer format.
Note that it is possible to set some values as strings or integers - see data types for more info. Here is an example setting the equivalent of --buffer-size
in string or integer format.
"_config":{"BufferSize": "42M"}
"_config":{"BufferSize": 44040192}
If you wish to check the _config
assignment has worked properly then calling options/local
will show what the value got set to.
If using rclone rc
this could be passed as
rclone rc ... _filter='{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}'
Any filter parameters you don't set will inherit the global defaults which were set with command line flags or environment variables.
-Note that it is possible to set some values as strings or integers - see data types for more info. Here is an example setting the equivalent of --buffer-size
in string or integer format.
Note that it is possible to set some values as strings or integers - see data types for more info. Here is an example setting the equivalent of --buffer-size
in string or integer format.
"_filter":{"MinSize": "42M"}
"_filter":{"MinSize": 44040192}
If you wish to check the _filter
assignment has worked properly then calling options/local
will show what the value got set to.
The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified.
-In either case "rate" is returned as a human readable string, and "bytesPerSecond" is returned as a number.
+In either case "rate" is returned as a human-readable string, and "bytesPerSecond" is returned as a number.
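For example, setting and then reading back the limit (the value is illustrative):
rclone rc core/bwlimit rate=1M
rclone rc core/bwlimit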
This takes the following parameters:
This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.
Authentication is required for this call.
+This takes the following parameters:
+- path1 - a remote directory string, e.g. drive:path1
+- path2 - a remote directory string, e.g. drive:path2
+- checkSync - true by default, false disables comparison of final listings, only will skip sync, only compare listings from the last run
+See bisync command help and full bisync description for more information.
+Authentication is required for this call.
This takes the following parameters:
rclone rc vfs/refresh dir=home/junk dir2=data/misc
If the parameter recursive=true is given the whole directory tree will get refreshed. This refresh will use --fast-list if enabled.
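For example, to refresh the whole directory tree:
rclone rc vfs/refresh recursive=true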
This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.
+This returns stats for the selected VFS.
+{
+ // Status of the disk cache - only present if --vfs-cache-mode > off
+ "diskCache": {
+ "bytesUsed": 0,
+ "erroredFiles": 0,
+ "files": 0,
+ "hashType": 1,
+ "outOfSpace": false,
+ "path": "/home/user/.cache/rclone/vfs/local/mnt/a",
+ "pathMeta": "/home/user/.cache/rclone/vfsMeta/local/mnt/a",
+ "uploadsInProgress": 0,
+ "uploadsQueued": 0
+ },
+ "fs": "/mnt/a",
+ "inUse": 1,
+ // Status of the in memory metadata cache
+ "metadataCache": {
+ "dirs": 1,
+ "files": 0
+ },
+ // Options as returned by options/get
+ "opt": {
+ "CacheMaxAge": 3600000000000,
+ // ...
+ "WriteWait": 1000000000
+ }
+}
+This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.
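For example, selecting the VFS explicitly (the remote is a placeholder):
rclone rc vfs/stats fs=remote:path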
Rclone implements a simple HTTP-based protocol.
Each endpoint takes a JSON object and returns a JSON object or an error. The JSON objects are essentially a map of string names to values.
All calls must be made using POST.
The input objects can be supplied using URL parameters, POST parameters or by supplying "Content-Type: application/json" and a JSON blob in the body. There are examples of these below using curl
.
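For instance, a minimal sketch of a call using URL parameters and one using a JSON body (rc/noop simply echoes its parameters back):
curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'
curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop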
The response will be a JSON blob in the body of the response. This is formatted to be reasonably human readable.
+The response will be a JSON blob in the body of the response. This is formatted to be reasonably human-readable.
If an error occurs then there will be an HTTP error status (e.g. 500) and the body of the response will contain a JSON encoded error object, e.g.
{
@@ -6159,12 +6404,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
5-second execution trace: wget http://localhost:5572/debug/pprof/trace?seconds=5
Goroutine blocking profile
Contended mutexes:
@@ -6195,6 +6440,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R
+Akamai Netstorage
+MD5, SHA256
+Yes
+No
+No
+R
+
+
Amazon Drive
MD5
No
@@ -6202,7 +6455,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
R
-
+
Amazon S3 (or S3 compatible)
MD5
Yes
@@ -6210,7 +6463,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
R/W
-
+
Backblaze B2
SHA1
Yes
@@ -6218,7 +6471,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
R/W
-
+
Box
SHA1
Yes
@@ -6226,7 +6479,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
-
-
+
Citrix ShareFile
MD5
Yes
@@ -6234,7 +6487,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
-
-
+
Dropbox
DBHASH ¹
Yes
@@ -6242,7 +6495,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
-
-
+
Enterprise File Fabric
-
Yes
@@ -6250,7 +6503,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
R/W
-
+
FTP
-
No
@@ -6258,7 +6511,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
-
-
+
Google Cloud Storage
MD5
Yes
@@ -6266,7 +6519,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
R/W
-
+
Google Drive
MD5
Yes
@@ -6274,7 +6527,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes
R/W
-
+
Google Photos
-
No
@@ -6282,7 +6535,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes
R
-
+
HDFS
-
Yes
@@ -6290,7 +6543,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
-
-
+
HTTP
-
No
@@ -6298,7 +6551,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
R
-
+
Hubic
MD5
Yes
@@ -6306,7 +6559,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
R/W
-
+
Jottacloud
MD5
Yes
@@ -6314,7 +6567,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
R
-
+
Koofr
MD5
No
@@ -6322,7 +6575,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
-
-
+
Mail.ru Cloud
Mailru ⁶
Yes
@@ -6330,7 +6583,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
-
-
+
Mega
-
No
@@ -6338,7 +6591,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes
-
-
+
Memory
MD5
Yes
@@ -6346,7 +6599,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
-
-
+
Microsoft Azure Blob Storage
MD5
Yes
@@ -6354,7 +6607,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
R/W
-
+
Microsoft OneDrive
SHA1 ⁵
Yes
@@ -6362,7 +6615,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
R
-
+
OpenDrive
MD5
Yes
@@ -6370,7 +6623,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Partial ⁸
-
-
+
OpenStack Swift
MD5
Yes
@@ -6378,7 +6631,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
R/W
-
+
pCloud
MD5, SHA1 ⁷
Yes
@@ -6386,7 +6639,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
W
-
+
premiumize.me
-
No
@@ -6394,7 +6647,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
R
-
+
put.io
CRC-32
Yes
@@ -6402,7 +6655,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes
R
-
+
QingStor
MD5
No
@@ -6410,7 +6663,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
R/W
-
+
Seafile
-
No
@@ -6418,7 +6671,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
-
-
+
SFTP
MD5, SHA1 ²
Yes
@@ -6426,7 +6679,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
-
-
+
Sia
-
No
@@ -6434,7 +6687,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
-
-
+
SugarSync
-
No
@@ -6442,15 +6695,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
-
-
-Tardigrade
+
+Storj
-
Yes
No
No
-
-
+
Uptobox
-
No
@@ -6458,7 +6711,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes
-
-
+
WebDAV
MD5, SHA1 ³
Yes ⁴
@@ -6466,7 +6719,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
-
-
+
Yandex Disk
MD5
Yes
@@ -6474,7 +6727,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
R
-
+
Zoho WorkDrive
-
No
@@ -6482,7 +6735,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
-
-
+
The local filesystem
All
Yes
@@ -6887,7 +7140,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
The MIME type can be important if you are serving files directly to HTTP from the storage system.
If you are copying from a remote which supports reading (R
) to a remote which supports writing (W
) then rclone will preserve the MIME types. Otherwise they will be guessed from the extension, or the remote itself may assign the MIME type.
Optional Features
-All rclone remotes support a base command set. Other features depend upon backend specific capabilities.
+All rclone remotes support a base command set. Other features depend upon backend-specific capabilities.
@@ -7065,8 +7318,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
HDFS
Yes
No
-No
-No
+Yes
+Yes
No
No
Yes
@@ -7296,10 +7549,10 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes
-Tardigrade
+Storj
Yes †
No
-No
+Yes
No
No
Yes
@@ -7377,7 +7630,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Purge
This deletes a directory quicker than just deleting all the files in the directory.
-† Note Swift, Hubic, and Tardigrade implement this in order to delete directory markers but they don't actually have a quicker way of deleting files other than deleting them individually.
+† Note Swift, Hubic, and Storj implement this in order to delete directory markers but they don't actually have a quicker way of deleting files other than deleting them individually.
‡ StreamUpload is not supported with Nextcloud
Copy
Used when copying an object to and from the same remote. This is known as a server-side copy so you can copy a file without downloading it and uploading it again. It is used if you use rclone copy
or rclone move
if the remote doesn't support Move
directly.
@@ -7403,7 +7656,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Backends without about capability cannot determine free space for an rclone mount, or use policy mfs
(most free space) as a member of an rclone union remote.
EmptyDir
-The remote supports empty directories. See Limitations for details. Most Object/Bucket based remotes do not support this.
+The remote supports empty directories. See Limitations for details. Most Object/Bucket-based remotes do not support this.
Global Flags
This describes the global flags available to every rclone command split into two groups, non-backend and backend flags.
Non Backend Flags
@@ -7507,13 +7760,13 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--rc-enable-metrics Enable prometheus metrics on /metrics
--rc-files string Path to local files to serve on the HTTP server
--rc-htpasswd string htpasswd file - if not provided no authentication is done
- --rc-job-expire-duration duration expire finished async jobs older than this value (default 1m0s)
- --rc-job-expire-interval duration interval to check for expired async jobs (default 10s)
+ --rc-job-expire-duration duration Expire finished async jobs older than this value (default 1m0s)
+ --rc-job-expire-interval duration Interval to check for expired async jobs (default 10s)
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
--rc-no-auth Don't require auth for certain methods
--rc-pass string Password for authentication
- --rc-realm string realm for authentication (default "rclone")
+ --rc-realm string Realm for authentication (default "rclone")
--rc-serve Enable the serving of remote objects
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
@@ -7552,14 +7805,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.57.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.58.0")
-v, --verbose count Print lots more stuff (repeat for more)
Backend Flags
These flags are available for every command. They control the backends and may be set in the config file.
--acd-auth-url string Auth server URL
--acd-client-id string OAuth Client Id
--acd-client-secret string OAuth Client Secret
- --acd-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --acd-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi)
--acd-token string OAuth Access Token as a JSON blob
--acd-token-url string Token server url
@@ -7568,9 +7821,9 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--azureblob-access-tier string Access tier of blob: hot, cool or archive
--azureblob-account string Storage Account Name
--azureblob-archive-tier-delete Delete archive tier blobs before overwriting
- --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100 MiB) (default 4Mi)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (default 4Mi)
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
- --azureblob-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
+ --azureblob-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-key string Storage Account Key
--azureblob-list-chunk int Size of blob list (default 5000)
@@ -7583,6 +7836,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--azureblob-public-access string Public access level of a container: blob or container
--azureblob-sas-url string SAS URL for container level access only
--azureblob-service-principal-file string Path to file containing credentials for use with a service principal
+ --azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
@@ -7592,7 +7846,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
--b2-download-url string Custom endpoint for downloads
- --b2-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --b2-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--b2-endpoint string Endpoint for the service
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key
@@ -7608,7 +7862,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
- --box-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+ --box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--box-list-chunk int Size of listing chunk 1-1000 (default 1000)
--box-owned-by string Only show items owned by the login (email address) passed in
--box-root-folder-id string Fill in for rclone to use a non root folder as its starting point
@@ -7645,6 +7899,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--compress-remote string Remote to compress
-L, --copy-links Follow symlinks and copy the pointed to item
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact (default true)
+ --crypt-filename-encoding string How to encode the encrypted filename to text string (default "base32")
--crypt-filename-encryption string How to encrypt the filenames (default "standard")
--crypt-no-data-encryption Option to either encrypt file data or leave it unencrypted
--crypt-password string Password or pass phrase for encryption (obscured)
@@ -7659,8 +7914,9 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--drive-chunk-size SizeSuffix Upload chunk size (default 8Mi)
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
+ --drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
--drive-disable-http2 Disable drive using http2 (default true)
- --drive-encoding MultiEncoder This sets the encoding for the backend (default InvalidUtf8)
+ --drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8)
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
--drive-formats string Deprecated: See export_formats
--drive-impersonate string Impersonate this user when using a service account
@@ -7677,6 +7933,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--drive-shared-with-me Only show files that are shared with me
--drive-size-as-quota Show sizes as storage quota usage, not actual size
--drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only
+ --drive-skip-dangling-shortcuts If set skip dangling shortcut files
--drive-skip-gdocs Skip google documents in all listings
--drive-skip-shortcuts If set skip shortcut files
--drive-starred-only Only show files that are starred
@@ -7699,40 +7956,41 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
- --dropbox-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
+ --dropbox-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
--dropbox-token string OAuth Access Token as a JSON blob
--dropbox-token-url string Token server url
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
- --fichier-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --fichier-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
--fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
--fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
- --filefabric-encoding MultiEncoder This sets the encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --filefabric-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
--filefabric-root-folder-id string ID of the root folder
--filefabric-token string Session Token
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
+ --ftp-ask-password Allow asking for FTP password when needed
--ftp-close-timeout Duration Maximum time to wait for a response to close (default 1m0s)
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-disable-epsv Disable using EPSV even if server advertises support
--ftp-disable-mlsd Disable using MLSD even if server advertises support
--ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
- --ftp-encoding MultiEncoder This sets the encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
+ --ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-pass string FTP password (obscured)
- --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-port int FTP port number (default 21)
--ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s)
--ftp-tls Use Implicit FTPS (FTP over TLS)
--ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
- --ftp-user string FTP username, leave blank for current username, $USER
+ --ftp-user string FTP username (default "$USER")
--ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk)
--gcs-anonymous Access public buckets and objects without credentials
--gcs-auth-url string Auth server URL
@@ -7740,7 +7998,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies
--gcs-client-id string OAuth Client Id
--gcs-client-secret string OAuth Client Secret
- --gcs-encoding MultiEncoder This sets the encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-location string Location for the newly created buckets
--gcs-object-acl string Access Control List for new objects
--gcs-project-number string Project number
@@ -7751,7 +8009,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--gphotos-auth-url string Auth server URL
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
- --gphotos-encoding MultiEncoder This sets the encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gphotos-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
@@ -7763,38 +8021,39 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
--hasher-remote string Remote to cache checksums for (e.g. myRemote:path)
--hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
- --hdfs-encoding MultiEncoder This sets the encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
+ --hdfs-encoding MultiEncoder The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
--hdfs-namenode string Hadoop name node and port
--hdfs-service-principal-name string Kerberos service principal name for the namenode
--hdfs-username string Hadoop user name
--http-headers CommaSepList Set HTTP headers for all transactions
- --http-no-head Don't use HEAD requests to find file sizes in dir listing
+ --http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of http host to connect to
--hubic-auth-url string Auth server URL
--hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--hubic-client-id string OAuth Client Id
--hubic-client-secret string OAuth Client Secret
- --hubic-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8)
+ --hubic-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
--hubic-no-chunk Don't chunk files during streaming upload
--hubic-token string OAuth Access Token as a JSON blob
--hubic-token-url string Token server url
- --jottacloud-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
+ --jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
--jottacloud-trashed-only Only show files that are in the trash
      --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails (default 10Mi)
- --koofr-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
- --koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
+ --koofr-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --koofr-endpoint string The Koofr API endpoint to use
--koofr-mountid string Mount ID of the mount to use
- --koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
+ --koofr-password string Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
+ --koofr-provider string Choose your storage provider
--koofr-setmtime Does the backend support setting modification time (default true)
- --koofr-user string Your Koofr user name
+ --koofr-user string Your user name
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
- --local-encoding MultiEncoder This sets the encoding for the backend (default Slash,Dot)
+ --local-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
--local-no-check-updated Don't check to see if the files change during upload
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
@@ -7803,7 +8062,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--local-unicode-normalization Apply unicode NFC normalization to paths and filenames
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
- --mailru-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --mailru-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
@@ -7811,18 +8070,23 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk (default 32Mi)
--mailru-user string User name (usually email)
--mega-debug Output more debug from Mega
- --mega-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
--mega-user string User name
+ --netstorage-account string Set the NetStorage account name
+ --netstorage-host string Domain+path of NetStorage host to connect to
+ --netstorage-protocol string Select between HTTP or HTTPS protocol (default "https")
+ --netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured)
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only)
--onedrive-auth-url string Auth server URL
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
+ --onedrive-disable-site-permission Disable the request for Sites.Read.All permission
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
- --onedrive-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-link-password string Set the password for links created by the link command
--onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
@@ -7830,27 +8094,28 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--onedrive-list-chunk int Size of listing chunk (default 1000)
--onedrive-no-versions Remove all versions on modifying operations
--onedrive-region string Choose national cloud region for OneDrive (default "global")
+ --onedrive-root-folder-id string ID of the root folder
--onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
- --opendrive-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
+ --opendrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
- --pcloud-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--pcloud-hostname string Hostname to connect to (default "api.pcloud.com")
--pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0")
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
- --premiumizeme-encoding MultiEncoder This sets the encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
- --putio-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
--qingstor-connection-retries int Number of connection retries (default 3)
- --qingstor-encoding MultiEncoder This sets the encoding for the backend (default Slash,Ctl,InvalidUtf8)
+ --qingstor-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8)
--qingstor-endpoint string Enter an endpoint URL to connection QingStor API
--qingstor-env-auth Get QingStor credentials from runtime
--qingstor-secret-access-key string QingStor Secret Access Key (password)
@@ -7865,12 +8130,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
- --s3-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --s3-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
--s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
--s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
+ --s3-list-url-encode Tristate Whether to url encode listings: true/false/unset (default unset)
+ --s3-list-version int Version of ListObjects to use: 1,2 or 0 for auto
--s3-location-constraint string Location constraint - must be set to match the Region
--s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
--s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
@@ -7894,10 +8161,11 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--s3-upload-concurrency int Concurrency for multipart uploads (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
+ --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
--s3-v2-auth If true use v2 authentication
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
- --seafile-encoding MultiEncoder This sets the encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
+ --seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
--seafile-library string Name of the library
--seafile-library-key string Library password (for encrypted libraries only) (obscured)
--seafile-pass string Password (obscured)
@@ -7917,7 +8185,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--sftp-md5sum-command string The command used to read md5 hashes
--sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
--sftp-path-override string Override path used by SSH connection
- --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-port int SSH port number (default 22)
--sftp-pubkey-file string Optional path to public key file
--sftp-server-command string Specifies the path or command to run a sftp server on the remote host
--sftp-set-modtime Set the modified time on the remote if set (default true)
@@ -7926,23 +8194,28 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default "sftp")
--sftp-use-fstat If set use fstat instead of stat
--sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods
- --sftp-user string SSH username, leave blank for current username, $USER
+ --sftp-user string SSH username (default "$USER")
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
- --sharefile-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --sharefile-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls
--sharefile-root-folder-id string ID of the root folder
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
--sia-api-password string Sia Daemon API Password (obscured)
--sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
- --sia-encoding MultiEncoder This sets the encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
+ --sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent")
--skip-links Don't warn about skipped symlinks
+ --storj-access-grant string Access grant
+ --storj-api-key string API key
+ --storj-passphrase string Encryption passphrase
+ --storj-provider string Choose an authentication method (default "existing")
+ --storj-satellite-address string Satellite address (default "us-central-1.storj.io")
--sugarsync-access-key-id string Sugarsync Access Key ID
--sugarsync-app-id string Sugarsync App ID
--sugarsync-authorization string Sugarsync authorization
--sugarsync-authorization-expiry string Sugarsync authorization expiry
--sugarsync-deleted-id string Sugarsync deleted folder id
- --sugarsync-encoding MultiEncoder This sets the encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
+ --sugarsync-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
--sugarsync-hard-delete Permanently delete files if true
--sugarsync-private-access-key string Sugarsync Private Access Key
--sugarsync-refresh-token string Sugarsync refresh token
@@ -7956,7 +8229,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8)
+ --swift-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form
--swift-key string API key or password (OS_PASSWORD)
@@ -7970,21 +8243,16 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-user string User name to log in (OS_USERNAME)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID)
- --tardigrade-access-grant string Access grant
- --tardigrade-api-key string API key
- --tardigrade-passphrase string Encryption passphrase
- --tardigrade-provider string Choose an authentication method (default "existing")
- --tardigrade-satellite-address string Satellite address (default "us-central-1.tardigrade.io")
--union-action-policy string Policy to choose upstream on ACTION category (default "epall")
--union-cache-time int Cache time of usage and free space (in seconds) (default 120)
--union-create-policy string Policy to choose upstream on CREATE category (default "epmfs")
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
--uptobox-access-token string Your access token
- --uptobox-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
+ --uptobox-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
- --webdav-encoding string This sets the encoding for the backend
+ --webdav-encoding string The encoding for the backend
--webdav-headers CommaSepList Set HTTP headers for all transactions
--webdav-pass string Password (obscured)
--webdav-url string URL of http host to connect to
@@ -7993,13 +8261,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--yandex-auth-url string Auth server URL
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
- --yandex-encoding MultiEncoder This sets the encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --yandex-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --yandex-hard-delete Delete files permanently rather than putting them into the trash
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
- --zoho-encoding MultiEncoder This sets the encoding for the backend (default Del,Ctl,InvalidUtf8)
+ --zoho-encoding MultiEncoder The encoding for the backend (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url
@@ -8061,8 +8330,8 @@ docker volume ls
Creating Volumes via CLI
Volumes can be created with docker volume create. Here are a few examples:
docker volume create vol1 -d rclone -o remote=storj: -o vfs-cache-mode=full
-docker volume create vol2 -d rclone -o remote=:tardigrade,access_grant=xxx:heimdall
-docker volume create vol3 -d rclone -o type=tardigrade -o path=heimdall -o tardigrade-access-grant=xxx -o poll-interval=0
+docker volume create vol2 -d rclone -o remote=:storj,access_grant=xxx:heimdall
+docker volume create vol3 -d rclone -o type=storj -o path=heimdall -o storj-access-grant=xxx -o poll-interval=0
Note the -d rclone
flag that tells docker to request a volume from the rclone driver. This works even if you installed the managed driver by its full name rclone/docker-volume-rclone
because you provided the --alias rclone
option.
Volumes can be inspected as follows:
docker volume list
@@ -8184,6 +8453,654 @@ docker volume create my_vol -d rclone -o opt1=new_val1 ...
docker volume list
docker volume inspect my_vol
If docker refuses to remove the volume, you should find containers or swarm services that use it and stop them first.
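+As a minimal sketch (assuming a volume named my_vol), you can list the containers that reference the volume, stop them, and then retry the removal:
+docker ps -a --filter volume=my_vol
+docker stop <container-id>
+docker volume rm my_vol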
+Getting started
+
+- Install rclone and set up your remotes.
+- Bisync will create its working directory at
~/.cache/rclone/bisync
on Linux or C:\Users\MyLogin\AppData\Local\rclone\bisync
on Windows. Make sure that this location is writable.
+- Run bisync with the
--resync
flag, specifying the paths to the local and remote sync directory roots.
+- For successive sync runs, leave off the
--resync
flag.
+- Consider using a filters file for excluding unnecessary files and directories from the sync.
+- Consider setting up the --check-access feature for safety.
+- On Linux, consider setting up a crontab entry. bisync can safely run in concurrent cron jobs thanks to the lock files it maintains.
+
+Here is a typical run log (with timestamps removed for clarity):
+rclone bisync /testdir/path1/ /testdir/path2/ --verbose
+INFO : Synching Path1 "/testdir/path1/" with Path2 "/testdir/path2/"
+INFO : Path1 checking for diffs
+INFO : - Path1 File is new - file11.txt
+INFO : - Path1 File is newer - file2.txt
+INFO : - Path1 File is newer - file5.txt
+INFO : - Path1 File is newer - file7.txt
+INFO : - Path1 File was deleted - file4.txt
+INFO : - Path1 File was deleted - file6.txt
+INFO : - Path1 File was deleted - file8.txt
+INFO : Path1: 7 changes: 1 new, 3 newer, 0 older, 3 deleted
+INFO : Path2 checking for diffs
+INFO : - Path2 File is new - file10.txt
+INFO : - Path2 File is newer - file1.txt
+INFO : - Path2 File is newer - file5.txt
+INFO : - Path2 File is newer - file6.txt
+INFO : - Path2 File was deleted - file3.txt
+INFO : - Path2 File was deleted - file7.txt
+INFO : - Path2 File was deleted - file8.txt
+INFO : Path2: 7 changes: 1 new, 3 newer, 0 older, 3 deleted
+INFO : Applying changes
+INFO : - Path1 Queue copy to Path2 - /testdir/path2/file11.txt
+INFO : - Path1 Queue copy to Path2 - /testdir/path2/file2.txt
+INFO : - Path2 Queue delete - /testdir/path2/file4.txt
+NOTICE: - WARNING New or changed in both paths - file5.txt
+NOTICE: - Path1 Renaming Path1 copy - /testdir/path1/file5.txt..path1
+NOTICE: - Path1 Queue copy to Path2 - /testdir/path2/file5.txt..path1
+NOTICE: - Path2 Renaming Path2 copy - /testdir/path2/file5.txt..path2
+NOTICE: - Path2 Queue copy to Path1 - /testdir/path1/file5.txt..path2
+INFO : - Path2 Queue copy to Path1 - /testdir/path1/file6.txt
+INFO : - Path1 Queue copy to Path2 - /testdir/path2/file7.txt
+INFO : - Path2 Queue copy to Path1 - /testdir/path1/file1.txt
+INFO : - Path2 Queue copy to Path1 - /testdir/path1/file10.txt
+INFO : - Path1 Queue delete - /testdir/path1/file3.txt
+INFO : - Path2 Do queued copies to - Path1
+INFO : - Path1 Do queued copies to - Path2
+INFO : - Do queued deletes on - Path1
+INFO : - Do queued deletes on - Path2
+INFO : Updating listings
+INFO : Validating listings for Path1 "/testdir/path1/" vs Path2 "/testdir/path2/"
+INFO : Bisync successful
+Command line syntax
+$ rclone bisync --help
+Usage:
+ rclone bisync remote1:path1 remote2:path2 [flags]
+
+Positional arguments:
+ Path1, Path2 Local path, or remote storage with ':' plus optional path.
+ Type 'rclone listremotes' for list of configured remotes.
+
+Optional Flags:
+ --check-access Ensure expected `RCLONE_TEST` files are found on
+ both Path1 and Path2 filesystems, else abort.
+ --check-filename FILENAME Filename for `--check-access` (default: `RCLONE_TEST`)
+ --check-sync CHOICE Controls comparison of final listings:
+ `true | false | only` (default: true)
+ If set to `only`, bisync will only compare listings
+ from the last run but skip actual sync.
+ --filters-file PATH Read filtering patterns from a file
+ --max-delete PERCENT Safety check on maximum percentage of deleted files allowed.
+ If exceeded, the bisync run will abort. (default: 50%)
+ --force Bypass `--max-delete` safety check and run the sync.
+ Consider using with `--verbose`
+ --remove-empty-dirs Remove empty directories at the final cleanup step.
+ -1, --resync Performs the resync run.
+ Warning: Path1 files may overwrite Path2 versions.
+ Consider using `--verbose` or `--dry-run` first.
+ --localtime Use local time in listings (default: UTC)
+ --no-cleanup Retain working files (useful for troubleshooting and testing).
+ --workdir PATH Use custom working directory (useful for testing).
+ (default: `~/.cache/rclone/bisync`)
+ -n, --dry-run Go through the motions - No files are copied/deleted.
+ -v, --verbose Increases logging verbosity.
+ May be specified more than once for more details.
+ -h, --help help for bisync
+Arbitrary rclone flags may be specified on the bisync command line, for example rclone bisync ./testdir/path1/ gdrive:testdir/path2/ --drive-skip-gdocs -v -v --timeout 10s
Note that interactions of various rclone flags with the bisync process flow have not been fully tested yet.
+Paths
+Path1 and Path2 arguments may be references to any mix of local directory paths (absolute or relative), UNC paths (//server/share/path
), Windows drive paths (with a drive letter and :
) or configured remotes with optional subdirectory paths. Cloud references are distinguished by having a :
in the argument (see Windows support below).
+Path1 and Path2 are treated equally, in that neither has priority for file changes, and access efficiency does not change whether a remote is on Path1 or Path2.
+The listings in the bisync working directory (default: ~/.cache/rclone/bisync
) are named based on the Path1 and Path2 arguments so that separate syncs to individual directories within the tree may be set up, e.g.: path_to_local_tree..dropbox_subdir.lst
.
+Any empty directories after the sync on both the Path1 and Path2 filesystems are not deleted by default. If the --remove-empty-dirs
flag is specified, then both paths will have any empty directories purged as the last step in the process.
+Command-line flags
+--resync
+This will effectively make both Path1 and Path2 filesystems contain a matching superset of all files. Path2 files that do not exist in Path1 will be copied to Path1, and the process will then sync the Path1 tree to Path2.
+The base directories on both the Path1 and Path2 filesystems must exist or bisync will fail. This is required for safety - so that bisync can verify that both paths are valid.
+When using --resync
a newer version of a file on the Path2 filesystem will be overwritten by the Path1 filesystem version. Carefully evaluate deltas using --dry-run.
+For a resync run, one of the paths may be empty (no files in the path tree). The resync run should result in files on both paths, else a normal non-resync run will fail.
+For a non-resync run, either path being empty (no files in the tree) fails with Empty current PathN listing. Cannot sync to an empty directory: X.pathN.lst
This is a safety check that an unexpected empty path does not result in deleting everything in the other path.
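+For example, a cautious first resync (paths here are illustrative) can be previewed before being applied, after which normal runs drop the flag:
+rclone bisync /path/to/local remote:path --resync --dry-run --verbose
+rclone bisync /path/to/local remote:path --resync --verbose
+rclone bisync /path/to/local remote:path --verbose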
+--check-access
+Access check files are an additional safety measure against data loss. bisync will ensure it can find matching RCLONE_TEST
files in the same places in the Path1 and Path2 filesystems. Time stamps and file contents are not important, just the names and locations. Place one or more RCLONE_TEST
files in the Path1 or Path2 filesystem and then do either a run without --check-access
or a --resync
to set matching files on both filesystems. If you have symbolic links in your sync tree it is recommended to place RCLONE_TEST
files in the linked-to directory tree to protect against bisync assuming a bunch of deleted files if the linked-to tree should not be accessible. Also see the --check-filename
flag.
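+As an illustrative sequence, you might seed a check file locally, let a resync propagate it to the other side, and then enable the check on routine runs:
+touch /path/to/local/RCLONE_TEST
+rclone bisync /path/to/local remote:path --resync
+rclone bisync /path/to/local remote:path --check-access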
+--max-delete
+As a safety check, if greater than the --max-delete
percent of files were deleted on either the Path1 or Path2 filesystem, then bisync will abort with a warning message, without making any changes. The default --max-delete
is 50%
. One way to trigger this limit is to rename a directory that contains more than half of your files. This will appear to bisync as a bunch of deleted files and a bunch of new files. This safety check is intended to block bisync from deleting all of the files on both filesystems due to a temporary network access issue, or if the user had inadvertently deleted the files on one side or the other. To force the sync either set a different delete percentage limit, e.g. --max-delete 75
(allows up to 75% deletion), or use --force
to bypass the check.
+Also see the all files changed check.
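+For example (illustrative paths), to allow a run where up to 75% of the files may be deleted, or to bypass the check entirely:
+rclone bisync /path/to/local remote:path --max-delete 75
+rclone bisync /path/to/local remote:path --force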
+--filters-file
+By using rclone filter features you can exclude file types or directory sub-trees from the sync. See the bisync filters section and generic --filter-from documentation. An example filters file contains filters for non-allowed files for synching with Dropbox.
+If you make changes to your filters file then bisync requires a run with --resync
+. This is a safety feature, which prevents existing files on the Path1 and/or Path2 side from seeming to disappear from view (since they are excluded in the new listings), which would fool bisync into seeing them as deleted (as compared to the prior run listings), and then bisync would proceed to delete them for real.
+To block this from happening bisync calculates an MD5 hash of the filters file and stores the hash in a .md5
file in the same place as your filters file. On the next runs with --filters-file
set, bisync re-calculates the MD5 hash of the current filters file and compares it to the hash stored in .md5
file. If they don't match the run aborts with a critical error and thus forces you to do a --resync
, likely avoiding a disaster.
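+A typical sequence after editing the filters file (illustrative paths) is therefore a --resync run to store the new hash, followed by normal runs:
+rclone bisync /path/to/local remote:path --filters-file /path/to/filters.txt --resync
+rclone bisync /path/to/local remote:path --filters-file /path/to/filters.txt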
+--check-sync
+Enabled by default, the check-sync function checks that all of the same files exist in both the Path1 and Path2 history listings. This check-sync integrity check is performed at the end of the sync run by default. Any untrapped failing copy/deletes between the two paths might result in differences between the two listings and in the untracked file content differences between the two paths. A resync run would correct the error.
+Note that the default-enabled integrity check locally executes a load of both the final Path1 and Path2 listings, and thus adds to the run time of a sync. Using --check-sync=false
will disable it and may significantly reduce the sync run times for very large numbers of files.
+The check may be run manually with --check-sync=only
. It runs only the integrity check and terminates without actually synching.
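+For example (illustrative paths), to run only the integrity check, or to skip it on a very large sync:
+rclone bisync /path/to/local remote:path --check-sync=only
+rclone bisync /path/to/local remote:path --check-sync=false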
+Operation
+Runtime flow details
+bisync retains the listings of the Path1
and Path2
filesystems from the prior run. On each successive run it will:
+
+- list files on
path1
and path2
, and check for changes on each side. Changes include New
, Newer
, Older
, and Deleted
files.
+- Propagate changes on
path1
to path2
, and vice-versa.
+
+Safety measures
+
+- A lock file prevents multiple simultaneous runs when one run takes a while. This can be particularly useful if bisync is run by the cron scheduler.
+- Handle change conflicts non-destructively by creating
..path1
and ..path2
file versions.
+- File system access health check using
RCLONE_TEST
files (see the --check-access
flag).
+- Abort on excessive deletes - protects against a failed listing being interpreted as all the files were deleted. See the
--max-delete
and --force
flags.
+- If something evil happens, bisync goes into a safe state to block damage by later runs. (See Error Handling)
+
+Normal sync checks
+
+Type | Description | Result | Implementation
+Path2 new | File is new on Path2, does not exist on Path1 | Path2 version survives | rclone copy Path2 to Path1
+Path2 newer | File is newer on Path2, unchanged on Path1 | Path2 version survives | rclone copy Path2 to Path1
+Path2 deleted | File is deleted on Path2, unchanged on Path1 | File is deleted | rclone delete Path1
+Path1 new | File is new on Path1, does not exist on Path2 | Path1 version survives | rclone copy Path1 to Path2
+Path1 newer | File is newer on Path1, unchanged on Path2 | Path1 version survives | rclone copy Path1 to Path2
+Path1 older | File is older on Path1, unchanged on Path2 | Path1 version survives | rclone copy Path1 to Path2
+Path2 older | File is older on Path2, unchanged on Path1 | Path2 version survives | rclone copy Path2 to Path1
+Path1 deleted | File no longer exists on Path1 | File is deleted | rclone delete Path2
+
+Unusual sync checks
+
+Type | Description | Result | Implementation
+Path1 new AND Path2 new | File is new on Path1 AND new on Path2 | Files renamed to _Path1 and _Path2 | rclone copy _Path2 file to Path1, rclone copy _Path1 file to Path2
+Path2 newer AND Path1 changed | File is newer on Path2 AND also changed (newer/older/size) on Path1 | Files renamed to _Path1 and _Path2 | rclone copy _Path2 file to Path1, rclone copy _Path1 file to Path2
+Path2 newer AND Path1 deleted | File is newer on Path2 AND also deleted on Path1 | Path2 version survives | rclone copy Path2 to Path1
+Path2 deleted AND Path1 changed | File is deleted on Path2 AND changed (newer/older/size) on Path1 | Path1 version survives | rclone copy Path1 to Path2
+Path1 deleted AND Path2 changed | File is deleted on Path1 AND changed (newer/older/size) on Path2 | Path2 version survives | rclone copy Path2 to Path1
+
+All files changed check
+If all prior existing files on either of the filesystems have changed (e.g. timestamps have changed due to changing the system's timezone) then bisync will abort without making any changes. Any new files are not considered for this check. You could use --force
to force the sync (whichever side has the changed timestamp files wins). Alternately, a --resync
may be used (Path1 versions will be pushed to Path2). Consider the situation carefully and perhaps use --dry-run
before you commit to the changes.
+Modification time
+Bisync relies on file timestamps to identify changed files and will refuse to operate if the backend lacks modification time support.
+If you or your application should change the content of a file without changing the modification time then bisync will not notice the change, and thus will not copy it to the other side.
+Note that on some cloud storage systems it is not possible to have file timestamps that match precisely between the local and other filesystems.
+Bisync's approach to this problem is to track the changes on each side separately over time with a local database of files on that side, then apply the resulting changes on the other side.
+Error handling
+Certain bisync critical errors, such as file copy/move failing, will result in a bisync lockout of following runs. The lockout is asserted because the sync status and history of the Path1 and Path2 filesystems cannot be trusted, so it is safer to block any further changes until someone checks things out. The recovery is to do a --resync
again.
+It is recommended to use --resync --dry-run --verbose
initially and carefully review what changes will be made before running the --resync
without --dry-run
.
+Most of these events come up due to an error status from an internal call. On such a critical error the {...}.path1.lst
and {...}.path2.lst
listing files are renamed to extension .lst-err
, which blocks any future bisync runs (since the normal .lst
files are not found). Bisync keeps them under the bisync
subdirectory of the rclone cache directory, typically at ${HOME}/.cache/rclone/bisync/
on Linux.
+Some errors are considered temporary, and re-running bisync is not blocked. A critical return blocks further bisync runs.
+Lock file
+When bisync is running, a lock file is created in the bisync working directory, typically at ~/.cache/rclone/bisync/PATH1..PATH2.lck
on Linux. If bisync should crash or hang, the lock file will remain in place and block any further runs of bisync for the same paths. Delete the lock file as part of debugging the situation. The lock file effectively blocks follow-on (e.g., scheduled by cron) runs when the prior invocation is taking a long time. The lock file contains the PID of the blocking process, which may help in debugging.
+Note that while concurrent bisync runs are allowed, be very cautious that there is no overlap in the trees being synched between concurrent runs, lest there be replicated files, deleted files and general mayhem.
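+If a crashed or killed run has left a stale lock behind, you can locate and remove it manually; the exact file name depends on your Path1 and Path2 arguments:
+ls ~/.cache/rclone/bisync/*.lck
+rm ~/.cache/rclone/bisync/PATH1..PATH2.lck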
+Return codes
+rclone bisync
returns the following codes to the calling program: - 0
on a successful run, - 1
for a non-critical failing run (a rerun may be successful), - 2
for a critically aborted run (requires a --resync
to recover).
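+As a minimal sketch (remote and paths are illustrative), a wrapper script can branch on these codes:
+#!/bin/sh
+rclone bisync /path/to/local remote:path --check-access
+case $? in
+  0) echo "bisync OK" ;;
+  1) echo "bisync soft failure - a rerun may succeed" ;;
+  2) echo "bisync critical error - run with --resync to recover" ;;
+esac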
+Limitations
+Supported backends
+Bisync is considered BETA and has been tested with the following backends:
+
+- Local filesystem
+- Google Drive
+- Dropbox
+- OneDrive
+- S3
+- SFTP
+It has not been fully tested with other services yet. If it works, or sorta works, please let us know and we'll update the list. Run the test suite to check for proper operation as described below.
+The first release of rclone bisync
requires that the underlying backend supports the modification time feature and will refuse to run otherwise. This limitation will be lifted in a future rclone bisync
release.
+Concurrent modifications
+When using Local, FTP or SFTP remotes rclone does not create temporary files at the destination when copying, and thus if the connection is lost the created file may be corrupt, which will likely propagate back to the original path on the next sync, resulting in data loss. This will be solved in a future release; there is no workaround at the moment.
+Files that change during a bisync run may result in data loss. This has been seen in a highly dynamic environment, where the filesystem is getting hammered by running processes during the sync. The solution is to sync at quiet times or filter out unnecessary directories and files.
+Empty directories
+New empty directories on one path are not propagated to the other side. This is because bisync (and rclone) natively works on files not directories. The following sequence is a workaround but will not propagate the delete of an empty directory to the other side:
+rclone bisync PATH1 PATH2
+rclone copy PATH1 PATH2 --filter "+ */" --filter "- **" --create-empty-src-dirs
+rclone copy PATH2 PATH1 --filter "+ */" --filter "- **" --create-empty-src-dirs
+Renamed directories
+Renaming a folder on the Path1 side results in deleting all files on the Path2 side and then copying all files again from Path1 to Path2. Bisync sees this as all files in the old directory name as deleted and all files in the new directory name as new. Similarly, renaming a directory on both sides to the same name will result in creating ..path1
and ..path2
files on both sides. Currently the most effective and efficient method of renaming a directory is to rename it on both sides, then do a --resync
.
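+For example (illustrative paths), after renaming olddir to newdir on both sides, a resync re-establishes the baseline listings:
+mv /path/to/local/olddir /path/to/local/newdir
+rclone moveto remote:path/olddir remote:path/newdir
+rclone bisync /path/to/local remote:path --resync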
+Case sensitivity
+Synching with case-insensitive filesystems, such as Windows or Box
, can result in file name conflicts. This will be fixed in a future release. The near term workaround is to make sure that files on both sides don't have spelling case differences (Smile.jpg
vs. smile.jpg
).
+Windows support
+Bisync has been tested on Windows 8.1, Windows 10 Pro 64-bit and on Windows Github runners.
+Drive letters are allowed, including drive letters mapped to network drives (rclone bisync J:\localsync GDrive:
). If a drive letter is omitted, the shell's current drive is the default. Drive letters are a single character followed by :
, so cloud names must be more than one character long.
+Absolute paths (with or without a drive letter), and relative paths (with or without a drive letter) are supported.
+Working directory is created at C:\Users\MyLogin\AppData\Local\rclone\bisync
.
+Note that bisync output may show a mix of forward /
and back \
slashes.
+Be careful of case-independent directory and file naming on Windows vs. case-dependent naming on Linux.
+Filtering
+See filtering documentation for how filter rules are written and interpreted.
+Bisync's --filters-file
flag slightly extends rclone's --filter-from filtering mechanism. For a given bisync run you may provide only one --filters-file
. The --include*
, --exclude*
, and --filter
flags are also supported.
+How to filter directories
+Filtering portions of the directory tree is a critical feature for synching.
+Examples of directory trees (always beneath the Path1/Path2 root level) you may want to exclude from your sync:
+
+- Directory trees containing only software build intermediate files.
+- Directory trees containing application temporary files and data, such as the Windows C:\Users\MyLogin\AppData\ tree.
+- Directory trees containing files that are large, less important, or are getting thrashed continuously by ongoing processes.
+On the other hand, there may be only select directories that you actually want to sync, and exclude all others. See the Example include-style filters for Windows user directories below.
+Filters file writing guidelines
+
+- Begin with excluding directory trees:
+
+- e.g. `- /AppData/`
+- `**`
on the end is not necessary. Once a given directory level is excluded then everything beneath it won't be looked at by rclone.
+- Exclude such directories that are unneeded, are big, dynamically thrashed, or where there may be access permission issues.
+- Excluding such dirs first will make rclone operations (much) faster.
+- Specific files may also be excluded, as with the Dropbox exclusions example below.
+
+- Decide if it's easier (or cleaner) to:
+
+- Include select directories and therefore exclude everything else -- or --
+- Exclude select directories and therefore include everything else
+
+- Include select directories:
+
+- Add lines like: `+ /Documents/PersonalFiles/**` to select which directories to include in the sync.
+- `**`
on the end specifies to include the full depth of the specified tree.
+- With Include-style filters, files at the Path1/Path2 root are not included. They may be included with `+ /*`.
+- Place RCLONE_TEST files within these included directory trees. They will only be looked for in these directory trees.
+- Finish by excluding everything else by adding `- **` at the end of the filters file.
+- Disregard step 4.
+
+- Exclude select directories:
+
+- Add more lines like in step 1. For example:
`- /Desktop/tempfiles/`, or `- /testdir/`. Again, a `**` on the end is not necessary.
+- Do not add a `- **` in the file. Without this line, everything will be included that has not been explicitly excluded.
+- Disregard step 3.
+
+
+A few rules for the syntax of a filter file expanding on filtering documentation:
+
+- Lines may start with spaces and tabs - rclone strips leading whitespace.
+- If the first non-whitespace character is a
#
then the line is a comment and will be ignored.
+- Blank lines are ignored.
+- The first non-whitespace character on a filter line must be a
+
or -
.
+- Exactly 1 space is allowed between the
+/-
and the path term.
+- Only forward slashes (
/
) are used in path terms, even on Windows.
+- The rest of the line is taken as the path term. Trailing whitespace is taken literally, and probably is an error.
+
+Example include-style filters for Windows user directories
+This Windows include-style example is based on the sync root (Path1) set to C:\Users\MyLogin
. The strategy is to select specific directories to be synched with a network drive (Path2).
+
+- `- /AppData/` excludes an entire tree of Windows stored stuff that need not be synched. In my case, AppData has >11 GB of stuff I don't care about, and there are some subdirectories beneath AppData that are not accessible to my user login, resulting in bisync critical aborts.
+- Windows creates cache files starting with both upper and lowercase
NTUSER
at C:\Users\MyLogin
. These files may be dynamic, locked, and are generally of no interest for syncing.
+- There are just a few directories with my data that I do want synched, in the form of `+ /...` include lines. By selecting only the directory trees I want, I avoid the dozen-plus directories that various apps make at C:\Users\MyLogin.
+- Include files in the root of the sync point,
C:\Users\MyLogin
, by adding the `+ /*` line.
+- This is an Include-style filters file, therefore it ends with `- **` which excludes everything not explicitly included.
+
+- /AppData/
+- NTUSER*
+- ntuser*
++ /Documents/Family/**
++ /Documents/Sketchup/**
++ /Documents/Microcapture_Photo/**
++ /Documents/Microcapture_Video/**
++ /Desktop/**
++ /Pictures/**
++ /*
+- **
+Note also that Windows implements several "library" links such as C:\Users\MyLogin\My Documents\My Music
pointing to C:\Users\MyLogin\Music
. rclone sees these as links, so you must add --links
to the bisync command line if you wish to follow these links. I find that I get permission errors in trying to follow the links, so I don't include the rclone --links
flag, but then you get lots of Can't follow symlink…
noise from rclone about not following the links. This noise can be quashed by adding --quiet
to the bisync command line.
+Example exclude-style filters files for use with Dropbox
+
+- Dropbox disallows synching the listed temporary and configuration/data files. The `- ` filter lines below exclude these files wherever they may occur in the sync tree. Consider adding similar exclusions for file types you don't need to sync, such as core dump and software build files.
+- bisync testing creates
/testdir/
at the top level of the sync tree, and usually deletes the tree after the test. If a normal sync should run while the /testdir/
tree exists the --check-access
phase may fail due to unbalanced RCLONE_TEST files. The `- /testdir/` filter blocks this tree from being synched. You don't need this exclusion if you are not doing bisync development testing.
+- Everything else beneath the Path1/Path2 root will be synched.
+- RCLONE_TEST files may be placed anywhere within the tree, including the root.
+
+Example filters file for Dropbox
+# Filter file for use with bisync
+# See https://rclone.org/filtering/ for filtering rules
+# NOTICE: If you make changes to this file you MUST do a --resync run.
+# Run with --dry-run to see what changes will be made.
+
+# Dropbox won't sync some files so filter them away here.
+# See https://help.dropbox.com/installs-integrations/sync-uploads/files-not-syncing
+- .dropbox.attr
+- ~*.tmp
+- ~$*
+- .~*
+- desktop.ini
+- .dropbox
+
+# Used for bisync testing, so excluded from normal runs
+- /testdir/
+
+# Other example filters
+#- /TiBU/
+#- /Photos/
+How --check-access handles filters
+At the start of a bisync run, listings are gathered for Path1 and Path2 while using the user's --filters-file
. During the check access phase, bisync scans these listings for RCLONE_TEST
files. Any RCLONE_TEST
files hidden by the --filters-file
are not in the listings and thus not checked during the check access phase.
+Troubleshooting
+Reading bisync logs
+Here are two normal runs. The first one has a newer file on the remote. The second has no deltas between local and remote.
+2021/05/16 00:24:38 INFO : Synching Path1 "/path/to/local/tree/" with Path2 "dropbox:/"
+2021/05/16 00:24:38 INFO : Path1 checking for diffs
+2021/05/16 00:24:38 INFO : - Path1 File is new - file.txt
+2021/05/16 00:24:38 INFO : Path1: 1 changes: 1 new, 0 newer, 0 older, 0 deleted
+2021/05/16 00:24:38 INFO : Path2 checking for diffs
+2021/05/16 00:24:38 INFO : Applying changes
+2021/05/16 00:24:38 INFO : - Path1 Queue copy to Path2 - dropbox:/file.txt
+2021/05/16 00:24:38 INFO : - Path1 Do queued copies to - Path2
+2021/05/16 00:24:38 INFO : Updating listings
+2021/05/16 00:24:38 INFO : Validating listings for Path1 "/path/to/local/tree/" vs Path2 "dropbox:/"
+2021/05/16 00:24:38 INFO : Bisync successful
+
+2021/05/16 00:36:52 INFO : Synching Path1 "/path/to/local/tree/" with Path2 "dropbox:/"
+2021/05/16 00:36:52 INFO : Path1 checking for diffs
+2021/05/16 00:36:52 INFO : Path2 checking for diffs
+2021/05/16 00:36:52 INFO : No changes found
+2021/05/16 00:36:52 INFO : Updating listings
+2021/05/16 00:36:52 INFO : Validating listings for Path1 "/path/to/local/tree/" vs Path2 "dropbox:/"
+2021/05/16 00:36:52 INFO : Bisync successful
+Dry run oddity
+The --dry-run
messages may indicate that it would try to delete some files. For example, if a file is new on Path2 and does not exist on Path1 then it would normally be copied to Path1, but with --dry-run
enabled those copies don't happen, which leads to the attempted delete on Path2, blocked again by --dry-run: ... Not deleting as --dry-run
.
+This whole confusing situation is an artifact of the --dry-run
flag. Scrutinize the proposed deletes carefully, and if the files would have been copied to Path1 then the threatened deletes on Path2 may be disregarded.
+Retries
+Rclone has built in retries. If you run with --verbose
you'll see error and retry messages such as shown below. This is usually not a bug. If at the end of the run you see Bisync successful
and not Bisync critical error
or Bisync aborted
then the run was successful, and you can ignore the error messages.
+The following run shows an intermittent fail. Lines 5 and 6 are low-level messages; line 6 is a bubbled-up warning message, conveying the error. Rclone normally retries failing commands, so there may be numerous such messages in the log.
+Since there are no final error/warning messages on line 7, rclone has recovered from failure after a retry, and the overall sync was successful.
+1: 2021/05/14 00:44:12 INFO : Synching Path1 "/path/to/local/tree" with Path2 "dropbox:"
+2: 2021/05/14 00:44:12 INFO : Path1 checking for diffs
+3: 2021/05/14 00:44:12 INFO : Path2 checking for diffs
+4: 2021/05/14 00:44:12 INFO : Path2: 113 changes: 22 new, 0 newer, 0 older, 91 deleted
+5: 2021/05/14 00:44:12 ERROR : /path/to/local/tree/objects/af: error listing: unexpected end of JSON input
+6: 2021/05/14 00:44:12 NOTICE: WARNING listing try 1 failed. - dropbox:
+7: 2021/05/14 00:44:12 INFO : Bisync successful
+This log shows a Critical failure which requires a --resync
to recover from. See the Runtime Error Handling section.
+2021/05/12 00:49:40 INFO : Google drive root '': Waiting for checks to finish
+2021/05/12 00:49:40 INFO : Google drive root '': Waiting for transfers to finish
+2021/05/12 00:49:40 INFO : Google drive root '': not deleting files as there were IO errors
+2021/05/12 00:49:40 ERROR : Attempt 3/3 failed with 3 errors and: not deleting files as there were IO errors
+2021/05/12 00:49:40 ERROR : Failed to sync: not deleting files as there were IO errors
+2021/05/12 00:49:40 NOTICE: WARNING rclone sync try 3 failed. - /path/to/local/tree/
+2021/05/12 00:49:40 ERROR : Bisync aborted. Must run --resync to recover.
+Denied downloads of "infected" or "abusive" files
+Google Drive has a filter for certain file types (.exe
, .apk
, et cetera) that by default cannot be copied from Google Drive to the local filesystem. If you are having problems, run with --verbose
to see specifically which files are generating complaints. If the error is This file has been identified as malware or spam and cannot be downloaded
, consider using the flag --drive-acknowledge-abuse.
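+That flag is passed through to the underlying backend like any other rclone flag, for example (illustrative paths):
+rclone bisync /path/to/local gdrive: --drive-acknowledge-abuse --verbose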
+Google Doc files
+Google docs exist as virtual files on Google Drive and cannot be transferred to other filesystems natively. While it is possible to export a Google doc to a normal file (with .xlsx
extension, for example), it's not possible to import a normal file back into a Google document.
+Bisync's handling of Google Doc files is to flag them in the run log output for the user's attention and ignore them for any file transfers, deletes, or syncs. They will show up with a length of -1
in the listings. This bisync run is otherwise successful:
+2021/05/11 08:23:15 INFO : Synching Path1 "/path/to/local/tree/base/" with Path2 "GDrive:"
+2021/05/11 08:23:15 INFO : ...path2.lst-new: Ignoring incorrect line: "- -1 - - 2018-07-29T08:49:30.136000000+0000 GoogleDoc.docx"
+2021/05/11 08:23:15 INFO : Bisync successful
+Usage examples
+Cron
+Rclone does not yet have a built-in capability to monitor the local file system for changes and must be blindly run periodically. On Windows this can be done using the Task Scheduler; on Linux you can use cron, which is described below.
+The 1st example runs a sync every 5 minutes between a local directory and an OwnCloud server, with output logged to a runlog file:
+# Minute (0-59)
+# Hour (0-23)
+# Day of Month (1-31)
+# Month (1-12 or Jan-Dec)
+# Day of Week (0-6 or Sun-Sat)
+# Command
+ */5 * * * * /path/to/rclone bisync /local/files MyCloud: --check-access --filters-file /path/to/bisync-filters.txt --log-file /path/to/bisync.log
+See crontab syntax for the details of crontab time interval expressions.
+If you run rclone bisync
as a cron job, redirect stdout/stderr to a file. The 2nd example runs a sync to Dropbox every hour and logs all stdout (via the >>
) and stderr (via 2>&1
) to a log file.
+0 * * * * /path/to/rclone bisync /path/to/local/dropbox Dropbox: --check-access --filters-file /home/user/filters.txt >> /path/to/logs/dropbox-run.log 2>&1
+Sharing an encrypted folder tree between hosts
+bisync can keep a local folder in sync with a cloud service, but what if you have some highly sensitive files to be synched?
+Usage of a cloud service is for exchanging both routine and sensitive personal files between one's home network, one's personal notebook when on the road, and with one's work computer. The routine data is not sensitive. For the sensitive data, configure an rclone crypt remote to point to a subdirectory within the local disk tree that is bisync'd to Dropbox, and then set up a bisync for this local crypt directory to a directory outside of the main sync tree.
+Linux server setup
+
+/path/to/DBoxroot
is the root of my local sync tree. There are numerous subdirectories.
+/path/to/DBoxroot/crypt
is the root subdirectory for files that are encrypted. This local directory target is set up as an rclone crypt remote named Dropcrypt:
. See rclone.conf snippet below.
+/path/to/my/unencrypted/files
is the root of my sensitive files - not encrypted, not within the tree synched to Dropbox.
+- To sync my local unencrypted files with the encrypted Dropbox versions I manually run
bisync /path/to/my/unencrypted/files Dropcrypt:
. This step could be bundled into a script to run before and after the full Dropbox tree sync in the last step, thus actively keeping the sensitive files in sync.
+bisync /path/to/DBoxroot Dropbox:
runs periodically via cron, keeping my full local sync tree in sync with Dropbox.
+
+Windows notebook setup
+
+- The Dropbox client runs keeping the local tree
C:\Users\MyLogin\Dropbox
always in sync with Dropbox. I could have used rclone bisync
instead.
+- A separate directory tree at
C:\Users\MyLogin\Documents\DropLocal
hosts the tree of unencrypted files/folders.
+- To sync my local unencrypted files with the encrypted Dropbox versions I manually run the following command:
rclone bisync C:\Users\MyLogin\Documents\DropLocal Dropcrypt:
.
+- The Dropbox client then syncs the changes with Dropbox.
+
+rclone.conf snippet
+[Dropbox]
+type = dropbox
+...
+
+[Dropcrypt]
+type = crypt
+remote = /path/to/DBoxroot/crypt # on the Linux server
+remote = C:\Users\MyLogin\Dropbox\crypt # on the Windows notebook
+filename_encryption = standard
+directory_name_encryption = true
+password = ...
+...
+Testing
+You should read this section only if you are developing for rclone. You need to have rclone source code locally to work with bisync tests.
+Bisync has a dedicated test framework implemented in the bisync_test.go
file located in the rclone source tree. The test suite is based on the go test
command. Series of tests are stored in subdirectories below the cmd/bisync/testdata
directory. Individual tests can be invoked by their directory name, e.g. go test . -case basic -remote local -remote2 gdrive: -v
+Tests will make a temporary folder on the remote and purge it afterwards. If during a test run there are intermittent errors and rclone retries, these errors will be captured and flagged as invalid MISCOMPAREs. Rerunning the test will let it pass. Consider such failures as noise.
+Test command syntax
+usage: go test ./cmd/bisync [options...]
+
+Options:
+ -case NAME Name(s) of the test case(s) to run. Multiple names should
+ be separated by commas. You can remove the `test_` prefix
+ and replace `_` by `-` in test name for convenience.
+ If not `all`, the name(s) should map to a directory under
+ `./cmd/bisync/testdata`.
+ Use `all` to run all tests (default: all)
+ -remote PATH1 `local` or name of cloud service with `:` (default: local)
+ -remote2 PATH2 `local` or name of cloud service with `:` (default: local)
+ -no-compare Disable comparing test results with the golden directory
+ (default: compare)
+ -no-cleanup Disable cleanup of Path1 and Path2 testdirs.
+ Useful for troubleshooting. (default: cleanup)
+ -golden Store results in the golden directory (default: false)
+ This flag can be used with multiple tests.
+ -debug Print debug messages
+ -stop-at NUM Stop test after given step number. (default: run to the end)
+ Implies `-no-compare` and `-no-cleanup`, if the test really
+ ends prematurely. Only meaningful for a single test case.
+ -refresh-times Force refreshing the target modtime, useful for Dropbox
+ (default: false)
+ -verbose Run tests verbosely
+Note: unlike rclone flags which must be prefixed by double dash (--
), the test command flags can be equally prefixed by a single -
or double dash.
+Running tests
+
+go test . -case basic -remote local -remote2 local
runs the test_basic
test case using only the local filesystem, synching one local directory with another local directory. Test script output is to the console, while commands within scenario.txt have their output sent to the .../workdir/test.log
file, which is finally compared to the golden copy.
+- The first argument after
go test
should be a relative name of the directory containing bisync source code. If you run tests right from there, the argument will be .
(current directory) as in most examples below. If you run bisync tests from the rclone source directory, the command should be go test ./cmd/bisync ...
.
+- The test engine will mangle rclone output to ensure comparability with golden listings and logs.
+- Test scenarios are located in
./cmd/bisync/testdata
. The test -case
argument should match the full name of a subdirectory under that directory. Every test subdirectory name on disk must start with test_
, this prefix can be omitted on command line for brevity. Also, underscores in the name can be replaced by dashes for convenience.
+go test . -remote local -remote2 local -case all
runs all tests.
+- Path1 and Path2 may either be the keyword
local
or may be names of configured cloud services. go test . -remote gdrive: -remote2 dropbox: -case basic
will run the test between these two services, without transferring any files to the local filesystem.
+- Test run stdout and stderr console output may be directed to a file, e.g.
go test . -remote gdrive: -remote2 local -case all > runlog.txt 2>&1
+
+Test execution flow
+
+- The base setup in the
initial
directory of the testcase is applied on the Path1 and Path2 filesystems (via rclone copy the initial directory to Path1, then rclone sync Path1 to Path2).
+- The commands in the scenario.txt file are applied, with output directed to the
test.log
file in the test working directory. Typically, the first actual command in the scenario.txt
file is to do a --resync
, which establishes the baseline {...}.path1.lst
and {...}.path2.lst
files in the test working directory (.../workdir/
relative to the temporary test directory). Various commands and listing snapshots are done within the test.
+- Finally, the contents of the test working directory are compared to the contents of the testcase's golden directory.
+
+Notes about testing
+
+- Test cases are in individual directories beneath
./cmd/bisync/testdata
. A command line reference to a test is understood to reference a directory beneath testdata
. For example, go test ./cmd/bisync -case dry-run -remote gdrive: -remote2 local
refers to the test case in ./cmd/bisync/testdata/test_dry_run
.
+- The test working directory is located at
.../workdir
relative to a temporary test directory, usually under /tmp
on Linux.
+- The local test sync tree is created at a temporary directory named like
bisync.XXX
under system temporary directory.
+- The remote test sync tree is located at a temporary directory under
<remote:>/bisync.XXX/
.
+path1
and/or path2
subdirectories are created in a temporary directory under the respective local or cloud test remote.
+- By default, the Path1 and Path2 test dirs and workdir will be deleted after each test run. The
-no-cleanup
flag disables purging these directories when validating and debugging a given test. These directories will be flushed before running another test, independent of the -no-cleanup
usage.
+- You will likely want to add `- /testdir/` to your normal bisync --filters-file so that normal syncs do not attempt to sync the test temporary directories, which may have RCLONE_TEST miscompares in some testcases which would otherwise trip the --check-access system. The --check-access mechanism is hard-coded to ignore RCLONE_TEST files beneath bisync/testdata, so the test cases may reside on the synched tree even if there are check file mismatches in the test tree.
+- Some Dropbox tests can fail, notably printing the following message:
src and dst identical but can't set mod time without deleting and re-uploading
This is expected and happens due to the way Dropbox handles modification times. You should use the -refresh-times
test flag to make up for this.
+- If Dropbox tests hit the request limit for you and print the error message
too_many_requests/...: Too many requests or write operations.
then follow the Dropbox App ID instructions.
+
+Updating golden results
+Sometimes even a slight change in the bisync source can cause little changes spread around many log files. Updating them manually would be a nightmare.
+The -golden
flag will store the test.log
and *.lst
listings from each test case into respective golden directories. Golden results will automatically contain generic strings instead of local or cloud paths which means that they should match when run with a different cloud service.
+Your normal workflow might be as follows:
+
+1. Git-clone the rclone sources locally.
+2. Modify the bisync source and check that it builds.
+3. Run the whole test suite: go test ./cmd/bisync -remote local
+4. If some tests show log differences, recheck them individually, e.g.: go test ./cmd/bisync -remote local -case basic
+5. If you are convinced with the difference, goldenize all tests at once: go test ./cmd/bisync -remote local -golden
+6. Use word diff: git diff --word-diff ./cmd/bisync/testdata/. Please note that normal line-level diff is generally useless here.
+7. Check the difference carefully!
+8. Commit the change (git commit) only if you are sure. If unsure, save your code changes then wipe the log diffs from git: git reset [--hard].
+Structure of test scenarios
+
+<testname>/initial/
contains a tree of files that will be set as the initial condition on both Path1 and Path2 testdirs.
+<testname>/modfiles/
contains files that will be used to modify the Path1 and/or Path2 filesystems.
+<testname>/golden/
contains the expected content of the test working directory (workdir
) at the completion of the testcase.
+<testname>/scenario.txt
contains the body of the test, in the form of various commands to modify files, run bisync, and snapshot listings. Output from these commands is captured to .../workdir/test.log
for comparison to the golden files.
+
+Supported test commands
+
+test <some message>
Print the line to the console and to the test.log
: test sync is working correctly with options x, y, z
+copy-listings <prefix>
Save a copy of all .lst
listings in the test working directory with the specified prefix: save-listings exclude-pass-run
+move-listings <prefix>
Similar to copy-listings
but removes the source
+purge-children <dir>
This will delete all child files and purge all child subdirs under the given directory but keep the parent intact. This behavior is important for tests with Google Drive because removing and re-creating the parent would change its ID.
+delete-file <file>
Delete a single file.
+delete-glob <dir> <pattern>
Delete a group of files located one level deep in the given directory with names matching a given glob pattern.
+touch-glob YYYY-MM-DD <dir> <pattern>
Change modification time on a group of files.
+touch-copy YYYY-MM-DD <source-file> <dest-dir>
Change file modification time then copy it to destination.
+copy-file <source-file> <dest-dir>
Copy a single file to given directory.
+copy-as <source-file> <dest-file>
Similar to above, but the destination must include both the directory and the new file name.
+copy-dir <src> <dst>
and sync-dir <src> <dst>
Copy/sync a directory. Equivalent of rclone copy
and rclone sync
.
+list-dirs <dir>
Equivalent to rclone lsf -R --dirs-only <dir>
+bisync [options]
Runs bisync against -remote
and -remote2
.
+
+Supported substitution terms
+
+{testdir/}
- the root dir of the testcase
+{datadir/}
- the modfiles
dir under the testcase root
+{workdir/}
- the temporary test working directory
+{path1/}
- the root of the Path1 test directory tree
+{path2/}
- the root of the Path2 test directory tree
+{session}
- base name of the test listings
+{/}
- OS-specific path separator
+{spc}
, {tab}
, {eol}
- whitespace
+{chr:HH}
- raw byte with given hexadecimal code
+
+Substitution results of the terms named like {dir/}
will end with /
(or backslash on Windows), so it is not necessary to include slash in the usage, for example delete-file {path1/}file1.txt
.
+Benchmarks
+This section is work in progress.
+Here are a few data points for scale, execution times, and memory usage.
+The first set of data was taken between a local disk and Dropbox. The speedtest.net download speed was ~170 Mbps, and upload speed was ~10 Mbps. 500 files (~9.5 MB each) had been already synched. 50 files were added in a new directory, each ~9.5 MB, ~475 MB total.
+
+Change | Operations and times | Overall run time
+500 files synched (nothing to move) | 1x listings for Path1 & Path2 | 1.5 sec
+500 files synched with --check-access | 1x listings for Path1 & Path2 | 1.5 sec
+50 new files on remote | Queued 50 copies down: 27 sec | 29 sec
+Moved local dir | Queued 50 copies up: 410 sec, 50 deletes up: 9 sec | 421 sec
+Moved remote dir | Queued 50 copies down: 31 sec, 50 deletes down: <1 sec | 33 sec
+Delete local dir | Queued 50 deletes up: 9 sec | 13 sec
+
+This next data is from a user's application. They had ~400GB of data over 1.96 million files being sync'ed between a Windows local disk and some remote cloud. The file full path length was on average 35 characters (which factors into load time and RAM required).
+
+- Loading the prior listing into memory (1.96 million files, listing file size 140 MB) took ~30 sec and occupied about 1 GB of RAM.
+- Getting a fresh listing of the local file system (producing the 140 MB output file) took about XXX sec.
+- Getting a fresh listing of the remote file system (producing the 140 MB output file) took about XXX sec. The network download speed was measured at XXX Mb/s.
+- Once the prior and current Path1 and Path2 listings were loaded (a total of four to be loaded, two at a time), determining the deltas was pretty quick (a few seconds for this test case), and the transfer time for any files to be copied was dominated by the network bandwidth.
+
+References
+rclone's bisync implementation was derived from the rclonesync-V2 project, including documentation and test mechanisms, with @cjnaz's (https://github.com/cjnaz) full support and encouragement.
+rclone bisync
is similar in nature to a range of other projects:
+
+- unison
+- syncthing
+- cjnaz/rclonesync
+- ConorWilliams/rsinc
+- jwink3101/syncrclone
+- DavideRossi/upback
+
+Bisync adopts the differential synchronization technique, which is based on keeping a history of the changes performed by both synchronizing sides. See the Dual Shadow Method section in Neil Fraser's article.
+Also note a number of academic publications by Benjamin Pierce about Unison and synchronization in general.
1Fichier
This is a backend for the 1fichier cloud storage service. Note that a Premium subscription is required to use the API.
Paths are specified as remote:path
@@ -8193,7 +9110,7 @@ docker volume inspect my_vol
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -8309,50 +9226,55 @@ y/e/d> y
Here are the standard options specific to fichier (1Fichier).
--fichier-api-key
Your API Key, get it from https://1fichier.com/console/params.pl.
+Properties:
- Config: api_key
- Env Var: RCLONE_FICHIER_API_KEY
- Type: string
-- Default: ""
+- Required: false
Advanced options
Here are the advanced options specific to fichier (1Fichier).
--fichier-shared-folder
If you want to download a shared folder, add this parameter.
+Properties:
- Config: shared_folder
- Env Var: RCLONE_FICHIER_SHARED_FOLDER
- Type: string
-- Default: ""
+- Required: false
--fichier-file-password
If you want to download a shared file that is password protected, add this parameter.
NB Input to this must be obscured - see rclone obscure.
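For example, the obscured value to put in the config can be generated like this (the password shown is just a placeholder):
rclone obscure "your-shared-file-password"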
+Properties:
- Config: file_password
- Env Var: RCLONE_FICHIER_FILE_PASSWORD
- Type: string
-- Default: ""
+- Required: false
--fichier-folder-password
If you want to list the files in a shared folder that is password protected, add this parameter.
NB Input to this must be obscured - see rclone obscure.
+Properties:
- Config: folder_password
- Env Var: RCLONE_FICHIER_FOLDER_PASSWORD
- Type: string
-- Default: ""
+- Required: false
--fichier-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_FICHIER_ENCODING
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot
-Limitations
+Limitations
rclone about
is not supported by the 1Fichier backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
Alias
@@ -8365,7 +9287,7 @@ y/e/d> y
Here is an example of how to make an alias called remote
for local folder. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -8416,11 +9338,12 @@ e/n/d/r/c/s/q> q
--alias-remote
Remote or path to alias.
Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
+Properties:
- Config: remote
- Env Var: RCLONE_ALIAS_REMOTE
- Type: string
-- Default: ""
+- Required: true
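+As a purely illustrative sketch (the remote name and folder path are invented here), an alias remote in the config file could look like:
+[media]
+type = alias
+remote = s3:bucket/path/to/media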
Amazon Drive
Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage service run by Amazon for consumers.
@@ -8431,12 +9354,12 @@ e/n/d/r/c/s/q> q
Configuration
The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config
walks you through it.
The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.
-Since rclone doesn't currently have its own Amazon Drive credentials so you will either need to have your own client_id
and client_secret
with Amazon Drive, or use a third party oauth proxy in which case you will need to enter client_id
, client_secret
, auth_url
and token_url
.
+Since rclone doesn't currently have its own Amazon Drive credentials, you will either need to have your own client_id
and client_secret
with Amazon Drive, or use a third-party oauth proxy in which case you will need to enter client_id
, client_secret
, auth_url
and token_url
.
Note also if you are not using Amazon's auth_url
and token_url
, (i.e. you filled in something for those) then if setting up on a remote machine you can only use the "copying the config" method of configuration - rclone authorize
will not work.
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
r) Rename remote
c) Copy remote
@@ -8527,56 +9450,62 @@ y/e/d> y
--acd-client-id
OAuth Client Id.
Leave blank normally.
+Properties:
- Config: client_id
- Env Var: RCLONE_ACD_CLIENT_ID
- Type: string
-- Default: ""
+- Required: false
--acd-client-secret
OAuth Client Secret.
Leave blank normally.
+Properties:
- Config: client_secret
- Env Var: RCLONE_ACD_CLIENT_SECRET
- Type: string
-- Default: ""
+- Required: false
Advanced options
Here are the advanced options specific to amazon cloud drive (Amazon Drive).
--acd-token
OAuth Access Token as a JSON blob.
+Properties:
- Config: token
- Env Var: RCLONE_ACD_TOKEN
- Type: string
-- Default: ""
+- Required: false
--acd-auth-url
Auth server URL.
Leave blank to use the provider defaults.
+Properties:
- Config: auth_url
- Env Var: RCLONE_ACD_AUTH_URL
- Type: string
-- Default: ""
+- Required: false
--acd-token-url
Token server url.
Leave blank to use the provider defaults.
+Properties:
- Config: token_url
- Env Var: RCLONE_ACD_TOKEN_URL
- Type: string
-- Default: ""
+- Required: false
--acd-checkpoint
Checkpoint for internal polling (debug).
+Properties:
- Config: checkpoint
- Env Var: RCLONE_ACD_CHECKPOINT
- Type: string
-- Default: ""
+- Required: false
--acd-upload-wait-per-gb
Additional time per GiB to wait after a failed complete upload to see if it appears.
@@ -8585,6 +9514,7 @@ y/e/d> y
You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually.
These values were determined empirically by observing lots of uploads of big files for a range of file sizes.
Upload with the "-v" flag to see more info about what rclone is doing in this situation.
+Properties:
- Config: upload_wait_per_gb
- Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
@@ -8595,6 +9525,7 @@ y/e/d> y
Files >= this size will be downloaded via their tempLink.
Files this size or more will be downloaded via their "tempLink". This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10 GiB. The default for this is 9 GiB which shouldn't need to be changed.
To download files above this threshold, rclone requests a "tempLink" which downloads the file through a temporary URL directly from the underlying S3 storage.
+Properties:
- Config: templink_threshold
- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
@@ -8602,15 +9533,16 @@ y/e/d> y
- Default: 9Gi
--acd-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_ACD_ENCODING
- Type: MultiEncoder
- Default: Slash,InvalidUtf8,Dot
-Limitations
+Limitations
Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries
flag) which should hopefully work around this problem.
Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.
@@ -8628,9 +9560,12 @@ y/e/d> y
- Dreamhost
- IBM COS S3
- Minio
+- RackCorp Object Storage
- Scaleway
+- Seagate Lyve Cloud
- SeaweedFS
- StackPath
+- Storj
- Tencent Cloud Object Storage (COS)
- Wasabi
@@ -8649,7 +9584,7 @@ y/e/d> y
First run
rclone config
This will guide you through an interactive setup process.
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -8743,7 +9678,7 @@ Choose a number from below, or type in your own value
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
-endpoint>
+endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
@@ -8821,6 +9756,8 @@ Choose a number from below, or type in your own value
\ "DEEP_ARCHIVE"
8 / Intelligent-Tiering storage class
\ "INTELLIGENT_TIERING"
+ 9 / Glacier Instant Retrieval storage class
+ \ "GLACIER_IR"
storage_class> 1
Remote config
--------------------
@@ -8831,23 +9768,23 @@ env_auth = false
access_key_id = XXX
secret_access_key = YYY
region = us-east-1
-endpoint =
-location_constraint =
+endpoint =
+location_constraint =
acl = private
-server_side_encryption =
-storage_class =
+server_side_encryption =
+storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
-y/e/d>
+y/e/d>
Modified time
The modified time is stored as metadata on the object as X-Amz-Meta-Mtime
as floating point since the epoch, accurate to 1 ns.
If the modification time needs to be updated rclone will attempt to perform a server side copy to update the modification if the object can be copied in a single part. In the case the object is larger than 5Gb or is in Glacier or Glacier Deep Archive storage the object will be uploaded rather than copied.
Note that reading this from the object takes an additional HEAD
request as the metadata isn't returned in object listings.
Reducing costs
Avoiding HEAD requests to read the modification time
-By default rclone will use the modification time of objects stored in S3 for syncing. This is stored in object metadata which unfortunately takes an extra HEAD request to read which can be expensive (in time and money).
+By default, rclone will use the modification time of objects stored in S3 for syncing. This is stored in object metadata which unfortunately takes an extra HEAD request to read which can be expensive (in time and money).
The modification time is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient on S3 because it requires an extra API call to retrieve the metadata.
The extra API calls can be avoided when syncing (using rclone sync
or rclone copy
) in a few different ways, each with its own tradeoffs.
@@ -8884,11 +9821,11 @@ y/e/d>
rclone sync --fast-list --checksum /path/to/source s3:bucket
--fast-list
trades off API transactions for memory use. As a rough guide rclone uses 1k of memory per object stored, so using --fast-list
on a sync of a million objects will use roughly 1 GiB of RAM.
If you are only copying a small number of files into a big repository then using --no-traverse
is a good idea. This finds objects directly instead of through directory listings. You can do a "top-up" sync very cheaply by using --max-age
and --no-traverse
to copy only recent files, eg
-rclone copy --min-age 24h --no-traverse /path/to/source s3:bucket
+rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket
You'd then do a full rclone sync
less often.
Note that --fast-list
isn't required in the top-up sync.
Avoiding HEAD requests after PUT
-By default rclone will HEAD every object it uploads. It does this to check the object got uploaded correctly.
+By default, rclone will HEAD every object it uploads. It does this to check the object got uploaded correctly.
You can disable this with the --s3-no-head option - see there for more details.
Setting this flag increases the chance for undetected upload failures.
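For example, a copy that skips the post-upload HEAD check might look like this (the paths and bucket name are illustrative):
rclone copy --s3-no-head /path/to/source s3:bucket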
Hashes
@@ -9019,7 +9956,7 @@ y/e/d>
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "arn:aws:s3:::*"
- }
+ }
]
}
Notes on above:
@@ -9035,15 +9972,22 @@ y/e/d>
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
In this case you need to restore the object(s) in question before using rclone.
Note that rclone only speaks the S3 API; it does not speak the Glacier Vault API, so rclone cannot directly access Glacier Vaults.
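The restore can be requested through rclone itself using the s3 backend restore command described later in this chapter, for example (the priority and lifetime values here are only illustrative):
rclone backend restore s3:bucket/path/to/dir -o priority=Standard -o lifetime=1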
+Object-lock enabled S3 bucket
+According to AWS's documentation on S3 Object Lock:
+
+If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.
+
+As mentioned in the Hashes section, small files that are not uploaded as multipart use a different tag, causing the upload to fail. A simple solution is to set --s3-upload-cutoff 0
and force all the files to be uploaded as multipart.
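+For example (the paths and bucket name are illustrative):
+rclone copy --s3-upload-cutoff 0 /path/to/files s3:bucket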
Standard options
-Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).
+Here are the standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS).
--s3-provider
Choose your S3 provider.
+Properties:
- Config: provider
- Env Var: RCLONE_S3_PROVIDER
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "AWS"
@@ -9070,6 +10014,10 @@ y/e/d>
- IBM COS S3
+- "LyveCloud"
+
+- Seagate Lyve Cloud
+
- "Minio"
- Minio Object Storage
@@ -9078,6 +10026,10 @@ y/e/d>
- Netease Object Storage (NOS)
+- "RackCorp"
+
+- RackCorp Object Storage
+
- "Scaleway"
- Scaleway Object Storage
@@ -9090,6 +10042,10 @@ y/e/d>
- StackPath Object Storage
+- "Storj"
+
+- Storj (S3 Compatible Gateway)
+
- "TencentCOS"
- Tencent Cloud Object Storage (COS)
@@ -9107,6 +10063,7 @@ y/e/d>
--s3-env-auth
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
+Properties:
- Config: env_auth
- Env Var: RCLONE_S3_ENV_AUTH
@@ -9127,28 +10084,32 @@ y/e/d>
--s3-access-key-id
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
+Properties:
- Config: access_key_id
- Env Var: RCLONE_S3_ACCESS_KEY_ID
- Type: string
-- Default: ""
+- Required: false
--s3-secret-access-key
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
+Properties:
- Config: secret_access_key
- Env Var: RCLONE_S3_SECRET_ACCESS_KEY
- Type: string
-- Default: ""
+- Required: false
--s3-region
Region to connect to.
+Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
+- Provider: AWS
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "us-east-1"
@@ -9280,12 +10241,103 @@ y/e/d>
--s3-region
-Region to connect to.
+region - the location where your bucket will be created and your data stored.
+Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
+- Provider: RackCorp
- Type: string
-- Default: ""
+- Required: false
+- Examples:
+
+- "global"
+
+- Global CDN (All locations) Region
+
+- "au"
+
+- Australia (All states)
+
+- "au-nsw"
+
+- NSW (Australia) Region
+
+- "au-qld"
+
+- QLD (Australia) Region
+
+- "au-vic"
+
+- VIC (Australia) Region
+
+- "au-wa"
+
+- Perth (Australia) Region
+
+- "ph"
+
+- Manila (Philippines) Region
+
+- "th"
+
+- Bangkok (Thailand) Region
+
+- "hk"
+
+- HK (Hong Kong) Region
+
+- "mn"
+
+- Ulaanbaatar (Mongolia) Region
+
+- "kg"
+
+- Bishkek (Kyrgyzstan) Region
+
+- "id"
+
+- Jakarta (Indonesia) Region
+
+- "jp"
+
+- Tokyo (Japan) Region
+
+- "sg"
+
+- SG (Singapore) Region
+
+- "de"
+
+- Frankfurt (Germany) Region
+
+- "us"
+
+- USA (AnyCast) Region
+
+- "us-east-1"
+
+- New York (USA) Region
+
+- "us-west-1"
+
+- Freemont (USA) Region
+
+- "nz"
+
+- Auckland (New Zealand) Region
+
+
+
+--s3-region
+Region to connect to.
+Properties:
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Provider: Scaleway
+- Type: string
+- Required: false
- Examples:
- "nl-ams"
@@ -9298,14 +10350,16 @@ y/e/d>
---s3-region
+--s3-region
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
+Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
+- Provider: !AWS,Alibaba,RackCorp,Scaleway,Storj,TencentCOS
- Type: string
-- Default: ""
+- Required: false
- Examples:
- ""
@@ -9323,20 +10377,24 @@ y/e/d>
--s3-endpoint
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
+Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
+- Provider: AWS
- Type: string
-- Default: ""
+- Required: false
--s3-endpoint
Endpoint for IBM COS S3 API.
Specify if using an IBM COS On Premise.
+Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
+- Provider: IBMCOS
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "s3.us.cloud-object-storage.appdomain.cloud"
@@ -9591,11 +10649,13 @@ y/e/d>
--s3-endpoint
Endpoint for OSS API.
+Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
+- Provider: Alibaba
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "oss-accelerate.aliyuncs.com"
@@ -9702,11 +10762,13 @@ y/e/d>
--s3-endpoint
Endpoint for Scaleway Object Storage.
+Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
+- Provider: Scaleway
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "s3.nl-ams.scw.cloud"
@@ -9721,11 +10783,13 @@ y/e/d>
--s3-endpoint
Endpoint for StackPath Object Storage.
+Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
+- Provider: StackPath
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "s3.us-east-2.stackpathstorage.com"
@@ -9743,12 +10807,39 @@ y/e/d>
--s3-endpoint
-Endpoint for Tencent COS API.
+Endpoint of the Shared Gateway.
+Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
+- Provider: Storj
- Type: string
-- Default: ""
+- Required: false
+- Examples:
+
+- "gateway.eu1.storjshare.io"
+
+- EU1 Shared Gateway
+
+- "gateway.us1.storjshare.io"
+
+- US1 Shared Gateway
+
+- "gateway.ap1.storjshare.io"
+
+- Asia-Pacific Shared Gateway
+
+
+
+--s3-endpoint
+Endpoint for Tencent COS API.
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Provider: TencentCOS
+- Type: string
+- Required: false
- Examples:
- "cos.ap-beijing.myqcloud.com"
@@ -9829,14 +10920,105 @@ y/e/d>
---s3-endpoint
-Endpoint for S3 API.
-Required when using an S3 clone.
+--s3-endpoint
+Endpoint for RackCorp Object Storage.
+Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
+- Provider: RackCorp
- Type: string
-- Default: ""
+- Required: false
+- Examples:
+
+- "s3.rackcorp.com"
+
+- Global (AnyCast) Endpoint
+
+- "au.s3.rackcorp.com"
+
+- Australia (Anycast) Endpoint
+
+- "au-nsw.s3.rackcorp.com"
+
+- Sydney (Australia) Endpoint
+
+- "au-qld.s3.rackcorp.com"
+
+- Brisbane (Australia) Endpoint
+
+- "au-vic.s3.rackcorp.com"
+
+- Melbourne (Australia) Endpoint
+
+- "au-wa.s3.rackcorp.com"
+
+- Perth (Australia) Endpoint
+
+- "ph.s3.rackcorp.com"
+
+- Manila (Philippines) Endpoint
+
+- "th.s3.rackcorp.com"
+
+- Bangkok (Thailand) Endpoint
+
+- "hk.s3.rackcorp.com"
+
+- HK (Hong Kong) Endpoint
+
+- "mn.s3.rackcorp.com"
+
+- Ulaanbaatar (Mongolia) Endpoint
+
+- "kg.s3.rackcorp.com"
+
+- Bishkek (Kyrgyzstan) Endpoint
+
+- "id.s3.rackcorp.com"
+
+- Jakarta (Indonesia) Endpoint
+
+- "jp.s3.rackcorp.com"
+
+- Tokyo (Japan) Endpoint
+
+- "sg.s3.rackcorp.com"
+
+- SG (Singapore) Endpoint
+
+- "de.s3.rackcorp.com"
+
+- Frankfurt (Germany) Endpoint
+
+- "us.s3.rackcorp.com"
+
+- USA (AnyCast) Endpoint
+
+- "us-east-1.s3.rackcorp.com"
+
+- New York (USA) Endpoint
+
+- "us-west-1.s3.rackcorp.com"
+
+- Freemont (USA) Endpoint
+
+- "nz.s3.rackcorp.com"
+
+- Auckland (New Zealand) Endpoint
+
+
+
+--s3-endpoint
+Endpoint for S3 API.
+Required when using an S3 clone.
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Provider: !AWS,IBMCOS,TencentCOS,Alibaba,Scaleway,StackPath,Storj,RackCorp
+- Type: string
+- Required: false
- Examples:
- "objects-us-east-1.dream.io"
@@ -9859,6 +11041,18 @@ y/e/d>
- SeaweedFS S3 localhost
+- "s3.us-east-1.lyvecloud.seagate.com"
+
+- Seagate Lyve Cloud US East 1 (Virginia)
+
+- "s3.us-west-1.lyvecloud.seagate.com"
+
+- Seagate Lyve Cloud US West 1 (California)
+
+- "s3.ap-southeast-1.lyvecloud.seagate.com"
+
+- Seagate Lyve Cloud AP Southeast 1 (Singapore)
+
- "s3.wasabisys.com"
- Wasabi US East endpoint
@@ -9873,18 +11067,24 @@ y/e/d>
- "s3.ap-northeast-1.wasabisys.com"
-- Wasabi AP Northeast endpoint
+- Wasabi AP Northeast 1 (Tokyo) endpoint
+
+- "s3.ap-northeast-2.wasabisys.com"
+
+- Wasabi AP Northeast 2 (Osaka) endpoint
--s3-location-constraint
Location constraint - must be set to match the Region.
Used when creating buckets only.
+Properties:
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Provider: AWS
- Type: string
-- Default: ""
+- Required: false
- Examples:
- ""
@@ -9992,11 +11192,13 @@ y/e/d>
--s3-location-constraint
Location constraint - must match endpoint when using IBM Cloud Public.
For on-prem COS, do not make a selection from this list, hit enter.
+Properties:
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Provider: IBMCOS
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "us-standard"
@@ -10130,24 +11332,117 @@ y/e/d>
--s3-location-constraint
-Location constraint - must be set to match the Region.
-Leave blank if not sure. Used when creating buckets only.
+Location constraint - the location where your bucket will be located and your data stored.
+Properties:
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Provider: RackCorp
- Type: string
-- Default: ""
+- Required: false
+- Examples:
+
+- "global"
+
+- Global CDN Region
+
+- "au"
+
+- Australia (All locations)
+
+- "au-nsw"
+
+- NSW (Australia) Region
+
+- "au-qld"
+
+- QLD (Australia) Region
+
+- "au-vic"
+
+- VIC (Australia) Region
+
+- "au-wa"
+
+- Perth (Australia) Region
+
+- "ph"
+
+- Manila (Philippines) Region
+
+- "th"
+
+- Bangkok (Thailand) Region
+
+- "hk"
+
+- HK (Hong Kong) Region
+
+- "mn"
+
+- Ulaanbaatar (Mongolia) Region
+
+- "kg"
+
+- Bishkek (Kyrgyzstan) Region
+
+- "id"
+
+- Jakarta (Indonesia) Region
+
+- "jp"
+
+- Tokyo (Japan) Region
+
+- "sg"
+
+- SG (Singapore) Region
+
+- "de"
+
+- Frankfurt (Germany) Region
+
+- "us"
+
+- USA (AnyCast) Region
+
+- "us-east-1"
+
+- New York (USA) Region
+
+- "us-west-1"
+
+- Freemont (USA) Region
+
+- "nz"
+
+- Auckland (New Zealand) Region
+
+
+
+--s3-location-constraint
+Location constraint - must be set to match the Region.
+Leave blank if not sure. Used when creating buckets only.
+Properties:
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Provider: !AWS,IBMCOS,Alibaba,RackCorp,Scaleway,StackPath,Storj,TencentCOS
+- Type: string
+- Required: false
--s3-acl
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one.
+Properties:
- Config: acl
- Env Var: RCLONE_S3_ACL
+- Provider: !Storj
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "default"
@@ -10216,11 +11511,13 @@ y/e/d>
--s3-server-side-encryption
The server-side encryption algorithm used when storing this object in S3.
+Properties:
- Config: server_side_encryption
- Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
+- Provider: AWS,Ceph,Minio
- Type: string
-- Default: ""
+- Required: false
- Examples:
- ""
@@ -10239,11 +11536,13 @@ y/e/d>
--s3-sse-kms-key-id
If using KMS ID you must provide the ARN of Key.
+Properties:
- Config: sse_kms_key_id
- Env Var: RCLONE_S3_SSE_KMS_KEY_ID
+- Provider: AWS,Ceph,Minio
- Type: string
-- Default: ""
+- Required: false
- Examples:
- ""
@@ -10258,11 +11557,13 @@ y/e/d>
--s3-storage-class
The storage class to use when storing new objects in S3.
+Properties:
- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
+- Provider: AWS
- Type: string
-- Default: ""
+- Required: false
- Examples:
- ""
@@ -10297,15 +11598,21 @@ y/e/d>
- Intelligent-Tiering storage class
+- "GLACIER_IR"
+
+- Glacier Instant Retrieval storage class
+
--s3-storage-class
The storage class to use when storing new objects in OSS.
+Properties:
- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
+- Provider: Alibaba
- Type: string
-- Default: ""
+- Required: false
- Examples:
- ""
@@ -10328,11 +11635,13 @@ y/e/d>
--s3-storage-class
The storage class to use when storing new objects in Tencent COS.
+Properties:
- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
+- Provider: TencentCOS
- Type: string
-- Default: ""
+- Required: false
- Examples:
- ""
@@ -10355,11 +11664,13 @@ y/e/d>
--s3-storage-class
The storage class to use when storing new objects in S3.
+Properties:
- Config: storage_class
- Env Var: RCLONE_S3_STORAGE_CLASS
+- Provider: Scaleway
- Type: string
-- Default: ""
+- Required: false
- Examples:
- ""
@@ -10379,16 +11690,17 @@ y/e/d>
Advanced options
-Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS).
+Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS).
--s3-bucket-acl
Canned ACL used when creating buckets.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied only when creating buckets. If it isn't set then "acl" is used instead.
+Properties:
- Config: bucket_acl
- Env Var: RCLONE_S3_BUCKET_ACL
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "private"
@@ -10416,19 +11728,23 @@ y/e/d>
--s3-requester-pays
Enables requester pays option when interacting with S3 bucket.
+Properties:
- Config: requester_pays
- Env Var: RCLONE_S3_REQUESTER_PAYS
+- Provider: AWS
- Type: bool
- Default: false
--s3-sse-customer-algorithm
If using SSE-C, the server-side encryption algorithm used when storing this object in S3.
+Properties:
- Config: sse_customer_algorithm
- Env Var: RCLONE_S3_SSE_CUSTOMER_ALGORITHM
+- Provider: AWS,Ceph,Minio
- Type: string
-- Default: ""
+- Required: false
- Examples:
- ""
@@ -10443,11 +11759,13 @@ y/e/d>
--s3-sse-customer-key
If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.
+Properties:
- Config: sse_customer_key
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY
+- Provider: AWS,Ceph,Minio
- Type: string
-- Default: ""
+- Required: false
- Examples:
- ""
@@ -10459,11 +11777,13 @@ y/e/d>
--s3-sse-customer-key-md5
If using SSE-C you may provide the secret encryption key MD5 checksum (optional).
If you leave it blank, this is calculated automatically from the sse_customer_key provided.
+Properties:
- Config: sse_customer_key_md5
- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5
+- Provider: AWS,Ceph,Minio
- Type: string
-- Default: ""
+- Required: false
- Examples:
- ""
@@ -10475,6 +11795,7 @@ y/e/d>
--s3-upload-cutoff
Cutoff for switching to chunked upload.
Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB.
+Properties:
- Config: upload_cutoff
- Env Var: RCLONE_S3_UPLOAD_CUTOFF
@@ -10488,6 +11809,7 @@ y/e/d>
If you are transferring large files over high-speed links and you have enough memory, then increasing this will speed up the transfers.
Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit.
Files of unknown size are uploaded with the configured chunk_size. Since the default chunk size is 5 MiB and there can be at most 10,000 chunks, this means that by default the maximum size of a file you can stream upload is 48 GiB. If you wish to stream upload larger files then you will need to increase chunk_size.
+Properties:
- Config: chunk_size
- Env Var: RCLONE_S3_CHUNK_SIZE
@@ -10499,6 +11821,7 @@ y/e/d>
This option defines the maximum number of multipart chunks to use when doing a multipart upload.
This can be useful if a service does not support the AWS S3 specification of 10,000 chunks.
Rclone will automatically increase the chunk size when uploading a large file of a known size to stay below this number of chunks limit.
+Properties:
- Config: max_upload_parts
- Env Var: RCLONE_S3_MAX_UPLOAD_PARTS
@@ -10509,6 +11832,7 @@ y/e/d>
Cutoff for switching to multipart copy.
Any files larger than this that need to be server-side copied will be copied in chunks of this size.
The minimum is 0 and the maximum is 5 GiB.
+Properties:
- Config: copy_cutoff
- Env Var: RCLONE_S3_COPY_CUTOFF
@@ -10518,6 +11842,7 @@ y/e/d>
--s3-disable-checksum
Don't store MD5 checksum with object metadata.
Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.
+Properties:
- Config: disable_checksum
- Env Var: RCLONE_S3_DISABLE_CHECKSUM
@@ -10530,34 +11855,38 @@ y/e/d>
If this variable is empty rclone will look for the "AWS_SHARED_CREDENTIALS_FILE" env variable. If the env value is empty it will default to the current user's home directory.
Linux/OSX: "$HOME/.aws/credentials"
Windows: "%USERPROFILE%\.aws\credentials"
+Properties:
- Config: shared_credentials_file
- Env Var: RCLONE_S3_SHARED_CREDENTIALS_FILE
- Type: string
-- Default: ""
+- Required: false
--s3-profile
Profile to use in the shared credentials file.
If env_auth = true then rclone can use a shared credentials file. This variable controls which profile is used in that file.
If empty it will default to the environment variable "AWS_PROFILE" or "default" if that environment variable is also not set.
+Properties:
- Config: profile
- Env Var: RCLONE_S3_PROFILE
- Type: string
-- Default: ""
+- Required: false
--s3-session-token
An AWS session token.
+Properties:
- Config: session_token
- Env Var: RCLONE_S3_SESSION_TOKEN
- Type: string
-- Default: ""
+- Required: false
--s3-upload-concurrency
Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded concurrently.
If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
+Properties:
- Config: upload_concurrency
- Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
@@ -10568,6 +11897,7 @@ Windows: "%USERPROFILE%\.aws\credentials"
If true use path style access if false use virtual hosted style.
If this is true (the default) then rclone will use path style access, if false then rclone will use virtual path style. See the AWS S3 docs for more info.
Some providers (e.g. AWS, Aliyun OSS, Netease COS, or Tencent COS) require this set to false - rclone will do this automatically based on the provider setting.
+Properties:
- Config: force_path_style
- Env Var: RCLONE_S3_FORCE_PATH_STYLE
@@ -10578,6 +11908,7 @@ Windows: "%USERPROFILE%\.aws\credentials"
If true use v2 authentication.
If this is false (the default) then rclone will use v4 authentication. If it is set then rclone will use v2 authentication.
Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
+Properties:
- Config: v2_auth
- Env Var: RCLONE_S3_V2_AUTH
@@ -10587,9 +11918,11 @@ Windows: "%USERPROFILE%\.aws\credentials"
--s3-use-accelerate-endpoint
If true use the AWS S3 accelerated endpoint.
See: AWS S3 Transfer acceleration
+Properties:
- Config: use_accelerate_endpoint
- Env Var: RCLONE_S3_USE_ACCELERATE_ENDPOINT
+- Provider: AWS
- Type: bool
- Default: false
@@ -10597,25 +11930,51 @@ Windows: "%USERPROFILE%\.aws\credentials"
If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.
It should be set to true for resuming uploads across different sessions.
WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up.
+Properties:
- Config: leave_parts_on_error
- Env Var: RCLONE_S3_LEAVE_PARTS_ON_ERROR
+- Provider: AWS
- Type: bool
- Default: false
--s3-list-chunk
Size of listing chunk (response list for each ListObject S3 request).
This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification. Most services truncate the response list to 1000 objects even if requested more than that. In AWS S3 this is a global maximum and cannot be changed, see AWS S3. In Ceph, this can be increased with the "rgw list buckets max chunk" option.
+Properties:
- Config: list_chunk
- Env Var: RCLONE_S3_LIST_CHUNK
- Type: int
- Default: 1000
+--s3-list-version
+Version of ListObjects to use: 1, 2 or 0 for auto.
+When S3 originally launched it only provided the ListObjects call to enumerate objects in a bucket.
+However in May 2016 the ListObjectsV2 call was introduced. This has much higher performance and should be used if at all possible.
+If set to the default, 0, rclone will guess which ListObjects method to call according to the provider set. If it guesses wrong, then it may be set manually here.
+Properties:
+
+- Config: list_version
+- Env Var: RCLONE_S3_LIST_VERSION
+- Type: int
+- Default: 0
+
+--s3-list-url-encode
+Whether to URL encode listings: true/false/unset.
+Some providers support URL encoding listings and where this is available it is more reliable when using control characters in file names. If this is set to unset (the default) then rclone will choose what to apply according to the provider setting, but you can override rclone's choice here.
+Properties:
+
+- Config: list_url_encode
+- Env Var: RCLONE_S3_LIST_URL_ENCODE
+- Type: Tristate
+- Default: unset
+
--s3-no-check-bucket
If set, don't attempt to check the bucket exists or create it.
This can be useful when trying to minimise the number of transactions rclone does if you know the bucket exists already.
It can also be needed if the user you are using does not have bucket creation permissions. Before v1.52.0 this would have passed silently due to a bug.
+Properties:
- Config: no_check_bucket
- Env Var: RCLONE_S3_NO_CHECK_BUCKET
@@ -10639,6 +11998,7 @@ Windows: "%USERPROFILE%\.aws\credentials"
For multipart uploads these items aren't read.
If a source object of unknown length is uploaded then rclone will do a HEAD request.
Setting this flag increases the chance for undetected upload failures, in particular an incorrect size, so it isn't recommended for normal operation. In practice the chance of an undetected upload failure is very small even with this flag.
+Properties:
- Config: no_head
- Env Var: RCLONE_S3_NO_HEAD
@@ -10647,6 +12007,7 @@ Windows: "%USERPROFILE%\.aws\credentials"
--s3-no-head-object
If set, do not do HEAD before GET when getting objects.
+Properties:
- Config: no_head_object
- Env Var: RCLONE_S3_NO_HEAD_OBJECT
@@ -10654,8 +12015,9 @@ Windows: "%USERPROFILE%\.aws\credentials"
- Default: false
--s3-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_S3_ENCODING
@@ -10665,6 +12027,7 @@ Windows: "%USERPROFILE%\.aws\credentials"
--s3-memory-pool-flush-time
How often internal memory buffer pools will be flushed.
Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool.
+Properties:
- Config: memory_pool_flush_time
- Env Var: RCLONE_S3_MEMORY_POOL_FLUSH_TIME
@@ -10673,6 +12036,7 @@ Windows: "%USERPROFILE%\.aws\credentials"
--s3-memory-pool-use-mmap
Whether to use mmap buffers in internal memory pool.
+Properties:
- Config: memory_pool_use_mmap
- Env Var: RCLONE_S3_MEMORY_POOL_USE_MMAP
@@ -10683,6 +12047,7 @@ Windows: "%USERPROFILE%\.aws\credentials"
Disable usage of http2 for S3 backends.
There is currently an unsolved issue with the s3 (specifically minio) backend and HTTP/2. HTTP/2 is enabled by default for the s3 backend but can be disabled here. When the issue is solved this flag will be removed.
See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rclone/issues/3631
+Properties:
- Config: disable_http2
- Env Var: RCLONE_S3_DISABLE_HTTP2
@@ -10691,11 +12056,22 @@ Windows: "%USERPROFILE%\.aws\credentials"
--s3-download-url
Custom endpoint for downloads. This is usually set to a CloudFront CDN URL as AWS S3 offers cheaper egress for data downloaded through the CloudFront network.
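As an illustration (the CloudFront distribution domain below is made up), the config might contain something like:
download_url = https://d111111abcdef8.cloudfront.net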
+Properties:
- Config: download_url
- Env Var: RCLONE_S3_DOWNLOAD_URL
- Type: string
-- Default: ""
+- Required: false
+
+--s3-use-multipart-etag
+Whether to use ETag in multipart uploads for verification
+This should be true, false or left unset to use the default for the provider.
+Properties:
+
+- Config: use_multipart_etag
+- Env Var: RCLONE_S3_USE_MULTIPART_ETAG
+- Type: Tristate
+- Default: unset
Backend commands
Here are the commands specific to the s3 backend.
@@ -10703,7 +12079,7 @@ Windows: "%USERPROFILE%\.aws\credentials"
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
See the "rclone backend" command for more info on how to pass options and arguments.
-These can be run on a running backend using the rc command backend/command.
+These can be run on a running backend using the rc command backend/command.
restore
Restore objects from GLACIER to normal storage
rclone backend restore remote: [options] [<arguments>+]
@@ -10778,14 +12154,14 @@ rclone backend cleanup -o max-age=7w s3:bucket/path/to/object
type = s3
provider = AWS
env_auth = false
-access_key_id =
-secret_access_key =
+access_key_id =
+secret_access_key =
region = us-east-1
-endpoint =
-location_constraint =
+endpoint =
+location_constraint =
acl = private
-server_side_encryption =
-storage_class =
+server_side_encryption =
+storage_class =
Then use it as normal with the name of the public bucket, e.g.
rclone lsd anons3:1000genomes
You will be able to list and copy data but not upload it.
@@ -10804,7 +12180,7 @@ secret_access_key = YOUR_SECRET_KEY
endpoint = http://[IP of Snowball]:8080
upload_cutoff = 0
Ceph
-Ceph is an open source unified, distributed storage system designed for excellent performance, reliability and scalability. It has an S3 compatible object storage interface.
+Ceph is an open-source, unified, distributed storage system designed for excellent performance, reliability and scalability. It has an S3 compatible object storage interface.
To use rclone with Ceph, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:
[ceph]
type = s3
@@ -10885,7 +12261,7 @@ rclone copy /path/to/files spaces:my-new-space
- Run rclone config and select n for a new remote.
2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
- No remotes found - make a new one
+ No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -11079,8 +12455,21 @@ region = us-east-1
endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =
-So once set up, for example to copy files into a bucket
+So once set up, for example, to copy files into a bucket
rclone copy /path/to/files minio:bucket
+RackCorp
+RackCorp Object Storage is an S3 compatible object storage platform from your friendly cloud provider RackCorp. The service is fast, reliable, well priced and located in many strategic locations unserviced by others, to ensure you can maintain data sovereignty.
+Before you can use RackCorp Object Storage, you'll need to "sign up" for an account on our "portal". Next you can create an access key
, a secret key
and buckets
, in your location of choice with ease. These details are required for the next steps of configuration, when rclone config
asks for your access_key_id
and secret_access_key
.
+Your config should end up looking a bit like this:
+[RCS3-demo-config]
+type = s3
+provider = RackCorp
+env_auth = true
+access_key_id = YOURACCESSKEY
+secret_access_key = YOURSECRETACCESSKEY
+region = au-nsw
+endpoint = s3.rackcorp.com
+location_constraint = au-nsw
Scaleway
Scaleway The Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos. Files can be dropped from the Scaleway console or transferred through our API and CLI or using any S3-compatible tool.
Scaleway provides an S3 interface which can be configured for use with rclone like this:
@@ -11096,8 +12485,102 @@ location_constraint =
acl = private
server_side_encryption =
storage_class =
+Seagate Lyve Cloud
+Seagate Lyve Cloud is an S3 compatible object storage platform from Seagate intended for enterprise use.
+Here is a config run through for a remote called remote
- you may choose a different name of course. Note that to create an access key and secret key you will need to create a service account first.
+$ rclone config
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Choose s3
backend
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
+ \ (s3)
+[snip]
+Storage> s3
+Choose LyveCloud
as S3 provider
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / Seagate Lyve Cloud
+ \ (LyveCloud)
+[snip]
+provider> LyveCloud
+Take the default (just press enter) to enter access key and secret in the config file.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth>
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> XXX
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> YYY
+Leave region blank
+Region to connect to.
+Leave blank if you are using an S3 clone and you don't have a region.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ / Use this if unsure.
+ 1 | Will use v4 signatures and an empty region.
+ \ ()
+ / Use this only if v4 signatures don't work.
+ 2 | E.g. pre Jewel/v10 CEPH.
+ \ (other-v2-signature)
+region>
+Choose an endpoint from the list
+Endpoint for S3 API.
+Required when using an S3 clone.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Seagate Lyve Cloud US East 1 (Virginia)
+ \ (s3.us-east-1.lyvecloud.seagate.com)
+ 2 / Seagate Lyve Cloud US West 1 (California)
+ \ (s3.us-west-1.lyvecloud.seagate.com)
+ 3 / Seagate Lyve Cloud AP Southeast 1 (Singapore)
+ \ (s3.ap-southeast-1.lyvecloud.seagate.com)
+endpoint> 1
+Leave location constraint blank
+Location constraint - must be set to match the Region.
+Leave blank if not sure. Used when creating buckets only.
+Enter a value. Press Enter to leave empty.
+location_constraint>
+Choose default ACL (private
).
+Canned ACL used when creating buckets and storing or copying objects.
+This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Note that this ACL is applied when server-side copying objects as S3
+doesn't copy the ACL from the source but rather writes a fresh one.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+[snip]
+acl>
+And the config file should end up looking like this:
+[remote]
+type = s3
+provider = LyveCloud
+access_key_id = XXX
+secret_access_key = YYY
+endpoint = s3.us-east-1.lyvecloud.seagate.com
SeaweedFS
-SeaweedFS is a distributed storage system for blobs, objects, files, and data lake, with O(1) disk seek and a scalable file metadata store. It has an S3 compatible object storage interface.
+SeaweedFS is a distributed storage system for blobs, objects, files, and data lake, with O(1) disk seek and a scalable file metadata store. It has an S3 compatible object storage interface. SeaweedFS can also act as a gateway to a remote S3 compatible object store, caching data and metadata with asynchronous write back for fast local speed and minimized access cost.
Assuming the SeaweedFS are configured with weed shell
as such:
> s3.bucket.create -name foo
> s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply
@@ -11133,7 +12616,7 @@ endpoint = localhost:8333
Wasabi
Wasabi is a cloud-based object storage service for a broad range of applications and use cases. Wasabi is designed for individuals and organizations that require a high-performance, reliable, and secure data storage infrastructure at minimal cost.
Wasabi provides an S3 interface which can be configured for use with rclone like this.
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
@@ -11233,7 +12716,7 @@ storage_class =
Here is an example of making an Alibaba Cloud (Aliyun) OSS configuration. First run:
rclone config
This will guide you through an interactive setup process.
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -11338,7 +12821,7 @@ y/e/d> y
- Run
rclone config
and select n
for a new remote.
rclone config
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -11449,7 +12932,74 @@ Name Type
cos s3
Netease NOS
For Netease NOS configure as per the configurator rclone config
setting the provider Netease
. This will automatically set force_path_style = false
which is necessary for it to run properly.
-Limitations
+Storj
+Storj is a decentralized cloud storage which can be used through its native protocol or an S3 compatible gateway.
+The S3 compatible gateway is configured using rclone config
with a type of s3
and with a provider name of Storj
. Here is an example run of the configurator.
+Type of storage to configure.
+Storage> s3
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth> 1
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> XXXX (as shown when creating the access grant)
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> XXXX (as shown when creating the access grant)
+Option endpoint.
+Endpoint of the Shared Gateway.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / EU1 Shared Gateway
+ \ (gateway.eu1.storjshare.io)
+ 2 / US1 Shared Gateway
+ \ (gateway.us1.storjshare.io)
+ 3 / Asia-Pacific Shared Gateway
+ \ (gateway.ap1.storjshare.io)
+endpoint> 1 (as shown when creating the access grant)
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+Note that s3 credentials are generated when you create an access grant.
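+The resulting config might end up looking something like this (the remote name, keys and endpoint choice are placeholders):
+[storjs3]
+type = s3
+provider = Storj
+access_key_id = XXXX
+secret_access_key = XXXX
+endpoint = gateway.eu1.storjshare.io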
+Backend quirks
+
+--chunk-size
is forced to be 64 MiB or greater. This will use more memory than the default of 5 MiB.
+- Server side copy is disabled as it isn't currently supported in the gateway.
+- GetTier and SetTier are not supported.
+
+Backend bugs
+Due to issue #39 uploading multipart files via the S3 gateway causes them to lose their metadata. For rclone's purpose this means that the modification time is not stored, nor is any MD5SUM (if one is available from the source).
+This has the following consequences:
+
+- Using
rclone rcat
will fail as the metadata doesn't match after upload
+- Uploading files with
rclone mount
will fail for the same reason
+
+- This can be worked around by using
--vfs-cache-mode writes
or --vfs-cache-mode full
or setting --s3-upload-cutoff
large
+
+- Files uploaded via a multipart upload won't have their modtimes
+
+- This will mean that
rclone sync
will likely keep trying to upload files bigger than --s3-upload-cutoff
+- This can be worked around with
--checksum
or --size-only
or setting --s3-upload-cutoff
large
+- The maximum value for
--s3-upload-cutoff
is 5GiB though
+
+
+One general purpose workaround is to set --s3-upload-cutoff 5G
. This means that rclone will upload files smaller than 5GiB as single parts. Note that this can be set in the config file with upload_cutoff = 5G
or configured in the advanced settings. If you regularly transfer files larger than 5G then using --checksum
or --size-only
in rclone sync
is the recommended workaround.
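+For example, a sync using this workaround might look like (the remote name and paths are illustrative):
+rclone sync --checksum --s3-upload-cutoff 5G /path/to/source remote:bucket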
+Comparison with the native protocol
+Use the native protocol to take advantage of client-side encryption as well as to achieve the best possible download performance. Uploads will be erasure-coded locally, thus a 1 GB upload will result in 2.68 GB of data being uploaded to storage nodes across the network.
+Use this backend and the S3 compatible Hosted Gateway to increase upload performance and reduce the load on your systems and network. Uploads will be encrypted and erasure-coded server-side, thus a 1 GB upload will result in only 1 GB of data being uploaded to storage nodes across the network.
+For a more detailed comparison please check the documentation of the storj backend.
+Limitations
rclone about
is not supported by the S3 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
Backblaze B2
@@ -11459,7 +13009,7 @@ cos s3
Here is an example of making a b2 configuration. First run
rclone config
This will guide you through an interactive setup process. To authenticate you will either need your Account ID (a short hex number) and Master Application Key (a long hex number) OR an Application Key, which is the recommended method. See below for further details on generating and using an Application Key.
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
q) Quit config
n/q> n
@@ -11612,22 +13162,25 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
Here are the standard options specific to b2 (Backblaze B2).
--b2-account
Account ID or Application Key ID.
+Properties:
- Config: account
- Env Var: RCLONE_B2_ACCOUNT
- Type: string
-- Default: ""
+- Required: true
--b2-key
Application Key.
+Properties:
- Config: key
- Env Var: RCLONE_B2_KEY
- Type: string
-- Default: ""
+- Required: true
--b2-hard-delete
Permanently delete files on remote removal, otherwise hide files.
+Properties:
- Config: hard_delete
- Env Var: RCLONE_B2_HARD_DELETE
@@ -11639,11 +13192,12 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
--b2-endpoint
Endpoint for the service.
Leave blank normally.
+Properties:
- Config: endpoint
- Env Var: RCLONE_B2_ENDPOINT
- Type: string
-- Default: ""
+- Required: false
--b2-test-mode
A flag string for X-Bz-Test-Mode header for debugging.
@@ -11654,15 +13208,17 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
- "force_cap_exceeded"
These will be set in the "X-Bz-Test-Mode" header which is documented in the b2 integrations checklist.
+Properties:
- Config: test_mode
- Env Var: RCLONE_B2_TEST_MODE
- Type: string
-- Default: ""
+- Required: false
--b2-versions
Include old versions in directory listings.
Note that when using this no file write operations are permitted, so you can't upload files or delete them.
+Properties:
- Config: versions
- Env Var: RCLONE_B2_VERSIONS
@@ -11673,6 +13229,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
Cutoff for switching to chunked upload.
Files above this size will be uploaded in chunks of "--b2-chunk-size".
This value should be set no larger than 4.657 GiB (== 5 GB).
+Properties:
- Config: upload_cutoff
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
@@ -11683,6 +13240,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
Cutoff for switching to multipart copy.
Any files larger than this that need to be server-side copied will be copied in chunks of this size.
The minimum is 0 and the maximum is 4.6 GiB.
+Properties:
- Config: copy_cutoff
- Env Var: RCLONE_B2_COPY_CUTOFF
@@ -11694,6 +13252,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
When uploading large files, chunk the file into this size.
Must fit in memory. These chunks are buffered in memory and there might be a maximum of "--transfers" chunks in progress at once.
5,000,000 Bytes is the minimum size.
+Properties:
- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
@@ -11703,6 +13262,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
--b2-disable-checksum
Disable checksums for large (> upload cutoff) files.
Normally rclone will calculate the SHA1 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.
+Properties:
- Config: disable_checksum
- Env Var: RCLONE_B2_DISABLE_CHECKSUM
@@ -11712,15 +13272,19 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
--b2-download-url
Custom endpoint for downloads.
This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. Rclone works with private buckets by sending an "Authorization" header. If the custom endpoint rewrites the requests for authentication, e.g., in Cloudflare Workers, this header needs to be handled properly. Leave blank if you want to use the endpoint provided by Backblaze.
+The URL provided here SHOULD have the protocol and SHOULD NOT have a trailing slash or specify the /file/bucket subpath as rclone will request files with "{download_url}/file/{bucket_name}/{path}".
+Example: https://mysubdomain.mydomain.tld (no trailing "/", "file" or "bucket")
+Properties:
- Config: download_url
- Env Var: RCLONE_B2_DOWNLOAD_URL
- Type: string
-- Default: ""
+- Required: false
--b2-download-auth-duration
Time before the authorization token will expire in s or suffix ms|s|m|h|d.
The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week.
+Properties:
- Config: download_auth_duration
- Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION
@@ -11729,6 +13293,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
--b2-memory-pool-flush-time
How often internal memory buffer pools will be flushed. Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool.
+Properties:
- Config: memory_pool_flush_time
- Env Var: RCLONE_B2_MEMORY_POOL_FLUSH_TIME
@@ -11737,6 +13302,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
--b2-memory-pool-use-mmap
Whether to use mmap buffers in internal memory pool.
+Properties:
- Config: memory_pool_use_mmap
- Env Var: RCLONE_B2_MEMORY_POOL_USE_MMAP
@@ -11744,15 +13310,16 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
- Default: false
--b2-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_B2_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
rclone about
is not supported by the B2 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
Box
@@ -11763,7 +13330,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -11956,41 +13523,46 @@ y/e/d> y
--box-client-id
OAuth Client Id.
Leave blank normally.
+Properties:
- Config: client_id
- Env Var: RCLONE_BOX_CLIENT_ID
- Type: string
-- Default: ""
+- Required: false
--box-client-secret
OAuth Client Secret.
Leave blank normally.
+Properties:
- Config: client_secret
- Env Var: RCLONE_BOX_CLIENT_SECRET
- Type: string
-- Default: ""
+- Required: false
--box-box-config-file
Box App config.json location
Leave blank normally.
Leading ~
will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}
.
+Properties:
- Config: box_config_file
- Env Var: RCLONE_BOX_BOX_CONFIG_FILE
- Type: string
-- Default: ""
+- Required: false
--box-access-token
Box App Primary Access Token
Leave blank normally.
+Properties:
- Config: access_token
- Env Var: RCLONE_BOX_ACCESS_TOKEN
- Type: string
-- Default: ""
+- Required: false
--box-box-sub-type
+Properties:
- Config: box_sub_type
- Env Var: RCLONE_BOX_BOX_SUB_TYPE
@@ -12012,32 +13584,36 @@ y/e/d> y
Here are the advanced options specific to box (Box).
--box-token
OAuth Access Token as a JSON blob.
+Properties:
- Config: token
- Env Var: RCLONE_BOX_TOKEN
- Type: string
-- Default: ""
+- Required: false
--box-auth-url
Auth server URL.
Leave blank to use the provider defaults.
+Properties:
- Config: auth_url
- Env Var: RCLONE_BOX_AUTH_URL
- Type: string
-- Default: ""
+- Required: false
--box-token-url
Token server url.
Leave blank to use the provider defaults.
+Properties:
- Config: token_url
- Env Var: RCLONE_BOX_TOKEN_URL
- Type: string
-- Default: ""
+- Required: false
--box-root-folder-id
Fill in for rclone to use a non root folder as its starting point.
+Properties:
- Config: root_folder_id
- Env Var: RCLONE_BOX_ROOT_FOLDER_ID
@@ -12046,6 +13622,7 @@ y/e/d> y
--box-upload-cutoff
Cutoff for switching to multipart upload (>= 50 MiB).
+Properties:
- Config: upload_cutoff
- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
@@ -12054,6 +13631,7 @@ y/e/d> y
--box-commit-retries
Max number of times to try committing a multipart file.
+Properties:
- Config: commit_retries
- Env Var: RCLONE_BOX_COMMIT_RETRIES
@@ -12062,6 +13640,7 @@ y/e/d> y
--box-list-chunk
Size of listing chunk 1-1000.
+Properties:
- Config: list_chunk
- Env Var: RCLONE_BOX_LIST_CHUNK
@@ -12070,22 +13649,24 @@ y/e/d> y
--box-owned-by
Only show items owned by the login (email address) passed in.
+Properties:
- Config: owned_by
- Env Var: RCLONE_BOX_OWNED_BY
- Type: string
-- Default: ""
+- Required: false
--box-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_BOX_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
-Limitations
+Limitations
Note that Box is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Box file names can't have the \
character in. rclone maps this to and from an identical looking unicode equivalent ＼
(U+FF3C Fullwidth Reverse Solidus).
Box only supports filenames up to 255 characters in length.
@@ -12102,7 +13683,7 @@ y/e/d> y
Here is an example of how to make a remote called test-cache
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
r) Rename remote
c) Copy remote
@@ -12256,40 +13837,45 @@ chunk_total_size = 10G
--cache-remote
Remote to cache.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).
+Properties:
- Config: remote
- Env Var: RCLONE_CACHE_REMOTE
- Type: string
-- Default: ""
+- Required: true
--cache-plex-url
The URL of the Plex server.
+Properties:
- Config: plex_url
- Env Var: RCLONE_CACHE_PLEX_URL
- Type: string
-- Default: ""
+- Required: false
--cache-plex-username
The username of the Plex user.
+Properties:
- Config: plex_username
- Env Var: RCLONE_CACHE_PLEX_USERNAME
- Type: string
-- Default: ""
+- Required: false
--cache-plex-password
The password of the Plex user.
NB Input to this must be obscured - see rclone obscure.
+Properties:
- Config: plex_password
- Env Var: RCLONE_CACHE_PLEX_PASSWORD
- Type: string
-- Default: ""
+- Required: false
--cache-chunk-size
The size of a chunk (partial file data).
Use lower numbers for slower connections. If the chunk size is changed, any downloaded chunks will be invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur.
+Properties:
- Config: chunk_size
- Env Var: RCLONE_CACHE_CHUNK_SIZE
@@ -12313,6 +13899,7 @@ chunk_total_size = 10G
--cache-info-age
How long to cache file structure information (directory listings, file size, times, etc.). If all write operations are done through the cache then you can safely make this value very large as the cache store will also be updated in real time.
+Properties:
- Config: info_age
- Env Var: RCLONE_CACHE_INFO_AGE
@@ -12337,6 +13924,7 @@ chunk_total_size = 10G
--cache-chunk-total-size
The total size that the chunks can take up on the local disk.
If the cache exceeds this value then it will start to delete the oldest chunks until it goes under this value.
+Properties:
- Config: chunk_total_size
- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
@@ -12362,23 +13950,26 @@ chunk_total_size = 10G
Here are the advanced options specific to cache (Cache a remote).
--cache-plex-token
The plex token for authentication - auto set normally.
+Properties:
- Config: plex_token
- Env Var: RCLONE_CACHE_PLEX_TOKEN
- Type: string
-- Default: ""
+- Required: false
--cache-plex-insecure
Skip all certificate verification when connecting to the Plex server.
+Properties:
- Config: plex_insecure
- Env Var: RCLONE_CACHE_PLEX_INSECURE
- Type: string
-- Default: ""
+- Required: false
--cache-db-path
Directory to store file structure metadata DB.
The remote name is used as the DB file name.
+Properties:
- Config: db_path
- Env Var: RCLONE_CACHE_DB_PATH
@@ -12389,6 +13980,7 @@ chunk_total_size = 10G
Directory to cache chunk files.
Path to where partial file data (chunks) are stored locally. The remote name is appended to the final path.
This config follows the "--cache-db-path". If you specify a custom location for "--cache-db-path" and don't specify one for "--cache-chunk-path" then "--cache-chunk-path" will use the same path as "--cache-db-path".
+Properties:
- Config: chunk_path
- Env Var: RCLONE_CACHE_CHUNK_PATH
@@ -12397,6 +13989,7 @@ chunk_total_size = 10G
--cache-db-purge
Clear all the cached data for this remote on start.
+Properties:
- Config: db_purge
- Env Var: RCLONE_CACHE_DB_PURGE
@@ -12406,6 +13999,7 @@ chunk_total_size = 10G
--cache-chunk-clean-interval
How often should the cache perform cleanups of the chunk storage.
The default value should be ok for most people. If you find that the cache goes over "cache-chunk-total-size" too often then try to lower this value to force it to perform cleanups more often.
+Properties:
- Config: chunk_clean_interval
- Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
@@ -12416,6 +14010,7 @@ chunk_total_size = 10G
How many times to retry a read from a cache storage.
Since reading from a cache stream is independent of downloading file data, readers can get to a point where there's no more data in the cache. Most of the time this can indicate a connectivity issue if the cache isn't able to provide file data anymore.
For really slow connections, increase this to a point where the stream is able to provide data, but your experience will be very stuttery.
+Properties:
- Config: read_retries
- Env Var: RCLONE_CACHE_READ_RETRIES
@@ -12426,6 +14021,7 @@ chunk_total_size = 10G
How many workers should run in parallel to download chunks.
Higher values will mean more parallel processing (better CPU needed) and more concurrent requests on the cloud provider. This impacts several aspects like the cloud provider API limits and the stress on the hardware that rclone runs on, but it also means that streams will be more fluid and data will be available much faster to readers.
Note: If the optional Plex integration is enabled then this setting will adapt to the type of reading performed and the value specified here will be used as a maximum number of workers to use.
+Properties:
- Config: workers
- Env Var: RCLONE_CACHE_WORKERS
@@ -12437,6 +14033,7 @@ chunk_total_size = 10G
By default, cache will keep file data during streaming in RAM as well to provide it to readers as fast as possible.
This transient data is evicted as soon as it is read and the number of chunks stored doesn't exceed the number of workers. However, depending on other settings like "cache-chunk-size" and "cache-workers" this footprint can increase if there are parallel streams too (multiple files being read at the same time).
If the hardware permits it, use this feature to provide an overall better performance during streaming but it can also be disabled if RAM is not available on the local machine.
+Properties:
- Config: chunk_no_memory
- Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
@@ -12449,6 +14046,7 @@ chunk_total_size = 10G
If you find that you're getting banned or limited on the cloud provider through cache and know that a smaller number of requests per second will allow you to work with it then you can use this setting for that.
A good balance of all the other settings should make this setting useless but it is available to set for more special cases.
NOTE: This will limit the number of requests during streams but other API calls to the cloud provider like directory listings will still pass.
+Properties:
- Config: rps
- Env Var: RCLONE_CACHE_RPS
@@ -12458,6 +14056,7 @@ chunk_total_size = 10G
--cache-writes
Cache file data on writes through the FS.
If you need to read files immediately after you upload them through cache you can enable this flag to have their data stored in the cache store at the same time during upload.
+Properties:
- Config: writes
- Env Var: RCLONE_CACHE_WRITES
@@ -12468,16 +14067,18 @@ chunk_total_size = 10G
Directory to keep temporary files until they are uploaded.
This is the path that cache will use as temporary storage for new files that need to be uploaded to the cloud provider.
Specifying a value will enable this feature. Without it, it is completely disabled and files will be uploaded directly to the cloud provider.
+Properties:
- Config: tmp_upload_path
- Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH
- Type: string
-- Default: ""
+- Required: false
--cache-tmp-wait-time
How long should files be stored in local cache before being uploaded.
This is the duration that a file must wait in the temporary location cache-tmp-upload-path before it is selected for upload.
Note that only one file is uploaded at a time and it can take longer to start the upload if a queue has formed for this purpose.
+Properties:
- Config: tmp_wait_time
- Env Var: RCLONE_CACHE_TMP_WAIT_TIME
@@ -12488,6 +14089,7 @@ chunk_total_size = 10G
How long to wait for the DB to be available - 0 is unlimited.
Only one process can have the DB open at any one time, so rclone waits for this duration for the DB to become available before it gives an error.
If you set it to 0 then it will wait forever.
+Properties:
- Config: db_wait_time
- Env Var: RCLONE_CACHE_DB_WAIT_TIME
@@ -12500,7 +14102,7 @@ chunk_total_size = 10G
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
See the "rclone backend" command for more info on how to pass options and arguments.
-These can be run on a running backend using the rc command backend/command.
+These can be run on a running backend using the rc command backend/command.
stats
Print stats on the cache backend in JSON format.
rclone backend stats remote: [options] [<arguments>+]
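For example, assuming a cache remote named mycache: (illustrative):
rclone backend stats mycache:
The equivalent call against a running rclone instance would be rclone rc backend/command command=stats fs=mycache:.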
@@ -12508,9 +14110,9 @@ chunk_total_size = 10G
The chunker
overlay transparently splits large files into smaller chunks during upload to the wrapped remote and transparently assembles them back when the file is downloaded. This allows you to effectively overcome size limits imposed by storage providers.
Configuration
To use it, first set up the underlying remote following the configuration instructions for that remote. You can also use a local pathname instead of a remote.
-First check your chosen remote is working - we'll call it remote:path
here. Note that anything inside remote:path
will be chunked and anything outside won't. This means that if you are using a bucket based remote (e.g. S3, B2, swift) then you should probably put the bucket in the remote s3:bucket
.
+First check your chosen remote is working - we'll call it remote:path
here. Note that anything inside remote:path
will be chunked and anything outside won't. This means that if you are using a bucket-based remote (e.g. S3, B2, swift) then you should probably put the bucket in the remote s3:bucket
.
Now configure chunker
using rclone config
. We will call this one overlay
to separate it from the remote
itself.
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -12623,14 +14225,16 @@ y/e/d> y
--chunker-remote
Remote to chunk/unchunk.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).
+Properties:
- Config: remote
- Env Var: RCLONE_CHUNKER_REMOTE
- Type: string
-- Default: ""
+- Required: true
--chunker-chunk-size
Files larger than chunk size will be split into chunks.
+Properties:
- Config: chunk_size
- Env Var: RCLONE_CHUNKER_CHUNK_SIZE
@@ -12640,6 +14244,7 @@ y/e/d> y
--chunker-hash-type
Choose how chunker handles hash sums.
All modes but "none" require metadata.
+Properties:
- Config: hash_type
- Env Var: RCLONE_CHUNKER_HASH_TYPE
@@ -12684,6 +14289,7 @@ y/e/d> y
--chunker-name-format
String format of chunk file names.
The two placeholders are: base file name (*) and chunk number (#...). There must be one and only one asterisk and one or more consecutive hash characters. If the chunk number has fewer digits than the number of hashes, it is left-padded by zeros. If there are more digits in the number, they are left as is. Possible chunk files are ignored if their name does not match the given format.
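For example, assuming the default name format *.rclone_chunk.### and start_from set to 1 (file name illustrative), a large file data.bin stored as three chunks would appear on the wrapped remote as:
data.bin.rclone_chunk.001
data.bin.rclone_chunk.002
data.bin.rclone_chunk.003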
+Properties:
- Config: name_format
- Env Var: RCLONE_CHUNKER_NAME_FORMAT
@@ -12693,6 +14299,7 @@ y/e/d> y
--chunker-start-from
Minimum valid chunk number. Usually 0 or 1.
By default chunk numbers start from 1.
+Properties:
- Config: start_from
- Env Var: RCLONE_CHUNKER_START_FROM
@@ -12702,6 +14309,7 @@ y/e/d> y
--chunker-meta-format
Format of the metadata object or "none".
By default "simplejson". Metadata is a small JSON file named after the composite file.
+Properties:
- Config: meta_format
- Env Var: RCLONE_CHUNKER_META_FORMAT
@@ -12724,6 +14332,7 @@ y/e/d> y
--chunker-fail-hard
Choose how chunker should handle files with missing or invalid chunks.
+Properties:
- Config: fail_hard
- Env Var: RCLONE_CHUNKER_FAIL_HARD
@@ -12743,6 +14352,7 @@ y/e/d> y
--chunker-transactions
Choose how chunker should handle temporary files during transactions.
+Properties:
- Config: transactions
- Env Var: RCLONE_CHUNKER_TRANSACTIONS
@@ -12778,7 +14388,7 @@ y/e/d> y
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -12930,11 +14540,12 @@ y/e/d> y
--sharefile-root-folder-id
ID of the root folder.
Leave blank to access "Personal Folders". You can use one of the standard values here or any folder ID (long hex number ID).
+Properties:
- Config: root_folder_id
- Env Var: RCLONE_SHAREFILE_ROOT_FOLDER_ID
- Type: string
-- Default: ""
+- Required: false
- Examples:
- ""
@@ -12963,6 +14574,7 @@ y/e/d> y
Here are the advanced options specific to sharefile (Citrix Sharefile).
--sharefile-upload-cutoff
Cutoff for switching to multipart upload.
+Properties:
- Config: upload_cutoff
- Env Var: RCLONE_SHAREFILE_UPLOAD_CUTOFF
@@ -12974,6 +14586,7 @@ y/e/d> y
Must be a power of 2 >= 256k.
Making this larger will improve performance, but note that each chunk is buffered in memory (one per transfer).
Reducing this will reduce memory usage but decrease performance.
+Properties:
- Config: chunk_size
- Env Var: RCLONE_SHAREFILE_CHUNK_SIZE
@@ -12983,22 +14596,24 @@ y/e/d> y
--sharefile-endpoint
Endpoint for API calls.
This is usually auto discovered as part of the oauth process, but can be set manually to something like: https://XXX.sharefile.com
+Properties:
- Config: endpoint
- Env Var: RCLONE_SHAREFILE_ENDPOINT
- Type: string
-- Default: ""
+- Required: false
--sharefile-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_SHAREFILE_ENCODING
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot
-Limitations
+Limitations
Note that ShareFile is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
ShareFile only supports filenames up to 256 characters in length.
rclone about
is not supported by the Citrix ShareFile backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
@@ -13016,7 +14631,7 @@ y/e/d> y
Before configuring the crypt remote, check the underlying remote is working. In this example the underlying remote is called remote
. We will configure a path path
within this remote to contain the encrypted content. Anything inside remote:path
will be encrypted and anything outside will not.
Configure crypt
using rclone config
. In this example the crypt
remote is called secret
, to differentiate it from the underlying remote
.
When you are done you can use the crypt remote named secret
just as you would with any other remote, e.g. rclone copy D:\docs secret:\docs
, and rclone will encrypt and decrypt as needed on the fly. If you access the wrapped remote remote:path
directly you will bypass the encryption, and anything you read will be in encrypted form, and anything you write will be unencrypted. To avoid issues it is best to configure a dedicated path for encrypted content, and access it exclusively through a crypt remote.
-
No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -13117,7 +14732,7 @@ y/e/d>
Note: A string which does not contain a :
will be treated by rclone as a relative path in the local filesystem. For example, if you enter the name remote
without the trailing :
, it will be treated as a subdirectory of the current directory with name "remote".
If a path remote:path/to/dir
is specified, rclone stores encrypted files in path/to/dir
on the remote. With file name encryption, files saved to secret:subdir/subfile
are stored in the unencrypted path path/to/dir
but the subdir/subpath
element is encrypted.
The path you specify does not have to exist, rclone will create it when needed.
-If you intend to use the wrapped remote both directly for keeping unencrypted content, as well as through a crypt remote for encrypted content, it is recommended to point the crypt remote to a separate directory within the wrapped remote. If you use a bucket based storage system (e.g. Swift, S3, Google Compute Storage, B2, Hubic) it is generally advisable to wrap the crypt remote around a specific bucket (s3:bucket
). If wrapping around the entire root of the storage (s3:
), and use the optional file name encryption, rclone will encrypt the bucket name.
+If you intend to use the wrapped remote both directly for keeping unencrypted content, as well as through a crypt remote for encrypted content, it is recommended to point the crypt remote to a separate directory within the wrapped remote. If you use a bucket-based storage system (e.g. Swift, S3, Google Compute Storage, B2, Hubic) it is generally advisable to wrap the crypt remote around a specific bucket (s3:bucket
). If wrapping around the entire root of the storage (s3:
), and use the optional file name encryption, rclone will encrypt the bucket name.
Changing password
Should the password, or the configuration file containing a lightly obscured form of the password, be compromised, you need to re-encrypt your data with a new password. Since rclone uses secret-key encryption, where the encryption key is generated directly from the password kept on the client, it is not possible to change the password/key of already encrypted content. Just changing the password configured for an existing crypt remote means you will no longer be able to decrypt any of the previously encrypted content. The only possibility is to re-upload everything via a crypt remote configured with your new password.
Depending on the size of your data, your bandwidth, storage quota etc, there are different approaches you can take:
- If you have everything in a different location, for example on your local system, you could remove all of the prior encrypted files, change the password for your configured crypt remote (or delete and re-create the crypt configuration), and then re-upload everything from the alternative location.
- If you have enough space on the storage system you can create a new crypt remote pointing to a separate directory on the same backend, and then use rclone to copy everything from the original crypt remote to the new, effectively decrypting everything on the fly using the old password and re-encrypting using the new password (a minimal command sketch is shown below). When done, delete the original crypt remote directory and finally the rclone crypt configuration with the old password. All data will be streamed from the storage system and back, so you will get half the bandwidth and be charged twice if you have upload and download quota on the storage system.
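A minimal sketch of the second approach, assuming the old and new crypt remotes are called oldsecret: and newsecret: (both names illustrative):
rclone copy --progress oldsecret: newsecret:
rclone purge oldsecret:
rclone config delete oldsecret
The copy re-encrypts the data on the fly; the purge and config delete then remove the old encrypted data and the old crypt configuration.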
@@ -13187,7 +14802,8 @@ $ rclone -q ls secret:
- directory structure visible
- identical files names will have identical uploaded names
-Cloud storage systems have limits on file name length and total path length which rclone is more likely to breach using "Standard" file name encryption. Where file names are less thn 156 characters in length issues should not be encountered, irrespective of cloud storage provider.
+Cloud storage systems have limits on file name length and total path length which rclone is more likely to breach using "Standard" file name encryption. Where file names are less than 156 characters in length issues should not be encountered, irrespective of cloud storage provider.
+An experimental advanced option filename_encoding
is now provided to address this problem to a certain degree. For cloud storage systems with case sensitive file names (e.g. Google Drive), base64
can be used to reduce file name length. For cloud storage systems using UTF-16 to store file names internally (e.g. OneDrive), base32768
can be used to drastically reduce file name length.
An alternative, future rclone file name encryption mode may tolerate backend provider path length limits.
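A minimal configuration sketch using this option (remote name and wrapped path are illustrative; the obscured passwords are omitted and must be set for a real remote):
[secret]
type = crypt
remote = remote:path
filename_encoding = base32768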
Directory name encryption
Crypt offers the option of encrypting dir names or leaving them intact. There are two options:
@@ -13204,14 +14820,16 @@ $ rclone -q ls secret:
--crypt-remote
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).
+Properties:
- Config: remote
- Env Var: RCLONE_CRYPT_REMOTE
- Type: string
-- Default: ""
+- Required: true
--crypt-filename-encryption
How to encrypt the filenames.
+Properties:
- Config: filename_encryption
- Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION
@@ -13238,6 +14856,7 @@ $ rclone -q ls secret:
--crypt-directory-name-encryption
Option to either encrypt directory names or leave them intact.
NB If filename_encryption is "off" then this option will do nothing.
+Properties:
- Config: directory_name_encryption
- Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION
@@ -13258,21 +14877,23 @@ $ rclone -q ls secret:
--crypt-password
Password or pass phrase for encryption.
NB Input to this must be obscured - see rclone obscure.
+Properties:
- Config: password
- Env Var: RCLONE_CRYPT_PASSWORD
- Type: string
-- Default: ""
+- Required: true
--crypt-password2
Password or pass phrase for salt.
Optional but recommended. Should be different to the previous password.
NB Input to this must be obscured - see rclone obscure.
+Properties:
- Config: password2
- Env Var: RCLONE_CRYPT_PASSWORD2
- Type: string
-- Default: ""
+- Required: false
Advanced options
Here are the advanced options specific to crypt (Encrypt/Decrypt a remote).
@@ -13280,6 +14901,7 @@ $ rclone -q ls secret:
Allow server-side operations (e.g. copy) to work across different crypt configs.
Normally this option is not what you want, but if you have two crypts pointing to the same backend you can use it.
This can be used, for example, to change file name encryption type without re-uploading all the data. Just make two crypt backends pointing to two different directories with the single changed parameter and use rclone move to move the files between the crypt remotes.
+Properties:
- Config: server_side_across_configs
- Env Var: RCLONE_CRYPT_SERVER_SIDE_ACROSS_CONFIGS
@@ -13290,6 +14912,7 @@ $ rclone -q ls secret:
For all files listed show how the names encrypt.
If this flag is set then for each file that the remote is asked to list, it will log (at level INFO) a line stating the decrypted file name and the encrypted file name.
This is so you can work out which encrypted names are which decrypted names just in case you need to do something with the encrypted file names, or for debugging purposes.
+Properties:
- Config: show_mapping
- Env Var: RCLONE_CRYPT_SHOW_MAPPING
@@ -13298,6 +14921,7 @@ $ rclone -q ls secret:
--crypt-no-data-encryption
Option to either encrypt file data or leave it unencrypted.
+Properties:
- Config: no_data_encryption
- Env Var: RCLONE_CRYPT_NO_DATA_ENCRYPTION
@@ -13315,13 +14939,39 @@ $ rclone -q ls secret:
+--crypt-filename-encoding
+How to encode the encrypted filename to text string.
+This option could help with shortening the encrypted filename. The suitable option would depend on the way your remote counts the filename length and whether it's case sensitive.
+Properties:
+
+- Config: filename_encoding
+- Env Var: RCLONE_CRYPT_FILENAME_ENCODING
+- Type: string
+- Default: "base32"
+- Examples:
+
+- "base32"
+
+- Encode using base32. Suitable for all remotes.
+
+- "base64"
+
+- Encode using base64. Suitable for case sensitive remotes.
+
+- "base32768"
+
+- Encode using base32768. Suitable if your remote counts UTF-16 or Unicode codepoints instead of UTF-8 byte length. (E.g. OneDrive)
+
+
+
Backend commands
Here are the commands specific to the crypt backend.
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
See the "rclone backend" command for more info on how to pass options and arguments.
-These can be run on a running backend using the rc command backend/command.
+These can be run on a running backend using the rc command backend/command.
encode
Encode the given filename(s)
rclone backend encode remote: [options] [<arguments>+]
@@ -13367,7 +15017,7 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile
64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops off due to cache effects above this). Note that these chunks are buffered in memory so they can't be too big.
This uses a 32 byte (256 bit key) key derived from the user password.
-Examples
+Examples
1 byte file will encrypt to
- 32 bytes header
@@ -13400,7 +15050,7 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile
Key derivation
Rclone uses scrypt
with parameters N=16384, r=8, p=1
with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.
scrypt
makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.
-SEE ALSO
+SEE ALSO
- rclone cryptdecode - Show forward/reverse mapping of encrypted filenames
@@ -13467,14 +15117,16 @@ y/e/d> y
Here are the standard options specific to compress (Compress a remote).
--compress-remote
Remote to compress.
+Properties:
- Config: remote
- Env Var: RCLONE_COMPRESS_REMOTE
- Type: string
-- Default: ""
+- Required: true
--compress-mode
Compression mode.
+Properties:
- Config: mode
- Env Var: RCLONE_COMPRESS_MODE
@@ -13494,6 +15146,7 @@ y/e/d> y
GZIP compression level (-2 to 9).
Generally -1 (default, equivalent to 5) is recommended. Levels 1 to 9 increase compression at the cost of speed. Going past 6 generally offers very little return.
Level -2 uses Huffman encoding only. Only use if you know what you are doing. Level 0 turns off compression.
+Properties:
- Config: level
- Env Var: RCLONE_COMPRESS_LEVEL
@@ -13503,6 +15156,7 @@ y/e/d> y
--compress-ram-cache-limit
Some remotes don't allow the upload of files with unknown size. In this case the compressed file will need to be cached to determine its size.
Files smaller than this limit will be cached in RAM, files larger than this limit will be cached on disk.
+Properties:
- Config: ram_cache_limit
- Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT
@@ -13638,53 +15292,59 @@ y/e/d> y
--dropbox-client-id
OAuth Client Id.
Leave blank normally.
+Properties:
- Config: client_id
- Env Var: RCLONE_DROPBOX_CLIENT_ID
- Type: string
-- Default: ""
+- Required: false
--dropbox-client-secret
OAuth Client Secret.
Leave blank normally.
+Properties:
- Config: client_secret
- Env Var: RCLONE_DROPBOX_CLIENT_SECRET
- Type: string
-- Default: ""
+- Required: false
Advanced options
Here are the advanced options specific to dropbox (Dropbox).
--dropbox-token
OAuth Access Token as a JSON blob.
+Properties:
- Config: token
- Env Var: RCLONE_DROPBOX_TOKEN
- Type: string
-- Default: ""
+- Required: false
--dropbox-auth-url
Auth server URL.
Leave blank to use the provider defaults.
+Properties:
- Config: auth_url
- Env Var: RCLONE_DROPBOX_AUTH_URL
- Type: string
-- Default: ""
+- Required: false
--dropbox-token-url
Token server url.
Leave blank to use the provider defaults.
+Properties:
- Config: token_url
- Env Var: RCLONE_DROPBOX_TOKEN_URL
- Type: string
-- Default: ""
+- Required: false
--dropbox-chunk-size
Upload chunk size (< 150Mi).
Any files larger than this will be uploaded in chunks of this size.
Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128 MiB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory.
+Properties:
- Config: chunk_size
- Env Var: RCLONE_DROPBOX_CHUNK_SIZE
@@ -13696,15 +15356,17 @@ y/e/d> y
Note that if you want to use impersonate, you should make sure this flag is set when running "rclone config" as this will cause rclone to request the "members.read" scope which it won't normally. This is needed to look up a member's email address and convert it to the internal ID that dropbox uses in the API.
Using the "members.read" scope will require a Dropbox Team Admin to approve during the OAuth flow.
You will have to use your own App (setting your own client_id and client_secret) to use this option as currently rclone's default set of permissions doesn't include "members.read". This can be added once v1.55 or later is in use everywhere.
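For example, to list the home folder of another team member (remote name and email address are illustrative):
rclone lsd business: --dropbox-impersonate user@example.com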
+Properties:
- Config: impersonate
- Env Var: RCLONE_DROPBOX_IMPERSONATE
- Type: string
-- Default: ""
+- Required: false
--dropbox-shared-files
Instructs rclone to work on individual shared files.
In this mode rclone's features are extremely limited - only list (ls, lsl, etc.) operations and read operations (e.g. downloading) are supported. All other operations will be disabled.
+Properties:
- Config: shared_files
- Env Var: RCLONE_DROPBOX_SHARED_FILES
@@ -13715,6 +15377,7 @@ y/e/d> y
Instructs rclone to work on shared folders.
When this flag is used with no path only the List operation is supported and all available shared folders will be listed. If you specify a path the first part will be interpreted as the name of a shared folder. Rclone will then try to mount this shared folder to the root namespace. On success rclone proceeds normally. The shared folder is now pretty much a normal folder and all normal operations are supported.
Note that we don't unmount the shared folder afterwards so the --dropbox-shared-folders can be omitted after the first use of a particular shared folder.
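For example, all shared folders available to you could be listed like this (remote name illustrative):
rclone lsd remote: --dropbox-shared-folders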
+Properties:
- Config: shared_folders
- Env Var: RCLONE_DROPBOX_SHARED_FOLDERS
@@ -13732,6 +15395,7 @@ y/e/d> y
- async - batch upload and don't check completion
Rclone will close any outstanding batches when it exits which may cause a delay on quit.
+Properties:
- Config: batch_mode
- Env Var: RCLONE_DROPBOX_BATCH_MODE
@@ -13749,6 +15413,7 @@ y/e/d> y
Rclone will close any outstanding batches when it exits which may cause a delay on quit.
Setting this is a great idea if you are uploading lots of small files as it will make them a lot quicker. You can use --transfers 32 to maximise throughput.
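For example, a sketch of uploading a directory of many small files with batching enabled (remote name, path and the chosen values are illustrative):
rclone copy /path/with/many/small/files remote:backup --transfers 32 --dropbox-batch-mode sync --dropbox-batch-size 1000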
+Properties:
- Config: batch_size
- Env Var: RCLONE_DROPBOX_BATCH_SIZE
@@ -13762,14 +15427,18 @@ y/e/d> y
- batch_mode: async - default batch_timeout is 500ms
- batch_mode: sync - default batch_timeout is 10s
-batch_mode: off - not in use
+- batch_mode: off - not in use
+
+Properties:
+
- Config: batch_timeout
- Env Var: RCLONE_DROPBOX_BATCH_TIMEOUT
- Type: Duration
-Default: 0s
+- Default: 0s
--dropbox-batch-commit-timeout
Max time to wait for a batch to finish committing.
+Properties:
- Config: batch_commit_timeout
- Env Var: RCLONE_DROPBOX_BATCH_COMMIT_TIMEOUT
@@ -13777,15 +15446,16 @@ y/e/d> y
- Default: 10m0s
--dropbox-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_DROPBOX_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
-Limitations
+Limitations
Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are some file names such as thumbs.db
which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading
if it attempts to upload one of those file names, but the sync won't fail.
Some errors may occur if you try to sync copyright-protected files because Dropbox has its own copyright detector that prevents this sort of file being downloaded. This will return the error ERROR : /path/to/your/file: Failed to copy: failed to open source object: path/restricted_content/.
@@ -13811,7 +15481,7 @@ y/e/d> y
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -13904,11 +15574,12 @@ y/e/d> y
Here are the standard options specific to filefabric (Enterprise File Fabric).
--filefabric-url
URL of the Enterprise File Fabric to connect to.
+Properties:
- Config: url
- Env Var: RCLONE_FILEFABRIC_URL
- Type: string
-- Default: ""
+- Required: true
- Examples:
- "https://storagemadeeasy.com"
@@ -13929,22 +15600,24 @@ y/e/d> y
ID of the root folder.
Leave blank normally.
Fill in to make rclone start with directory of a given ID.
+Properties:
- Config: root_folder_id
- Env Var: RCLONE_FILEFABRIC_ROOT_FOLDER_ID
- Type: string
-- Default: ""
+- Required: false
--filefabric-permanent-token
Permanent Authentication Token.
A Permanent Authentication Token can be created in the Enterprise File Fabric, on the user's Dashboard under Security; there is an entry you'll see called "My Authentication Tokens". Click the Manage button to create one.
These tokens are normally valid for several years.
For more info see: https://docs.storagemadeeasy.com/organisationcloud/api-tokens
+Properties:
- Config: permanent_token
- Env Var: RCLONE_FILEFABRIC_PERMANENT_TOKEN
- Type: string
-- Default: ""
+- Required: false
Advanced options
Here are the advanced options specific to filefabric (Enterprise File Fabric).
@@ -13952,33 +15625,37 @@ y/e/d> y
Session Token.
This is a session token which rclone caches in the config file. It is usually valid for 1 hour.
Don't set this value - rclone will set it automatically.
+Properties:
- Config: token
- Env Var: RCLONE_FILEFABRIC_TOKEN
- Type: string
-- Default: ""
+- Required: false
--filefabric-token-expiry
Token expiry time.
Don't set this value - rclone will set it automatically.
+Properties:
- Config: token_expiry
- Env Var: RCLONE_FILEFABRIC_TOKEN_EXPIRY
- Type: string
-- Default: ""
+- Required: false
--filefabric-version
Version read from the file fabric.
Don't set this value - rclone will set it automatically.
+Properties:
- Config: version
- Env Var: RCLONE_FILEFABRIC_VERSION
- Type: string
-- Default: ""
+- Required: false
--filefabric-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_FILEFABRIC_ENCODING
@@ -13993,7 +15670,7 @@ y/e/d> y
To create an FTP configuration named remote
, run
rclone config
Rclone config guides you through an interactive setup process. A minimal rclone FTP remote definition only requires host, username and password. For an anonymous FTP server, use anonymous
as username and your email address as password.
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
r) Rename remote
c) Copy remote
@@ -14017,11 +15694,11 @@ Choose a number from below, or type in your own value
1 / Connect to ftp.example.com
\ "ftp.example.com"
host> ftp.example.com
-FTP username, leave blank for current username, $USER
-Enter a string value. Press Enter for the default ("").
+FTP username
+Enter a string value. Press Enter for the default ("$USER").
user>
-FTP port, leave blank to use default (21)
-Enter a string value. Press Enter for the default ("").
+FTP port number
+Enter a signed integer. Press Enter for the default (21).
port>
FTP password
y) Yes type in my own password
@@ -14104,40 +15781,45 @@ y/e/d> y
--ftp-host
FTP host to connect to.
E.g. "ftp.example.com".
+Properties:
- Config: host
- Env Var: RCLONE_FTP_HOST
- Type: string
-- Default: ""
+- Required: true
--ftp-user
-FTP username, leave blank for current username, $USER.
+FTP username.
+Properties:
- Config: user
- Env Var: RCLONE_FTP_USER
- Type: string
-- Default: ""
+- Default: "$USER"
--ftp-port
-FTP port, leave blank to use default (21).
+FTP port number.
+Properties:
- Config: port
- Env Var: RCLONE_FTP_PORT
-- Type: string
-- Default: ""
+- Type: int
+- Default: 21
--ftp-pass
FTP password.
NB Input to this must be obscured - see rclone obscure.
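For example, the obscured password can be supplied via the environment variable instead of the config file (remote name and password are illustrative):
RCLONE_FTP_PASS=$(rclone obscure 'mypassword') rclone lsd myftp: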
+Properties:
- Config: pass
- Env Var: RCLONE_FTP_PASS
- Type: string
-- Default: ""
+- Required: false
--ftp-tls
Use Implicit FTPS (FTP over TLS).
When using implicit FTP over TLS the client connects using TLS right from the start which breaks compatibility with non-TLS-aware servers. This is usually served over port 990 rather than port 21. Cannot be used in combination with explicit FTP.
+Properties:
- Config: tls
- Env Var: RCLONE_FTP_TLS
@@ -14147,6 +15829,7 @@ y/e/d> y
--ftp-explicit-tls
Use Explicit FTPS (FTP over TLS).
When using explicit FTP over TLS the client explicitly requests security from the server in order to upgrade a plain text connection to an encrypted one. Cannot be used in combination with implicit FTP.
+Properties:
- Config: explicit_tls
- Env Var: RCLONE_FTP_EXPLICIT_TLS
@@ -14157,6 +15840,7 @@ y/e/d> y
Here are the advanced options specific to ftp (FTP Connection).
--ftp-concurrency
Maximum number of simultaneous FTP connections, 0 for unlimited.
+Properties:
- Config: concurrency
- Env Var: RCLONE_FTP_CONCURRENCY
@@ -14165,6 +15849,7 @@ y/e/d> y
--ftp-no-check-certificate
Do not verify the TLS certificate of the server.
+Properties:
- Config: no_check_certificate
- Env Var: RCLONE_FTP_NO_CHECK_CERTIFICATE
@@ -14173,6 +15858,7 @@ y/e/d> y
--ftp-disable-epsv
Disable using EPSV even if server advertises support.
+Properties:
- Config: disable_epsv
- Env Var: RCLONE_FTP_DISABLE_EPSV
@@ -14181,6 +15867,7 @@ y/e/d> y
--ftp-disable-mlsd
Disable using MLSD even if server advertises support.
+Properties:
- Config: disable_mlsd
- Env Var: RCLONE_FTP_DISABLE_MLSD
@@ -14189,6 +15876,7 @@ y/e/d> y
--ftp-writing-mdtm
Use MDTM to set modification time (VsFtpd quirk)
+Properties:
- Config: writing_mdtm
- Env Var: RCLONE_FTP_WRITING_MDTM
@@ -14199,6 +15887,7 @@ y/e/d> y
Max time before closing idle connections.
If no connections have been returned to the connection pool in the time given, rclone will empty the connection pool.
Set to 0 to keep connections indefinitely.
+Properties:
- Config: idle_timeout
- Env Var: RCLONE_FTP_IDLE_TIMEOUT
@@ -14207,6 +15896,7 @@ y/e/d> y
--ftp-close-timeout
Maximum time to wait for a response to close.
+Properties:
- Config: close_timeout
- Env Var: RCLONE_FTP_CLOSE_TIMEOUT
@@ -14216,6 +15906,7 @@ y/e/d> y
--ftp-tls-cache-size
Size of TLS session cache for all control and data connections.
The TLS cache allows TLS sessions to be resumed and PSKs to be reused between connections. Increase if the default size is not enough, resulting in TLS resumption errors. Enabled by default. Use 0 to disable.
+Properties:
- Config: tls_cache_size
- Env Var: RCLONE_FTP_TLS_CACHE_SIZE
@@ -14224,6 +15915,7 @@ y/e/d> y
--ftp-disable-tls13
Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
+Properties:
- Config: disable_tls13
- Env Var: RCLONE_FTP_DISABLE_TLS13
@@ -14232,15 +15924,27 @@ y/e/d> y
--ftp-shut-timeout
Maximum time to wait for data connection closing status.
+Properties:
- Config: shut_timeout
- Env Var: RCLONE_FTP_SHUT_TIMEOUT
- Type: Duration
- Default: 1m0s
+--ftp-ask-password
+Allow asking for FTP password when needed.
+If this is set and no password is supplied then rclone will ask for a password.
+Properties:
+
+- Config: ask_password
+- Env Var: RCLONE_FTP_ASK_PASSWORD
+- Type: bool
+- Default: false
+
--ftp-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_FTP_ENCODING
@@ -14262,7 +15966,7 @@ y/e/d> y
-Limitations
+Limitations
FTP servers acting as rclone remotes must support passive
mode. The mode cannot be configured as passive
is the only supported one. Rclone's FTP implementation is not compatible with active
mode as the library it uses doesn't support it. This will likely never be supported due to security concerns.
Rclone's FTP backend does not support any checksums but can compare file sizes.
rclone about
is not supported by the FTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
@@ -14436,7 +16140,7 @@ y/e/d> y
Eg --header-upload "Content-Type text/potato"
Note that the last of these is for setting custom metadata in the form --header-upload "x-goog-meta-key: value"
-Modification time
+Modification time
Google Cloud Storage stores md5sum natively. Google's gsutil tool stores modification time with one-second precision as goog-reserved-file-mtime
in file metadata.
To ensure compatibility with gsutil, rclone stores modification time in 2 separate metadata entries. mtime
uses RFC3339 format with one-nanosecond precision. goog-reserved-file-mtime
uses Unix timestamp format with one-second precision. To get modification time from object metadata, rclone reads the metadata in the following order: mtime
, goog-reserved-file-mtime
, object updated time.
Note that rclone's default modify window is 1ns. Files uploaded by gsutil only contain timestamps with one-second precision. If you use rclone to sync files previously uploaded by gsutil, rclone will attempt to update modification time for all these files. To avoid these possibly unnecessary updates, use --modify-window 1s
.
@@ -14478,52 +16182,58 @@ y/e/d> y
--gcs-client-id
OAuth Client Id.
Leave blank normally.
+Properties:
- Config: client_id
- Env Var: RCLONE_GCS_CLIENT_ID
- Type: string
-- Default: ""
+- Required: false
--gcs-client-secret
OAuth Client Secret.
Leave blank normally.
+Properties:
- Config: client_secret
- Env Var: RCLONE_GCS_CLIENT_SECRET
- Type: string
-- Default: ""
+- Required: false
--gcs-project-number
Project number.
Optional - needed only for list/create/delete buckets - see your developer console.
+Properties:
- Config: project_number
- Env Var: RCLONE_GCS_PROJECT_NUMBER
- Type: string
-- Default: ""
+- Required: false
--gcs-service-account-file
Service Account Credentials JSON file path.
Leave blank normally. Needed only if you want to use SA instead of interactive login.
Leading ~
will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}
.
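For example, to list buckets using a service account instead of an interactive login (remote name and file path are illustrative):
rclone lsd gcs: --gcs-service-account-file /path/to/service-account.json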
+Properties:
- Config: service_account_file
- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
- Type: string
-- Default: ""
+- Required: false
--gcs-service-account-credentials
Service Account Credentials JSON blob.
Leave blank normally. Needed only if you want to use SA instead of interactive login.
+Properties:
- Config: service_account_credentials
- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
- Type: string
-- Default: ""
+- Required: false
--gcs-anonymous
Access public buckets and objects without credentials.
Set to 'true' if you just want to download files and don't configure credentials.
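For example, a public bucket could be downloaded without any configured credentials like this (remote, bucket and paths are illustrative):
rclone copy gcs:public-bucket/path /tmp/download --gcs-anonymous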
+Properties:
- Config: anonymous
- Env Var: RCLONE_GCS_ANONYMOUS
@@ -14532,11 +16242,12 @@ y/e/d> y
--gcs-object-acl
Access Control List for new objects.
+Properties:
- Config: object_acl
- Env Var: RCLONE_GCS_OBJECT_ACL
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "authenticatedRead"
@@ -14573,11 +16284,12 @@ y/e/d> y
--gcs-bucket-acl
Access Control List for new buckets.
+Properties:
- Config: bucket_acl
- Env Var: RCLONE_GCS_BUCKET_ACL
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "authenticatedRead"
@@ -14616,6 +16328,7 @@ y/e/d> y
- creates buckets with Bucket Policy Only set
Docs: https://cloud.google.com/storage/docs/bucket-policy-only
+Properties:
- Config: bucket_policy_only
- Env Var: RCLONE_GCS_BUCKET_POLICY_ONLY
@@ -14624,11 +16337,12 @@ y/e/d> y
--gcs-location
Location for the newly created buckets.
+Properties:
- Config: location
- Env Var: RCLONE_GCS_LOCATION
- Type: string
-- Default: ""
+- Required: false
- Examples:
- ""
@@ -14659,18 +16373,38 @@ y/e/d> y
- Tokyo
+- "asia-northeast2"
+
+- Osaka
+
+- "asia-northeast3"
+
+- Seoul
+
- "asia-south1"
- Mumbai
+- "asia-south2"
+
+- Delhi
+
- "asia-southeast1"
- Singapore
+- "asia-southeast2"
+
+- Jakarta
+
- "australia-southeast1"
- Sydney
+- "australia-southeast2"
+
+- Melbourne
+
- "europe-north1"
- Finland
@@ -14691,6 +16425,14 @@ y/e/d> y
- Netherlands
+- "europe-west6"
+
+- Zürich
+
+- "europe-central2"
+
+- Warsaw
+
- "us-central1"
- Iowa
@@ -14711,15 +16453,52 @@ y/e/d> y
- California
+- "us-west3"
+
+- Salt Lake City
+
+- "us-west4"
+
+- Las Vegas
+
+- "northamerica-northeast1"
+
+- Montréal
+
+- "northamerica-northeast2"
+
+- Toronto
+
+- "southamerica-east1"
+
+- São Paulo
+
+- "southamerica-west1"
+
+- Santiago
+
+- "asia1"
+
+- Dual region: asia-northeast1 and asia-northeast2.
+
+- "eur4"
+
+- Dual region: europe-north1 and europe-west4.
+
+- "nam4"
+
+- Dual region: us-central1 and us-east1.
+
--gcs-storage-class
The storage class to use when storing objects in Google Cloud Storage.
+Properties:
- Config: storage_class
- Env Var: RCLONE_GCS_STORAGE_CLASS
- Type: string
-- Default: ""
+- Required: false
- Examples:
- ""
@@ -14756,40 +16535,44 @@ y/e/d> y
Here are the advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
--gcs-token
OAuth Access Token as a JSON blob.
+Properties:
- Config: token
- Env Var: RCLONE_GCS_TOKEN
- Type: string
-- Default: ""
+- Required: false
--gcs-auth-url
Auth server URL.
Leave blank to use the provider defaults.
+Properties:
- Config: auth_url
- Env Var: RCLONE_GCS_AUTH_URL
- Type: string
-- Default: ""
+- Required: false
--gcs-token-url
Token server url.
Leave blank to use the provider defaults.
+Properties:
- Config: token_url
- Env Var: RCLONE_GCS_TOKEN_URL
- Type: string
-- Default: ""
+- Required: false
--gcs-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_GCS_ENCODING
- Type: MultiEncoder
- Default: Slash,CrLf,InvalidUtf8,Dot
-Limitations
+Limitations
rclone about
is not supported by the Google Cloud Storage backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
Google Drive
@@ -14800,7 +16583,7 @@ y/e/d> y
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
r) Rename remote
c) Copy remote
@@ -14868,7 +16651,7 @@ y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
-Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/
and this it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.
+Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/
and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.
You can then use it like this,
List directories in top level of your drive
rclone lsd remote:
@@ -15026,7 +16809,7 @@ trashed=false and 'c' in parents
- When downloading, the contents of the destination file are downloaded.
- When updating a shortcut file with a non-shortcut file, the shortcut is removed, then a new file is uploaded in place of the shortcut.
- When server-side moving (renaming) the shortcut is renamed, not the destination file.
-- When server-side copying the shortcut is copied, not the contents of the shortcut.
+- When server-side copying the shortcut is copied, not the contents of the shortcut. (unless
--drive-copy-shortcut-content
is in use, in which case the contents of the shortcut get copied).
- When deleting, the shortcut is deleted, not the linked file.
- When setting the modification time, the modification time of the linked file will be set.
@@ -15268,28 +17051,31 @@ trashed=false and 'c' in parents
Here are the standard options specific to drive (Google Drive).
--drive-client-id
Google Application Client Id. Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance.
+Properties:
- Config: client_id
- Env Var: RCLONE_DRIVE_CLIENT_ID
- Type: string
-- Default: ""
+- Required: false
--drive-client-secret
OAuth Client Secret.
Leave blank normally.
+Properties:
- Config: client_secret
- Env Var: RCLONE_DRIVE_CLIENT_SECRET
- Type: string
-- Default: ""
+- Required: false
--drive-scope
Scope that rclone should use when requesting access from drive.
+Properties:
- Config: scope
- Env Var: RCLONE_DRIVE_SCOPE
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "drive"
@@ -15321,24 +17107,27 @@ trashed=false and 'c' in parents
--drive-root-folder-id
ID of the root folder. Leave blank normally.
Fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point.
+Properties:
- Config: root_folder_id
- Env Var: RCLONE_DRIVE_ROOT_FOLDER_ID
- Type: string
-- Default: ""
+- Required: false
--drive-service-account-file
Service Account Credentials JSON file path.
Leave blank normally. Needed only if you want to use SA instead of interactive login.
Leading ~
will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}
.
+Properties:
- Config: service_account_file
- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_FILE
- Type: string
-- Default: ""
+- Required: false
--drive-alternate-export
Deprecated: No longer needed.
+Properties:
- Config: alternate_export
- Env Var: RCLONE_DRIVE_ALTERNATE_EXPORT
@@ -15349,49 +17138,55 @@ trashed=false and 'c' in parents
Here are the advanced options specific to drive (Google Drive).
--drive-token
OAuth Access Token as a JSON blob.
+Properties:
- Config: token
- Env Var: RCLONE_DRIVE_TOKEN
- Type: string
-- Default: ""
+- Required: false
--drive-auth-url
Auth server URL.
Leave blank to use the provider defaults.
+Properties:
- Config: auth_url
- Env Var: RCLONE_DRIVE_AUTH_URL
- Type: string
-- Default: ""
+- Required: false
--drive-token-url
Token server url.
Leave blank to use the provider defaults.
+Properties:
- Config: token_url
- Env Var: RCLONE_DRIVE_TOKEN_URL
- Type: string
-- Default: ""
+- Required: false
--drive-service-account-credentials
Service Account Credentials JSON blob.
Leave blank normally. Needed only if you want to use SA instead of interactive login.
+Properties:
- Config: service_account_credentials
- Env Var: RCLONE_DRIVE_SERVICE_ACCOUNT_CREDENTIALS
- Type: string
-- Default: ""
+- Required: false
--drive-team-drive
ID of the Shared Drive (Team Drive).
+Properties:
- Config: team_drive
- Env Var: RCLONE_DRIVE_TEAM_DRIVE
- Type: string
-- Default: ""
+- Required: false
--drive-auth-owner-only
Only consider files owned by the authenticated user.
+Properties:
- Config: auth_owner_only
- Env Var: RCLONE_DRIVE_AUTH_OWNER_ONLY
@@ -15401,15 +17196,28 @@ trashed=false and 'c' in parents
--drive-use-trash
Send files to the trash instead of deleting permanently.
Defaults to true, namely sending files to the trash. Use --drive-use-trash=false
to delete files permanently instead.
+Properties:
- Config: use_trash
- Env Var: RCLONE_DRIVE_USE_TRASH
- Type: bool
- Default: true
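For example, to delete files permanently rather than sending them to the trash (the remote name and path are illustrative):
rclone delete --drive-use-trash=false remote:old-backups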
+--drive-copy-shortcut-content
+Server side copy contents of shortcuts instead of the shortcut.
+When doing server side copies, normally rclone will copy shortcuts as shortcuts.
+If this flag is used then rclone will copy the contents of shortcuts rather than shortcuts themselves when doing server side copies.
+Properties:
+
+- Config: copy_shortcut_content
+- Env Var: RCLONE_DRIVE_COPY_SHORTCUT_CONTENT
+- Type: bool
+- Default: false
+
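+For example, a server-side copy that expands shortcuts into the files they point to might look like this (the remote name and paths are illustrative):
+rclone copy --drive-copy-shortcut-content remote:shared-folder remote:expanded-copy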
--drive-skip-gdocs
Skip google documents in all listings.
If given, gdocs practically become invisible to rclone.
+Properties:
- Config: skip_gdocs
- Env Var: RCLONE_DRIVE_SKIP_GDOCS
@@ -15422,6 +17230,7 @@ trashed=false and 'c' in parents
Setting this flag will cause Google photos and videos to return a blank MD5 checksum.
Google photos are identified by being in the "photos" space.
Corrupted checksums are caused by Google modifying the image/video but not updating the checksum.
+Properties:
- Config: skip_checksum_gphotos
- Env Var: RCLONE_DRIVE_SKIP_CHECKSUM_GPHOTOS
@@ -15432,6 +17241,7 @@ trashed=false and 'c' in parents
Only show files that are shared with me.
Instructs rclone to operate on your "Shared with me" folder (where Google Drive lets you access the files and folders others have shared with you).
This works both with the "list" (lsd, lsl, etc.) and the "copy" commands (copy, sync, etc.), and with all other commands too.
+Properties:
- Config: shared_with_me
- Env Var: RCLONE_DRIVE_SHARED_WITH_ME
@@ -15441,6 +17251,7 @@ trashed=false and 'c' in parents
--drive-trashed-only
Only show files that are in the trash.
This will show trashed files in their original directory structure.
+Properties:
- Config: trashed_only
- Env Var: RCLONE_DRIVE_TRASHED_ONLY
@@ -15449,6 +17260,7 @@ trashed=false and 'c' in parents
--drive-starred-only
Only show files that are starred.
+Properties:
- Config: starred_only
- Env Var: RCLONE_DRIVE_STARRED_ONLY
@@ -15457,14 +17269,16 @@ trashed=false and 'c' in parents
--drive-formats
Deprecated: See export_formats.
+Properties:
- Config: formats
- Env Var: RCLONE_DRIVE_FORMATS
- Type: string
-- Default: ""
+- Required: false
--drive-export-formats
Comma separated list of preferred formats for downloading Google docs.
+Properties:
- Config: export_formats
- Env Var: RCLONE_DRIVE_EXPORT_FORMATS
@@ -15473,15 +17287,17 @@ trashed=false and 'c' in parents
--drive-import-formats
Comma separated list of preferred formats for uploading Google docs.
+Properties:
- Config: import_formats
- Env Var: RCLONE_DRIVE_IMPORT_FORMATS
- Type: string
-- Default: ""
+- Required: false
--drive-allow-import-name-change
Allow the filetype to change when uploading Google docs.
E.g. file.doc to file.docx. This will confuse sync and reupload every time.
+Properties:
- Config: allow_import_name_change
- Env Var: RCLONE_DRIVE_ALLOW_IMPORT_NAME_CHANGE
@@ -15494,6 +17310,7 @@ trashed=false and 'c' in parents
WARNING: This flag may have some unexpected consequences.
When uploading to your drive all files will be overwritten unless they haven't been modified since their creation. And the inverse will occur while downloading. This side effect can be avoided by using the "--checksum" flag.
This feature was implemented to retain photos' capture date as recorded by Google Photos. You will first need to check the "Create a Google Photos folder" option in your Google Drive settings. You can then copy or move the photos locally and use the date the image was taken (created) as the modification date.
+Properties:
- Config: use_created_date
- Env Var: RCLONE_DRIVE_USE_CREATED_DATE
@@ -15504,6 +17321,7 @@ trashed=false and 'c' in parents
Use date file was shared instead of modified date.
Note that, as with "--drive-use-created-date", this flag may have unexpected consequences when uploading/downloading files.
If both this flag and "--drive-use-created-date" are set, the created date is used.
+Properties:
- Config: use_shared_date
- Env Var: RCLONE_DRIVE_USE_SHARED_DATE
@@ -15512,6 +17330,7 @@ trashed=false and 'c' in parents
--drive-list-chunk
Size of listing chunk (100-1000), 0 to disable.
+Properties:
- Config: list_chunk
- Env Var: RCLONE_DRIVE_LIST_CHUNK
@@ -15520,14 +17339,16 @@ trashed=false and 'c' in parents
--drive-impersonate
Impersonate this user when using a service account.
+Properties:
- Config: impersonate
- Env Var: RCLONE_DRIVE_IMPERSONATE
- Type: string
-- Default: ""
+- Required: false
--drive-upload-cutoff
Cutoff for switching to chunked upload.
+Properties:
- Config: upload_cutoff
- Env Var: RCLONE_DRIVE_UPLOAD_CUTOFF
@@ -15539,6 +17360,7 @@ trashed=false and 'c' in parents
Must be a power of 2 >= 256k.
Making this larger will improve performance, but note that each chunk is buffered in memory (one per transfer).
Reducing this will reduce memory usage but decrease performance.
+Properties:
- Config: chunk_size
- Env Var: RCLONE_DRIVE_CHUNK_SIZE
@@ -15548,6 +17370,7 @@ trashed=false and 'c' in parents
--drive-acknowledge-abuse
Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
If downloading a file returns the error "This file has been identified as malware or spam and cannot be downloaded" with the error code "cannotDownloadAbusiveFile" then supply this flag to rclone to indicate you acknowledge the risks of downloading the file and rclone will download it anyway.
+Properties:
- Config: acknowledge_abuse
- Env Var: RCLONE_DRIVE_ACKNOWLEDGE_ABUSE
@@ -15556,6 +17379,7 @@ trashed=false and 'c' in parents
--drive-keep-revision-forever
Keep new head revision of each file forever.
+Properties:
- Config: keep_revision_forever
- Env Var: RCLONE_DRIVE_KEEP_REVISION_FOREVER
@@ -15568,6 +17392,7 @@ trashed=false and 'c' in parents
WARNING: This flag may have some unexpected consequences.
It is not recommended to set this flag in your config - the recommended usage is using the flag form --drive-size-as-quota when doing rclone ls/lsl/lsf/lsjson/etc only.
If you do use this flag for syncing (not recommended) then you will need to use --ignore-size also.
+Properties:
- Config: size_as_quota
- Env Var: RCLONE_DRIVE_SIZE_AS_QUOTA
@@ -15576,6 +17401,7 @@ trashed=false and 'c' in parents
--drive-v2-download-min-size
If objects are greater than this, use the drive v2 API to download.
+Properties:
- Config: v2_download_min_size
- Env Var: RCLONE_DRIVE_V2_DOWNLOAD_MIN_SIZE
@@ -15584,6 +17410,7 @@ trashed=false and 'c' in parents
--drive-pacer-min-sleep
Minimum time to sleep between API calls.
+Properties:
- Config: pacer_min_sleep
- Env Var: RCLONE_DRIVE_PACER_MIN_SLEEP
@@ -15592,6 +17419,7 @@ trashed=false and 'c' in parents
--drive-pacer-burst
Number of API calls to allow without sleeping.
+Properties:
- Config: pacer_burst
- Env Var: RCLONE_DRIVE_PACER_BURST
@@ -15601,6 +17429,7 @@ trashed=false and 'c' in parents
--drive-server-side-across-configs
Allow server-side operations (e.g. copy) to work across different drive configs.
This can be useful if you wish to do a server-side copy between two different Google drives. Note that this isn't enabled by default because it isn't easy to tell if it will work between any two configurations.
+Properties:
- Config: server_side_across_configs
- Env Var: RCLONE_DRIVE_SERVER_SIDE_ACROSS_CONFIGS
@@ -15611,6 +17440,7 @@ trashed=false and 'c' in parents
Disable drive using http2.
There is currently an unsolved issue with the google drive backend and HTTP/2. HTTP/2 is therefore disabled by default for the drive backend but can be re-enabled here. When the issue is solved this flag will be removed.
See: https://github.com/rclone/rclone/issues/3631
+Properties:
- Config: disable_http2
- Env Var: RCLONE_DRIVE_DISABLE_HTTP2
@@ -15622,6 +17452,7 @@ trashed=false and 'c' in parents
At the time of writing it is only possible to upload 750 GiB of data to Google Drive a day (this is an undocumented limit). When this limit is reached Google Drive produces a slightly different error message. When this flag is set it causes these errors to be fatal. These will stop the in-progress sync.
Note that this detection is relying on error message strings which Google don't document so it may break in the future.
See: https://github.com/rclone/rclone/issues/3857
+Properties:
- Config: stop_on_upload_limit
- Env Var: RCLONE_DRIVE_STOP_ON_UPLOAD_LIMIT
@@ -15632,6 +17463,7 @@ trashed=false and 'c' in parents
Make download limit errors be fatal.
At the time of writing it is only possible to download 10 TiB of data from Google Drive a day (this is an undocumented limit). When this limit is reached Google Drive produces a slightly different error message. When this flag is set it causes these errors to be fatal. These will stop the in-progress sync.
Note that this detection is relying on error message strings which Google don't document so it may break in the future.
+Properties:
- Config: stop_on_download_limit
- Env Var: RCLONE_DRIVE_STOP_ON_DOWNLOAD_LIMIT
@@ -15641,15 +17473,27 @@ trashed=false and 'c' in parents
--drive-skip-shortcuts
If set, skip shortcut files.
Normally rclone dereferences shortcut files making them appear as if they are the original file (see the shortcuts section). If this flag is set then rclone will ignore shortcut files completely.
+Properties:
- Config: skip_shortcuts
- Env Var: RCLONE_DRIVE_SKIP_SHORTCUTS
- Type: bool
- Default: false
+--drive-skip-dangling-shortcuts
+If set, skip dangling shortcut files.
+If this is set then rclone will not show any dangling shortcuts in listings.
+Properties:
+
+- Config: skip_dangling_shortcuts
+- Env Var: RCLONE_DRIVE_SKIP_DANGLING_SHORTCUTS
+- Type: bool
+- Default: false
+
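+For example, to produce a listing that ignores shortcuts entirely, including dangling ones (the remote name is illustrative):
+rclone lsf --drive-skip-shortcuts --drive-skip-dangling-shortcuts remote: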
--drive-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_DRIVE_ENCODING
@@ -15662,7 +17506,7 @@ trashed=false and 'c' in parents
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
See the "rclone backend" command for more info on how to pass options and arguments.
-These can be run on a running backend using the rc command backend/command.
+These can be run on a running backend using the rc command backend/command.
get
Get command for fetching the drive config parameters
rclone backend get remote: [options] [<arguments>+]
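For example, assuming the chunk_size and service_account_file options described in the full drive backend documentation, the current values can be shown with:
rclone backend get drive: -o chunk_size -o service_account_file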
@@ -15753,7 +17597,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2
The path should end with a / to indicate that the file should be copied as named into this directory. If it doesn't end with a / then the last path component will be used as the file name.
If the destination is a drive backend then server-side copying will be attempted if possible.
Use the -i flag to see what would be copied before copying.
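For example, to copy a file by ID into a directory on the same drive (the ID shown is a placeholder):
rclone backend copyid drive: 1AbCdEfPlaceholderID backup/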
-Limitations
+Limitations
Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MiB/s but lots of small files can take a long time.
Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server-side copies with --disable copy
to download and upload the files if you prefer.
Limitations of Google Docs
@@ -15779,7 +17623,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2
Select a project or create a new project.
Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the "Google Drive API".
Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials"
-If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then select "External" and click on "CREATE"; on the next screen, enter an "Application name" ("rclone" is OK) then click on "Save" (all other data is optional). Click again on "Credentials" on the left panel to go back to the "Credentials" screen.
+If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then select "External" and click on "CREATE"; on the next screen, enter an "Application name" ("rclone" is OK); enter "User Support Email" (your own email is OK); enter "Developer Contact Email" (your own email is OK); then click on "Save" (all other data is optional). Click again on "Credentials" on the left panel to go back to the "Credentials" screen.
(PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this has not been tested/documented so far).
@@ -15801,7 +17645,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -15954,24 +17798,27 @@ y/e/d> y
--gphotos-client-id
OAuth Client Id.
Leave blank normally.
+Properties:
- Config: client_id
- Env Var: RCLONE_GPHOTOS_CLIENT_ID
- Type: string
-- Default: ""
+- Required: false
--gphotos-client-secret
OAuth Client Secret.
Leave blank normally.
+Properties:
- Config: client_secret
- Env Var: RCLONE_GPHOTOS_CLIENT_SECRET
- Type: string
-- Default: ""
+- Required: false
--gphotos-read-only
Set to make the Google Photos backend read only.
If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access.
+Properties:
- Config: read_only
- Env Var: RCLONE_GPHOTOS_READ_ONLY
@@ -15982,33 +17829,37 @@ y/e/d> y
Here are the advanced options specific to google photos (Google Photos).
--gphotos-token
OAuth Access Token as a JSON blob.
+Properties:
- Config: token
- Env Var: RCLONE_GPHOTOS_TOKEN
- Type: string
-- Default: ""
+- Required: false
--gphotos-auth-url
Auth server URL.
Leave blank to use the provider defaults.
+Properties:
- Config: auth_url
- Env Var: RCLONE_GPHOTOS_AUTH_URL
- Type: string
-- Default: ""
+- Required: false
--gphotos-token-url
Token server url.
Leave blank to use the provider defaults.
+Properties:
- Config: token_url
- Env Var: RCLONE_GPHOTOS_TOKEN_URL
- Type: string
-- Default: ""
+- Required: false
--gphotos-read-size
Set to read the size of media items.
Normally rclone does not read the size of media items since this takes another transaction. This isn't necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media.
+Properties:
- Config: read_size
- Env Var: RCLONE_GPHOTOS_READ_SIZE
@@ -16017,6 +17868,7 @@ y/e/d> y
--gphotos-start-year
Year limits the photos to be downloaded to those which are uploaded after the given year.
+Properties:
- Config: start_year
- Env Var: RCLONE_GPHOTOS_START_YEAR
@@ -16025,10 +17877,11 @@ y/e/d> y
--gphotos-include-archived
Also view and download archived media.
-By default rclone does not request archived media. Thus, when syncing, archived media is not visible in directory listings or transferred.
+By default, rclone does not request archived media. Thus, when syncing, archived media is not visible in directory listings or transferred.
Note that media in albums is always visible and synced, no matter their archive status.
With this flag, archived media are always visible in directory listings and transferred.
Without this flag, archived media will not be visible in directory listings and won't be transferred.
+Properties:
- Config: include_archived
- Env Var: RCLONE_GPHOTOS_INCLUDE_ARCHIVED
@@ -16036,15 +17889,16 @@ y/e/d> y
- Default: false
--gphotos-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_GPHOTOS_ENCODING
- Type: MultiEncoder
- Default: Slash,CrLf,InvalidUtf8,Dot
-Limitations
+Limitations
Only images and videos can be uploaded. If you attempt to upload non-video or non-image files, or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is turned into a media item.
Note that all media items uploaded to Google Photos through the API are stored in full resolution at "original quality" and will count towards your storage quota in your Google Account. The API does not offer a way to upload in "high quality" mode.
rclone about
is not supported by the Google Photos backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
@@ -16080,7 +17934,7 @@ y/e/d> y
Now proceed to interactive or manual configuration.
Interactive configuration
Run rclone config
:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -16157,14 +18011,16 @@ rclone backend drop Hasher:
Here are the standard options specific to hasher (Better checksums for other remotes).
--hasher-remote
Remote to cache checksums for (e.g. myRemote:path).
+Properties:
- Config: remote
- Env Var: RCLONE_HASHER_REMOTE
- Type: string
-- Default: ""
+- Required: true
--hasher-hashes
Comma separated list of supported checksum types.
+Properties:
- Config: hashes
- Env Var: RCLONE_HASHER_HASHES
@@ -16173,6 +18029,7 @@ rclone backend drop Hasher:
--hasher-max-age
Maximum time to keep checksums in cache (0 = no cache, off = cache forever).
+Properties:
- Config: max_age
- Env Var: RCLONE_HASHER_MAX_AGE
@@ -16183,6 +18040,7 @@ rclone backend drop Hasher:
Here are the advanced options specific to hasher (Better checksums for other remotes).
--hasher-auto-size
Auto-update checksum for files smaller than this size (disabled by default).
+Properties:
- Config: auto_size
- Env Var: RCLONE_HASHER_AUTO_SIZE
@@ -16195,7 +18053,7 @@ rclone backend drop Hasher:
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
See the "rclone backend" command for more info on how to pass options and arguments.
-These can be run on a running backend using the rc command backend/command.
+These can be run on a running backend using the rc command backend/command.
drop
Drop cache
rclone backend drop remote: [options] [<arguments>+]
@@ -16247,7 +18105,7 @@ rclone backend drop Hasher:
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -16356,19 +18214,21 @@ username = root
--hdfs-namenode
Hadoop name node and port.
E.g. "namenode:8020" to connect to host namenode at port 8020.
+Properties:
- Config: namenode
- Env Var: RCLONE_HDFS_NAMENODE
- Type: string
-- Default: ""
+- Required: true
--hdfs-username
Hadoop user name.
+Properties:
- Config: username
- Env Var: RCLONE_HDFS_USERNAME
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "root"
@@ -16382,20 +18242,22 @@ username = root
--hdfs-service-principal-name
Kerberos service principal name for the namenode.
Enables KERBEROS authentication. Specifies the Service Principal Name (SERVICE/FQDN) for the namenode. E.g. "hdfs/namenode.hadoop.docker" for namenode running as service 'hdfs' with FQDN 'namenode.hadoop.docker'.
+Properties:
- Config: service_principal_name
- Env Var: RCLONE_HDFS_SERVICE_PRINCIPAL_NAME
- Type: string
-- Default: ""
+- Required: false
--hdfs-data-transfer-protection
Kerberos data transfer protection: authentication|integrity|privacy.
Specifies whether or not authentication, data signature integrity checks, and wire encryption are required when communicating with the datanodes. Possible values are 'authentication', 'integrity' and 'privacy'. Used only with KERBEROS enabled.
+Properties:
- Config: data_transfer_protection
- Env Var: RCLONE_HDFS_DATA_TRANSFER_PROTECTION
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "privacy"
@@ -16405,27 +18267,31 @@ username = root
--hdfs-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_HDFS_ENCODING
- Type: MultiEncoder
- Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
- No server-side
Move
or DirMove
.
- Checksums not implemented.
HTTP
The HTTP remote is a read-only remote for reading files from a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn't then please file an issue, or send a pull request!)
-Paths are specified as remote:
or remote:path/to/dir
.
+Paths are specified as remote:
or remote:path
.
+The remote:
represents the configured url, and any path following it will be resolved relative to this url, according to the URL standard. This means with remote url https://beta.rclone.org/branch
and path fix
, the resolved URL will be https://beta.rclone.org/branch/fix
, while with path /fix
the resolved URL will be https://beta.rclone.org/fix
as the absolute path is resolved from the root of the domain.
+If the path following the remote:
ends with /
it will be assumed to point to a directory. If the path does not end with /
, then a HEAD request is sent and the response used to decide if it is treated as a file or a directory (run with -vv
to see details). When --http-no-head is specified, a path without ending /
is always assumed to be a file. If rclone incorrectly assumes the path is a file, the solution is to specify the path with ending /
. When you know the path is a directory, ending it with /
is always better as it avoids the initial HEAD request.
+To just download a single file it is easier to use copyurl.
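+For example, a single file can be fetched without configuring a remote at all (the URL and destination are illustrative):
+rclone copyurl https://beta.rclone.org/rclone-current-linux-amd64.zip ./rclone-current-linux-amd64.zip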
Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -16482,16 +18348,19 @@ e/n/d/r/c/s/q> q
Usage without a config file
Since the http remote only has one config parameter it is easy to use without a config file:
rclone lsd --http-url https://beta.rclone.org :http:
+or:
+rclone lsd :http,url='https://beta.rclone.org':
Standard options
Here are the standard options specific to http (http Connection).
--http-url
URL of http host to connect to.
E.g. "https://example.com", or "https://user:pass@example.com" to use a username and password.
+Properties:
- Config: url
- Env Var: RCLONE_HTTP_URL
- Type: string
-- Default: ""
+- Required: true
Advanced options
Here are the advanced options specific to http (http Connection).
@@ -16499,8 +18368,9 @@ e/n/d/r/c/s/q> q
Set HTTP headers for all transactions.
Use this to set additional HTTP headers for all transactions.
The input format is comma separated list of key,value pairs. Standard CSV encoding may be used.
-For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
+For example, to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.
+Properties:
- Config: headers
- Env Var: RCLONE_HTTP_HEADERS
@@ -16512,6 +18382,7 @@ e/n/d/r/c/s/q> q
Use this if your target website does not use / on the end of directories.
A / on the end of a path is how rclone normally tells the difference between files and directories. If this flag is set, then rclone will treat all files with Content-Type: text/html as directories and read URLs from them rather than downloading them.
Note that this may cause rclone to confuse genuine HTML files with directories.
+Properties:
- Config: no_slash
- Env Var: RCLONE_HTTP_NO_SLASH
@@ -16519,24 +18390,22 @@ e/n/d/r/c/s/q> q
- Default: false
--http-no-head
-Don't use HEAD requests to find file sizes in dir listing.
-If your site is being very slow to load then you can try this option. Normally rclone does a HEAD request for each potential file in a directory listing to:
+Don't use HEAD requests.
+HEAD requests are mainly used to find file sizes in dir listing. If your site is being very slow to load then you can try this option. Normally rclone does a HEAD request for each potential file in a directory listing to:
- find its size
- check it really exists
- check to see if it is a directory
-If you set this option, rclone will not do the HEAD request. This will mean
+If you set this option, rclone will not do the HEAD request. This will mean that directory listings are much quicker, but rclone won't have the times or sizes of any files, and some files that don't exist may be in the listing.
+Properties:
-- directory listings are much quicker
-- rclone won't have the times or sizes of any files
-some files that don't exist may be in the listing
- Config: no_head
- Env Var: RCLONE_HTTP_NO_HEAD
- Type: bool
-Default: false
+- Default: false
-Limitations
+Limitations
rclone about
is not supported by the HTTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
Hubic
@@ -16605,52 +18474,58 @@ y/e/d> y
--hubic-client-id
OAuth Client Id.
Leave blank normally.
+Properties:
- Config: client_id
- Env Var: RCLONE_HUBIC_CLIENT_ID
- Type: string
-- Default: ""
+- Required: false
--hubic-client-secret
OAuth Client Secret.
Leave blank normally.
+Properties:
- Config: client_secret
- Env Var: RCLONE_HUBIC_CLIENT_SECRET
- Type: string
-- Default: ""
+- Required: false
Advanced options
Here are the advanced options specific to hubic (Hubic).
--hubic-token
OAuth Access Token as a JSON blob.
+Properties:
- Config: token
- Env Var: RCLONE_HUBIC_TOKEN
- Type: string
-- Default: ""
+- Required: false
--hubic-auth-url
Auth server URL.
Leave blank to use the provider defaults.
+Properties:
- Config: auth_url
- Env Var: RCLONE_HUBIC_AUTH_URL
- Type: string
-- Default: ""
+- Required: false
--hubic-token-url
Token server url.
Leave blank to use the provider defaults.
+Properties:
- Config: token_url
- Env Var: RCLONE_HUBIC_TOKEN_URL
- Type: string
-- Default: ""
+- Required: false
--hubic-chunk-size
Above this size files will be chunked into a _segments container.
Above this size files will be chunked into a _segments container. The default for this is 5 GiB which is its maximum value.
+Properties:
- Config: chunk_size
- Env Var: RCLONE_HUBIC_CHUNK_SIZE
@@ -16662,6 +18537,7 @@ y/e/d> y
When doing streaming uploads (e.g. using rcat or mount) setting this flag will cause the swift backend to not upload chunked files.
This will limit the maximum upload size to 5 GiB. However non chunked files are easier to deal with and have an MD5SUM.
Rclone will still chunk files bigger than chunk_size when doing normal copy operations.
+Properties:
- Config: no_chunk
- Env Var: RCLONE_HUBIC_NO_CHUNK
@@ -16669,15 +18545,16 @@ y/e/d> y
- Default: false
--hubic-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_HUBIC_ENCODING
- Type: MultiEncoder
- Default: Slash,InvalidUtf8
-Limitations
+Limitations
This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.
The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
Jottacloud
@@ -16690,14 +18567,16 @@ y/e/d> y
Standard authentication
To configure Jottacloud you will need to generate a personal security token in the Jottacloud web interface. You will find the option to do this in your account security settings (for whitelabel versions you need to find this page in their web interface). Note that the web interface may refer to this token as a JottaCli token.
Legacy authentication
-If you are using one of the whitelabel versions (e.g. from Elkjøp or Tele2) you may not have the option to generate a CLI token. In this case you'll have to use the legacy authentication. To to this select yes when the setup asks for legacy authentication and enter your username and password. The rest of the setup is identical to the default setup.
+If you are using one of the whitelabel versions (e.g. from Elkjøp) you may not have the option to generate a CLI token. In this case you'll have to use the legacy authentication. To do this select yes when the setup asks for legacy authentication and enter your username and password. The rest of the setup is identical to the default setup.
Telia Cloud authentication
Similar to other whitelabel versions Telia Cloud doesn't offer the option of creating a CLI token, and additionally uses a separate authentication flow where the username is generated internally. To setup rclone to use Telia Cloud, choose Telia Cloud authentication in the setup. The rest of the setup is identical to the default setup.
+Tele2 Cloud authentication
+As the Tele2-Com Hem merger was completed, this authentication can be used by former Com Hem Cloud and Tele2 Cloud customers, as no support for creating a CLI token exists. It additionally uses a separate authentication flow where the username is generated internally. To setup rclone to use Tele2 Cloud, choose Tele2 Cloud authentication in the setup. The rest of the setup is identical to the default setup.
Configuration
Here is an example of how to make a remote called remote
with the default setup. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -16742,7 +18621,7 @@ Choose a number from below, or type in an existing value
1 > Archive
2 > Links
3 > Sync
-
+
Mountpoints> 1
--------------------
[jotta]
@@ -16764,15 +18643,17 @@ y/e/d> y
To copy a local directory to a Jottacloud directory called backup
rclone copy /home/source remote:backup
Devices and Mountpoints
-The official Jottacloud client registers a device for each computer you install it on, and then creates a mountpoint for each folder you select for Backup. The web interface uses a special device called Jotta for the Archive and Sync mountpoints. In most cases you'll want to use the Jotta/Archive device/mountpoint, however if you want to access files uploaded by any of the official clients rclone provides the option to select other devices and mountpoints during config.
+The official Jottacloud client registers a device for each computer you install it on, and then creates a mountpoint for each folder you select for Backup. The web interface uses a special device called Jotta for the Archive and Sync mountpoints.
+With rclone you'll want to use the Jotta/Archive device/mountpoint in most cases, however if you want to access files uploaded by any of the official clients rclone provides the option to select other devices and mountpoints during config. Note that uploading files is currently not supported to other devices than Jotta.
The built-in Jotta device may also contain several other mountpoints, such as: Latest, Links, Shared and Trash. These are special mountpoints with a different internal representation than the "regular" mountpoints. Rclone will only support them to a very limited degree. Generally you should avoid these, unless you know what you are doing.
--fast-list
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Note that the implementation in Jottacloud always uses only a single API request to get the entire list, so for large folders this could lead to long wait time before the first results are shown.
+Note also that with rclone version 1.58 and newer, information about MIME types is not available when using --fast-list
.
Modified time and hashes
Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
Jottacloud supports MD5 type hashes, so you can use the --checksum
flag.
-Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (wherever the TMPDIR
environment variable points to) before it is uploaded. Small files will be cached in memory - see the --jottacloud-md5-memory-limit flag. When uploading from local disk the source checksum is always available, so this does not apply. Starting with rclone version 1.52 the same is true for crypted remotes (in older versions the crypt backend would not calculate hashes for uploads from local disk, so the Jottacloud backend had to do it as described above).
+Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (in location given by --temp-dir) before it is uploaded. Small files will be cached in memory - see the --jottacloud-md5-memory-limit flag. When uploading from local disk the source checksum is always available, so this does not apply. Starting with rclone version 1.52 the same is true for crypted remotes (in older versions the crypt backend would not calculate hashes for uploads from local disk, so the Jottacloud backend had to do it as described above).
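+For example, when copying from a source without MD5 sums, the spill-to-disk location and the in-memory limit can be tuned like this (the remote names and values are illustrative):
+rclone copy --temp-dir /mnt/scratch --jottacloud-md5-memory-limit 64M sftpsrc:data remote:backup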
Restricted filename characters
In addition to the default restricted characters set the following characters are also replaced:
@@ -16823,7 +18704,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
Deleting files
-By default rclone will send all files to the trash when deleting files. They will be permanently deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately by using the --jottacloud-hard-delete flag, or set the equivalent environment variable. Emptying the trash is supported by the cleanup command.
+By default, rclone will send all files to the trash when deleting files. They will be permanently deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately by using the --jottacloud-hard-delete flag, or set the equivalent environment variable. Emptying the trash is supported by the cleanup command.
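+For example, to bypass the trash for a delete and then empty previously trashed files (the path is illustrative):
+rclone delete --jottacloud-hard-delete remote:old-data
+rclone cleanup remote: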
Versions
Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it. Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website.
Versioning can be disabled by --jottacloud-no-versions
option. This is achieved by deleting the remote file prior to uploading a new version. If the upload then fails, no version of the file will be available in the remote.
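For example, to overwrite files without creating new versions on the remote (the paths are illustrative):
rclone copy --jottacloud-no-versions /home/source remote:backup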
@@ -16833,6 +18714,7 @@ y/e/d> y
Here are the advanced options specific to jottacloud (Jottacloud).
--jottacloud-md5-memory-limit
Files bigger than this will be cached on disk to calculate the MD5 if required.
+Properties:
- Config: md5_memory_limit
- Env Var: RCLONE_JOTTACLOUD_MD5_MEMORY_LIMIT
@@ -16842,6 +18724,7 @@ y/e/d> y
--jottacloud-trashed-only
Only show files that are in the trash.
This will show trashed files in their original directory structure.
+Properties:
- Config: trashed_only
- Env Var: RCLONE_JOTTACLOUD_TRASHED_ONLY
@@ -16850,6 +18733,7 @@ y/e/d> y
--jottacloud-hard-delete
Delete files permanently rather than putting them into the trash.
+Properties:
- Config: hard_delete
- Env Var: RCLONE_JOTTACLOUD_HARD_DELETE
@@ -16858,6 +18742,7 @@ y/e/d> y
--jottacloud-upload-resume-limit
Files bigger than this can be resumed if the upload fails.
+Properties:
- Config: upload_resume_limit
- Env Var: RCLONE_JOTTACLOUD_UPLOAD_RESUME_LIMIT
@@ -16866,6 +18751,7 @@ y/e/d> y
--jottacloud-no-versions
Avoid server side versioning by deleting files and recreating files instead of overwriting them.
+Properties:
- Config: no_versions
- Env Var: RCLONE_JOTTACLOUD_NO_VERSIONS
@@ -16873,15 +18759,16 @@ y/e/d> y
- Default: false
--jottacloud-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_JOTTACLOUD_ENCODING
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ? in it, it will be mapped to ？ instead.
Jottacloud only supports filenames up to 255 characters in length.
@@ -16895,46 +18782,58 @@ y/e/d> y
Here is an example of how to make a remote called koofr
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
-name> koofr
+name> koofr
+Option Storage.
Type of storage to configure.
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
+Choose a number from below, or type in your own value.
[snip]
-XX / Koofr
- \ "koofr"
+22 / Koofr, Digi Storage and other Koofr-compatible storage providers
+ \ (koofr)
[snip]
Storage> koofr
-** See help for koofr backend at: https://rclone.org/koofr/ **
-
-Your Koofr user name
-Enter a string value. Press Enter for the default ("").
-user> USER@NAME
-Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
-y) Yes type in my own password
+Option provider.
+Choose your storage provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Koofr, https://app.koofr.net/
+ \ (koofr)
+ 2 / Digi Storage, https://storage.rcs-rds.ro/
+ \ (digistorage)
+ 3 / Any other Koofr API compatible storage service
+ \ (other)
+provider> 1
+Option user.
+Your user name.
+Enter a value.
+user> USERNAME
+Option password.
+Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).
+Choose an alternative below.
+y) Yes, type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
-Edit advanced config? (y/n)
+Edit advanced config?
y) Yes
-n) No
+n) No (default)
y/n> n
Remote config
--------------------
[koofr]
type = koofr
-baseurl = https://app.koofr.net
-user = USER@NAME
+provider = koofr
+user = USERNAME
password = *** ENCRYPTED ***
--------------------
-y) Yes this is OK
+y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
@@ -16945,7 +18844,7 @@ y/e/d> y
List all the files in your Koofr
rclone ls koofr:
To copy a local directory to a Koofr directory called backup
-rclone copy /home/source remote:backup
+rclone copy /home/source koofr:backup
Restricted filename characters
In addition to the default restricted characters set the following characters are also replaced:
@@ -16966,46 +18865,99 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
Standard options
-Here are the standard options specific to koofr (Koofr).
+Here are the standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
+--koofr-provider
+Choose your storage provider.
+Properties:
+
+- Config: provider
+- Env Var: RCLONE_KOOFR_PROVIDER
+- Type: string
+- Required: false
+- Examples:
+
+- "koofr"
+
+- Koofr, https://app.koofr.net/
+
+- "digistorage"
+
+- Digi Storage, https://storage.rcs-rds.ro/
+
+- "other"
+
+- Any other Koofr API compatible storage service
+
+
+
+--koofr-endpoint
+The Koofr API endpoint to use.
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_KOOFR_ENDPOINT
+- Provider: other
+- Type: string
+- Required: true
+
--koofr-user
-Your Koofr user name.
+Your user name.
+Properties:
- Config: user
- Env Var: RCLONE_KOOFR_USER
- Type: string
-- Default: ""
+- Required: true
--koofr-password
-Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).
+Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).
NB Input to this must be obscured - see rclone obscure.
+Properties:
- Config: password
- Env Var: RCLONE_KOOFR_PASSWORD
+- Provider: koofr
- Type: string
-- Default: ""
+- Required: true
+
+--koofr-password
+Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).
+NB Input to this must be obscured - see rclone obscure.
+Properties:
+
+- Config: password
+- Env Var: RCLONE_KOOFR_PASSWORD
+- Provider: digistorage
+- Type: string
+- Required: true
+
+--koofr-password
+Your password for rclone (generate one at your service's settings page).
+NB Input to this must be obscured - see rclone obscure.
+Properties:
+
+- Config: password
+- Env Var: RCLONE_KOOFR_PASSWORD
+- Provider: other
+- Type: string
+- Required: true
Advanced options
-Here are the advanced options specific to koofr (Koofr).
---koofr-endpoint
-The Koofr API endpoint to use.
-
-- Config: endpoint
-- Env Var: RCLONE_KOOFR_ENDPOINT
-- Type: string
-- Default: "https://app.koofr.net"
-
+Here are the advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
--koofr-mountid
Mount ID of the mount to use.
If omitted, the primary mount is used.
+Properties:
- Config: mountid
- Env Var: RCLONE_KOOFR_MOUNTID
- Type: string
-- Default: ""
+- Required: false
--koofr-setmtime
Does the backend support setting modification time.
Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.
+Properties:
- Config: setmtime
- Env Var: RCLONE_KOOFR_SETMTIME
@@ -17013,16 +18965,143 @@ y/e/d> y
- Default: true
--koofr-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_KOOFR_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
Note that Koofr is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
+Providers
+Koofr
+This is the original Koofr storage provider, used as the main example and described in the configuration section above.
+Digi Storage
+Digi Storage is a cloud storage service run by Digi.ro that provides a Koofr API.
+Here is an example of how to make a remote called ds
. First run:
+ rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> ds
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+22 / Koofr, Digi Storage and other Koofr-compatible storage providers
+ \ (koofr)
+[snip]
+Storage> koofr
+Option provider.
+Choose your storage provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Koofr, https://app.koofr.net/
+ \ (koofr)
+ 2 / Digi Storage, https://storage.rcs-rds.ro/
+ \ (digistorage)
+ 3 / Any other Koofr API compatible storage service
+ \ (other)
+provider> 2
+Option user.
+Your user name.
+Enter a value.
+user> USERNAME
+Option password.
+Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+--------------------
+[ds]
+type = koofr
+provider = digistorage
+user = USERNAME
+password = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Other
+You may also want to use another public or private storage provider that runs a Koofr API compatible service, by simply providing the base URL to connect to.
+Here is an example of how to make a remote called other
. First run:
+ rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> other
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+22 / Koofr, Digi Storage and other Koofr-compatible storage providers
+ \ (koofr)
+[snip]
+Storage> koofr
+Option provider.
+Choose your storage provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Koofr, https://app.koofr.net/
+ \ (koofr)
+ 2 / Digi Storage, https://storage.rcs-rds.ro/
+ \ (digistorage)
+ 3 / Any other Koofr API compatible storage service
+ \ (other)
+provider> 3
+Option endpoint.
+The Koofr API endpoint to use.
+Enter a value.
+endpoint> https://koofr.other.org
+Option user.
+Your user name.
+Enter a value.
+user> USERNAME
+Option password.
+Your password for rclone (generate one at your service's settings page).
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+--------------------
+[other]
+type = koofr
+provider = other
+endpoint = https://koofr.other.org
+user = USERNAME
+password = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
Mail.ru Cloud
Mail.ru Cloud is a cloud storage provided by the Russian internet company Mail.Ru Group. The official desktop client is Disk-O:, available on Windows and Mac OS.
Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclone until 2FA support is eventually implemented.
@@ -17041,7 +19120,7 @@ y/e/d> y
Here is an example of making a mailru configuration. First create a Mail.ru Cloud account and choose a tariff, then run
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -17169,24 +19248,27 @@ y/e/d> y
Here are the standard options specific to mailru (Mail.ru Cloud).
--mailru-user
User name (usually email).
+Properties:
- Config: user
- Env Var: RCLONE_MAILRU_USER
- Type: string
-- Default: ""
+- Required: true
--mailru-pass
Password.
NB Input to this must be obscured - see rclone obscure.
+Properties:
- Config: pass
- Env Var: RCLONE_MAILRU_PASS
- Type: string
-- Default: ""
+- Required: true
--mailru-speedup-enable
Skip full upload if there is another file with same data hash.
This feature is called "speedup" or "put by hash". It is especially efficient for generally available files like popular books, video or audio clips, because files are searched by hash across all accounts of all mailru users. It is meaningless and ineffective if the source file is unique or encrypted. Please note that rclone may need local memory and disk space to calculate the content hash in advance and decide whether a full upload is required. Also, if rclone does not know the file size in advance (e.g. in case of streaming or partial uploads), it will not even try this optimization.
+Properties:
- Config: speedup_enable
- Env Var: RCLONE_MAILRU_SPEEDUP_ENABLE
@@ -17209,6 +19291,7 @@ y/e/d> y
--mailru-speedup-file-patterns
Comma separated list of file name patterns eligible for speedup (put by hash).
Patterns are case insensitive and can contain '*' or '?' meta characters.
+Properties:
- Config: speedup_file_patterns
- Env Var: RCLONE_MAILRU_SPEEDUP_FILE_PATTERNS
@@ -17237,6 +19320,7 @@ y/e/d> y
--mailru-speedup-max-disk
This option allows you to disable speedup (put by hash) for large files.
Reason is that preliminary hashing can exhaust your RAM or disk space.
+Properties:
- Config: speedup_max_disk
- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_DISK
@@ -17260,6 +19344,7 @@ y/e/d> y
--mailru-speedup-max-memory
Files larger than the size given below will always be hashed on disk.
+Properties:
- Config: speedup_max_memory
- Env Var: RCLONE_MAILRU_SPEEDUP_MAX_MEMORY
@@ -17283,6 +19368,7 @@ y/e/d> y
--mailru-check-hash
What should copy do if file checksum is mismatched or invalid.
+Properties:
- Config: check_hash
- Env Var: RCLONE_MAILRU_CHECK_HASH
@@ -17303,31 +19389,34 @@ y/e/d> y
--mailru-user-agent
HTTP user agent used internally by client.
Defaults to "rclone/VERSION" or "--user-agent" provided on command line.
+Properties:
- Config: user_agent
- Env Var: RCLONE_MAILRU_USER_AGENT
- Type: string
-- Default: ""
+- Required: false
--mailru-quirks
Comma separated list of internal maintenance flags.
This option must not be used by an ordinary user. It is intended only to facilitate remote troubleshooting of backend issues. Strict meaning of flags is not documented and not guaranteed to persist between releases. Quirks will be removed when the backend grows stable. Supported quirks: atomicmkdir binlist unknowndirs
+Properties:
- Config: quirks
- Env Var: RCLONE_MAILRU_QUIRKS
- Type: string
-- Default: ""
+- Required: false
--mailru-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_MAILRU_ENCODING
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
File size limits depend on your account. A single file size is limited to 2G for a free account and unlimited for paid tariffs. Please refer to the Mail.ru site for the total uploaded size limits.
Note that Mailru is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Mega
@@ -17339,7 +19428,7 @@ y/e/d> y
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -17426,26 +19515,29 @@ y/e/d> y
Here are the standard options specific to mega (Mega).
--mega-user
User name.
+Properties:
- Config: user
- Env Var: RCLONE_MEGA_USER
- Type: string
-- Default: ""
+- Required: true
--mega-pass
Password.
NB Input to this must be obscured - see rclone obscure.
+Properties:
- Config: pass
- Env Var: RCLONE_MEGA_PASS
- Type: string
-- Default: ""
+- Required: true
Advanced options
Here are the advanced options specific to mega (Mega).
--mega-debug
Output more debug from Mega.
If this flag is set (along with -vv) it will print further debugging information from the mega backend.
+Properties:
- Config: debug
- Env Var: RCLONE_MEGA_DEBUG
@@ -17455,6 +19547,7 @@ y/e/d> y
--mega-hard-delete
Delete files permanently rather than putting them into the trash.
Normally the mega backend will put all deletions into the trash rather than permanently deleting them. If you specify this then rclone will permanently delete objects instead.
+Properties:
- Config: hard_delete
- Env Var: RCLONE_MEGA_HARD_DELETE
@@ -17462,23 +19555,24 @@ y/e/d> y
- Default: false
--mega-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_MEGA_ENCODING
- Type: MultiEncoder
- Default: Slash,InvalidUtf8,Dot
-Limitations
+Limitations
This backend uses the go-mega library, an open-source Go library implementing the Mega API. There doesn't appear to be any documentation for the mega protocol beyond the mega C++ SDK source code, so there are likely quite a few errors still remaining in this library.
Mega allows duplicate files which may confuse rclone.
Memory
The memory backend is an in RAM backend. It does not persist its data - use the local backend for that.
-The memory backend behaves like a bucket based remote (e.g. like s3). Because it has no parameters you can just use it with the :memory:
remote name.
+The memory backend behaves like a bucket-based remote (e.g. like s3). Because it has no parameters you can just use it with the :memory:
remote name.
Configuration
You can configure it as a remote like this with rclone config
too if you want to:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -17512,13 +19606,193 @@ rclone serve sftp :memory:
The memory backend supports MD5 hashes and modification times accurate to 1 ns.
Restricted filename characters
The memory backend replaces the default restricted characters set.
+Akamai NetStorage
+Paths are specified as remote:
You may put subdirectories in too, e.g. remote:/path/to/dir
. If you have a CP code you can use that as the folder after the domain such as <domain>/<cpcode>/<internal directories within cpcode>.
+For example, this is commonly configured with or without a CP code:
+
+- With a CP code. [your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/
+- Without a CP code. [your-domain-prefix]-nsu.akamaihd.net
+See all buckets:
+rclone lsd remote:
+The initial setup for Netstorage involves getting an account and secret. Use rclone config
to walk you through the setup process.
+Here's an example of how to make a remote called ns1
.
+
+- To begin the interactive configuration process, enter this command:
+
+rclone config
+
+- Type
n
to create a new remote.
+
+n) New remote
+d) Delete remote
+q) Quit config
+e/n/d/q> n
+
+- For this example, enter
ns1
when you reach the name> prompt.
+
+name> ns1
+
+- Enter
netstorage
as the type of storage to configure.
+
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+XX / NetStorage
+ \ "netstorage"
+Storage> netstorage
+
+- Select between the HTTP or HTTPS protocol. Most users should choose HTTPS, which is the default. HTTP is provided primarily for debugging purposes.
+
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ 1 / HTTP protocol
+ \ "http"
+ 2 / HTTPS protocol
+ \ "https"
+protocol> 1
+
+- Specify your NetStorage host, CP code, and any necessary content paths using this format:
<domain>/<cpcode>/<content>/
+
+Enter a string value. Press Enter for the default ("").
+host> baseball-nsu.akamaihd.net/123456/content/
+
+- Set the NetStorage account name.
+
+Enter a string value. Press Enter for the default ("").
+account> username
+
+- Set the Netstorage account secret/G2O key which will be used for authentication purposes. Select the
y
option to set your own password then enter your secret. Note: The secret is stored in the rclone.conf
file with hex-encoded encryption.
+
+y) Yes type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+
+- View the summary and confirm your remote configuration.
+
+[ns1]
+type = netstorage
+protocol = http
+host = baseball-nsu.akamaihd.net/123456/content/
+account = username
+secret = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+This remote is called ns1
and can now be used.
+Example operations
+Get started with rclone and NetStorage with these examples. For additional rclone commands, visit https://rclone.org/commands/.
+See contents of a directory in your project
+rclone lsd ns1:/974012/testing/
+Sync the local contents with the remote
+rclone sync . ns1:/974012/testing/
+Upload local content to remote
+rclone copy notes.txt ns1:/974012/testing/
+Delete content on remote
+rclone delete ns1:/974012/testing/notes.txt
+Move or copy content between CP codes.
+Your credentials must have access to two CP codes on the same remote. You can't perform operations between different remotes.
+rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/
+Symlink Support
+The Netstorage backend changes the rclone --links, -l
behavior. When uploading, instead of creating the .rclonelink file, rclone uses the "symlink" API to create the corresponding symlink on the remote. The .rclonelink file is not created; the upload is intercepted and only the symlink file that matches the source file name, with no suffix, is created on the remote.
+This effectively allows commands like copy/copyto, move/moveto and sync to upload from local to remote and download from remote to local directories with symlinks. Due to internal rclone limitations, it is not possible to upload an individual symlink file to any remote backend. You can always use the "backend symlink" command to create a symlink on the NetStorage server; refer to the "symlink" section below.
+Individual symlink files on the remote can be used with commands like "cat" to print the destination name, "delete" to delete a symlink, or copy/copyto and move/moveto to download from the remote to local. Note: individual symlink files on the remote should be specified including the suffix .rclonelink.
+Note: No file with the suffix .rclonelink should ever exist on the server, since it is not possible to upload/create a file with the .rclonelink suffix using rclone; such a file can only exist if it is created manually through a non-rclone method on the remote.
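+For illustration, a minimal sketch reusing the hypothetical ns1 remote and CP code from the examples above: upload a local tree with --links so the symlinks are created through the symlink API, then print and delete an individual symlink by its .rclonelink name.
+rclone copy -l /home/local/links ns1:/974012/testing/
+rclone cat ns1:/974012/testing/mylink.rclonelink
+rclone delete ns1:/974012/testing/mylink.rclonelink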
+Implicit vs. Explicit Directories
+With NetStorage, directories can exist in one of two forms:
+
+- Explicit Directory. This is an actual, physical directory that you have created in a storage group.
+- Implicit Directory. This refers to a directory within a path that has not been physically created. For example, during upload of a file, non-existent subdirectories can be specified in the target path. NetStorage creates these as "implicit." While the directories aren't physically created, they exist implicitly and the noted path is connected with the uploaded file.
+
+Rclone will intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly issue the mkdir command for each directory in the uploading path. This helps with interoperability with other Akamai services such as SFTP and the Content Management Shell (CMShell). Rclone will not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly.
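+As a sketch of the behaviour described above (the path is hypothetical), uploading a file into a path with non-existent subdirectories makes rclone issue explicit mkdir calls for /974012/a, /974012/a/b and /974012/a/b/c before the upload:
+rclone copy notes.txt ns1:/974012/a/b/c/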
+ListR Feature
+NetStorage remote supports the ListR feature by using the "list" NetStorage API action to return a lexicographical list of all objects within the specified CP code, recursing into subdirectories as they're encountered.
+
+Rclone will use the ListR method for some commands by default. Commands such as lsf -R
will use ListR by default. To disable this, include the --disable listR
option to use the non-recursive method of listing objects.
+Rclone will not use the ListR method for some commands. Commands such as sync
don't use ListR by default. To force using the ListR method, include the --fast-list
option.
+
+There are pros and cons to using the ListR method; refer to the rclone documentation. In general, the sync command over an existing deep tree on the remote will run faster with the "--fast-list" flag, but with extra memory usage as a side effect. It might also result in higher CPU utilization, but the whole task can be completed faster.
+Note: There is a known limitation that "lsf -R" will display the number of files in a directory and the directory size as -1 when the ListR method is used. The workaround is to pass the "--disable listR" flag if these numbers are important in the output.
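+For example (paths are hypothetical), to force the recursive ListR method for a sync, or to disable it for a recursive listing:
+rclone sync --fast-list /home/local/directory ns1:/974012/testing/
+rclone lsf -R --disable listR ns1:/974012/testing/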
+Purge Feature
+NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use the quick-delete action for the purge command and, if this functionality is disabled, will fall back to a standard delete method.
+Note: Read the NetStorage Usage API for considerations when using "quick-delete". In general, using the quick-delete method will not delete the tree immediately, and objects targeted for quick-delete may still be accessible.
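+For example, deleting an entire directory tree (hypothetical path) will use quick-delete where it is enabled and fall back to a standard delete otherwise:
+rclone purge ns1:/974012/testing/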
+Standard options
+Here are the standard options specific to netstorage (Akamai NetStorage).
+--netstorage-host
+Domain+path of NetStorage host to connect to.
+Format should be <domain>/<internal folders>
+Properties:
+
+- Config: host
+- Env Var: RCLONE_NETSTORAGE_HOST
+- Type: string
+- Required: true
+
+--netstorage-account
+Set the NetStorage account name
+Properties:
+
+- Config: account
+- Env Var: RCLONE_NETSTORAGE_ACCOUNT
+- Type: string
+- Required: true
+
+--netstorage-secret
+Set the NetStorage account secret/G2O key for authentication.
+Please choose the 'y' option to set your own password then enter your secret.
+NB Input to this must be obscured - see rclone obscure.
+Properties:
+
+- Config: secret
+- Env Var: RCLONE_NETSTORAGE_SECRET
+- Type: string
+- Required: true
+
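+As a sketch, the secret can be obscured on the command line and the printed value then used for the secret option or the RCLONE_NETSTORAGE_SECRET environment variable ('your-g2o-secret' is a placeholder):
+rclone obscure 'your-g2o-secret'
+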
+Advanced options
+Here are the advanced options specific to netstorage (Akamai NetStorage).
+--netstorage-protocol
+Select between HTTP or HTTPS protocol.
+Most users should choose HTTPS, which is the default. HTTP is provided primarily for debugging purposes.
+Properties:
+
+- Config: protocol
+- Env Var: RCLONE_NETSTORAGE_PROTOCOL
+- Type: string
+- Default: "https"
+- Examples:
+
+- "http"
+
+- HTTP protocol
+
+- "https"
+
+- HTTPS protocol
+
+
+
+Backend commands
+Here are the commands specific to the netstorage backend.
+Run them with
+rclone backend COMMAND remote:
+The help below will explain what arguments each command takes.
+See the "rclone backend" command for more info on how to pass options and arguments.
+These can be run on a running backend using the rc command backend/command.
+du
+Return disk usage information for a specified directory
+rclone backend du remote: [options] [<arguments>+]
+The usage information returned includes the targeted directory as well as all files stored in any sub-directories that may exist.
+symlink
+You can create a symbolic link in ObjectStore with the symlink action.
+rclone backend symlink remote: [options] [<arguments>+]
+The desired path location (including applicable sub-directories) ending in the object that will be the target of the symlink (for example, /links/mylink). Include the file extension for the object, if applicable. rclone backend symlink <src> <path>
Microsoft Azure Blob Storage
Paths are specified as remote:container
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:container/path/to/dir
.
Configuration
Here is an example of making a Microsoft Azure Blob Storage configuration. For a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -17560,6 +19834,8 @@ y/e/d> y
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Modified time
The modified time is stored as metadata on the object with the mtime
key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no overhead to using it.
+Performance
+When uploading large files, increasing the value of --azureblob-upload-concurrency
will increase performance at the cost of using more memory. The default of 16 is set quite conservatively to use less memory. It may be necessary to raise it to 64 or higher to fully utilize a 1 GBit/s link with a single file transfer.
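+For example, a single large file transfer over a fast link might be run as follows (the values and paths are illustrative):
+rclone copy --azureblob-upload-concurrency 64 /path/to/large.file azureblob:container/path/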
Restricted filename characters
In addition to the default restricted characters set the following characters are also replaced:
@@ -17619,16 +19895,17 @@ container/
Note that you can't see or access any other containers - this will fail
rclone ls azureblob:othercontainer
Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment such as a CI build server.
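A minimal sketch of a container level SAS configuration (the account, container and SAS token are placeholders):
[azureblob]
type = azureblob
sas_url = https://ACCOUNT.blob.core.windows.net/container?<SAS token>
With this in place only the container named in the SAS URL is accessible, e.g.
rclone ls azureblob:container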
-Standard options
+Standard options
Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).
--azureblob-account
Storage Account Name.
Leave blank to use SAS URL or Emulator.
+Properties:
- Config: account
- Env Var: RCLONE_AZUREBLOB_ACCOUNT
- Type: string
-- Default: ""
+- Required: false
--azureblob-service-principal-file
Path to file containing credentials for use with a service principal.
@@ -17638,34 +19915,38 @@ container/
--scopes "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>" \
> azure-principal.json
See "Create an Azure service principal" and "Assign an Azure role for access to blob data" pages for more details.
+Properties:
- Config: service_principal_file
- Env Var: RCLONE_AZUREBLOB_SERVICE_PRINCIPAL_FILE
- Type: string
-- Default: ""
+- Required: false
--azureblob-key
Storage Account Key.
Leave blank to use SAS URL or Emulator.
+Properties:
- Config: key
- Env Var: RCLONE_AZUREBLOB_KEY
- Type: string
-- Default: ""
+- Required: false
--azureblob-sas-url
SAS URL for container level access only.
Leave blank if using account/key or Emulator.
+Properties:
- Config: sas_url
- Env Var: RCLONE_AZUREBLOB_SAS_URL
- Type: string
-- Default: ""
+- Required: false
--azureblob-use-msi
Use a managed service identity to authenticate (only works in Azure).
When true, use a managed service identity to authenticate to Azure Storage instead of a SAS token or account key.
If the VM(SS) on which this program is running has a system-assigned identity, it will be used by default. If the resource has no system-assigned but exactly one user-assigned identity, the user-assigned identity will be used by default. If the resource has multiple user-assigned identities, the identity to use must be explicitly specified using exactly one of the msi_object_id, msi_client_id, or msi_mi_res_id parameters.
+Properties:
- Config: use_msi
- Env Var: RCLONE_AZUREBLOB_USE_MSI
@@ -17675,70 +19956,91 @@ container/
--azureblob-use-emulator
Uses local storage emulator if provided as 'true'.
Leave blank if using real azure storage endpoint.
+Properties:
- Config: use_emulator
- Env Var: RCLONE_AZUREBLOB_USE_EMULATOR
- Type: bool
- Default: false
-Advanced options
+Advanced options
Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).
--azureblob-msi-object-id
Object ID of the user-assigned MSI to use, if any.
Leave blank if msi_client_id or msi_mi_res_id specified.
+Properties:
- Config: msi_object_id
- Env Var: RCLONE_AZUREBLOB_MSI_OBJECT_ID
- Type: string
-- Default: ""
+- Required: false
--azureblob-msi-client-id
Object ID of the user-assigned MSI to use, if any.
Leave blank if msi_object_id or msi_mi_res_id specified.
+Properties:
- Config: msi_client_id
- Env Var: RCLONE_AZUREBLOB_MSI_CLIENT_ID
- Type: string
-- Default: ""
+- Required: false
--azureblob-msi-mi-res-id
Azure resource ID of the user-assigned MSI to use, if any.
Leave blank if msi_client_id or msi_object_id specified.
+Properties:
- Config: msi_mi_res_id
- Env Var: RCLONE_AZUREBLOB_MSI_MI_RES_ID
- Type: string
-- Default: ""
+- Required: false
--azureblob-endpoint
Endpoint for the service.
Leave blank normally.
+Properties:
- Config: endpoint
- Env Var: RCLONE_AZUREBLOB_ENDPOINT
- Type: string
-- Default: ""
+- Required: false
--azureblob-upload-cutoff
Cutoff for switching to chunked upload (<= 256 MiB) (deprecated).
+Properties:
- Config: upload_cutoff
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
- Type: string
-- Default: ""
+- Required: false
--azureblob-chunk-size
-Upload chunk size (<= 100 MiB).
-Note that this is stored in memory and there may be up to "--transfers" chunks stored at once in memory.
+Upload chunk size.
+Note that this is stored in memory and there may be up to "--transfers" * "--azureblob-upload-concurrency" chunks stored at once in memory.
+Properties:
- Config: chunk_size
- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
- Type: SizeSuffix
- Default: 4Mi
+--azureblob-upload-concurrency
+Concurrency for multipart uploads.
+This is the number of chunks of the same file that are uploaded concurrently.
+If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
+In tests, upload speed increases almost linearly with upload concurrency. For example to fill a gigabit pipe it may be necessary to raise this to 64. Note that this will use more memory.
+Note that chunks are stored in memory and there may be up to "--transfers" * "--azureblob-upload-concurrency" chunks stored at once in memory.
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_AZUREBLOB_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 16
+
--azureblob-list-chunk
Size of blob list.
This sets the number of blobs requested in each listing chunk. Default is the maximum, 5000. "List blobs" requests are permitted 2 minutes per megabyte to complete. If an operation is taking longer than 2 minutes per megabyte on average, it will time out (source). This can be used to limit the number of blob items to return, to avoid the time out.
+Properties:
- Config: list_chunk
- Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
@@ -17749,17 +20051,19 @@ container/
Access tier of blob: hot, cool or archive.
Archived blobs can be restored by setting access tier to hot or cool. Leave blank if you intend to use default access tier, which is set at account level
If there is no "access tier" specified, rclone doesn't apply any tier. rclone performs "Set Tier" operation on blobs while uploading, if objects are not modified, specifying "access tier" to new one will have no effect. If blobs are in "archive tier" at remote, trying to perform data transfer operations from remote will not be allowed. User should first restore by tiering blob to "Hot" or "Cool".
+Properties:
- Config: access_tier
- Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
- Type: string
-- Default: ""
+- Required: false
--azureblob-archive-tier-delete
Delete archive tier blobs before overwriting.
Archive tier blobs cannot be updated. So without this flag, if you attempt to update an archive tier blob, then rclone will produce the error:
can't update archive tier blob without --azureblob-archive-tier-delete
With this flag set, before rclone attempts to overwrite an archive tier blob it will delete the existing blob before uploading its replacement. This has the potential for data loss if the upload fails (unlike updating a normal blob) and also may cost more since deleting archive tier blobs early may be chargeable.
+Properties:
- Config: archive_tier_delete
- Env Var: RCLONE_AZUREBLOB_ARCHIVE_TIER_DELETE
@@ -17769,6 +20073,7 @@ container/
--azureblob-disable-checksum
Don't store MD5 checksum with object metadata.
Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.
+Properties:
- Config: disable_checksum
- Env Var: RCLONE_AZUREBLOB_DISABLE_CHECKSUM
@@ -17778,6 +20083,7 @@ container/
--azureblob-memory-pool-flush-time
How often internal memory buffer pools will be flushed.
Uploads which requires additional buffers (f.e multipart) will use memory pool for allocations. This option controls how often unused buffers will be removed from the pool.
+Properties:
- Config: memory_pool_flush_time
- Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_FLUSH_TIME
@@ -17786,6 +20092,7 @@ container/
--azureblob-memory-pool-use-mmap
Whether to use mmap buffers in internal memory pool.
+Properties:
- Config: memory_pool_use_mmap
- Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_USE_MMAP
@@ -17793,8 +20100,9 @@ container/
- Default: false
--azureblob-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_AZUREBLOB_ENCODING
@@ -17803,11 +20111,12 @@ container/
--azureblob-public-access
Public access level of a container: blob or container.
+Properties:
- Config: public_access
- Env Var: RCLONE_AZUREBLOB_PUBLIC_ACCESS
- Type: string
-- Default: ""
+- Required: false
- Examples:
- ""
@@ -17827,13 +20136,14 @@ container/
--azureblob-no-head-object
If set, do not do HEAD before GET when getting objects.
+Properties:
- Config: no_head_object
- Env Var: RCLONE_AZUREBLOB_NO_HEAD_OBJECT
- Type: bool
- Default: false
-Limitations
+Limitations
MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.
rclone about
is not supported by the Microsoft Azure Blob storage backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
@@ -17935,9 +20245,10 @@ y/e/d> y
- Enter a name for your app, choose account type
Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)
, select Web
in Redirect URI
, then type (do not copy and paste) http://localhost:53682/
and click Register. Copy and keep the Application (client) ID
under the app name for later use.
- Under
manage
select Certificates & secrets
, click New client secret
. Enter a description (can be anything) and set Expires
to 24 months. Copy and keep that secret Value for later use (you won't be able to see this value afterwards).
- Under
manage
select API permissions
, click Add a permission
and select Microsoft Graph
then select delegated permissions
.
-- Search and select the following permissions:
Files.Read
, Files.ReadWrite
, Files.Read.All
, Files.ReadWrite.All
, offline_access
, User.Read
. Once selected click Add permissions
at the bottom.
+- Search and select the following permissions:
Files.Read
, Files.ReadWrite
, Files.Read.All
, Files.ReadWrite.All
, offline_access
, User.Read
, and optionally Sites.Read.All
(see below). Once selected click Add permissions
at the bottom.
Now the application is complete. Run rclone config
to create or edit a OneDrive remote. Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps.
+The Sites.Read.All
permission is required if you need to search SharePoint sites when configuring the remote. However, if that permission is not assigned, you need to set disable_site_permission
option to true in the advanced options.
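+If your organization has not granted Sites.Read.All, a minimal sketch of the relevant part of the config entry (other fields omitted) is:
+[remote]
+type = onedrive
+disable_site_permission = true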
Modification time and hashes
OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
OneDrive personal supports SHA1 type hashes. OneDrive for business and Sharepoint Server support QuickXorHash.
@@ -18042,28 +20353,31 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Deleting files
Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.
-Standard options
+Standard options
Here are the standard options specific to onedrive (Microsoft OneDrive).
--onedrive-client-id
OAuth Client Id.
Leave blank normally.
+Properties:
- Config: client_id
- Env Var: RCLONE_ONEDRIVE_CLIENT_ID
- Type: string
-- Default: ""
+- Required: false
--onedrive-client-secret
OAuth Client Secret.
Leave blank normally.
+Properties:
- Config: client_secret
- Env Var: RCLONE_ONEDRIVE_CLIENT_SECRET
- Type: string
-- Default: ""
+- Required: false
--onedrive-region
Choose national cloud region for OneDrive.
+Properties:
- Config: region
- Env Var: RCLONE_ONEDRIVE_REGION
@@ -18089,37 +20403,41 @@ y/e/d> y
-Advanced options
+Advanced options
Here are the advanced options specific to onedrive (Microsoft OneDrive).
--onedrive-token
OAuth Access Token as a JSON blob.
+Properties:
- Config: token
- Env Var: RCLONE_ONEDRIVE_TOKEN
- Type: string
-- Default: ""
+- Required: false
--onedrive-auth-url
Auth server URL.
Leave blank to use the provider defaults.
+Properties:
- Config: auth_url
- Env Var: RCLONE_ONEDRIVE_AUTH_URL
- Type: string
-- Default: ""
+- Required: false
--onedrive-token-url
Token server url.
Leave blank to use the provider defaults.
+Properties:
- Config: token_url
- Env Var: RCLONE_ONEDRIVE_TOKEN_URL
- Type: string
-- Default: ""
+- Required: false
--onedrive-chunk-size
Chunk size to upload files with - must be multiple of 320k (327,680 bytes).
Above this size files will be chunked - must be multiple of 320k (327,680 bytes) and should not exceed 250M (262,144,000 bytes) else you may encounter "Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big." Note that the chunks will be buffered into memory.
+Properties:
- Config: chunk_size
- Env Var: RCLONE_ONEDRIVE_CHUNK_SIZE
@@ -18128,23 +20446,46 @@ y/e/d> y
--onedrive-drive-id
The ID of the drive to use.
+Properties:
- Config: drive_id
- Env Var: RCLONE_ONEDRIVE_DRIVE_ID
- Type: string
-- Default: ""
+- Required: false
--onedrive-drive-type
The type of the drive (personal | business | documentLibrary).
+Properties:
- Config: drive_type
- Env Var: RCLONE_ONEDRIVE_DRIVE_TYPE
- Type: string
-- Default: ""
+- Required: false
+
+--onedrive-root-folder-id
+ID of the root folder.
+This isn't normally needed, but in special circumstances you might know the folder ID that you wish to access but not be able to get there through a path traversal.
+Properties:
+
+- Config: root_folder_id
+- Env Var: RCLONE_ONEDRIVE_ROOT_FOLDER_ID
+- Type: string
+- Required: false
+
+--onedrive-disable-site-permission
+Disable the request for Sites.Read.All permission.
+If set to true, you will no longer be able to search for a SharePoint site when configuring drive ID, because rclone will not request Sites.Read.All permission. Set it to true if your organization didn't assign Sites.Read.All permission to the application and disallows users from consenting to app permission requests on their own.
+Properties:
+
+- Config: disable_site_permission
+- Env Var: RCLONE_ONEDRIVE_DISABLE_SITE_PERMISSION
+- Type: bool
+- Default: false
--onedrive-expose-onenote-files
Set to make OneNote files show up in directory listings.
-By default rclone will hide OneNote files in directory listings because operations like "Open" and "Update" won't work on them. But this behaviour may also prevent you from deleting them. If you want to delete OneNote files or otherwise want them to show up in directory listing, set this option.
+By default, rclone will hide OneNote files in directory listings because operations like "Open" and "Update" won't work on them. But this behaviour may also prevent you from deleting them. If you want to delete OneNote files or otherwise want them to show up in directory listing, set this option.
+Properties:
- Config: expose_onenote_files
- Env Var: RCLONE_ONEDRIVE_EXPOSE_ONENOTE_FILES
@@ -18154,6 +20495,7 @@ y/e/d> y
--onedrive-server-side-across-configs
Allow server-side operations (e.g. copy) to work across different onedrive configs.
This will only work if you are copying between two OneDrive Personal drives AND the files to copy are already shared between them. In other cases, rclone will fall back to normal copy (which will be slightly slower).
+Properties:
- Config: server_side_across_configs
- Env Var: RCLONE_ONEDRIVE_SERVER_SIDE_ACROSS_CONFIGS
@@ -18162,6 +20504,7 @@ y/e/d> y
--onedrive-list-chunk
Size of listing chunk.
+Properties:
- Config: list_chunk
- Env Var: RCLONE_ONEDRIVE_LIST_CHUNK
@@ -18174,6 +20517,7 @@ y/e/d> y
These versions take up space out of the quota.
This flag checks for versions after file upload and setting modification time and removes all but the last version.
NB Onedrive personal can't currently delete versions so don't use this flag there.
+Properties:
- Config: no_versions
- Env Var: RCLONE_ONEDRIVE_NO_VERSIONS
@@ -18182,6 +20526,7 @@ y/e/d> y
--onedrive-link-scope
Set the scope of the links created by the link command.
+Properties:
- Config: link_scope
- Env Var: RCLONE_ONEDRIVE_LINK_SCOPE
@@ -18204,6 +20549,7 @@ y/e/d> y
--onedrive-link-type
Set the type of the links created by the link command.
+Properties:
- Config: link_type
- Env Var: RCLONE_ONEDRIVE_LINK_TYPE
@@ -18228,22 +20574,24 @@ y/e/d> y
--onedrive-link-password
Set the password for links created by the link command.
At the time of writing this only works with OneDrive personal paid accounts.
+Properties:
- Config: link_password
- Env Var: RCLONE_ONEDRIVE_LINK_PASSWORD
- Type: string
-- Default: ""
+- Required: false
--onedrive-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_ONEDRIVE_ENCODING
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
-Limitations
+Limitations
If you don't use rclone for 90 days the refresh token will expire. This will result in authorization problems. This is easy to fix by running the rclone config reconnect remote:
command to get a new token and refresh token.
Naming
Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
@@ -18458,30 +20806,33 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the standard options specific to opendrive (OpenDrive).
--opendrive-username
Username.
+Properties:
- Config: username
- Env Var: RCLONE_OPENDRIVE_USERNAME
- Type: string
-- Default: ""
+- Required: true
--opendrive-password
Password.
NB Input to this must be obscured - see rclone obscure.
+Properties:
- Config: password
- Env Var: RCLONE_OPENDRIVE_PASSWORD
- Type: string
-- Default: ""
+- Required: true
-Advanced options
+Advanced options
Here are the advanced options specific to opendrive (OpenDrive).
--opendrive-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_OPENDRIVE_ENCODING
@@ -18491,13 +20842,14 @@ y/e/d> y
--opendrive-chunk-size
Files will be uploaded in chunks this size.
Note that these chunks are buffered in memory so increasing them will increase memory use.
+Properties:
- Config: chunk_size
- Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 10Mi
-Limitations
+Limitations
Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ?
in it, it will be mapped to ？
instead.
rclone about
is not supported by the OpenDrive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
@@ -18508,7 +20860,7 @@ y/e/d> y
Here is an example of making an QingStor configuration. First run
rclone config
This will guide you through an interactive setup process.
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
r) Rename remote
c) Copy remote
@@ -18599,11 +20951,12 @@ y/e/d> y
Restricted filename characters
The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the standard options specific to qingstor (QingCloud Object Storage).
--qingstor-env-auth
Get QingStor credentials from runtime.
Only applies if access_key_id and secret_access_key is blank.
+Properties:
- Config: env_auth
- Env Var: RCLONE_QINGSTOR_ENV_AUTH
@@ -18624,38 +20977,42 @@ y/e/d> y
--qingstor-access-key-id
QingStor Access Key ID.
Leave blank for anonymous access or runtime credentials.
+Properties:
- Config: access_key_id
- Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID
- Type: string
-- Default: ""
+- Required: false
--qingstor-secret-access-key
QingStor Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
+Properties:
- Config: secret_access_key
- Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY
- Type: string
-- Default: ""
+- Required: false
--qingstor-endpoint
Enter an endpoint URL to connection QingStor API.
Leave blank will use the default value "https://qingstor.com:443".
+Properties:
- Config: endpoint
- Env Var: RCLONE_QINGSTOR_ENDPOINT
- Type: string
-- Default: ""
+- Required: false
--qingstor-zone
Zone to connect to.
Default is "pek3a".
+Properties:
- Config: zone
- Env Var: RCLONE_QINGSTOR_ZONE
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "pek3a"
@@ -18675,10 +21032,11 @@ y/e/d> y
-Advanced options
+Advanced options
Here are the advanced options specific to qingstor (QingCloud Object Storage).
--qingstor-connection-retries
Number of connection retries.
+Properties:
- Config: connection_retries
- Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
@@ -18688,6 +21046,7 @@ y/e/d> y
--qingstor-upload-cutoff
Cutoff for switching to chunked upload.
Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB.
+Properties:
- Config: upload_cutoff
- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
@@ -18699,6 +21058,7 @@ y/e/d> y
When uploading files larger than upload_cutoff they will be uploaded as multipart uploads using this chunk size.
Note that "--qingstor-upload-concurrency" chunks of this size are buffered in memory per transfer.
If you are transferring large files over high-speed links and you have enough memory, then increasing this will speed up the transfers.
+Properties:
- Config: chunk_size
- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
@@ -18710,6 +21070,7 @@ y/e/d> y
This is the number of chunks of the same file that are uploaded concurrently.
NB if you set this to > 1 then the checksums of multipart uploads become corrupted (the uploads themselves are not corrupted though).
If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
+Properties:
- Config: upload_concurrency
- Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
@@ -18717,15 +21078,16 @@ y/e/d> y
- Default: 1
--qingstor-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_QINGSTOR_ENCODING
- Type: MultiEncoder
- Default: Slash,Ctl,InvalidUtf8
-Limitations
+Limitations
rclone about
is not supported by the qingstor backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
Sia
@@ -18739,7 +21101,7 @@ y/e/d> y
Here is an example of how to make a sia
remote called mySia
. First, run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -18795,11 +21157,12 @@ y/e/d> y
- Upload a local directory to the Sia directory called backup
rclone copy /home/source mySia:backup
-Standard options
+Standard options
Here are the standard options specific to sia (Sia Decentralized Cloud).
--sia-api-url
Sia daemon API URL, like http://sia.daemon.host:9980.
Note that siad must run with --disable-api-security to open API port for other hosts (not recommended). Keep default if Sia daemon runs on localhost.
+Properties:
- Config: api_url
- Env Var: RCLONE_SIA_API_URL
@@ -18810,17 +21173,19 @@ y/e/d> y
Sia Daemon API Password.
Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
NB Input to this must be obscured - see rclone obscure.
+Properties:
- Config: api_password
- Env Var: RCLONE_SIA_API_PASSWORD
- Type: string
-- Default: ""
+- Required: false
-Advanced options
+Advanced options
Here are the advanced options specific to sia (Sia Decentralized Cloud).
--sia-user-agent
Siad User Agent
Sia daemon requires the 'Sia-Agent' user agent by default for security
+Properties:
- Config: user_agent
- Env Var: RCLONE_SIA_USER_AGENT
@@ -18828,15 +21193,16 @@ y/e/d> y
- Default: "Sia-Agent"
--sia-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_SIA_ENCODING
- Type: MultiEncoder
- Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
- Modification times not supported
- Checksums not supported
@@ -18858,7 +21224,7 @@ y/e/d> y
Here is an example of making a swift configuration. First run
rclone config
This will guide you through an interactive setup process.
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -19017,10 +21383,11 @@ rclone lsd myremote:
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
Get swift credentials from environment variables in standard OpenStack form.
+Properties:
User name to log in (OS_USERNAME).
+Properties:
API key or password (OS_PASSWORD).
+Properties:
Authentication URL for server (OS_AUTH_URL).
+Properties:
User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+Properties:
User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+Properties:
Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME).
+Properties:
Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID).
+Properties:
Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME).
+Properties:
Region name - optional (OS_REGION_NAME).
+Properties:
Storage URL - optional (OS_STORAGE_URL).
+Properties:
Auth Token from alternate authentication - optional (OS_AUTH_TOKEN).
+Properties:
Application Credential ID (OS_APPLICATION_CREDENTIAL_ID).
+Properties:
Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME).
+Properties:
Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET).
+Properties:
AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION).
+Properties:
Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE).
+Properties:
The storage policy to use when creating a new container.
This applies the specified storage policy when creating a new container. The policy cannot be changed afterwards. The allowed configuration values and their meaning depend on your Swift storage provider.
+Properties:
Here are the advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
If true avoid calling abort upload on a failure.
It should be set to true for resuming uploads across different sessions.
+Properties:
Above this size files will be chunked into a _segments container.
Above this size files will be chunked into a _segments container. The default for this is 5 GiB which is its maximum value.
+Properties:
When doing streaming uploads (e.g. using rcat or mount) setting this flag will cause the swift backend to not upload chunked files.
This will limit the maximum upload size to 5 GiB. However non chunked files are easier to deal with and have an MD5SUM.
Rclone will still chunk files bigger than chunk_size when doing normal copy operations.
+Properties:
This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -19343,7 +21731,7 @@ y/e/d> y
rclone lsd remote:
List all the files in your pCloud
rclone ls remote:
-To copy a local directory to an pCloud directory called backup
+To copy a local directory to a pCloud directory called backup
rclone copy /home/source remote:backup
Modified time and hashes
pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a Modification time pCloud requires the object be re-uploaded.
@@ -19375,57 +21763,63 @@ y/e/d> y
However you can set this to restrict rclone to a specific folder hierarchy.
In order to do this you will have to find the Folder ID
of the directory you wish rclone to display. This will be the folder
field of the URL when you open the relevant folder in the pCloud web interface.
So if the folder you want rclone to use has a URL which looks like https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid
in the browser, then you use 5xxxxxxxx8
as the root_folder_id
in the config.
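A minimal sketch of the resulting config entry, reusing the placeholder folder ID from above (other fields omitted):
[remote]
type = pcloud
root_folder_id = 5xxxxxxxx8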
Here are the standard options specific to pcloud (Pcloud).
OAuth Client Id.
Leave blank normally.
+Properties:
OAuth Client Secret.
Leave blank normally.
+Properties:
Here are the advanced options specific to pcloud (Pcloud).
OAuth Access Token as a JSON blob.
+Properties:
Auth server URL.
Leave blank to use the provider defaults.
+Properties:
Token server url.
Leave blank to use the provider defaults.
+Properties:
This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
Fill in for rclone to use a non root folder as its starting point.
+Properties:
Hostname to connect to.
This is normally set when rclone initially does the oauth connection, however you will need to set it by hand if you are using remote config with rclone authorize.
+Properties:
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -19539,29 +21935,31 @@ y/e/d>
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the standard options specific to premiumizeme (premiumize.me).
--premiumizeme-api-key
API Key.
This is not normally used - use oauth instead.
+Properties:
- Config: api_key
- Env Var: RCLONE_PREMIUMIZEME_API_KEY
- Type: string
-- Default: ""
+- Required: false
-Advanced options
+Advanced options
Here are the advanced options specific to premiumizeme (premiumize.me).
--premiumizeme-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_PREMIUMIZEME_ENCODING
- Type: MultiEncoder
- Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
premiumize.me file names can't have the \
or "
characters in. rclone maps these to and from identical looking unicode equivalents \
and "
premiumize.me only supports filenames up to 255 characters in length.
@@ -19573,7 +21971,7 @@ y/e/d>
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -19650,11 +22048,12 @@ e/n/d/r/c/s/q> q
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Advanced options
+Advanced options
Here are the advanced options specific to putio (Put.io).
--putio-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_PUTIO_ENCODING
@@ -19669,7 +22068,7 @@ e/n/d/r/c/s/q> q
Here is an example of making a seafile configuration for a user with no two-factor authentication. First run
rclone config
This will guide you through an interactive setup process. To authenticate you will need the URL of your server, your email (or username) and your password.
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -19743,7 +22142,7 @@ y/e/d> y
rclone sync -i /home/local/directory seafile:library
Configuration in library mode
Here's an example of a configuration in library mode with a user that has the two-factor authentication enabled. Your 2FA code will be asked at the end of the configuration, and will attempt to authenticate you:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -19863,15 +22262,16 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
Compatibility
It has been actively tested using the seafile docker image of these versions:
- 6.3.4 community edition
- 7.0.5 community edition
- 7.1.3 community edition
Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.
-Standard options
+Standard options
Here are the standard options specific to seafile (seafile).
--seafile-url
URL of seafile host to connect to.
+Properties:
- Config: url
- Env Var: RCLONE_SEAFILE_URL
- Type: string
-- Default: ""
+- Required: true
- Examples:
- "https://cloud.seafile.com/"
@@ -19882,23 +22282,26 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
--seafile-user
User name (usually email address).
+Properties:
- Config: user
- Env Var: RCLONE_SEAFILE_USER
- Type: string
-- Default: ""
+- Required: true
--seafile-pass
Password.
NB Input to this must be obscured - see rclone obscure.
+Properties:
- Config: pass
- Env Var: RCLONE_SEAFILE_PASS
- Type: string
-- Default: ""
+- Required: false
--seafile-2fa
Two-factor authentication ('true' if the account has 2FA enabled).
+Properties:
- Config: 2fa
- Env Var: RCLONE_SEAFILE_2FA
@@ -19908,34 +22311,38 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
Name of the library.
Leave blank to access all non-encrypted libraries.
+Properties:
Library password (for encrypted libraries only).
Leave blank if you pass it through the command line.
NB Input to this must be obscured - see rclone obscure.
+Properties:
Authentication token.
+Properties:
Here are the advanced options specific to seafile (seafile).
Should rclone create a library if it doesn't exist.
+Properties:
This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
Here is an example of making an SFTP configuration. First run
rclone config
This will guide you through an interactive setup process.
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -19983,9 +22391,11 @@ Choose a number from below, or type in your own value
1 / Connect to example.com
\ "example.com"
host> example.com
-SSH username, leave blank for current username, $USER
+SSH username
+Enter a string value. Press Enter for the default ("$USER").
user> sftpuser
-SSH port, leave blank to use default (22)
+SSH port number
+Enter a signed integer. Press Enter for the default (22).
port>
SSH password, leave blank to use ssh-agent.
y) Yes type in my own password
@@ -20052,18 +22462,19 @@ pubkey_file = ~/id_rsa-cert.pub
Host key validation
By default rclone will not check the server's host key for validation. This can allow an attacker to replace a server with their own, and if you use password authentication this can lead to that password being exposed.
Host key matching, using standard known_hosts
files can be turned on by enabling the known_hosts_file
option. This can point to the file maintained by OpenSSH
or can point to a unique file.
-e.g.
+e.g. using the OpenSSH known_hosts
file:
[remote]
type = sftp
host = example.com
user = sftpuser
pass =
known_hosts_file = ~/.ssh/known_hosts
+Alternatively you can create your own known hosts file like this:
+ssh-keyscan -t dsa,rsa,ecdsa,ed25519 example.com >> known_hosts
There are some limitations:
rclone
will not manage this file for you. If the key is missing or wrong then the connection will be refused.
- If the server is set up for a certificate host key then the entry in the
known_hosts
file must be the @cert-authority
entry for the CA
-- Unlike
OpenSSH
, the libraries used by rclone
do not permit (at time of writing) multiple host keys to be listed for a server. Only the first entry is used.
If the host key provided by the server does not match the one in the file (or is missing) then the connection will be aborted and an error returned such as
NewFs: couldn't connect SSH: ssh: handshake failed: knownhosts: key mismatch
@@ -20083,84 +22494,93 @@ known_hosts_file = ~/.ssh/known_hosts
Modified times are stored on the server to 1 second precision.
Modified times are used in syncing and are fully supported.
Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false
in your RClone backend configuration to disable this behaviour.
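A minimal sketch of such a backend configuration (the host and user are placeholders):
[remote]
type = sftp
host = example.com
user = sftpuser
set_modtime = false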
Here are the standard options specific to sftp (SSH/SFTP Connection).
SSH host to connect to.
E.g. "example.com".
+Properties:
SSH username, leave blank for current username, $USER.
+SSH username.
+Properties:
SSH port, leave blank to use default (22).
+SSH port number.
+Properties:
SSH password, leave blank to use ssh-agent.
NB Input to this must be obscured - see rclone obscure.
+Properties:
Raw PEM-encoded private key.
If specified, will override key_file parameter.
+Properties:
Path to PEM-encoded private key file.
Leave blank or set key-use-agent to use ssh-agent.
Leading ~
will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}
.
Properties:
The passphrase to decrypt the PEM-encoded private key file.
Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys in the new OpenSSH format can't be used.
NB Input to this must be obscured - see rclone obscure.
+Properties:
Optional path to public key file.
Set this if you have a signed certificate you want to use for authentication.
Leading ~
will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}
.
Properties:
When set forces the usage of the ssh-agent.
When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is requested from the ssh-agent. This helps to avoid Too many authentication failures for *username*
errors when the ssh-agent contains many keys.
Properties:
Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.
+Properties:
Disable the execution of SSH commands to determine if remote file hashing is available.
Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
+Properties:
Here are the advanced options specific to sftp (SSH/SFTP Connection).
Optional path to known_hosts file.
Set this value to enable server host key validation.
Leading ~
will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}
.
Properties:
Allow asking for SFTP password when needed.
If this is set and no password is supplied then rclone will:
- ask for a password
- not contact the ssh agent
+Properties:
Override path used by SSH connection.
This allows checksum calculation when SFTP and SSH paths are different. This issue affects among others Synology NAS boxes.
Shared folders can be found in directories representing volumes
-rclone sync /home/local/directory remote:/directory --ssh-path-override /volume2/directory
+rclone sync /home/local/directory remote:/directory --sftp-path-override /volume2/directory
Home directory can be found in a shared folder called "home"
-rclone sync /home/local/directory remote:/home/directory --ssh-path-override /volume1/homes/USER/directory
+rclone sync /home/local/directory remote:/home/directory --sftp-path-override /volume1/homes/USER/directory
+Properties:
Set the modified time on the remote if set.
+Properties:
The command used to read md5 hashes.
Leave blank for autodetect.
+Properties:
The command used to read sha1 hashes.
Leave blank for autodetect.
+Properties:
Set to skip any symlinks and any other non regular files.
+Properties:
Specifies the SSH2 subsystem on the remote host.
+Properties:
Specifies the path or command to run a sftp server on the remote host.
The subsystem option is ignored when server_command is defined.
+Properties:
If set use fstat instead of stat.
Some servers limit the amount of open files and calling Stat after opening the file will throw an error from the server. Setting this flag will call Fstat instead of Stat which is called on an already open file handle.
It has been found that this helps with IBM Sterling SFTP servers which have "extractability" level set to 1 which means only 1 file can be opened at any given time.
+Properties:
Failed to copy: file does not exist
Then you may need to enable this flag.
If concurrent reads are disabled, the use_fstat option is ignored.
+Properties:
If set don't use concurrent writes.
Normally rclone uses concurrent writes to upload files. This improves the performance greatly, especially for distant servers.
This option disables concurrent writes should that be necessary.
+Properties:
Max time before closing idle connections.
If no connections have been returned to the connection pool in the time given, rclone will empty the connection pool.
Set to 0 to keep connections indefinitely.
+Properties:
SFTP supports checksums if the same login has shell access and md5sum
or sha1sum
as well as echo
are in the remote's PATH. This remote checksumming (file hashing) is recommended and enabled by default. Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited. Set the configuration option disable_hashcheck
to true
to disable checksumming.
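For example, hash checking can be turned off for a single command with the backend flag (the remote name and paths are placeholders):
rclone copy /home/local/directory remote:directory --sftp-disable-hashcheck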
SFTP also supports about
if the same login has shell access and df
are in the remote's PATH. about
will return the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote. about
will fail if it does not have shell access or if df
is not in the remote's PATH.
Note that on some SFTP servers (e.g. Synology) the paths are different for SSH and SFTP, so the hashes can't be calculated properly. For them using disable_hashcheck
is a good idea.
The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher
setting in the configuration file to true
. Further details on the insecurity of this cipher can be found in this paper.
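A minimal sketch of a config entry that re-enables this cipher for one remote (host and user are placeholders):
[remote]
type = sftp
host = example.com
user = sftpuser
use_insecure_cipher = true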
SFTP isn't supported under plan9 until this issue is fixed.
Note that since SFTP isn't HTTP based the following flags don't work with it: --dump-headers
, --dump-bodies
, --dump-auth
-Note that --timeout isn't supported (but --contimeout is).
+Note that --timeout and --contimeout are both supported.
C14 is supported through the SFTP backend.
rsync.net is supported through the SFTP backend.
See rsync.net's documentation of rclone examples.
+Storj is an encrypted, secure, and cost-effective object storage service that enables you to store, back up, and archive large amounts of data in a decentralized manner.
+Storj can be used both with this native backend and with the s3 backend using the Storj S3 compatible gateway (shared or private).
+Use this backend to take advantage of client-side encryption as well as to achieve the best possible download performance. Uploads will be erasure-coded locally, thus a 1 GB upload will result in 2.68 GB of data being uploaded to storage nodes across the network.
+Use the s3 backend and one of the S3 compatible Hosted Gateways to increase upload performance and reduce the load on your systems and network. Uploads will be encrypted and erasure-coded server-side, thus a 1 GB upload will result in only 1 GB of data being uploaded to storage nodes across the network.
+Side by side comparison with more details:
+- rclone checksum is not possible without download, as checksum metadata is not calculated during upload
+To make a new Storj configuration you need one of the following:
+- Access Grant that someone else shared with you.
+- API Key of a Storj project you are a member of.
+Here is an example of how to make a remote called remote
. First run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Storj Decentralized Cloud Storage
+ \ "storj"
+[snip]
+Storage> storj
+** See help for storj backend at: https://rclone.org/storj/ **
+
+Choose an authentication method.
+Enter a string value. Press Enter for the default ("existing").
+Choose a number from below, or type in your own value
+ 1 / Use an existing access grant.
+ \ "existing"
+ 2 / Create a new access grant from satellite address, API key, and passphrase.
+ \ "new"
+provider> existing
+Access Grant.
+Enter a string value. Press Enter for the default ("").
+access_grant> your-access-grant-received-by-someone-else
+Remote config
+--------------------
+[remote]
+type = storj
+access_grant = your-access-grant-received-by-someone-else
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Storj Decentralized Cloud Storage
+ \ "storj"
+[snip]
+Storage> storj
+** See help for storj backend at: https://rclone.org/storj/ **
+
+Choose an authentication method.
+Enter a string value. Press Enter for the default ("existing").
+Choose a number from below, or type in your own value
+ 1 / Use an existing access grant.
+ \ "existing"
+ 2 / Create a new access grant from satellite address, API key, and passphrase.
+ \ "new"
+provider> new
+Satellite Address. Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
+Enter a string value. Press Enter for the default ("us-central-1.storj.io").
+Choose a number from below, or type in your own value
+ 1 / US Central 1
+ \ "us-central-1.storj.io"
+ 2 / Europe West 1
+ \ "europe-west-1.storj.io"
+ 3 / Asia East 1
+ \ "asia-east-1.storj.io"
+satellite_address> 1
+API Key.
+Enter a string value. Press Enter for the default ("").
+api_key> your-api-key-for-your-storj-project
+Encryption Passphrase. To access existing objects enter passphrase used for uploading.
+Enter a string value. Press Enter for the default ("").
+passphrase> your-human-readable-encryption-passphrase
+Remote config
+--------------------
+[remote]
+type = storj
+satellite_address = 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777
+api_key = your-api-key-for-your-storj-project
+passphrase = your-human-readable-encryption-passphrase
+access_grant = the-access-grant-generated-from-the-api-key-and-passphrase
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Here are the standard options specific to storj (Storj Decentralized Cloud Storage).
+Choose an authentication method.
+Properties:
+Access grant.
+Properties:
+Satellite address.
+Custom satellite address should match the format: <nodeid>@<address>:<port>
.
Properties:
+API key.
+Properties:
+Encryption passphrase.
+To access existing objects enter passphrase used for uploading.
+Properties:
+Paths are specified as remote:bucket
(or remote:
for the lsf
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
Once configured you can then use rclone
like this.
Use the mkdir
command to create a new bucket, e.g. bucket
.
rclone mkdir remote:bucket
+Use the lsf
command to list all buckets.
rclone lsf remote:
+Note the colon (:
) character at the end of the command line.
Use the rmdir
command to delete an empty bucket.
rclone rmdir remote:bucket
+Use the purge
command to delete a non-empty bucket with all its content.
rclone purge remote:bucket
+Use the copy
command to upload an object.
rclone copy --progress /home/local/directory/file.ext remote:bucket/path/to/dir/
+The --progress
flag is for displaying progress information. Remove it if you don't need this information.
Use a folder in the local path to upload all its objects.
+rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/
+Only modified files will be copied.
+Use the ls
command to list recursively all objects in a bucket.
rclone ls remote:bucket
+Add the folder to the remote path to list recursively all objects in this folder.
+rclone ls remote:bucket/path/to/dir/
+Use the lsf
command to list non-recursively all objects in a bucket or a folder.
rclone lsf remote:bucket/path/to/dir/
+Use the copy
command to download an object.
rclone copy --progress remote:bucket/path/to/dir/file.ext /home/local/directory/
+The --progress
flag is for displaying progress information. Remove it if you don't need this information.
Use a folder in the remote path to download all its objects.
+rclone copy --progress remote:bucket/path/to/dir/ /home/local/directory/
+Use the deletefile
command to delete a single object.
rclone deletefile remote:bucket/path/to/dir/file.ext
+Use the delete
command to delete all objects in a folder.
rclone delete remote:bucket/path/to/dir/
+Use the size
command to print the total size of objects in a bucket or a folder.
rclone size remote:bucket/path/to/dir/
+Use the sync
command to sync the source to the destination, changing the destination only, deleting any excess files.
rclone sync -i --progress /home/local/directory/ remote:bucket/path/to/dir/
+The --progress
flag is for displaying progress information. Remove it if you don't need this information.
Since this can cause data loss, test first with the --dry-run
flag to see exactly what would be copied and deleted.
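For example, a typical workflow is to preview the sync first and only run it for real once the output looks right (the paths are the same illustrative ones used above):
# Preview: show what would be copied and deleted, without changing anything
rclone sync --dry-run /home/local/directory/ remote:bucket/path/to/dir/
# Then run the actual sync
rclone sync -i --progress /home/local/directory/ remote:bucket/path/to/dir/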
The sync can also be done from Storj to the local file system.
+rclone sync -i --progress remote:bucket/path/to/dir/ /home/local/directory/
+Or between two Storj buckets.
+rclone sync -i --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
+Or even between another cloud storage and Storj.
+rclone sync -i --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
+rclone about
is not supported by the rclone Storj backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about. See rclone about.
+If you get errors like too many open files
this usually happens when the default ulimit
for system max open files is exceeded. Native Storj protocol opens a large number of TCP connections (each of which is counted as an open file). For a single upload stream you can expect 110 TCP connections to be opened. For a single download stream you can expect 35. This batch of connections will be opened for every 64 MiB segment and you should also expect TCP connections to be reused. If you do many transfers you eventually open a connection to most storage nodes (thousands of nodes).
To fix these, please raise your system limits. You can do this by issuing a ulimit -n 65536
just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. $HOME/.bashrc
, or change the system-wide configuration, usually /etc/sysctl.conf
and/or /etc/security/limits.conf
, but please refer to your operating system manual.
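For example, on a bash-compatible shell you might raise the limit just for the session in which rclone runs (the limit value is the one suggested above; paths and remote name are illustrative):
# Raise the per-process open file limit for this shell session only
ulimit -n 65536
rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/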
SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.
-The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -20430,340 +23160,124 @@ y/e/d> y
Deleting files
Deleted files will be moved to the "Deleted items" folder by default.
However you can supply the flag --sugarsync-hard-delete
or set the config parameter hard_delete = true
if you would like files to be deleted straight away.
-Standard options
+Standard options
Here are the standard options specific to sugarsync (Sugarsync).
--sugarsync-app-id
Sugarsync App ID.
Leave blank to use rclone's.
+Properties:
- Config: app_id
- Env Var: RCLONE_SUGARSYNC_APP_ID
- Type: string
-- Default: ""
+- Required: false
--sugarsync-access-key-id
Sugarsync Access Key ID.
Leave blank to use rclone's.
+Properties:
- Config: access_key_id
- Env Var: RCLONE_SUGARSYNC_ACCESS_KEY_ID
- Type: string
-- Default: ""
+- Required: false
--sugarsync-private-access-key
Sugarsync Private Access Key.
Leave blank to use rclone's.
+Properties:
- Config: private_access_key
- Env Var: RCLONE_SUGARSYNC_PRIVATE_ACCESS_KEY
- Type: string
-- Default: ""
+- Required: false
--sugarsync-hard-delete
Permanently delete files if true otherwise put them in the deleted files.
+Properties:
- Config: hard_delete
- Env Var: RCLONE_SUGARSYNC_HARD_DELETE
- Type: bool
- Default: false
-Advanced options
+Advanced options
Here are the advanced options specific to sugarsync (Sugarsync).
--sugarsync-refresh-token
Sugarsync refresh token.
Leave blank normally, will be auto configured by rclone.
+Properties:
- Config: refresh_token
- Env Var: RCLONE_SUGARSYNC_REFRESH_TOKEN
- Type: string
-- Default: ""
+- Required: false
--sugarsync-authorization
Sugarsync authorization.
Leave blank normally, will be auto configured by rclone.
+Properties:
- Config: authorization
- Env Var: RCLONE_SUGARSYNC_AUTHORIZATION
- Type: string
-- Default: ""
+- Required: false
--sugarsync-authorization-expiry
Sugarsync authorization expiry.
Leave blank normally, will be auto configured by rclone.
+Properties:
- Config: authorization_expiry
- Env Var: RCLONE_SUGARSYNC_AUTHORIZATION_EXPIRY
- Type: string
-- Default: ""
+- Required: false
--sugarsync-user
Sugarsync user.
Leave blank normally, will be auto configured by rclone.
+Properties:
- Config: user
- Env Var: RCLONE_SUGARSYNC_USER
- Type: string
-- Default: ""
+- Required: false
--sugarsync-root-id
Sugarsync root id.
Leave blank normally, will be auto configured by rclone.
+Properties:
- Config: root_id
- Env Var: RCLONE_SUGARSYNC_ROOT_ID
- Type: string
-- Default: ""
+- Required: false
--sugarsync-deleted-id
Sugarsync deleted folder id.
Leave blank normally, will be auto configured by rclone.
+Properties:
- Config: deleted_id
- Env Var: RCLONE_SUGARSYNC_DELETED_ID
- Type: string
-- Default: ""
+- Required: false
--sugarsync-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_SUGARSYNC_ENCODING
- Type: MultiEncoder
- Default: Slash,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
rclone about
is not supported by the SugarSync backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about. See rclone about.
Tardigrade
-Tardigrade is an encrypted, secure, and cost-effective object storage service that enables you to store, back up, and archive large amounts of data in a decentralized manner.
-Configuration
-To make a new Tardigrade configuration you need one of the following: * Access Grant that someone else shared with you. * API Key of a Tardigrade project you are a member of.
-Here is an example of how to make a remote called remote
. First run:
- rclone config
-This will guide you through an interactive setup process:
-Setup with access grant
-No remotes found - make a new one
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> remote
-Type of storage to configure.
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
-[snip]
-XX / Tardigrade Decentralized Cloud Storage
- \ "tardigrade"
-[snip]
-Storage> tardigrade
-** See help for tardigrade backend at: https://rclone.org/tardigrade/ **
-
-Choose an authentication method.
-Enter a string value. Press Enter for the default ("existing").
-Choose a number from below, or type in your own value
- 1 / Use an existing access grant.
- \ "existing"
- 2 / Create a new access grant from satellite address, API key, and passphrase.
- \ "new"
-provider> existing
-Access Grant.
-Enter a string value. Press Enter for the default ("").
-access_grant> your-access-grant-received-by-someone-else
-Remote config
---------------------
-[remote]
-type = tardigrade
-access_grant = your-access-grant-received-by-someone-else
---------------------
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-Setup with API key and passphrase
-No remotes found - make a new one
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> remote
-Type of storage to configure.
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
-[snip]
-XX / Tardigrade Decentralized Cloud Storage
- \ "tardigrade"
-[snip]
-Storage> tardigrade
-** See help for tardigrade backend at: https://rclone.org/tardigrade/ **
-
-Choose an authentication method.
-Enter a string value. Press Enter for the default ("existing").
-Choose a number from below, or type in your own value
- 1 / Use an existing access grant.
- \ "existing"
- 2 / Create a new access grant from satellite address, API key, and passphrase.
- \ "new"
-provider> new
-Satellite Address. Custom satellite address should match the format: `<nodeid>@<address>:<port>`.
-Enter a string value. Press Enter for the default ("us-central-1.tardigrade.io").
-Choose a number from below, or type in your own value
- 1 / US Central 1
- \ "us-central-1.tardigrade.io"
- 2 / Europe West 1
- \ "europe-west-1.tardigrade.io"
- 3 / Asia East 1
- \ "asia-east-1.tardigrade.io"
-satellite_address> 1
-API Key.
-Enter a string value. Press Enter for the default ("").
-api_key> your-api-key-for-your-tardigrade-project
-Encryption Passphrase. To access existing objects enter passphrase used for uploading.
-Enter a string value. Press Enter for the default ("").
-passphrase> your-human-readable-encryption-passphrase
-Remote config
---------------------
-[remote]
-type = tardigrade
-satellite_address = 12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777
-api_key = your-api-key-for-your-tardigrade-project
-passphrase = your-human-readable-encryption-passphrase
-access_grant = the-access-grant-generated-from-the-api-key-and-passphrase
---------------------
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-Standard options
-Here are the standard options specific to tardigrade (Tardigrade Decentralized Cloud Storage).
---tardigrade-provider
-Choose an authentication method.
-
-- Config: provider
-- Env Var: RCLONE_TARDIGRADE_PROVIDER
-- Type: string
-- Default: "existing"
-- Examples:
-
-- "existing"
-
-- Use an existing access grant.
-
-- "new"
-
-- Create a new access grant from satellite address, API key, and passphrase.
-
-
-
---tardigrade-access-grant
-Access grant.
-
-- Config: access_grant
-- Env Var: RCLONE_TARDIGRADE_ACCESS_GRANT
-- Type: string
-- Default: ""
-
---tardigrade-satellite-address
-Satellite address.
-Custom satellite address should match the format: <nodeid>@<address>:<port>
.
-
-- Config: satellite_address
-- Env Var: RCLONE_TARDIGRADE_SATELLITE_ADDRESS
-- Type: string
-- Default: "us-central-1.tardigrade.io"
-- Examples:
-
-- "us-central-1.tardigrade.io"
-
-- US Central 1
-
-- "europe-west-1.tardigrade.io"
-
-- Europe West 1
-
-- "asia-east-1.tardigrade.io"
-
-- Asia East 1
-
-
-
---tardigrade-api-key
-API key.
-
-- Config: api_key
-- Env Var: RCLONE_TARDIGRADE_API_KEY
-- Type: string
-- Default: ""
-
---tardigrade-passphrase
-Encryption passphrase.
-To access existing objects enter passphrase used for uploading.
-
-- Config: passphrase
-- Env Var: RCLONE_TARDIGRADE_PASSPHRASE
-- Type: string
-- Default: ""
-
-Usage
-Paths are specified as remote:bucket
(or remote:
for the lsf
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
-Once configured you can then use rclone
like this.
-Create a new bucket
-Use the mkdir
command to create new bucket, e.g. bucket
.
-rclone mkdir remote:bucket
-List all buckets
-Use the lsf
command to list all buckets.
-rclone lsf remote:
-Note the colon (:
) character at the end of the command line.
-Delete a bucket
-Use the rmdir
command to delete an empty bucket.
-rclone rmdir remote:bucket
-Use the purge
command to delete a non-empty bucket with all its content.
-rclone purge remote:bucket
-Upload objects
-Use the copy
command to upload an object.
-rclone copy --progress /home/local/directory/file.ext remote:bucket/path/to/dir/
-The --progress
flag is for displaying progress information. Remove it if you don't need this information.
-Use a folder in the local path to upload all its objects.
-rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/
-Only modified files will be copied.
-List objects
-Use the ls
command to list recursively all objects in a bucket.
-rclone ls remote:bucket
-Add the folder to the remote path to list recursively all objects in this folder.
-rclone ls remote:bucket/path/to/dir/
-Use the lsf
command to list non-recursively all objects in a bucket or a folder.
-rclone lsf remote:bucket/path/to/dir/
-Download objects
-Use the copy
command to download an object.
-rclone copy --progress remote:bucket/path/to/dir/file.ext /home/local/directory/
-The --progress
flag is for displaying progress information. Remove it if you don't need this information.
-Use a folder in the remote path to download all its objects.
-rclone copy --progress remote:bucket/path/to/dir/ /home/local/directory/
-Delete objects
-Use the deletefile
command to delete a single object.
-rclone deletefile remote:bucket/path/to/dir/file.ext
-Use the delete
command to delete all object in a folder.
-rclone delete remote:bucket/path/to/dir/
-Print the total size of objects
-Use the size
command to print the total size of objects in a bucket or a folder.
-rclone size remote:bucket/path/to/dir/
-Sync two Locations
-Use the sync
command to sync the source to the destination, changing the destination only, deleting any excess files.
-rclone sync -i --progress /home/local/directory/ remote:bucket/path/to/dir/
-The --progress
flag is for displaying progress information. Remove it if you don't need this information.
-Since this can cause data loss, test first with the --dry-run
flag to see exactly what would be copied and deleted.
-The sync can be done also from Tardigrade to the local file system.
-rclone sync -i --progress remote:bucket/path/to/dir/ /home/local/directory/
-Or between two Tardigrade buckets.
-rclone sync -i --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
-Or even between another cloud storage and Tardigrade.
-rclone sync -i --progress s3:bucket/path/to/dir/ tardigrade:bucket/path/to/dir/
-Limitations
-rclone about
is not supported by the rclone Tardigrade backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
-See List of backends that do not support rclone about See rclone about
-Known issues
-If you get errors like too many open files
this usually happens when the default ulimit
for system max open files is exceeded. Native Storj protocol opens a large number of TCP connections (each of which is counted as an open file). For a single upload stream you can expect 110 TCP connections to be opened. For a single download stream you can expect 35. This batch of connections will be opened for every 64 MiB segment and you should also expect TCP connections to be reused. If you do many transfers you eventually open a connection to most storage nodes (thousands of nodes).
-To fix these, please raise your system limits. You can do this issuing a ulimit -n 65536
just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. $HOME/.bashrc
, or change the system-wide configuration, usually /etc/sysctl.conf
and/or /etc/security/limits.conf
, but please refer to your operating system manual.
+The Tardigrade backend has been renamed to the Storj backend. Old configuration files will continue to work.
Uptobox
This is a backend for the Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional cloud storage provider and therefore not suitable for long-term storage.
Paths are specified as remote:path
@@ -20848,29 +23362,31 @@ y/e/d>
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
-Here are the standard options specific to uptobox (Uptobox).
Your access token.
Get it from https://uptobox.com/my_account.
+Properties:
Here are the advanced options specific to uptobox (Uptobox).
This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
Uptobox will delete inactive files that have not been accessed in 60 days.
rclone about
is not supported by this backend. An overview of used space can, however, be seen in the Uptobox web interface.
Here is an example of how to make a union called remote
for local folders. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -21101,19 +23617,21 @@ e/n/d/r/c/s/q> q
-Standard options
+Standard options
Here are the standard options specific to union (Union merges the contents of several upstream fs).
--union-upstreams
List of space separated upstreams.
Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc.
+Properties:
- Config: upstreams
- Env Var: RCLONE_UNION_UPSTREAMS
- Type: string
-- Default: ""
+- Required: true
--union-action-policy
Policy to choose upstream on ACTION category.
+Properties:
- Config: action_policy
- Env Var: RCLONE_UNION_ACTION_POLICY
@@ -21122,6 +23640,7 @@ e/n/d/r/c/s/q> q
Policy to choose upstream on CREATE category.
+Properties:
Policy to choose upstream on SEARCH category.
+Properties:
Cache time of usage and free space (in seconds).
This option is only useful when a path preserving policy is used.
+Properties:
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -21221,24 +23742,26 @@ y/e/d> y
Modified time and hashes
Plain WebDAV does not support modified times. However when used with Owncloud or Nextcloud rclone will support modified times.
Likewise plain WebDAV does not support hashes, however when used with Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.
-Standard options
+Standard options
Here are the standard options specific to webdav (Webdav).
--webdav-url
URL of http host to connect to.
E.g. https://example.com.
+Properties:
- Config: url
- Env Var: RCLONE_WEBDAV_URL
- Type: string
-- Default: ""
+- Required: true
--webdav-vendor
Name of the Webdav site/service/software you are using.
+Properties:
- Config: vendor
- Env Var: RCLONE_WEBDAV_VENDOR
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "nextcloud"
@@ -21266,55 +23789,61 @@ y/e/d> y
User name.
In case NTLM authentication is used, the username should be in the format 'Domain\User'.
+Properties:
Password.
NB Input to this must be obscured - see rclone obscure.
+Properties:
Bearer token instead of user/pass (e.g. a Macaroon).
+Properties:
Here are the advanced options specific to webdav (Webdav).
Command to run to get a bearer token.
+Properties:
This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
Default encoding is Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8 for sharepoint-ntlm or identity otherwise.
+Properties:
Set HTTP headers for all transactions.
Use this to set additional HTTP headers for all transactions.
The input format is a comma-separated list of key,value pairs. Standard CSV encoding may be used.
-For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
+For example, to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.
+Properties:
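As a sketch, assuming the option is supplied on the command line via the --webdav-headers flag (the remote name and cookie value are illustrative):
# Send a custom Cookie header with every WebDAV request
rclone lsd --webdav-headers "Cookie,name=value" remote: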
Here is an example of making a yandex configuration. First run
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
@@ -21457,64 +23986,79 @@ y/e/d> y
Restricted filename characters
The default restricted characters set are replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the standard options specific to yandex (Yandex Disk).
--yandex-client-id
OAuth Client Id.
Leave blank normally.
+Properties:
- Config: client_id
- Env Var: RCLONE_YANDEX_CLIENT_ID
- Type: string
-- Default: ""
+- Required: false
--yandex-client-secret
OAuth Client Secret.
Leave blank normally.
+Properties:
- Config: client_secret
- Env Var: RCLONE_YANDEX_CLIENT_SECRET
- Type: string
-- Default: ""
+- Required: false
-Advanced options
+Advanced options
Here are the advanced options specific to yandex (Yandex Disk).
--yandex-token
OAuth Access Token as a JSON blob.
+Properties:
- Config: token
- Env Var: RCLONE_YANDEX_TOKEN
- Type: string
-- Default: ""
+- Required: false
--yandex-auth-url
Auth server URL.
Leave blank to use the provider defaults.
+Properties:
- Config: auth_url
- Env Var: RCLONE_YANDEX_AUTH_URL
- Type: string
-- Default: ""
+- Required: false
--yandex-token-url
Token server url.
Leave blank to use the provider defaults.
+Properties:
- Config: token_url
- Env Var: RCLONE_YANDEX_TOKEN_URL
- Type: string
-- Default: ""
+- Required: false
+
+--yandex-hard-delete
+Delete files permanently rather than putting them into the trash.
+Properties:
+
+- Config: hard_delete
+- Env Var: RCLONE_YANDEX_HARD_DELETE
+- Type: bool
+- Default: false
--yandex-encoding
-This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
- Config: encoding
- Env Var: RCLONE_YANDEX_ENCODING
- Type: MultiEncoder
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout
parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers
errors in the logs if this is happening. Setting the timeout to twice the max size of file in GiB should be enough, so if you want to upload a 30 GiB file set a timeout of 2 * 30 = 60m
, that is --timeout 60m
.
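As a sketch of that rule of thumb (the file name and remote path are illustrative):
# Uploading a ~30 GiB file: allow 2 * 30 = 60 minutes before timing out
rclone copy --progress --timeout 60m /home/local/big-file.bin remote:backup/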
Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription. Token generation will work without a mail account, but Rclone won't be able to complete any actions.
[403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.
@@ -21524,7 +24068,7 @@ y/e/d> y
Here is an example of making a zoho configuration. First run
rclone config
This will guide you through an interactive setup process:
-No remotes found - make a new one
+No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
@@ -21600,34 +24144,37 @@ y/e/d>
To view your current quota you can use the rclone about remote:
command which will display your current usage.
Restricted filename characters
Only control characters and invalid UTF-8 are replaced. In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload.
-Standard options
+Standard options
Here are the standard options specific to zoho (Zoho).
--zoho-client-id
OAuth Client Id.
Leave blank normally.
+Properties:
- Config: client_id
- Env Var: RCLONE_ZOHO_CLIENT_ID
- Type: string
-- Default: ""
+- Required: false
--zoho-client-secret
OAuth Client Secret.
Leave blank normally.
+Properties:
- Config: client_secret
- Env Var: RCLONE_ZOHO_CLIENT_SECRET
- Type: string
-- Default: ""
+- Required: false
--zoho-region
Zoho region to connect to.
You'll have to use the region your organization is registered in. If not sure use the same top level domain as you connect to in your browser.
+Properties:
- Config: region
- Env Var: RCLONE_ZOHO_REGION
- Type: string
-- Default: ""
+- Required: false
- Examples:
- "com"
@@ -21648,37 +24195,41 @@ y/e/d>
Here are the advanced options specific to zoho (Zoho).
OAuth Access Token as a JSON blob.
+Properties:
Auth server URL.
Leave blank to use the provider defaults.
+Properties:
Token server url.
Leave blank to use the provider defaults.
+Properties:
This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
NB Rclone (like most unix tools such as du
, rsync
and tar
) treats a bind mount to the same device as being on the same filesystem.
NB This flag is only available on Unix based systems. On systems where it isn't supported (e.g. Windows) it will be ignored.
-Here are the advanced options specific to local (Local Disk).
Disable UNC (long path names) conversion on Windows.
+Properties:
Follow symlinks and copy the pointed to item.
+Properties:
Translate symlinks to/from regular files with a '.rclonelink' extension.
+Properties:
Don't warn about skipped symlinks.
This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped.
+Properties:
So rclone now always reads the link.
+Properties:
Rclone does not normally touch the encoding of file names it reads from the file system.
This can be useful when using macOS as it normally provides decomposed (NFD) unicode which in some languages (e.g. Korean) doesn't display properly on some OSes.
Note that rclone compares filenames with unicode normalization in the sync routine so this flag shouldn't normally be used.
+Properties:
Don't update the stat info for the file
Properties:
+Default: false
Don't cross filesystem boundaries (unix/macOS only).
+Properties:
Force the filesystem to report itself as case sensitive.
Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.
+Properties:
Force the filesystem to report itself as case insensitive.
Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.
+Properties:
Disable preallocation of disk space for transferred files.
Preallocation of disk space helps prevent filesystem fragmentation. However, some virtual filesystem layers (such as Google Drive File Stream) may incorrectly set the actual file size equal to the preallocated space, causing checksum and file size checks to fail. Use this flag to disable preallocation.
+Properties:
Disable sparse files for multi-thread downloads.
On Windows platforms rclone will make sparse files when doing multi-thread downloads. This avoids long pauses on large files where the OS zeros the file. However sparse files may be undesirable as they cause disk fragmentation and can be slow to work with.
+Properties:
Disable setting modtime.
Normally rclone updates modification time of files after they are done uploading. This can cause permissions issues on Linux platforms when the user rclone is running as does not own the file uploaded, such as when copying to a CIFS mount owned by another user. If this option is enabled, rclone will no longer update the modtime after copying a file.
+Properties:
This sets the encoding for the backend.
+The encoding for the backend.
See the encoding section in the overview for more info.
+Properties:
Here are the commands specific to the local backend.
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
See the "rclone backend" command for more info on how to pass options and arguments.
-These can be run on a running backend using the rc command backend/command.
+These can be run on a running backend using the rc command backend/command.
A null operation for testing backend commands
rclone backend noop remote: [options] [<arguments>+]
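For example, options and arguments can be passed with the generic -o/--opt and -a/--arg mechanism described for the rclone backend command; the option name below is only illustrative, and since noop is a null test command nothing is modified:
rclone backend noop local: -o echo=true -a file1.txt -a file2.txt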
@@ -22224,6 +24791,217 @@ $ tree /tmp/b
- windows/arm64 build (rclone mount not supported yet) (Nick Craig-Wood)
- {{ regexp }} syntax to pattern matches (Nick Craig-Wood)
- --human replaced by global --human-readable (albertony)
- github.com/jlaffaye/ftp to fix go get github.com/rclone/rclone (Nick Craig-Wood)
- /robots.txt (Nick Craig-Wood)
- operations/publiclink default for expires parameter (Nick Craig-Wood)
- transferQueueSize when summing up statistics group (Carlo Mion)
- StatsInfo fields in the computation of the group sum (Carlo Mion)
- --max-duration so it doesn't retry when the duration is exceeded (Nick Craig-Wood)
- --devname to set the device name sent to FUSE for mount display (Nick Craig-Wood)
- vfs/stats remote control to show statistics (Nick Craig-Wood)
- failed to _ensure cache internal error: downloaders is nil error (Nick Craig-Wood)
- base64 and base32768 filename encoding options (Max Sum, Sinan Tan)
- --azureblob-upload-concurrency parameter to speed uploads (Nick Craig-Wood)
- chunk_size as it is no longer needed (Nick Craig-Wood)
- --azureblob-upload-concurrency to 16 by default (Nick Craig-Wood)
- --drive-copy-shortcut-content (Abhiraj)
- --drive-skip-dangling-shortcuts flag (Nick Craig-Wood)
- --drive-export-formats shows all doc types (Nick Craig-Wood)
- --ftp-ask-password to prompt for password when needed (Borna Butkovic)
- Sites.Read.All (Charlie Jiang)
- --onedrive-root-folder-id flag (Nick Craig-Wood)
- 400 pathIsTooLong error (ctrl-q)
- ListObjectsV2 for faster listings (Felix Bünemann)
- ListObject v1 on unsupported providers (Nick Craig-Wood)
- ETag on multipart transfers to verify the transfer was OK (Nick Craig-Wood)
- --s3-use-multipart-etag provider quirk to disable this on unsupported providers (Nick Craig-Wood)
- GLACIER_IR storage class (Yunhai Luo)
- Content-MD5 workaround for object-lock enabled buckets (Paulo Martins)
- --no-head flag (Nick Craig-Wood)
- md5sum/sha1sum commands to look for (albertony)
- known_hosts file (Nick Craig-Wood)
- // in (Nick Craig-Wood)
- --drive-shared-with-me use drive,shared_with_me:
- --drive-shared-with-me use drive,shared_with_me:
- rclone backend stats cache: (Nick Craig-Wood)
- rclone hashsum DropboxHash (Nick Craig-Wood)
- -o/--opt and -a/--arg for more structured input (Nick Craig-Wood)
- backend/command for running backend specific commands remotely (Nick Craig-Wood)
- backend/command for running backend-specific commands remotely (Nick Craig-Wood)
- mount/mount command for starting rclone mount via the API (Chaitanya)
- --vfs-case-insensitive for windows/macOS mounts (Ivan Andreev)
- --daemon-timout to 15 minutes on macOS and FreeBSD (Nick Craig-Wood)
- --ignore-case-sync for forced case insensitivity (garry415)
- --stats-one-line-date and --stats-one-line-date-format (Peter Berbec)
- --backup-dir (Nick Craig-Wood)
- --ignore-checksum is in effect, don't calculate checksum (Nick Craig-Wood)
- --rc-serve (Nick Craig-Wood)
- --rc-serve (Nick Craig-Wood)
- --max-size 0b
- b suffix so we can specify bytes in --bwlimit, --min-size, etc.
Rclone doesn't currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.
Currently rclone loads each directory/bucket entirely into memory before using it. Since each rclone object takes 0.5k-1k of memory this can take a very long time and use a large amount of memory. For example, a directory holding 10 million objects could need roughly 5-10 GiB of RAM just for the listing.
Millions of files in a directory tend to occur on bucket-based remotes (e.g. S3 buckets) since those remotes do not segregate subdirectories within the bucket.
-Bucket based remotes (e.g. S3/GCS/Swift/B2) do not have a concept of directories. Rclone therefore cannot create directories in them which means that empty directories on a bucket based remote will tend to disappear.
+Bucket-based remotes (e.g. S3/GCS/Swift/B2) do not have a concept of directories. Rclone therefore cannot create directories in them which means that empty directories on a bucket-based remote will tend to disappear.
Some software creates empty keys ending in /
as directory markers. Rclone doesn't do this as it potentially creates more objects and costs more. This ability may be added in the future (probably via a flag/option).
Bugs are stored in rclone's GitHub project:
@@ -27754,7 +30532,7 @@ dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS serverIf you are usingsystemd-resolved
(default on Arch Linux), ensure it is at version 233 or higher. Previous releases contain a bug which causes not all domains to be resolved properly.
Additionally with the GODEBUG=netdns=
environment variable the Go resolver decision can be influenced. This also allows to resolve certain issues with DNS resolution. See the name resolution section in the go docs.
It is likely you have more than 10,000 files that need to be synced. By default rclone only gets 10,000 files ahead in a sync so as not to use up too much memory. You can change this default with the --max-backlog flag.
+It is likely you have more than 10,000 files that need to be synced. By default, rclone only gets 10,000 files ahead in a sync so as not to use up too much memory. You can change this default with the --max-backlog flag.
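For example, a sketch with an illustrative value (a larger backlog uses more memory but lets the sync look further ahead):
rclone sync --max-backlog 200000 /home/local/directory/ remote:bucket/path/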
Rclone is written in Go which uses a garbage collector. The default settings for the garbage collector mean that it runs when the heap size has doubled.
However it is possible to tune the garbage collector to use less memory by setting GOGC to a lower value, say export GOGC=20
. This will make the garbage collector work harder, reducing memory size at the expense of CPU usage.
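For example (the value 20 is the one suggested above; the sync paths are illustrative):
# Run the garbage collector more aggressively for this invocation only
GOGC=20 rclone sync --progress /home/local/directory/ remote:bucket/path/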