diff --git a/MANUAL.html b/MANUAL.html
index db69f2569..51a73d999 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -12,12 +12,75 @@
Jul 20, 2021
+Nov 01, 2021
Rclone has powerful cloud equivalents to the unix commands rsync, cp, mv, mount, ls, ncdu, tree, rm, and cat. Rclone's familiar syntax includes shell pipeline support and --dry-run
protection. It is used at the command line, in scripts or via its API.
Users call rclone "The Swiss army knife of cloud storage", and "Technology indistinguishable from magic".
Rclone really looks after your data. It preserves timestamps and verifies checksums at all times. Transfers over limited bandwidth, intermittent connections, or subject to quota can be restarted from the last good file transferred. You can check the integrity of your files. Where possible, rclone employs server-side transfers to minimise local bandwidth use and transfers from one provider to another without using local disk.
-Virtual backends wrap local and cloud file systems to apply encryption, compression chunking and joining.
+Virtual backends wrap local and cloud file systems to apply encryption, compression, chunking, hashing and joining.
Rclone mounts any local, cloud or virtual filesystem as a disk on Windows, macOS, Linux and FreeBSD, and also serves these over SFTP, HTTP, WebDAV, FTP and DLNA.
Rclone is mature, open source software originally inspired by rsync and written in Go. The friendly support community are familiar with varied use cases. Official Ubuntu, Debian, Fedora, Brew and Chocolatey repos include rclone. For the latest version, downloading from rclone.org is recommended.
Rclone is widely used on Linux, Windows and Mac. Third party developers create innovative backup, restore, GUI and business process solutions using the rclone command line or API.
@@ -118,6 +181,7 @@
rclone or rclone.exe binary from the archive
rclone executable, rclone.exe on Windows, from the archive.
Run rclone config to set up. See rclone config docs for more details.
See below for some expanded Linux / macOS instructions.
-See the Usage section of the docs for how to use rclone, or run rclone -h
.
See the usage docs for how to use rclone, or run rclone -h
.
An already installed rclone can easily be updated to the latest version using the rclone selfupdate command.
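For example, upgrading in place is a single command (run it with elevated rights if rclone was installed to a system location):
rclone selfupdate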
To install rclone on Linux/macOS/BSD systems, run:
@@ -171,6 +235,7 @@ sudo mandb
rclone config
brew install rclone
+NOTE: This version of rclone will not support mount
any more (see #5373). If mounting is wanted on macOS, either install a precompiled binary or enable the relevant option when installing from source.
To avoid problems with macOS Gatekeeper enforcing the binary to be signed and notarized, it is enough to download with curl
.
Download the latest version of rclone.
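A minimal sketch of the curl-based install on an Intel Mac (the archive name varies by release and architecture):
curl -O https://downloads.rclone.org/rclone-current-osx-amd64.zip
unzip rclone-current-osx-amd64.zip
cd rclone-*-osx-amd64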
@@ -239,11 +304,12 @@ docker run --rm \
 ls ~/data/mount
 kill %1
-Make sure you have at least Go go1.13 installed. Download go if necessary. The latest release is recommended. Then
-git clone https://github.com/rclone/rclone.git
-cd rclone
-go build
-./rclone version
+Make sure you have at least Go 1.14 installed. Download go if necessary. The latest release is recommended. Then
+git clone https://github.com/rclone/rclone.git
+cd rclone
+go build
+# If on macOS and mount is wanted, instead run: make GOTAGS=cmount
+./rclone version
This will leave you a checked out version of rclone you can modify and send pull requests with. If you use make
instead of go build
then the rclone build will have the correct version information in it.
You can also build the latest stable rclone with:
go get github.com/rclone/rclone
@@ -260,38 +326,44 @@ go build
- hosts: rclone-hosts
roles:
- rclone
-As mentioned above, rclone is a single executable (rclone
, or rclone.exe
on Windows) that you can download as a zip archive and extract into a location of your choosing. When executing different commands, it may create files in different locations, such as a configuration file and various temporary files. By default the locations for these are according to your operating system, e.g. configuration file in your user profile directory and temporary files in the standard temporary directory, but you can customize all of them, e.g. to make a completely self-contained, portable installation.
Run the config paths command to see the locations that rclone will use.
+To override them set the corresponding options (as command-line arguments, or as environment variables):
+- --config
+- --cache-dir
+- --temp-dir
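For example, a fully self-contained, portable setup might point all three at directories next to the executable (the paths here are placeholders):
rclone sync /source remote:dest --config ./rclone.conf --cache-dir ./cache --temp-dir ./temp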
+After installing and configuring rclone, as described above, you are ready to use rclone as an interactive command line utility. If your goal is to perform periodic operations, such as a regular sync, you will probably want to configure your rclone command in your operating system's scheduler. If you need to expose service-like features, such as remote control, GUI, serve or mount, you will often want an rclone command always running in the background, and configuring it to run in a service infrastructure may be a better option. Below are some alternatives on how to achieve this on different operating systems.
NOTE: Before setting up autorun it is highly recommended that you have tested your command manually from a Command Prompt first.
-The most relevant alternatives for autostart on Windows are:
 - Run at user log on using the Startup folder
 - Run at user log on, at system startup or on a schedule using Task Scheduler
 - Run at system startup using a Windows service
-Rclone is a console application, so if not starting from an existing Command Prompt, e.g. when starting rclone.exe from a shortcut, it will open a Command Prompt window. When configuring rclone to run from Task Scheduler or as a Windows service you are able to set it to run hidden in the background. From rclone version 1.54 you can also make it run hidden from anywhere by adding the option --no-console
(it may still flash briefly when the program starts). Since rclone normally writes information and any error messages to the console, you must redirect this to a file to be able to see it. Rclone has a built-in option --log-file
for that.
Example command to run a sync in background:
c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclone\logs\sync_files.txt
-As mentioned in the mount documentation, mounted drives created as Administrator are not visible to other accounts, not even the account that was elevated as Administrator. By running the mount command as the built-in SYSTEM
user account, it will create drives accessible for everyone on the system. Both scheduled task and Windows service can be used to achieve this.
NOTE: Remember that when rclone runs as the SYSTEM
user, the user profile that it sees will not be yours. This means that if you normally run rclone with the configuration file in the default location, to be able to use the same configuration when running as the system user you must explicitly tell rclone where to find it with the --config
option, or else it will look in the system user's profile path (C:\Windows\System32\config\systemprofile
). To test your command manually from a Command Prompt, you can run it with the PsExec utility from Microsoft's Sysinternals suite, which takes option -s
to execute commands as the SYSTEM
user.
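A sketch of such a manual test, assuming PsExec is on your PATH: the following opens a new Command Prompt running as the SYSTEM user, from which you can then try your rclone command:
psexec -i -s cmd.exe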
To quickly execute an rclone command you can simply create a standard Windows Explorer shortcut for the complete rclone command you want to run. If you store this shortcut in the special "Startup" start-menu folder, Windows will automatically run it at login. To open this folder in Windows Explorer, enter path %APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup
, or C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp
if you want the command to start for every user that logs in.
This is the easiest approach to autostarting of rclone, but it offers no functionality to set it to run as different user, or to set conditions or actions on certain events. Setting up a scheduled task as described below will often give you better results.
-Task Scheduler is an administrative tool built into Windows, and it can be used to configure rclone to be started automatically in a highly configurable way, e.g. periodically on a schedule, on user log on, or at system startup. It can be configured to run as the current user, or for a mount command that needs to be available to all users it can run as the SYSTEM
user. For technical information, see https://docs.microsoft.com/windows/win32/taskschd/task-scheduler-start-page.
For running rclone at system startup, you can create a Windows service that executes your rclone command, as an alternative to scheduled task configured to run at startup.
-For mount commands, Rclone has a built-in Windows service integration via the third party WinFsp library it uses. Registering as a regular Windows service is easy, as you just have to execute the built-in PowerShell command New-Service
(requires administrative privileges).
Example of a PowerShell command that creates a Windows service for mounting some remote:/files
as drive letter X:
, for all users (service will be running as the local system account):
New-Service -Name Rclone -BinaryPathName 'c:\rclone\rclone.exe mount remote:/files X: --config c:\rclone\config\rclone.conf --log-file c:\rclone\logs\mount.txt'
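Once created, the service can be managed with the standard PowerShell tools, for instance:
Start-Service Rclone
Stop-Service Rclone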
The WinFsp service infrastructure supports incorporating services for file system implementations, such as rclone, into its own launcher service, as a kind of "child service". This has the additional advantage that it also implements a network provider that integrates into Windows standard methods for managing network drives. This is currently not officially supported by Rclone, but with WinFsp version 2019.3 B2 / v1.5B2 or later it should be possible through path rewriting as described here.
-To set up a Windows service running any rclone command, the excellent third party utility NSSM, the "Non-Sucking Service Manager", can be used. It includes some advanced features such as adjusting process priority, defining process environment variables, redirecting anything written to stdout to a file, and customizing the response to different exit codes, with a GUI to configure everything from (although it can also be used from the command line).
There are also several other alternatives. To mention one more, WinSW, "Windows Service Wrapper", is worth checking out. It requires .NET Framework, but it is preinstalled on newer versions of Windows, and it also provides alternative standalone distributions which include the necessary runtime (.NET 5). WinSW is a command-line only utility, where you have to manually create an XML file with the service configuration. This may be a drawback for some, but it can also be an advantage as it is easy to back up and re-use the configuration settings, without having to go through manual steps in a GUI. One thing to note is that by default it does not restart the service on error; one has to explicitly enable this in the configuration file (via the "onfailure" parameter).
-To always run rclone in background, relevant for mount commands etc, you can use systemd to set up rclone as a system or user service. Running as a system service ensures that it is run at startup even if the user it is running as has no active session. Running rclone as a user service ensures that it only starts after the configured user has logged into the system.
-To run a periodic command, such as a copy/sync, you can set up a cron job.
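For example, a sketch of a crontab entry that runs a sync at the top of every hour (the paths are placeholders; use absolute paths, since cron provides a minimal environment):
0 * * * * /usr/bin/rclone sync /home/user/files remote:files --log-file /home/user/rclone-sync.log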
+Rclone is a command line program to manage files on cloud storage. After download and install, continue here to learn how to use it: the initial configuration, the basic syntax, the various subcommands, the various options, and more.
First, you'll need to configure rclone. As the object storage systems have quite complicated authentication, these are kept in a config file. (See the --config
entry for how to find the config file and choose its location.)
The easiest way to make the config is to run rclone with the config option:
@@ -315,10 +387,11 @@ go build
Rclone syncs a directory tree from one storage system to another.
Its syntax is like this
Syntax: [options] subcommand <parameters> <parameters...>
@@ -366,11 +440,12 @@ rclone sync -i /local/path remote:path # syncs /local/path to the remote name.
Copy files from source to dest, skipping already copied.
+Copy files from source to dest, skipping identical files.
Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Doesn't delete files from the destination.
+Copy the source to the destination. Does not transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Doesn't delete files from the destination.
Note that it is always the contents of the directory that is synced, not the directory itself, so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents.
If dest:path doesn't exist, it is created and the source:path contents go there.
For example
@@ -413,7 +488,7 @@ destpath/sourcepath/two.txt
Make source and dest identical, modifying destination only.
Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary (except duplicate objects, see below).
+Sync the source to the destination, changing the destination only. Doesn't transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary (except duplicate objects, see below).
Important: Since this can cause data loss, test first with the --dry-run
or the --interactive
/-i
flag.
rclone sync -i SOURCE remote:DESTINATION
Note that files in the destination won't be deleted if there were any errors at any point. Duplicate objects (files with the same name, on those providers that support it) are also not yet handled.
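For example, a cautious first run can preview what would change before letting the sync modify anything (SOURCE and DESTINATION are placeholders):
rclone sync --dry-run SOURCE remote:DESTINATION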
@@ -530,7 +605,7 @@ rclone --dry-run --min-size 100M delete remote:path
 -C, --checkfile string Treat source:path as a SUM file with hashes of given type
--combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
- --download Check by downloading rather than with hash.
+ --download Check by downloading rather than with hash
--error string Report all files with errors (hashing or reading) to this file
-h, --help help for check
--match string Report all matching files to this file
@@ -603,7 +678,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone lsd remote:path [flags]
-h, --help help for lsd
- -R, --recursive Recurse into the listing.
+ -R, --recursive Recurse into the listing
See the global flags page for global options not listed here.
rclone size remote:path [flags]
-h, --help help for size
- --json format output as JSON
+ --json Format output as JSON
See the global flags page for global options not listed here.
rclone version [flags]
--check Check for new version.
+ --check Check for new version
-h, --help help for version
See the global flags page for global options not listed here.
SEE ALSO
@@ -805,7 +880,7 @@ two-3.txt: renamed from: two.txt
rclone dedupe [mode] remote:path [flags]
--by-hash Find identical hashes rather than names
- --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename. (default "interactive")
+ --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename (default "interactive")
-h, --help help for dedupe
See the global flags page for global options not listed here.
rclone about
prints quota information about a remote to standard output. The output is typically used, free, quota and trash contents.
E.g. Typical output from rclone about remote:
is:
Total: 17G
-Used: 7.444G
-Free: 1.315G
-Trashed: 100.000M
-Other: 8.241G
+Total: 17 GiB
+Used: 7.444 GiB
+Free: 1.315 GiB
+Trashed: 100.000 MiB
+Other: 8.241 GiB
Where the fields are:
Not all backends print all fields. Information is not included if it is not provided by a backend. Where the value is unlimited it is omitted.
+All sizes are in number of bytes.
Applying a --full
flag to the command prints the bytes in full, e.g.
Total: 18253611008
Used: 7993453766
@@ -846,11 +921,11 @@ Other: 8849156022
"other": 8849156022,
"free": 1411001220
}
-Not all backends support the rclone about
command.
See List of backends that do not support about
+Not all backends print all fields. Information is not included if it is not provided by a backend. Where the value is unlimited it is omitted.
+Some backends do not support the rclone about
command at all, see complete list in documentation.
rclone about remote: [flags]
--full Full numbers instead of SI units
+ --full Full numbers instead of human-readable
-h, --help help for about
--json Format output as JSON
See the global flags page for global options not listed here.
@@ -889,8 +964,8 @@ rclone backend help <backendname>
rclone backend <command> remote:path [opts] <args> [flags]
-h, --help help for backend
- --json Always output in JSON format.
- -o, --option stringArray Option in the form name=value or name.
+ --json Always output in JSON format
+ -o, --option stringArray Option in the form name=value or name
See the global flags page for global options not listed here.
Use the --head
flag to print characters only at the start, --tail
for the end and --offset
and --count
to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1
is equivalent to --tail 1
.
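For example, to print just the last 10 characters of a file (the path is a placeholder):
rclone cat remote:path/to/file --tail 10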
rclone cat remote:path [flags]
--count int Only print N characters. (default -1)
- --discard Discard the output instead of printing.
- --head int Only print the first N characters.
+ --count int Only print N characters (default -1)
+ --discard Discard the output instead of printing
+ --head int Only print the first N characters
-h, --help help for cat
- --offset int Start printing at offset N (or from end if -ve).
- --tail int Only print the last N characters.
+ --offset int Start printing at offset N (or from end if -ve)
+ --tail int Only print the last N characters
See the global flags page for global options not listed here.
Checks that hashsums of source files match the SUM file. It compares hashes (MD5, SHA1, etc) and logs a report of files which don't match. It doesn't alter the file system.
If you supply the --download
flag, it will download the data from remote and calculate the contents hash on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
Note that hash values in the SUM file are treated as case insensitive.
If you supply the --one-way
flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that are not in the source will not be detected.
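For example, a sketch verifying a local SHA1SUMS file against a remote while downloading the data, useful when the remote cannot serve SHA-1 hashes (the file and path are placeholders):
rclone checksum sha1 SHA1SUMS remote:path --download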
The --differ
, --missing-on-dst
, --missing-on-src
, --match
and --error
flags write paths, one per line, to the file name (or stdout if it is -
) supplied. What they write is described in the help below. For example --differ
will write all paths which are present on both the source and destination but different.
The --combined
flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files.
--combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
- --download Check by hashing the contents.
+ --download Check by hashing the contents
--error string Report all files with errors (hashing or reading) to this file
-h, --help help for checksum
--match string Report all matching files to this file
@@ -951,9 +1027,89 @@ rclone backend help <backendname>
generate the autocompletion script for the specified shell
+Generate the autocompletion script for rclone for the specified shell. See each sub-command's help for details on how to use the generated script.
+ -h, --help help for completion
+See the global flags page for global options not listed here.
+generate the autocompletion script for bash
+Generate the autocompletion script for the bash shell.
+This script depends on the 'bash-completion' package. If it is not installed already, you can install it via your OS's package manager.
+To load completions in your current shell session:
+$ source <(rclone completion bash)
+To load completions for every new session, execute once:
+Linux: $ rclone completion bash > /etc/bash_completion.d/rclone
+MacOS: $ rclone completion bash > /usr/local/etc/bash_completion.d/rclone
+You will need to start a new shell for this setup to take effect.
+rclone completion bash
+ -h, --help help for bash
+ --no-descriptions disable completion descriptions
+See the global flags page for global options not listed here.
+generate the autocompletion script for fish
+Generate the autocompletion script for the fish shell.
+To load completions in your current shell session:
+$ rclone completion fish | source
+To load completions for every new session, execute once:
+$ rclone completion fish > ~/.config/fish/completions/rclone.fish
+You will need to start a new shell for this setup to take effect.
+rclone completion fish [flags]
+ -h, --help help for fish
+ --no-descriptions disable completion descriptions
+See the global flags page for global options not listed here.
+generate the autocompletion script for powershell
+Generate the autocompletion script for powershell.
+To load completions in your current shell session:
+PS C:> rclone completion powershell | Out-String | Invoke-Expression
+To load completions for every new session, add the output of the above command to your powershell profile.
+rclone completion powershell [flags]
+ -h, --help help for powershell
+ --no-descriptions disable completion descriptions
+See the global flags page for global options not listed here.
+generate the autocompletion script for zsh
+Generate the autocompletion script for the zsh shell.
+If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once:
+$ echo "autoload -U compinit; compinit" >> ~/.zshrc
+To load completions for every new session, execute once:
+# Linux:
+$ rclone completion zsh > "${fpath[1]}/_rclone"
+# macOS:
+$ rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone
+You will need to start a new shell for this setup to take effect.
+rclone completion zsh [flags]
+ -h, --help help for zsh
+ --no-descriptions disable completion descriptions
+See the global flags page for global options not listed here.
+Create a new remote with name, type and options.
-Create a new remote of name
with type
and options. The options should be passed in pairs of key
value
or as key=value
.
For example to make a swift remote of name myremote using auto config you would do:
rclone config create myremote swift env_auth true
@@ -1007,140 +1163,150 @@ rclone config create myremote swift env_auth=true
At the end of the non-interactive process, rclone will return a result with State
as empty string.
If --all
is passed then rclone will ask all the config questions, not just the post config questions. Any parameters are used as defaults for questions as usual.
Note that bin/config.py
in the rclone source implements this protocol as a readable demonstration.
rclone config create `name` `type` [`key` `value`]* [flags]
- --all Ask the full set of config questions.
- --continue Continue the configuration process with an answer.
+rclone config create name type [key value]* [flags]
+Options
+ --all Ask the full set of config questions
+ --continue Continue the configuration process with an answer
-h, --help help for create
- --no-obscure Force any passwords not to be obscured.
- --non-interactive Don't interact with user and return questions.
- --obscure Force any passwords to be obscured.
- --result string Result - use with --continue.
- --state string State - use with --continue.
+ --no-obscure Force any passwords not to be obscured
+ --non-interactive Don't interact with user and return questions
+ --obscure Force any passwords to be obscured
+ --result string Result - use with --continue
+ --state string State - use with --continue
See the global flags page for global options not listed here.
-Delete an existing remote name
.
rclone config delete `name` [flags]
-Delete an existing remote.
+rclone config delete name [flags]
+ -h, --help help for delete
See the global flags page for global options not listed here.
-Disconnects user from remote
-This disconnects the remote: passed in to the cloud storage system.
This normally means revoking the oauth token.
To reconnect use "rclone config reconnect".
rclone config disconnect remote: [flags]
- -h, --help help for disconnect
See the global flags page for global options not listed here.
-Dump the config file as JSON.
rclone config dump [flags]
- -h, --help help for dump
See the global flags page for global options not listed here.
-Enter an interactive configuration session.
-Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration.
rclone config edit [flags]
- -h, --help help for edit
See the global flags page for global options not listed here.
-Show path of configuration file in use.
rclone config file [flags]
- -h, --help help for file
See the global flags page for global options not listed here.
-Update password in an existing remote.
-Update an existing remote's password. The password should be passed in pairs of key
password
or as key=password
. The password
should be passed in clear text (unobscured).
For example to set password of a remote of name myremote you would do:
rclone config password myremote fieldname mypassword
rclone config password myremote fieldname=mypassword
This command is obsolete now that "config update" and "config create" both support obscuring passwords directly.
-rclone config password `name` [`key` `value`]+ [flags]
-rclone config password name [key value]+ [flags]
+ -h, --help help for password
See the global flags page for global options not listed here.
-Show paths used for configuration, cache, temp etc.
+rclone config paths [flags]
+ -h, --help help for paths
+See the global flags page for global options not listed here.
+List in JSON format all the providers and options.
rclone config providers [flags]
- -h, --help help for providers
See the global flags page for global options not listed here.
-Re-authenticates user with remote.
-This reconnects remote: passed in to the cloud storage system.
To disconnect the remote use "rclone config disconnect".
This normally means going through the interactive oauth flow again.
rclone config reconnect remote: [flags]
- -h, --help help for reconnect
See the global flags page for global options not listed here.
-Print (decrypted) config file, or the config for a single remote.
rclone config show [<remote>] [flags]
- -h, --help help for show
See the global flags page for global options not listed here.
-Ensure configuration file exists.
rclone config touch [flags]
- -h, --help help for touch
See the global flags page for global options not listed here.
-Update options in an existing remote.
-Update an existing remote's options. The options should be passed in pairs of key
value
or as key=value
.
For example to update the env_auth field of a remote of name myremote you would do:
rclone config update myremote env_auth true
@@ -1194,37 +1360,37 @@ rclone config update myremote env_auth=true
At the end of the non-interactive process, rclone will return a result with State
as empty string.
If --all
is passed then rclone will ask all the config questions, not just the post config questions. Any parameters are used as defaults for questions as usual.
Note that bin/config.py
in the rclone source implements this protocol as a readable demonstration.
rclone config update `name` [`key` `value`]+ [flags]
- --all Ask the full set of config questions.
- --continue Continue the configuration process with an answer.
+rclone config update name [key value]+ [flags]
+Options
+ --all Ask the full set of config questions
+ --continue Continue the configuration process with an answer
-h, --help help for update
- --no-obscure Force any passwords not to be obscured.
- --non-interactive Don't interact with user and return questions.
- --obscure Force any passwords to be obscured.
- --result string Result - use with --continue.
- --state string State - use with --continue.
+ --no-obscure Force any passwords not to be obscured
+ --non-interactive Don't interact with user and return questions
+ --obscure Force any passwords to be obscured
+ --result string Result - use with --continue
+ --state string State - use with --continue
See the global flags page for global options not listed here.
-Prints info about logged in user of remote.
-This prints the details of the person logged in to the cloud storage system.
rclone config userinfo remote: [flags]
- -h, --help help for userinfo
--json Format output as JSON
See the global flags page for global options not listed here.
-Copy files from source to dest, skipping already copied.
-Copy files from source to dest, skipping identical files.
+If source:path is a file or directory then it copies it to a file or directory named dest:path.
This can be used to upload single files to a name other than their current one. If the source is a directory then it acts exactly like the copy command.
So
@@ -1236,38 +1402,38 @@ rclone config update myremote env_auth=true
if src is directory
 copy it to dst, overwriting existing files if they exist
 see copy command for full details
-This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.
+This doesn't transfer files that are identical on src and dst, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.
Note: Use the -P
/--progress
flag to view real-time transfer statistics
rclone copyto source:path dest:path [flags]
- -h, --help help for copyto
See the global flags page for global options not listed here.
-Copy url content to dest.
-Download a URL's content and copy it to the destination without saving it in temporary storage.
Setting --auto-filename
will cause the file name to be retrieved from the URL (after any redirections) and used in the destination path. With --print-filename
in addition, the resulting file name will be printed.
Setting --no-clobber
will prevent overwriting file on the destination if there is one with the same name.
Setting --stdout
or making the output file name -
will cause the output to be written to standard output.
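For example, a sketch that fetches a file, names it after the URL and prints the resulting name (the URL and remote are placeholders):
rclone copyurl --auto-filename --print-filename https://example.com/file.zip remote:dir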
rclone copyurl https://example.com dest:path [flags]
- -a, --auto-filename Get the file name from the URL and use it for destination file path
-h, --help help for copyurl
--no-clobber Prevent overwriting file with same name
-p, --print-filename Print the resulting name from --auto-filename
--stdout Write the output to stdout rather than a file
See the global flags page for global options not listed here.
-Cryptcheck checks the integrity of a crypted remote.
-rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote.
For it to work the underlying remote of the cryptedremote must support some kind of checksum.
It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.
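For example, to check a local tree against its encrypted copy (the paths are placeholders):
rclone cryptcheck /path/to/files encryptedremote:path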
@@ -1287,7 +1453,7 @@ if src is directory
! path means there was an error reading or hashing the source or dest.
rclone cryptcheck remote:path cryptedremote:path [flags]
- --combined string Make a combined report of changes to this file
--differ string Report all non-matching files to this file
--error string Report all files with errors (hashing or reading) to this file
@@ -1297,13 +1463,13 @@ if src is directory
--missing-on-src string Report all files missing from the source to this file
--one-way Check one way only, source files must exist on remote
See the global flags page for global options not listed here.
-Cryptdecode returns unencrypted file names.
-rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.
If you supply the --reverse flag, it will return encrypted file names.
Use it like this:
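rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2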
@@ -1312,34 +1478,34 @@ if src is directory
rclone cryptdecode --reverse encryptedremote: filename1 filename2
Another way to accomplish this is by using the rclone backend encode
(or decode
) command. See the documentation on the crypt
overlay for more info.
rclone cryptdecode encryptedremote: encryptedfilename [flags]
- -h, --help help for cryptdecode
--reverse Reverse cryptdecode, encrypts filenames
See the global flags page for global options not listed here.
-Remove a single file from remote.
-Remove a single file from remote. Unlike delete
it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.
rclone deletefile remote:path [flags]
- -h, --help help for deletefile
See the global flags page for global options not listed here.
-Output completion script for a given shell.
-Generates a shell completion script for rclone. Run with --help to list the supported shells.
- -h, --help help for genautocomplete
See the global flags page for global options not listed here.
-Output bash completion script for rclone.
-Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete bash
@@ -1357,16 +1523,16 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
If you supply a command line argument the script will be written there.
If output_file is "-", then the output will be written to stdout.
rclone genautocomplete bash [output_file] [flags]
- -h, --help help for bash
See the global flags page for global options not listed here.
-Output fish completion script for rclone.
-Generates a fish autocompletion script for rclone.
This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete fish
@@ -1375,16 +1541,16 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
If you supply a command line argument the script will be written there.
If output_file is "-", then the output will be written to stdout.
rclone genautocomplete fish [output_file] [flags]
- -h, --help help for fish
See the global flags page for global options not listed here.
-Output zsh completion script for rclone.
-Generates a zsh autocompletion script for rclone.
This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete zsh
@@ -1393,28 +1559,28 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
If you supply a command line argument the script will be written there.
If output_file is "-", then the output will be written to stdout.
rclone genautocomplete zsh [output_file] [flags]
- -h, --help help for zsh
See the global flags page for global options not listed here.
-Output markdown docs for rclone to the directory supplied.
-This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.
rclone gendocs output_directory [flags]
- -h, --help help for gendocs
See the global flags page for global options not listed here.
-Produces a hashsum file for all the objects in the path.
-Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.
By default, the hash is requested from the remote. If the hash is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally, enabling any hash for any remote.
Run without a hash to see the list of all supported hashes, e.g.
@@ -1424,27 +1590,28 @@ Supported hashes are:
 * sha1
 * whirlpool
 * crc32
+ * sha256
 * dropbox
 * mailru
 * quickxor
Then
$ rclone hashsum MD5 remote:path
-Note that hash names are case insensitive.
+Note that hash names are case insensitive and values are output in lower case.
rclone hashsum <hash> remote:path [flags]
- --base64 Output base64 encoded hashsum
-C, --checkfile string Validate hashes against a given SUM file instead of printing them
--download Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
-h, --help help for hashsum
--output-file string Output hashsums to a file rather than the terminal
See the global flags page for global options not listed here.
-Generate public link to file/folder.
-rclone link will create, retrieve or remove a public link to the given file or folder.
rclone link remote:path/to/file
rclone link remote:path/to/folder/
@@ -1454,32 +1621,32 @@ rclone link --expire 1d remote:path/to/file
Use the --unlink flag to remove existing public links to the file or folder. Note that not all backends support the "--unlink" flag - those that don't will just ignore it.
If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always by default be created with the least constraints – e.g. no expiry, no password protection, accessible without account.
rclone link remote:path [flags]
- --expire Duration The amount of time that the link will be valid (default off)
-h, --help help for link
--unlink Remove existing public link to file/folder
See the global flags page for global options not listed here.
-List all the remotes in the config file.
-rclone listremotes lists all the available remotes from the config file.
When used with the -l flag it lists the types too.
rclone listremotes [flags]
- -h, --help help for listremotes
- --long Show the type as well as names.
+ --long Show the type as well as names
See the global flags page for global options not listed here.
-List directories and objects in remote:path formatted for parsing.
-List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix.
Eg
$ rclone lsf swift:bucket
@@ -1549,25 +1716,25 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use -R
to make them recurse.
Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket based remotes).
rclone lsf remote:path [flags]
- --absolute Put a leading / in front of path names.
- --csv Output in CSV format.
- -d, --dir-slash Append a slash to directory names. (default true)
- --dirs-only Only list directories.
- --files-only Only list files.
+Options
+ --absolute Put a leading / in front of path names
+ --csv Output in CSV format
+ -d, --dir-slash Append a slash to directory names (default true)
+ --dirs-only Only list directories
+ --files-only Only list files
-F, --format string Output format - see help for details (default "p")
--hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5")
-h, --help help for lsf
- -R, --recursive Recurse into the listing.
- -s, --separator string Separator for the items in the format. (default ";")
+ -R, --recursive Recurse into the listing
+ -s, --separator string Separator for the items in the format (default ";")
See the global flags page for global options not listed here.
-List directories and objects in the path in JSON format.
-List directories and objects in the path in JSON format.
The output is an array of Items, where each Item looks like this
{ "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "ID": "y2djkhiujf83u33", "OrigID": "UYOJVTUW00Q1RzTDA", "IsBucket" : false, "IsDir" : false, "MimeType" : "application/octet-stream", "ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338", "Path" : "full/path/goes/here/file.txt", "Size" : 6, "Tier" : "hot", }
@@ -1577,6 +1744,7 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
If --encrypted is not specified the Encrypted won't be emitted.
If --dirs-only is not specified files in addition to directories are returned
If --files-only is not specified directories in addition to the files will be returned.
+If --stat is set then a single JSON blob will be returned about the item pointed to. This will return an error if the item isn't found. However on bucket based backends (like s3, gcs, b2, azureblob etc) if the item isn't found it will return an empty directory as it isn't possible to tell empty directories from missing directories there.
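For example, a sketch of fetching the info for a single file (the path is a placeholder):
rclone lsjson --stat remote:path/to/file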
The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name.
If the directory is a bucket in a bucket based backend, then "IsBucket" will be set to true. This key won't be present unless it is "true".
The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision that the remote can hold the times, so if times are accurate to the nearest millisecond (e.g. Google Drive) then 3 digits will always be shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav, etc.) no digits will be shown ("2017-05-31T16:15:57+01:00").
@@ -1595,31 +1763,34 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use -R
to make them recurse.
Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket based remotes).
rclone lsjson remote:path [flags]
- --dirs-only Show only directories in the listing.
- -M, --encrypted Show the encrypted names.
- --files-only Show only files in the listing.
- --hash Include hashes in the output (may take longer).
- --hash-type stringArray Show only this hash type (may be repeated).
+Options
+ --dirs-only Show only directories in the listing
+ -M, --encrypted Show the encrypted names
+ --files-only Show only files in the listing
+ --hash Include hashes in the output (may take longer)
+ --hash-type stringArray Show only this hash type (may be repeated)
-h, --help help for lsjson
- --no-mimetype Don't read the mime type (can speed things up).
- --no-modtime Don't read the modification time (can speed things up).
- --original Show the ID of the underlying Object.
- -R, --recursive Recurse into the listing.
+ --no-mimetype Don't read the mime type (can speed things up)
+ --no-modtime Don't read the modification time (can speed things up)
+ --original Show the ID of the underlying Object
+ -R, --recursive Recurse into the listing
+ --stat Just return the info for the pointed to file
See the global flags page for global options not listed here.
-Mount the remote as file system on a mountpoint.
-rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
First set up your remote using rclone config
. Check it works with rclone ls
etc.
On Linux and OSX, you can either run mount in foreground mode or background (daemon) mode. Mount runs in foreground mode by default, use the --daemon
flag to specify background mode. You can only run mount in foreground mode on Windows.
On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode. Mount runs in foreground mode by default. Use the --daemon
flag to force background mode. On Windows you can run mount in foreground only, the flag is ignored.
In background mode rclone acts as a generic Unix mount program: the main program starts, spawns a background rclone process to set up and maintain the mount, waits until success or timeout, and exits with an appropriate code (killing the child process if it fails).
On Linux/macOS/FreeBSD start the mount like this, where /path/to/local/mount
is an empty existing directory:
rclone mount remote:path/to/files /path/to/local/mount
-On Windows you can start a mount in different ways. See below for details. The following examples will mount to an automatically assigned drive, to specific drive letter X:
, to path C:\path\parent\mount
(where parent directory or drive must exist, and mount must not exist, and is not supported when mounting as a network drive), and the last example will mount as network share \\cloud\remote
and map it to an automatically assigned drive:
On Windows you can start a mount in different ways. See below for details. If a foreground mount is used interactively from a console window, rclone will serve the mount and occupy the console, so another window should be used to work with the mount until rclone is interrupted e.g. by pressing Ctrl-C.
+The following examples will mount to an automatically assigned drive, to specific drive letter X:
, to path C:\path\parent\mount
(where parent directory or drive must exist, and mount must not exist, and is not supported when mounting as a network drive), and the last example will mount as network share \\cloud\remote
and map it to an automatically assigned drive:
rclone mount remote:path/to/files *
rclone mount remote:path/to/files X:
rclone mount remote:path/to/files C:\path\parent\mount
@@ -1632,7 +1803,6 @@ fusermount -u /path/to/local/mount
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.
The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the rclone about command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not support the about feature at all, then 1 PiB is set as both the total and the free size.
-Note: As of rclone
1.52.2, rclone mount
now requires Go version 1.13 or newer on some platforms depending on the underlying FUSE library in use.
To run rclone mount on Windows, you will need to download and install WinFsp.
WinFsp is an open source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which rclone uses in combination with cgofuse. Both of these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone mount for Windows.
@@ -1660,8 +1830,8 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
See also the Limitations section below.
The FUSE emulation layer on Windows must convert between the POSIX-based permission model used in FUSE, and the permission model used in Windows, based on access-control lists (ACL).
-The mounted filesystem will normally get three entries in its access-control list (ACL), representing permissions for the POSIX permission scopes: Owner, group and others. By default, the owner and group will be taken from the current user, and the built-in group "Everyone" will be used to represent others. The user/group can be customized with FUSE options "UserName" and "GroupName", e.g. -o UserName=user123 -o GroupName="Authenticated Users"
.
The permissions on each entry will be set according to options --dir-perms
and --file-perms
, which takes a value in traditional numeric notation, where the default corresponds to --file-perms 0666 --dir-perms 0777
.
The mounted filesystem will normally get three entries in its access-control list (ACL), representing permissions for the POSIX permission scopes: Owner, group and others. By default, the owner and group will be taken from the current user, and the built-in group "Everyone" will be used to represent others. The user/group can be customized with FUSE options "UserName" and "GroupName", e.g. -o UserName=user123 -o GroupName="Authenticated Users"
. The permissions on each entry will be set according to options --dir-perms
and --file-perms
, which take a value in traditional numeric notation.
The default permissions corresponds to --file-perms 0666 --dir-perms 0777
, i.e. read and write permissions to everyone. This means you will not be able to start any programs from the mount. To be able to do that you must add execute permissions, e.g. --file-perms 0777 --dir-perms 0777
to add it to everyone. If the program needs to write files, chances are you will have to enable VFS File Caching as well (see also limitations).
Note that the mapping of permissions is not always trivial, and the result you see in Windows Explorer may not be exactly like you expected. For example, when setting a value that includes write access, this will be mapped to individual permissions "write attributes", "write data" and "append data", but not "write extended attributes". Windows will then show this as basic permission "Special" instead of "Write", because "Write" includes the "write extended attributes" permission.
If you set POSIX permissions for only allowing access to the owner, using --file-perms 0600 --dir-perms 0700
, the user group and the built-in "Everyone" group will still be given some special permissions, such as "read attributes" and "read permissions", in Windows. This is done for compatibility reasons, e.g. to allow users without additional permissions to be able to read basic metadata about files like in UNIX. One case that may arise is that other programs (incorrectly) interpret this as the file being accessible by everyone. For example an SSH client may warn about "unprotected private key file".
WinFsp 2021 (version 1.9) introduces a new FUSE option "FileSecurity", that allows the complete specification of file security descriptors using SDDL. With this you can work around issues such as the mentioned "unprotected private key file" by specifying -o FileSecurity="D:P(A;;FA;;;OW)"
, for file all access (FA) to the owner (OW).
Without the use of --vfs-cache-mode
this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without --vfs-cache-mode writes
or --vfs-cache-mode full
. See the VFS File Caching section for more info.
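For example, a sketch of a more application-friendly mount using the file cache (the remote and mountpoint are placeholders):
rclone mount remote:path /path/to/local/mount --vfs-cache-mode writes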
The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
+When rclone mount
is invoked on Unix with --daemon
flag, the main rclone program will wait for the background mount to become ready or until the timeout specified by the --daemon-wait
flag. On Linux it can check mount status using ProcFS, so the flag in fact sets the maximum time to wait, while the real wait can be less. On macOS / BSD the time to wait is constant and the check is performed only at the end. We advise you to set a reasonable wait time on macOS.
Only supported on Linux, FreeBSD, OS X and Windows at the moment.
File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the VFS File Caching for solutions to make mount more reliable.
@@ -1689,18 +1860,51 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.
When running rclone mount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount service specified as a requirement will see all files and folders immediately in this mode.
---vfs-read-chunk-size
will enable reading the source objects in parts. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read at the cost of an increased number of requests.
When --vfs-read-chunk-size-limit
is also specified and greater than --vfs-read-chunk-size
, the chunk size for each open file will get doubled for each chunk read, until the specified value is reached. A value of -1
will disable the limit and the chunk size will grow indefinitely.
With --vfs-read-chunk-size 100M
and --vfs-read-chunk-size-limit 0
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M
is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
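For example, a sketch matching the second scenario above (the remote and mountpoint are placeholders):
rclone mount remote:path /path/to/local/mount --vfs-read-chunk-size 100M --vfs-read-chunk-size-limit 500M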
Note that systemd runs mount units without any environment variables including PATH
or HOME
. This means that tilde (~
) expansion will not work and you should provide --config
and --cache-dir
explicitly as absolute paths via rclone arguments. Since mounting requires the fusermount
program, rclone will use the fallback PATH of /bin:/usr/bin
in this scenario. Please ensure that fusermount
is present on this PATH.
The core Unix program /bin/mount
normally takes the -t FSTYPE
argument then runs the /sbin/mount.FSTYPE
helper program passing it mount options as -o key=val,...
or --opt=...
. Automount (classic or systemd) behaves in a similar way.
rclone by default expects GNU-style flags --key val
. To run it as a mount helper you should symlink rclone binary to /sbin/mount.rclone
and optionally /usr/bin/rclonefs
, e.g. ln -s /usr/bin/rclone /sbin/mount.rclone
. rclone will detect it and translate command-line arguments appropriately.
Now you can run classic mounts like this:
+mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem
+or create systemd mount units:
+# /etc/systemd/system/mnt-data.mount
+[Unit]
+After=network-online.target
+[Mount]
+Type=rclone
+What=sftp1:subdir
+Where=/mnt/data
+Options=rw,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
+optionally accompanied by systemd automount unit
+# /etc/systemd/system/mnt-data.automount
+[Unit]
+After=network-online.target
+Before=remote-fs.target
+[Automount]
+Where=/mnt/data
+TimeoutIdleSec=600
+[Install]
+WantedBy=multi-user.target
+or add in /etc/fstab
a line like
sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0
+or use classic Automountd. Remember to provide explicit config=...,cache-dir=...
as a workaround for mount units being run without HOME
.
Rclone in the mount helper mode will split -o
argument(s) by comma, replace _
by -
and prepend --
to get the command-line flags. Options containing commas or spaces can be wrapped in single or double quotes. Any inner quotes inside outer quotes of the same type should be doubled.
Mount option syntax includes a few extra options treated specially:
+env.NAME=VALUE
will set an environment variable for the mount process. This helps with Automountd and Systemd.mount which don't allow setting custom environment for mount helpers. Typically you will use env.HTTPS_PROXY=proxy.host:3128
or env.HOME=/root
command=cmount
can be used to run cmount
or any other rclone command rather than the default mount
.
args2env
will pass mount options to the mount helper running in the background via environment variables instead of command line arguments. This allows hiding secrets from such commands as ps
or pgrep
.
vv...
will be transformed into appropriate --verbose=N
x-systemd.automount
, _netdev
, nosuid
and alike are intended only for Automountd and ignored by rclone.
This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
---dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+--dir-cache-time duration Time to cache directory entries for (default 5m0s)
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
You can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
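If the remote control is enabled, the same flush can be requested over the rc interface (a sketch, assuming default rc settings):
rclone rc vfs/forget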
@@ -1719,13 +1923,13 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
---vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
---vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
---vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
+--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
-Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back second. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
+Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size
note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
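For example, two overlapping mounts can be kept safe by giving each instance its own cache hierarchy (a sketch; paths are illustrative):
rclone mount remote: /mnt/all --vfs-cache-mode writes --cache-dir /var/cache/rclone-all &
rclone mount remote:photos /mnt/photos --vfs-cache-mode writes --cache-dir /var/cache/rclone-photos &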
In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.
So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.
-This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.
-When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
-When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.
+This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode
writes.
When reading a file rclone will read --buffer-size
plus --vfs-read-ahead
bytes ahead. The --buffer-size
is buffered in memory whereas the --vfs-read-ahead
is buffered on disk.
When using this mode it is recommended that --buffer-size
is not set too large and --vfs-read-ahead
is set large if required.
IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.
+When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.
+These flags control the chunking:
+--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+Rclone will start reading a chunk of size --vfs-read-chunk-size
, and then double the size for each read. When --vfs-read-chunk-size-limit
is specified, and greater than --vfs-read-chunk-size
, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.
With --vfs-read-chunk-size 100M
and --vfs-read-chunk-size-limit 0
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M
is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting --vfs-read-chunk-size
to 0
or "off" disables chunked reading.
-These flags may be used to enable/disable features of the VFS for performance or other reasons.
-In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
+These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature.
+In particular S3 and Swift benefit hugely from the --no-modtime
flag (or use --use-server-modtime
for a slightly different effect) as each read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
-When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.
-Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.
---vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.
---vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
---vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
-When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers have no effect on mount).
---transfers int Number of file transfers to run in parallel. (default 4)
+--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+When using VFS write caching (--vfs-cache-mode
with value writes or full), the global flag --transfers
can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers
has no effect on mount).
--transfers int Number of file transfers to run in parallel (default 4)
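For example (an illustrative sketch):
# Upload up to 8 modified files from the VFS cache in parallel
rclone mount remote:docs /mnt/docs --vfs-cache-mode writes --transfers 8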
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
-Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default
+Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
The --vfs-case-insensitive
mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
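For example (illustrative):
# Requests for "MyFile.txt" will open a stored "myfile.txt" if that is the only match
rclone mount remote:path /mnt/data --vfs-case-insensitive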
@@ -1789,56 +1997,57 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
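For example (a sketch):
# df on the mountpoint now reflects a full scan of the remote
rclone mount remote:path /mnt/data --vfs-used-is-size &
df -h /mnt/data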
rclone mount remote:path /path/to/mountpoint [flags]
- --allow-non-empty Allow mounting over a non-empty directory. Not supported on Windows.
- --allow-other Allow access to other users. Not supported on Windows.
- --allow-root Allow access to root user. Not supported on Windows.
- --async-read Use asynchronous reads. Not supported on Windows. (default true)
- --attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
- --daemon Run mount as a daemon (background mode). Not supported on Windows.
- --daemon-timeout duration Time limit for rclone to respond to kernel. Not supported on Windows.
- --debug-fuse Debug the FUSE internals - needs -v.
- --default-permissions Makes kernel enforce access control based on the file mode. Not supported on Windows.
- --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+Options
+ --allow-non-empty Allow mounting over a non-empty directory (not supported on Windows)
+ --allow-other Allow access to other users (not supported on Windows)
+ --allow-root Allow access to root user (not supported on Windows)
+ --async-read Use asynchronous reads (not supported on Windows) (default true)
+ --attr-timeout duration Time for which file/directory attributes are cached (default 1s)
+ --daemon Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows)
+ --daemon-timeout duration Time limit for rclone to respond to kernel (not supported on Windows)
+ --daemon-wait duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
+ --debug-fuse Debug the FUSE internals - needs -v
+ --default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows)
+ --dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
- --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
- --gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
+ --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for mount
- --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128Ki)
- --network-mode Mount as remote network drive, instead of fixed disk drive. Supported on Windows only
- --no-checksum Don't compare checksums on up/download.
- --no-modtime Don't read/write the modification time (can speed things up).
- --no-seek Don't allow seeking in files.
- --noappledouble Ignore Apple Double (._) and .DS_Store files. Supported on OSX only. (default true)
- --noapplexattr Ignore all "com.apple.*" extended attributes. Supported on OSX only.
- -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
- --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
- --read-only Mount read-only.
- --uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
- --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
+ --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
+ --network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
+ --no-checksum Don't compare checksums on up/download
+ --no-modtime Don't read/write the modification time (can speed things up)
+ --no-seek Don't allow seeking in files
+ --noappledouble Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true)
+ --noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only)
+ -o, --option stringArray Option for libfuse/WinFsp (repeat if required)
+ --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --read-only Mount read-only
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- --vfs-case-insensitive If a file name not found, find a case insensitive match.
- --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
- --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
- --vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
- --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
- --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
- --volname string Set the volume name. Supported on Windows and OSX only.
- --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. Not supported on Windows.
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-case-insensitive If a file name is not found, find a case insensitive match
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+ --volname string Set the volume name (supported on Windows and OSX only)
+ --write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
See the global flags page for global options not listed here.
-Move file or directory from source to dest.
-If source:path is a file or directory then it moves it to a file or directory named dest:path.
This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.
So
@@ -1850,20 +2059,20 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
if src is directory
    move it to dst, overwriting existing files if they exist
see move command for full details
-This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.
+This doesn't transfer files that are identical on src and dst, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.
Important: Since this can cause data loss, test first with the --dry-run
or the --interactive
/-i
flag.
Note: Use the -P
/--progress
flag to view real-time transfer statistics.
rclone moveto source:path dest:path [flags]
- -h, --help help for moveto
See the global flags page for global options not listed here.
-Explore a remote with a text based user interface.
-This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".
To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.
Here are the keys - press '?' to toggle the help on and off
@@ -1873,6 +2082,7 @@ if src is directory
 c toggle counts
 g toggle graph
 a toggle average size in directory
+ u toggle human-readable format
 n,s,C,A sort by name,size,count,average size
 d delete file/directory
 y copy current path to clipboard
@@ -1883,16 +2093,16 @@ if src is directory
This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment but is useful as it stands.
Note that it might take some time to delete big files/folders. The UI won't respond in the meantime since the deletion is done synchronously.
rclone ncdu remote:path [flags]
- -h, --help help for ncdu
See the global flags page for global options not listed here.
-Obscure password for use in the rclone config file.
-In the rclone config file, human readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" - namely someone seeing a password in the rclone config file by accident.
Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 character hex token.
This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline.
@@ -1900,16 +2110,16 @@ if src is directory
If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself.
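For example, supplying the password on STDIN (the value shown is a placeholder):
echo "mysecretpassword" | rclone obscure -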
If you want to encrypt the config file then please use config file encryption - see rclone config for more info.
rclone obscure password [flags]
- -h, --help help for obscure
See the global flags page for global options not listed here.
-Run a command against a running rclone.
-This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port"
A username and password can be passed in with --user and --pass.
Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, --user, --pass.
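For example (URL, username and password are placeholders):
rclone rc --url http://localhost:5572/ --user admin --pass secret core/stats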
@@ -1928,24 +2138,24 @@ if src is directory
rclone rc --loopback operations/about fs=/
Use "rclone rc" to see a list of all possible commands.
rclone rc commands parameter [flags]
- -a, --arg stringArray Argument placed in the "arg" array.
+Options
+ -a, --arg stringArray Argument placed in the "arg" array
-h, --help help for rc
- --json string Input JSON - use instead of key=value args.
- --loopback If set connect to this rclone instance not via HTTP.
- --no-output If set, don't output the JSON result.
- -o, --opt stringArray Option in the form name=value or name placed in the "opt" array.
- --pass string Password to use to connect to rclone remote control.
- --url string URL to connect to rclone remote control. (default "http://localhost:5572/")
- --user string Username to use to rclone remote control.
+ --json string Input JSON - use instead of key=value args
+ --loopback If set connect to this rclone instance not via HTTP
+ --no-output If set, don't output the JSON result
+ -o, --opt stringArray Option in the form name=value or name placed in the "opt" array
+ --pass string Password to use to connect to rclone remote control
+ --url string URL to connect to rclone remote control (default "http://localhost:5572/")
+ --user string Username to use to rclone remote control
See the global flags page for global options not listed here.
-Copies standard input to file on remote.
-rclone rcat reads from standard input (stdin) and copies it to a single remote file.
echo "hello world" | rclone rcat remote:path/to/file
ffmpeg - | rclone rcat remote:path/to/file
@@ -1955,48 +2165,48 @@ ffmpeg - | rclone rcat remote:path/to/file
--size should be the exact size of the input stream in bytes. If the size of the stream differs from the --size passed in then the transfer will likely fail.
Note also that the upload cannot be retried because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching locally and then rclone move
it to the destination.
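A sketch of that workaround (the command and paths are illustrative):
some_command > /tmp/output.dat
rclone moveto /tmp/output.dat remote:path/to/file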
rclone rcat remote:path [flags]
- -h, --help help for rcat
--size int File size hint to preallocate (default -1)
See the global flags page for global options not listed here.
-Run rclone listening to remote control commands only.
-This runs rclone so that it only listens to remote control commands.
This is useful if you are controlling rclone via the rc API.
If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run.
See the rc documentation for more info on the rc flags.
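For example (the credentials and path are placeholders):
rclone rcd --rc-user admin --rc-pass secret ./files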
rclone rcd <path to files to serve>* [flags]
- -h, --help help for rcd
See the global flags page for global options not listed here.
-Remove empty directories under the path.
-This recursively removes any empty directories (including directories that only contain empty directories) that it finds under the path. The root path itself will also be removed if it is empty, unless you supply the --leave-root
flag.
Use command rmdir
to delete just the empty directory given by path, not recurse.
This is useful for tidying up remotes that rclone has left a lot of empty directories in. For example the delete
command will delete files but leave the directory structure (unless used with option --rmdirs
).
To delete a path and any objects in it, use purge
command.
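For example (illustrative):
# Tidy empty directories left behind by delete, keeping the root itself
rclone rmdirs remote:path --leave-root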
rclone rmdirs remote:path [flags]
- -h, --help help for rmdirs
--leave-root Do not remove root directory if empty
See the global flags page for global options not listed here.
-Update the rclone binary.
-This command downloads the latest release of rclone and replaces the currently running binary. The download is verified with a hashsum and cryptographically signed signature.
If used without flags (or with implied --stable
flag), this command will install the latest stable release. However, some issues may be fixed (or features added) only in the latest beta release. In such cases you should run the command with the --beta
flag, i.e. rclone selfupdate --beta
. You can check in advance what version would be installed by adding the --check
flag, then repeat the command without it when you are satisfied.
Sometimes the rclone team may recommend a specific beta or stable rclone release to troubleshoot your issue or add a bleeding edge feature. The --version VER
flag, if given, will update to that specific version instead of the latest one. If you omit the micro version from VER
(for example 1.53
), the latest matching micro version will be used.
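For example (the version number is illustrative):
rclone selfupdate --check
rclone selfupdate --version 1.53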
Note: Windows forbids deletion of a currently running executable so this command will rename the old executable to 'rclone.old.exe' upon success.
Please note that this command was not available before rclone version 1.55. If it fails for you with the message unknown command "selfupdate"
then you will need to update manually following the install instructions located at https://rclone.org/install/
rclone selfupdate [flags]
- --beta Install beta release.
- --check Check for latest release, do not download.
+Options
+ --beta Install beta release
+ --check Check for latest release, do not download
-h, --help help for selfupdate
--output string Save the downloaded binary at a given path (default: replace running binary)
--package string Package format: zip|deb|rpm (default: zip)
--stable Install stable release (this is the default)
--version string Install the given rclone version (default: latest)
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
rclone serve
Serve a remote over a protocol.
-Synopsis
+Synopsis
rclone serve is used to serve a remote over a given protocol. This command requires the use of a subcommand to specify the protocol, e.g.
rclone serve http remote:
Each subcommand has its own options which you can see in their help.
rclone serve <protocol> [opts] <remote> [flags]
-Options
+Options
-h, --help help for serve
See the global flags page for global options not listed here.
-SEE ALSO
+SEE ALSO
- rclone - Show help for rclone commands, flags and backends.
- rclone serve dlna - Serve remote:path over DLNA
@@ -2042,7 +2252,7 @@ ffmpeg - | rclone rcat remote:path/to/file
Serve remote:path over DLNA
-rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.
Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.
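For example (the server name is illustrative):
rclone serve dlna remote:media --name "Rclone Media"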
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
---dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+--dir-cache-time duration Time to cache directory entries for (default 5m0s)
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
You can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
@@ -2075,13 +2285,13 @@ ffmpeg - | rclone rcat remote:path/to/file
Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
---vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
---vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
---vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
+--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
-Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back second. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
+Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size
note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.
So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.
-This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.
-When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
-When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.
+This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode
writes.
When reading a file rclone will read --buffer-size
plus --vfs-read-ahead
bytes ahead. The --buffer-size
is buffered in memory whereas the --vfs-read-ahead
is buffered on disk.
When using this mode it is recommended that --buffer-size
is not set too large and --vfs-read-ahead
is set large if required.
IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.
+When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.
+These flags control the chunking:
+--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+Rclone will start reading a chunk of size --vfs-read-chunk-size
, and then double the size for each read. When --vfs-read-chunk-size-limit
is specified, and greater than --vfs-read-chunk-size
, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.
With --vfs-read-chunk-size 100M
and --vfs-read-chunk-size-limit 0
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M
is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting --vfs-read-chunk-size
to 0
or "off" disables chunked reading.
-These flags may be used to enable/disable features of the VFS for performance or other reasons.
-In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
+These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature.
+In particular S3 and Swift benefit hugely from the --no-modtime
flag (or use --use-server-modtime
for a slightly different effect) as each read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
-When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.
-Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.
---vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.
---vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
---vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
-When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers have no effect on mount).
---transfers int Number of file transfers to run in parallel. (default 4)
+--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+When using VFS write caching (--vfs-cache-mode
with value writes or full), the global flag --transfers
can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers
has no effect on mount).
--transfers int Number of file transfers to run in parallel (default 4)
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
-Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default
+Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
The --vfs-case-insensitive
mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
@@ -2145,42 +2359,42 @@ ffmpeg - | rclone rcat remote:path/to/file
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
rclone serve dlna remote:path [flags]
- --addr string ip:port or :port to bind the DLNA http server to. (default ":7879")
- --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+Options
+ --addr string The ip:port or :port to bind the DLNA http server to (default ":7879")
+ --dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
- --gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for dlna
- --log-trace enable trace logging of SOAP traffic
- --name string name of DLNA server
- --no-checksum Don't compare checksums on up/download.
- --no-modtime Don't read/write the modification time (can speed things up).
- --no-seek Don't allow seeking in files.
- --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
- --read-only Mount read-only.
- --uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
- --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
+ --log-trace Enable trace logging of SOAP traffic
+ --name string Name of DLNA server
+ --no-checksum Don't compare checksums on up/download
+ --no-modtime Don't read/write the modification time (can speed things up)
+ --no-seek Don't allow seeking in files
+ --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --read-only Mount read-only
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- --vfs-case-insensitive If a file name not found, find a case insensitive match.
- --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
- --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
- --vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
- --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
- --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-case-insensitive If a file name is not found, find a case insensitive match
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
See the global flags page for global options not listed here.
-Serve any remote on docker's volume plugin API.
-This command implements the Docker volume plugin API allowing docker to use rclone as a data storage mechanism for various cloud providers. Rclone provides a Docker volume plugin based on it.
To create a docker plugin, one must create a Unix or TCP socket that Docker will look for when you use the plugin; rclone then listens on this socket for commands from the docker daemon and runs the corresponding code when necessary. Docker plugins can run as a managed plugin under control of the docker daemon or as an independent native service. For testing, you can just run it directly from the command line, for example:
sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
@@ -2194,8 +2408,8 @@ ffmpeg - | rclone rcat remote:path/to/file
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
---dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+--dir-cache-time duration Time to cache directory entries for (default 5m0s)
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
You can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
@@ -2214,13 +2428,13 @@ ffmpeg - | rclone rcat remote:path/to/file
Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
---vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
---vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
---vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
+--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
-Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back second. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
+Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size
note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.
So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.
-This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.
-When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
-When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.
+This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode
writes.
When reading a file rclone will read --buffer-size
plus --vfs-read-ahead
bytes ahead. The --buffer-size
is buffered in memory whereas the --vfs-read-ahead
is buffered on disk.
When using this mode it is recommended that --buffer-size
is not set too large and --vfs-read-ahead
is set large if required.
IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.
+When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.
+These flags control the chunking:
+--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+Rclone will start reading a chunk of size --vfs-read-chunk-size
, and then double the size for each read. When --vfs-read-chunk-size-limit
is specified, and greater than --vfs-read-chunk-size
, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.
With --vfs-read-chunk-size 100M
and --vfs-read-chunk-size-limit 0
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M
is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting --vfs-read-chunk-size
to 0
or "off" disables chunked reading.
-These flags may be used to enable/disable features of the VFS for performance or other reasons.
-In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
+These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature.
+In particular S3 and Swift benefit hugely from the --no-modtime
flag (or use --use-server-modtime
for a slightly different effect) as each read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
-When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.
-Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.
---vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.
---vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
---vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
-When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers have no effect on mount).
---transfers int Number of file transfers to run in parallel. (default 4)
+--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+When using VFS write caching (--vfs-cache-mode
with value writes or full), the global flag --transfers
can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers
has no effect on mount).
--transfers int Number of file transfers to run in parallel (default 4)
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
-Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default
+Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
The --vfs-case-insensitive
mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different from what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fix up the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.
Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
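A minimal sketch of enabling the fixup behaviour on a mount:
rclone mount remote:path /mnt/remote --vfs-case-insensitive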
@@ -2284,61 +2502,62 @@ ffmpeg - | rclone rcat remote:path/to/file
Some backends, most notably S3, do not report the number of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Unlike rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
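A hedged example of checking the reported usage on a mount (remote and paths are placeholders):
rclone mount s3remote:bucket /mnt/s3 --vfs-used-is-size --daemon
df -h /mnt/s3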
rclone serve docker [flags]
- --allow-non-empty Allow mounting over a non-empty directory. Not supported on Windows.
- --allow-other Allow access to other users. Not supported on Windows.
- --allow-root Allow access to root user. Not supported on Windows.
- --async-read Use asynchronous reads. Not supported on Windows. (default true)
- --attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
- --base-dir string base directory for volumes (default "/var/lib/docker-volumes/rclone")
- --daemon Run mount as a daemon (background mode). Not supported on Windows.
- --daemon-timeout duration Time limit for rclone to respond to kernel. Not supported on Windows.
- --debug-fuse Debug the FUSE internals - needs -v.
- --default-permissions Makes kernel enforce access control based on the file mode. Not supported on Windows.
- --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+Options
+ --allow-non-empty Allow mounting over a non-empty directory (not supported on Windows)
+ --allow-other Allow access to other users (not supported on Windows)
+ --allow-root Allow access to root user (not supported on Windows)
+ --async-read Use asynchronous reads (not supported on Windows) (default true)
+ --attr-timeout duration Time for which file/directory attributes are cached (default 1s)
+ --base-dir string Base directory for volumes (default "/var/lib/docker-volumes/rclone")
+ --daemon Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows)
+ --daemon-timeout duration Time limit for rclone to respond to kernel (not supported on Windows)
+ --daemon-wait duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
+ --debug-fuse Debug the FUSE internals - needs -v
+ --default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows)
+ --dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
- --forget-state skip restoring previous state
- --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
- --gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
+ --forget-state Skip restoring previous state
+ --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for docker
- --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. Not supported on Windows. (default 128Ki)
- --network-mode Mount as remote network drive, instead of fixed disk drive. Supported on Windows only
- --no-checksum Don't compare checksums on up/download.
- --no-modtime Don't read/write the modification time (can speed things up).
- --no-seek Don't allow seeking in files.
- --no-spec do not write spec file
- --noappledouble Ignore Apple Double (._) and .DS_Store files. Supported on OSX only. (default true)
- --noapplexattr Ignore all "com.apple.*" extended attributes. Supported on OSX only.
- -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
- --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
- --read-only Mount read-only.
- --socket-addr string <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
+ --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
+ --network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
+ --no-checksum Don't compare checksums on up/download
+ --no-modtime Don't read/write the modification time (can speed things up)
+ --no-seek Don't allow seeking in files
+ --no-spec Do not write spec file
+ --noappledouble Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true)
+ --noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only)
+ -o, --option stringArray Option for libfuse/WinFsp (repeat if required)
+ --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --read-only Mount read-only
+ --socket-addr string Address <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
--socket-gid int GID for unix socket (default: current process GID) (default 1000)
- --uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
- --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- --vfs-case-insensitive If a file name not found, find a case insensitive match.
- --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
- --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
- --vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
- --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
- --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
- --volname string Set the volume name. Supported on Windows and OSX only.
- --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used. Not supported on Windows.
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+      --vfs-case-insensitive                   If a file name is not found, find a case insensitive match
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+ --volname string Set the volume name (supported on Windows and OSX only)
+ --write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
See the global flags page for global options not listed here.
-Serve remote:path over FTP.
-rclone serve ftp implements a basic ftp server to serve the remote over FTP protocol. This can be viewed with a ftp client or you can make a remote of type ftp to read and write it.
Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
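For example, a minimal sketch of serving FTP on all interfaces with simple credentials (user and password are placeholders):
rclone serve ftp remote:path --addr :2121 --user alice --pass secret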
@@ -2352,8 +2571,8 @@ ffmpeg - | rclone rcat remote:path/to/file
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
---dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+--dir-cache-time duration Time to cache directory entries for (default 5m0s)
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
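As an illustration (values are arbitrary), a slowly changing remote could use a longer directory cache with more frequent change polling:
rclone serve ftp remote:path --dir-cache-time 30m --poll-interval 30s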
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
You can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
@@ -2372,13 +2591,13 @@ ffmpeg - | rclone rcat remote:path/to/file
Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
---vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
---vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
---vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
+--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
-Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back second. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
+Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size
note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
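For instance, two instances over the same remote could each be given their own cache hierarchy (paths are examples):
rclone mount remote:path /mnt/one --vfs-cache-mode writes --cache-dir /tmp/rclone-cache-one
rclone mount remote:path /mnt/two --vfs-cache-mode writes --cache-dir /tmp/rclone-cache-two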
In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.
So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.
-This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.
-When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
-When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.
+This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode
writes.
When reading a file rclone will read --buffer-size
plus --vfs-read-ahead
bytes ahead. The --buffer-size
is buffered in memory whereas the --vfs-read-ahead
is buffered on disk.
When using this mode it is recommended that --buffer-size
is not set too large and --vfs-read-ahead
is set large if required.
IMPORTANT: not all file systems support sparse files. In particular, FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files, and it will log an ERROR message if one is detected.
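A sketch of a full-mode mount tuned for streaming reads, keeping the memory buffer modest while allowing a larger on-disk read-ahead (sizes are illustrative):
rclone mount remote:media /mnt/media --vfs-cache-mode full --buffer-size 16M --vfs-read-ahead 256M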
+When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.
+These flags control the chunking:
+--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+Rclone will start reading a chunk of size --vfs-read-chunk-size
, and then double the size for each read. When --vfs-read-chunk-size-limit
is specified, and greater than --vfs-read-chunk-size
, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.
With --vfs-read-chunk-size 100M
and --vfs-read-chunk-size-limit 0
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M
is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting --vfs-read-chunk-size
to 0
or "off" disables chunked reading.
-These flags may be used to enable/disable features of the VFS for performance or other reasons.
-In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
+These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature.
+In particular S3 and Swift benefit hugely from the --no-modtime
flag (or use --use-server-modtime
for a slightly different effect) as each read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
-When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.
-Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.
---vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
Sometimes rclone is delivered reads or writes out of order. Rather than seeking, rclone will wait a short time for the in-sequence read or write to come in. These flags only come into effect when not using an on-disk cache file.
---vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
---vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
-When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers have no effect on mount).
---transfers int Number of file transfers to run in parallel. (default 4)
+--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+When using VFS write caching (--vfs-cache-mode
with value writes or full), the global flag --transfers
can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers
have no effect on mount).
--transfers int Number of file transfers to run in parallel (default 4)
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
-Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default
+Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
The --vfs-case-insensitive
mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different from what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fix up the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.
Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
@@ -2472,47 +2695,47 @@ ffmpeg - | rclone rcat remote:path/to/file
Note that an internal cache is keyed on user
so only use that for configuration, don't use pass
or public_key
. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve ftp remote:path [flags]
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121")
- --auth-proxy string A program to use to create the backend from the auth.
+Options
+ --addr string IPaddress:Port or :Port to bind server to (default "localhost:2121")
+ --auth-proxy string A program to use to create the backend from the auth
--cert string TLS PEM key (concatenation of certificate and CA certificate)
- --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
- --gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for ftp
--key string TLS PEM Private key
- --no-checksum Don't compare checksums on up/download.
- --no-modtime Don't read/write the modification time (can speed things up).
- --no-seek Don't allow seeking in files.
- --pass string Password for authentication. (empty value allow every password)
- --passive-port string Passive port range to use. (default "30000-32000")
- --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
- --public-ip string Public IP address to advertise for passive connections.
- --read-only Mount read-only.
- --uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
- --user string User name for authentication. (default "anonymous")
- --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
+ --no-checksum Don't compare checksums on up/download
+ --no-modtime Don't read/write the modification time (can speed things up)
+ --no-seek Don't allow seeking in files
+      --pass string                            Password for authentication (empty value allows every password)
+ --passive-port string Passive port range to use (default "30000-32000")
+ --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --public-ip string Public IP address to advertise for passive connections
+ --read-only Mount read-only
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --user string User name for authentication (default "anonymous")
+ --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- --vfs-case-insensitive If a file name not found, find a case insensitive match.
- --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
- --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
- --vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
- --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
- --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+      --vfs-case-insensitive                   If a file name is not found, find a case insensitive match
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
See the global flags page for global options not listed here.
-Serve the remote over HTTP.
-rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.
You can use the filter flags (e.g. --include, --exclude) to control what is served.
The server will log errors. Use -v to see access logs.
@@ -2614,14 +2837,15 @@ htpasswd -B htpasswd user htpasswd -B htpasswd anotherUser
The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
+Use --salt to change the password hashing salt from the default.
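Putting these together, a hedged example of serving with htpasswd authentication (file name is a placeholder):
htpasswd -B -c htpasswd alice
rclone serve http remote:path --addr :8080 --htpasswd htpasswd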
This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this, there are various options explained below.
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
---dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+--dir-cache-time duration Time to cache directory entries for (default 5m0s)
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
You can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
@@ -2640,13 +2864,13 @@ htpasswd -B htpasswd anotherUser
Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
---vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
---vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
---vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
+--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
-Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back second. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
+Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size
note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.
So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.
-This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.
-When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
-When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.
+This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode
writes.
When reading a file rclone will read --buffer-size
plus --vfs-read-ahead
bytes ahead. The --buffer-size
is buffered in memory whereas the --vfs-read-ahead
is buffered on disk.
When using this mode it is recommended that --buffer-size
is not set too large and --vfs-read-ahead
is set large if required.
IMPORTANT: not all file systems support sparse files. In particular, FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files, and it will log an ERROR message if one is detected.
+When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.
+These flags control the chunking:
+--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+Rclone will start reading a chunk of size --vfs-read-chunk-size
, and then double the size for each read. When --vfs-read-chunk-size-limit
is specified, and greater than --vfs-read-chunk-size
, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.
With --vfs-read-chunk-size 100M
and --vfs-read-chunk-size-limit 0
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M
is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting --vfs-read-chunk-size
to 0
or "off" disables chunked reading.
-These flags may be used to enable/disable features of the VFS for performance or other reasons.
-In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
+These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature.
+In particular S3 and Swift benefit hugely from the --no-modtime
flag (or use --use-server-modtime
for a slightly different effect) as each read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
-When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.
-Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.
---vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
Sometimes rclone is delivered reads or writes out of order. Rather than seeking, rclone will wait a short time for the in-sequence read or write to come in. These flags only come into effect when not using an on-disk cache file.
---vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
---vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
-When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers have no effect on mount).
---transfers int Number of file transfers to run in parallel. (default 4)
+--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+When using VFS write caching (--vfs-cache-mode
with value writes or full), the global flag --transfers
can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers
have no effect on mount).
--transfers int Number of file transfers to run in parallel (default 4)
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
-Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default
+Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
The --vfs-case-insensitive
mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different from what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fix up the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.
Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
@@ -2710,52 +2938,53 @@ htpasswd -B htpasswd anotherUser
Some backends, most notably S3, do not report the number of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Unlike rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
rclone serve http remote:path [flags]
- --addr string IPaddress:Port or :Port to bind server to. (default "127.0.0.1:8080")
- --baseurl string Prefix for URLs - leave blank for root.
+Options
+ --addr string IPaddress:Port or :Port to bind server to (default "127.0.0.1:8080")
+ --baseurl string Prefix for URLs - leave blank for root
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
- --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
- --gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for http
- --htpasswd string htpasswd file - if not provided no authentication is done
+ --htpasswd string A htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
- --no-checksum Don't compare checksums on up/download.
- --no-modtime Don't read/write the modification time (can speed things up).
- --no-seek Don't allow seeking in files.
- --pass string Password for authentication.
- --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
- --read-only Mount read-only.
- --realm string realm for authentication
+ --no-checksum Don't compare checksums on up/download
+ --no-modtime Don't read/write the modification time (can speed things up)
+ --no-seek Don't allow seeking in files
+ --pass string Password for authentication
+ --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --read-only Mount read-only
+ --realm string Realm for authentication
+ --salt string Password hashing salt (default "dlPL2MqE")
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --template string User Specified Template.
- --uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
- --user string User name for authentication.
- --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
+ --template string User-specified template
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --user string User name for authentication
+ --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- --vfs-case-insensitive If a file name not found, find a case insensitive match.
- --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
- --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
- --vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
- --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
- --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+      --vfs-case-insensitive                   If a file name is not found, find a case insensitive match
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
See the global flags page for global options not listed here.
-Serve the remote for restic's REST API.
-rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.
Restic is a command line program for doing backups.
The server will log errors. Use -v to see access logs.
@@ -2895,40 +3124,40 @@ htpasswd -B htpasswd anotherUser
By default this will serve over http. If you want, you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client-side certificate validation then you will need to supply --client-ca also.
--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
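For instance (certificate paths assumed), serving restic's REST API over https and pointing restic at it:
rclone serve restic remote:backup --addr :8443 --cert server.pem --key server.key
restic -r rest:https://localhost:8443/ init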
rclone serve restic remote:path [flags]
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
- --append-only disallow deletion of repository data
- --baseurl string Prefix for URLs - leave blank for root.
- --cache-objects cache listed objects (default true)
+Options
+ --addr string IPaddress:Port or :Port to bind server to (default "localhost:8080")
+ --append-only Disallow deletion of repository data
+ --baseurl string Prefix for URLs - leave blank for root
+ --cache-objects Cache listed objects (default true)
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
-h, --help help for restic
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
- --pass string Password for authentication.
- --private-repos users can only access their private repo
+ --pass string Password for authentication
+ --private-repos Users can only access their private repo
--realm string realm for authentication (default "rclone")
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --stdio run an HTTP2 server on stdin/stdout
- --template string User Specified Template.
- --user string User name for authentication.
+ --stdio Run an HTTP2 server on stdin/stdout
+ --template string User-specified template
+ --user string User name for authentication
See the global flags page for global options not listed here.
-Serve the remote over SFTP.
-rclone serve sftp implements an SFTP server to serve the remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.
You can use the filter flags (e.g. --include, --exclude) to control what is served.
The server will log errors. Use -v to see access logs.
--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
You must provide some means of authentication, either with --user/--pass, an authorized keys file (specify location with --authorized-keys - the default is the same as ssh), an --auth-proxy, or the --no-auth flag for no authentication when logging in.
Note that this also implements a small number of shell commands so that it can provide md5sum/sha1sum/df information for the rclone sftp backend. This means that it can support SHA1SUMs, MD5SUMs and the about command when paired with the rclone sftp backend.
-If you don't supply a --key then rclone will generate one and cache it for later use.
+If you don't supply a host --key then rclone will generate rsa, ecdsa and ed25519 variants, and cache them for later use in rclone's cache directory (see "rclone help flags cache-dir") in the "serve-sftp" directory.
By default the server binds to localhost:2022 - if you want it to be reachable externally then supply "--addr :2022" for example.
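For example, a sketch of an externally reachable server with simple credentials (placeholders throughout):
rclone serve sftp remote:path --addr :2022 --user alice --pass secret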
Note that the default of "--vfs-cache-mode off" is fine for the rclone sftp backend, but it may not be with other SFTP clients.
If --stdio is specified, rclone will serve SFTP over stdio, which can be used with sshd via ~/.ssh/authorized_keys, for example:
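A hedged sketch of such an authorized_keys entry (key material elided, served path assumed):
restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa AAAA... user@host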
@@ -2939,8 +3168,8 @@ htpasswd -B htpasswd anotherUser
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
---dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+--dir-cache-time duration Time to cache directory entries for (default 5m0s)
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
You can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
@@ -2959,13 +3188,13 @@ htpasswd -B htpasswd anotherUser
Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
---vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
---vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
---vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
+--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
-Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back second. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
+Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size
note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.
So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.
-This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.
-When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
-When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.
+This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode
writes.
When reading a file rclone will read --buffer-size
plus --vfs-read-ahead
bytes ahead. The --buffer-size
is buffered in memory whereas the --vfs-read-ahead
is buffered on disk.
When using this mode it is recommended that --buffer-size
is not set too large and --vfs-read-ahead
is set large if required.
IMPORTANT: not all file systems support sparse files. In particular, FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files, and it will log an ERROR message if one is detected.
+When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.
+These flags control the chunking:
+--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+Rclone will start reading a chunk of size --vfs-read-chunk-size
, and then double the size for each read. When --vfs-read-chunk-size-limit
is specified, and greater than --vfs-read-chunk-size
, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.
With --vfs-read-chunk-size 100M
and --vfs-read-chunk-size-limit 0
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M
is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting --vfs-read-chunk-size
to 0
or "off" disables chunked reading.
-These flags may be used to enable/disable features of the VFS for performance or other reasons.
-In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
+These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature.
+In particular S3 and Swift benefit hugely from the --no-modtime
flag (or use --use-server-modtime
for a slightly different effect) as each read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
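For example, assuming s3remote: is a placeholder for an S3 remote, a read-only mount that avoids modification time transactions might look like:
rclone mount s3remote:bucket /mnt/s3 --read-only --no-modtime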
-When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.
-Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.
---vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in-sequence read or write to come in. These flags only come into effect when not using an on-disk cache file.
---vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
---vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
-When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers have no effect on mount).
---transfers int Number of file transfers to run in parallel. (default 4)
+--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+When using VFS write caching (--vfs-cache-mode
with value writes or full), the global flag --transfers
can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers
has no effect on mount).
--transfers int Number of file transfers to run in parallel (default 4)
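An illustrative invocation (paths are placeholders) raising the number of parallel uploads from the write cache:
rclone mount remote:path /mnt/remote --vfs-cache-mode writes --transfers 8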
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
-Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default
+Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
The --vfs-case-insensitive
mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fix up the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.
Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
@@ -3059,47 +3292,47 @@ htpasswd -B htpasswd anotherUser
Note that an internal cache is keyed on user
so only use that for configuration, don't use pass
or public_key
. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve sftp remote:path [flags]
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2022")
- --auth-proxy string A program to use to create the backend from the auth.
+Options
+ --addr string IPaddress:Port or :Port to bind server to (default "localhost:2022")
+ --auth-proxy string A program to use to create the backend from the auth
--authorized-keys string Authorized keys file (default "~/.ssh/authorized_keys")
- --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
- --gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for sftp
--key stringArray SSH private host key file (Can be multi-valued, leave blank to auto generate)
- --no-auth Allow connections with no authentication if set.
- --no-checksum Don't compare checksums on up/download.
- --no-modtime Don't read/write the modification time (can speed things up).
- --no-seek Don't allow seeking in files.
- --pass string Password for authentication.
- --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
- --read-only Mount read-only.
+ --no-auth Allow connections with no authentication if set
+ --no-checksum Don't compare checksums on up/download
+ --no-modtime Don't read/write the modification time (can speed things up)
+ --no-seek Don't allow seeking in files
+ --pass string Password for authentication
+ --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --read-only Mount read-only
--stdio Run an sftp server on stdin/stdout
- --uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
- --user string User name for authentication.
- --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --user string User name for authentication
+ --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- --vfs-case-insensitive If a file name not found, find a case insensitive match.
- --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
- --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
- --vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
- --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
- --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-case-insensitive If a file name is not found, find a case insensitive match
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
See the global flags page for global options not listed here.
-Serve remote:path over webdav.
-rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client, through a web browser, or you can make a remote of type webdav to read and write it.
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+--dir-cache-time duration Time to cache directory entries for (default 5m0s)
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
You can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
@@ -3230,13 +3463,13 @@ htpasswd -B htpasswd anotherUser
Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
---vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
---vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
---vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
+--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back second. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
+Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back
seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
If using --vfs-cache-max-size
note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval
. Secondly because open files cannot be evicted from the cache.
You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off
. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir
. You don't need to worry about this if the remotes in use don't overlap.
In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.
In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.
So if an application only reads the start of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.
-This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.
-When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
-When using this mode it is recommended that --buffer-size is not set too big and --vfs-read-ahead is set large if required.
+This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode
writes.
When reading a file rclone will read --buffer-size
plus --vfs-read-ahead
bytes ahead. The --buffer-size
is buffered in memory whereas the --vfs-read-ahead
is buffered on disk.
When using this mode it is recommended that --buffer-size
is not set too large and --vfs-read-ahead
is set large if required.
IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.
+When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.
+These flags control the chunking:
+--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+Rclone will start reading a chunk of size --vfs-read-chunk-size
, and then double the size for each read. When --vfs-read-chunk-size-limit
is specified, and greater than --vfs-read-chunk-size
, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.
With --vfs-read-chunk-size 100M
and --vfs-read-chunk-size-limit 0
the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M
is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
Setting --vfs-read-chunk-size
to 0
or "off" disables chunked reading.
These flags may be used to enable/disable features of the VFS for performance or other reasons.
-In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
+These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature.
+In particular S3 and Swift benefit hugely from the --no-modtime
flag (or use --use-server-modtime
for a slightly different effect) as each read of the modification time takes a transaction.
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Mount read-only.
-When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This is advantageous because some cloud providers account for reads being all the data requested, not all the data delivered.
-Rclone will keep doubling the chunk size requested starting at --vfs-read-chunk-size with a maximum of --vfs-read-chunk-size-limit unless it is set to "off" in which case there will be no limit.
---vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default "off")
Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in-sequence read or write to come in. These flags only come into effect when not using an on-disk cache file.
---vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
---vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
-When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers have no effect on mount).
---transfers int Number of file transfers to run in parallel. (default 4)
+--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+When using VFS write caching (--vfs-cache-mode
with value writes or full), the global flag --transfers
can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers
has no effect on mount).
--transfers int Number of file transfers to run in parallel (default 4)
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
-Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default
+Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
The --vfs-case-insensitive
mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fix up the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.
Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
@@ -3330,55 +3567,55 @@ htpasswd -B htpasswd anotherUser
Note that an internal cache is keyed on user
so only use that for configuration, don't use pass
or public_key
. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve webdav remote:path [flags]
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
- --auth-proxy string A program to use to create the backend from the auth.
- --baseurl string Prefix for URLs - leave blank for root.
+Options
+ --addr string IPaddress:Port or :Port to bind server to (default "localhost:8080")
+ --auth-proxy string A program to use to create the backend from the auth
+ --baseurl string Prefix for URLs - leave blank for root
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
- --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--disable-dir-list Disable HTML directory list on GET request for a directory
--etag-hash string Which hash to use for the ETag, or auto or blank for off
--file-perms FileMode File permissions (default 0666)
- --gid uint32 Override the gid field set by the filesystem. Not supported on Windows. (default 1000)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for webdav
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
- --no-checksum Don't compare checksums on up/download.
- --no-modtime Don't read/write the modification time (can speed things up).
- --no-seek Don't allow seeking in files.
- --pass string Password for authentication.
- --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
- --read-only Mount read-only.
+ --no-checksum Don't compare checksums on up/download
+ --no-modtime Don't read/write the modification time (can speed things up)
+ --no-seek Don't allow seeking in files
+ --pass string Password for authentication
+ --poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --read-only Mount read-only
--realm string realm for authentication (default "rclone")
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --template string User Specified Template.
- --uid uint32 Override the uid field set by the filesystem. Not supported on Windows. (default 1000)
- --umask int Override the permission bits set by the filesystem. Not supported on Windows. (default 2)
- --user string User name for authentication.
- --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
+ --template string User-specified template
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --user string User name for authentication
+ --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
- --vfs-case-insensitive If a file name not found, find a case insensitive match.
- --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full.
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128Mi)
- --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- --vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
- --vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
- --vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
- --vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-case-insensitive If a file name is not found, find a case insensitive match
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
See the global flags page for global options not listed here.
-Changes storage class/tier of objects in remote.
-rclone settier changes storage tier or class at remote if supported. Few cloud storage services provides different storage classes on objects, for example AWS S3 and Glacier, Azure Blob storage - Hot, Cool and Archive, Google Cloud Storage, Regional Storage, Nearline, Coldline etc.
Note that certain tier changes make objects unavailable for immediate access. For example, tiering to archive in Azure Blob storage puts objects into a frozen state; the user can restore them by setting the tier to Hot/Cool. Similarly, moving S3 objects to Glacier makes them inaccessible.
You can use it to tier single object
@@ -3388,25 +3625,25 @@ htpasswd -B htpasswd anotherUser
Or just provide the remote directory and all files in the directory will be tiered
rclone settier tier remote:path/dir
rclone settier tier remote:path [flags]
- -h, --help help for settier
See the global flags page for global options not listed here.
-Run a test command
-Rclone test is used to run test commands.
Select which test command you want with the subcommand, e.g.
rclone test memory remote:
Each subcommand has its own options which you can see in their help.
NB Be careful running these commands, they may do strange things so reading their documentation first is recommended.
- -h, --help help for test
See the global flags page for global options not listed here.
-Log any change notify requests for the remote passed in.
rclone test changenotify remote: [flags]
- -h, --help help for changenotify
- --poll-interval duration Time to wait between polling for changes. (default 10s)
+ --poll-interval duration Time to wait between polling for changes (default 10s)
See the global flags page for global options not listed here.
-Makes a histogram of file name characters.
-This command outputs JSON which shows the histogram of characters used in filenames in the remote:path specified.
The data doesn't contain any identifying information but is useful for the rclone developers when developing filename compression.
rclone test histogram [remote:path] [flags]
- -h, --help help for histogram
See the global flags page for global options not listed here.
-Discovers file name or other limitations for paths.
-rclone info discovers what filenames and upload methods are possible to write to the paths passed in and how long they can be. It can take some time. It will write test files into the remote:path passed in. It outputs a bit of go code for each one.
NB this can create undeletable files and other hazards - use with care
rclone test info [remote:path]+ [flags]
- --all Run all tests.
- --check-control Check control characters.
- --check-length Check max filename length.
- --check-normalization Check UTF-8 Normalization.
- --check-streaming Check uploads with indeterminate file size.
+Options
+ --all Run all tests
+ --check-control Check control characters
+ --check-length Check max filename length
+ --check-normalization Check UTF-8 Normalization
+ --check-streaming Check uploads with indeterminate file size
-h, --help help for info
- --upload-wait duration Wait after writing a file.
- --write-json string Write results to file.
+ --upload-wait duration Wait after writing a file
+ --write-json string Write results to file
See the global flags page for global options not listed here.
-Make a random file hierarchy in a directory
rclone test makefiles <dir> [flags]
- --files int Number of files to create (default 1000)
--files-per-directory int Average number of files per directory (default 10)
-h, --help help for makefiles
@@ -3472,46 +3709,48 @@ htpasswd -B htpasswd anotherUser
--min-name-length int Minimum size of file names (default 4)
--seed int Seed for the random number generator (0 for random) (default 1)
See the global flags page for global options not listed here.
-Load all the objects at remote:path into memory and report memory stats.
rclone test memory remote:path [flags]
- -h, --help help for memory
See the global flags page for global options not listed here.
-Create new file or change file modification time.
-Set the modification time on object(s) as specified by remote:path to have the current time.
-If remote:path does not exist then a zero sized object will be created unless the --no-create flag is provided.
-If --timestamp is used then it will set the modification time to that time instead of the current time. Times may be specified as one of:
+Set the modification time on file(s) as specified by remote:path to have the current time.
+If remote:path does not exist then a zero sized file will be created, unless --no-create
or --recursive
is provided.
If --recursive
is used then rclone recursively sets the modification time on all existing files that are found under the path. Filters are supported, and you can test with the --dry-run
or the --interactive
flag.
If --timestamp
is used then sets the modification time to that time instead of the current time. Times may be specified as one of:
'YYMMDD' - e.g. 17.10.30
'YYYY-MM-DDTHH:MM:SS' - e.g. 2006-01-02T15:04:05
'YYYY-MM-DDTHH:MM:SS.SSS' - e.g. 2006-01-02T15:04:05.123456789
Note that --timestamp is in UTC if you want local time then add the --localtime flag.
+Note that the value of --timestamp
is in UTC. If you want local time then add the --localtime
flag.
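For example (the remote path is a placeholder), to stamp a file with an explicit time, creating it if it does not exist:
rclone touch remote:path/file.txt --timestamp 2006-01-02T15:04:05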
rclone touch remote:path [flags]
- -h, --help help for touch
- --localtime Use localtime for timestamp, not UTC.
- -C, --no-create Do not create the file if it does not exist.
- -t, --timestamp string Use specified time instead of the current time of day.
+ --localtime Use localtime for timestamp, not UTC
+ -C, --no-create Do not create the file if it does not exist (implied with --recursive)
+ -R, --recursive Recursively touch all files
+ -t, --timestamp string Use specified time instead of the current time of day
See the global flags page for global options not listed here.
-List the contents of the remote in a tree like fashion.
-rclone tree lists the contents of a remote in a similar way to the unix tree command.
For example
$ rclone tree remote:path
@@ -3527,30 +3766,30 @@ htpasswd -B htpasswd anotherUser
You can use any of the filtering options with the tree command (e.g. --include and --exclude). You can also use --fast-list.
The tree command has many options for controlling the listing which are compatible with the unix tree command. Note that not all of them have short options, as they conflict with rclone's short options.
rclone tree remote:path [flags]
- -a, --all All files are listed (list . files too).
- -C, --color Turn colorization on always.
- -d, --dirs-only List directories only.
- --dirsfirst List directories before files (-U disables).
- --full-path Print the full path prefix for each file.
+Options
+ -a, --all All files are listed (list . files too)
+ -C, --color Turn colorization on always
+ -d, --dirs-only List directories only
+ --dirsfirst List directories before files (-U disables)
+ --full-path Print the full path prefix for each file
-h, --help help for tree
--human Print the size in a more human readable way.
- --level int Descend only level directories deep.
+ --level int Descend only level directories deep
-D, --modtime Print the date of last modification.
- --noindent Don't print indentation lines.
- --noreport Turn off file/directory count at end of tree listing.
- -o, --output string Output to file instead of stdout.
+ --noindent Don't print indentation lines
+ --noreport Turn off file/directory count at end of tree listing
+ -o, --output string Output to file instead of stdout
-p, --protections Print the protections for each file.
-Q, --quote Quote filenames with double quotes.
-s, --size Print the size in bytes of each file.
- --sort string Select sort: name,version,size,mtime,ctime.
- --sort-ctime Sort files by last status change time.
- -t, --sort-modtime Sort files by last modification time.
- -r, --sort-reverse Reverse the order of the sort.
- -U, --unsorted Leave files unsorted.
- --version Sort files alphanumerically by version.
+ --sort string Select sort: name,version,size,mtime,ctime
+ --sort-ctime Sort files by last status change time
+ -t, --sort-modtime Sort files by last modification time
+ -r, --sort-reverse Reverse the order of the sort
+ -U, --unsorted Leave files unsorted
+ --version Sort files alphanumerically by version
See the global flags page for global options not listed here.
-Will get their own names
DEBUG : :s3: detected overridden config - adding "{YTu53}" suffix to name
Remote names are case sensitive, and must adhere to the following rules: - May only contain 0-9, A-Z, a-z, _, - and space. - May not start with - or space.
When you are typing commands to your computer you are using something called the command line shell. This interprets various characters in an OS specific way.
Here are some gotchas which may help users unfamiliar with the shell rules.
@@ -3665,11 +3901,11 @@ rclone copy :sftp,host=example.com:path/to/dir /tmp/dir
This can be used when scripting to make aged backups efficiently, e.g.
rclone sync -i remote:current-backup remote:previous-backup
rclone sync -i /path/to/files remote:current-backup
-Rclone has a number of options to control its behaviour.
Options that take parameters can have the values passed in two ways, --option=value
or --option value
. However boolean (true/false) options behave slightly differently to the other options in that --boolean
sets the option to true
and the absence of the flag sets it to false
. It is also possible to specify --boolean=false
or --boolean=true
. Note that --boolean false
is not valid - this is parsed as --boolean
and the false
is parsed as an extra command line argument for rclone.
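To illustrate (paths are placeholders): the first command below sets --dry-run to true, while the second is parsed as --dry-run plus an extra argument false:
rclone sync --dry-run=true /path/to/src remote:dst
rclone sync --dry-run false /path/to/src remote:dst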
Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
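For example (the flag choice here is illustrative), duration values can be passed to any TIME option:
rclone copy /path/to/src remote:dst --contimeout 1m30s --timeout 2h45m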
-Options which use SIZE use KiByte (multiples of 1024 bytes) by default. However, a suffix of B
for Byte, K
for KiByte, M
for MiByte, G
for GiByte, T
for TiByte and P
for PiByte may be used. These are the binary units, e.g. 1, 2**10, 2**20, 2**30 respectively.
Options which use SIZE use KiB (multiples of 1024 bytes) by default. However, a suffix of B
for Byte, K
for KiB, M
for MiB, G
for GiB, T
for TiB and P
for PiB may be used. These are the binary units, e.g. 1, 2**10, 2**20, 2**30 respectively.
When using sync
, copy
or move
any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.
If --suffix
is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.
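A typical invocation (remote names are placeholders) keeps files overwritten in remote:current in their original hierarchy under remote:old:
rclone sync -i /path/to/local remote:current --backup-dir remote:old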
This option controls the bandwidth limit. For example
--bwlimit 10M
-would mean limit the upload and download bandwidth to 10 MiByte/s. NB this is bytes per second not bits per second. To use a single limit, specify the desired bandwidth in KiByte/s, or use a suffix B|K|M|G|T|P. The default is 0
which means to not limit bandwidth.
would mean limit the upload and download bandwidth to 10 MiB/s. NB this is bytes per second not bits per second. To use a single limit, specify the desired bandwidth in KiB/s, or use a suffix B|K|M|G|T|P. The default is 0
which means to not limit bandwidth.
The upload and download bandwidth can be specified separately, as --bwlimit UP:DOWN
, so
--bwlimit 10M:100k
-would mean limit the upload bandwidth to 10 MiByte/s and the download bandwidth to 100 KiByte/s. Either limit can be "off" meaning no limit, so to just limit the upload bandwidth you would use
+would mean limit the upload bandwidth to 10 MiB/s and the download bandwidth to 100 KiB/s. Either limit can be "off" meaning no limit, so to just limit the upload bandwidth you would use
--bwlimit 10M:off
-this would limit the upload bandwidth to 10 MiByte/s but the download bandwidth would be unlimited.
+this would limit the upload bandwidth to 10 MiB/s but the download bandwidth would be unlimited.
When specified as above the bandwidth limits last for the duration of the run of the rclone binary.
It is also possible to specify a "timetable" of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as WEEKDAY-HH:MM,BANDWIDTH WEEKDAY-HH:MM,BANDWIDTH...
where: WEEKDAY
is an optional element.
An example of a typical timetable to avoid link saturation during daytime working hours could be:
--bwlimit "08:00,512k 12:00,10M 13:00,512k 18:00,30M 23:00,off"
In this example, the transfer bandwidth will be set to 512 KiByte/s at 8am every day. At noon, it will rise to 10 MiByte/s, and drop back to 512 KiByte/sec at 1pm. At 6pm, the bandwidth limit will be set to 30 MiByte/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.
+In this example, the transfer bandwidth will be set to 512 KiB/s at 8am every day. At noon, it will rise to 10 MiB/s, and drop back to 512 KiB/sec at 1pm. At 6pm, the bandwidth limit will be set to 30 MiB/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.
An example of timetable with WEEKDAY
could be:
--bwlimit "Mon-00:00,512 Fri-23:59,10M Sat-10:00,1M Sun-20:00,off"
It means that, the transfer bandwidth will be set to 512 KiByte/s on Monday. It will rise to 10 MiByte/s before the end of Friday. At 10:00 on Saturday it will be set to 1 MiByte/s. From 20:00 on Sunday it will be unlimited.
+It means that the transfer bandwidth will be set to 512 KiB/s on Monday. It will rise to 10 MiB/s before the end of Friday. At 10:00 on Saturday it will be set to 1 MiB/s. From 20:00 on Sunday it will be unlimited.
Timeslots without WEEKDAY
are extended to the whole week. So this example:
--bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"
Is equivalent to this:
--bwlimit "Mon-00:00,512Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"
Bandwidth limits apply to the data transfer for all backends. For most backends the directory listing bandwidth is also included (exceptions being the non-HTTP backends, ftp
, sftp
and tardigrade
).
Note that the units are Byte/s, not bit/s. Typically connections are measured in bit/s - to convert divide by 8. For example, let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625 MiByte/s so you would use a --bwlimit 0.625M
parameter for rclone.
Note that the units are Byte/s, not bit/s. Typically connections are measured in bit/s - to convert divide by 8. For example, let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625 MiB/s so you would use a --bwlimit 0.625M
parameter for rclone.
On Unix systems (Linux, macOS, …) the bandwidth limiter can be toggled by sending a SIGUSR2
signal to rclone. This allows you to remove the limitations of a long-running rclone transfer and to restore it back to the value specified with --bwlimit
quickly when needed. Assuming there is only one rclone instance running, you can toggle the limiter like this:
kill -SIGUSR2 $(pidof rclone)
If you configure rclone with a remote control then you can change the bwlimit dynamically:
rclone rc core/bwlimit rate=1M
This option controls the per-file bandwidth limit. For the options see the --bwlimit
flag.
For example use this to allow no transfers to be faster than 1 MiByte/s
+For example use this to allow no transfers to be faster than 1 MiB/s
--bwlimit-file 1M
This can be used in conjunction with --bwlimit
.
Note that if a schedule is provided the file will use the schedule in effect at the start of the transfer.
@@ -3724,6 +3960,11 @@ rclone sync -i /path/to/files remote:current-backup
When using mount
or cmount
each open file descriptor will use this much memory for buffering. See the mount documentation for more details.
Set to 0
to disable the buffering for the minimum memory usage.
Note that the memory allocation of the buffers is influenced by the --use-mmap flag.
+Specify the directory rclone will use for caching, to override the default.
+The default value depends on the operating system:
- Windows %LocalAppData%\rclone, if LocalAppData is defined.
- macOS $HOME/Library/Caches/rclone if HOME is defined.
- Unix $XDG_CACHE_HOME/rclone if XDG_CACHE_HOME is defined, else $HOME/.cache/rclone if HOME is defined.
- Fallback (on all OS) to $TMPDIR/rclone, where TMPDIR is the value from --temp-dir.
You can use the config paths command to see the current value.
+The cache directory is heavily used by the VFS File Caching mount feature, but also by serve, GUI and other parts of rclone.
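For example (paths are placeholders), the cache could be redirected to a larger disk when using VFS file caching:
rclone mount remote:path /mnt/remote --vfs-cache-mode full --cache-dir /mnt/bigdisk/rclone-cache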
If this flag is set then in a sync
, copy
or move
, rclone will do all the checks to see whether files need to be transferred before doing any of the transfers. Normally rclone would start running transfers as soon as possible.
This flag can be useful on IO limited systems where transfers interfere with checking.
@@ -3833,6 +4074,14 @@ pass = PDPcQVVjVtzFY-GTdDFozqBhTdsPg3qH
Add an HTTP header for all upload transactions. The flag can be repeated to add multiple headers.
rclone sync -i ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"
See the GitHub issue here for currently supported backends.
+Rclone commands output values for sizes (e.g. number of bytes) and counts (e.g. number of files) either as raw numbers, or in human-readable format.
+In human-readable format the values are scaled to larger units, indicated with a suffix shown after the value, and rounded to three decimals. Rclone consistently uses binary units (powers of 2) for sizes and decimal units (powers of 10) for counts. The unit prefix for size is according to IEC standard notation, e.g. Ki
for kibi. Used with byte unit, 1 KiB
means 1024 Byte. In list type of output, only the unit prefix is appended to the value (e.g. 9.762Ki
), while in more textual output the full unit is shown (e.g. 9.762 KiB
). For counts the SI standard notation is used, e.g. prefix k
for kilo. Used with file counts, 1k
means 1000 files.
The various list commands output raw numbers by default. Option --human-readable
will make them output values in human-readable format instead (with the short unit prefix).
The about command outputs human-readable by default, with a command-specific option --full
to output the raw numbers instead.
Command size outputs both human-readable and raw numbers in the same output.
+The tree command also considers --human-readable
, but it will not use the exact same notation as the other commands: it rounds to one decimal and uses a single-letter suffix, e.g. K
instead of Ki
. The reason for this is that it relies on an external library.
The interactive command ncdu shows human-readable by default, and responds to key u
for toggling human-readable format.
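For instance (remote: is a placeholder), a listing can be switched to human-readable sizes like this:
rclone ls remote: --human-readable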
Using this option will cause rclone to ignore the case of the files when synchronizing so files will not be copied/synced when the existing filenames are the same, even if the casing is different.
Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.
While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.
+When performing a move
/moveto
command, this flag will leave skipped files in the source location unchanged when a file with the same name exists on the destination.
Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If --checksum
is set then it only checks the checksum.
It will also cause rclone to skip verifying the sizes are the same after transfer.
@@ -3880,7 +4130,7 @@ y/n/s/!/q> n
If FILE exists then rclone will append to it.
Note that if you are using the logrotate
program to manage rclone's logs, then you should use the copytruncate
option as rclone doesn't have a signal to rotate logs.
Comma separated list of log format options. date
, time
, microseconds
, longfile
, shortfile
, UTC
. The default is "date
,time
".
Comma separated list of log format options. Accepted options are date
, time
, microseconds
, pid
, longfile
, shortfile
, UTC
. Any other keywords will be silently ignored. pid
will tag log messages with the process identifier, which is useful with rclone mount --daemon
. Other accepted options are explained in the go documentation. The default log format is "date
,time
".
This sets the log level for rclone. The default log level is NOTICE
.
DEBUG
is equivalent to -vv
. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.
This can be useful for running rclone in a script or rclone mount
.
If using --syslog
this sets the syslog facility (e.g. KERN
, USER
). See man syslog
for a list of possible facilities. The default facility is DAEMON
.
Specify the directory rclone will use for temporary files, to override the default. Make sure the directory exists and has accessible permissions.
+By default the operating system's temp directory will be used:
- On Unix systems, $TMPDIR if non-empty, else /tmp.
- On Windows, the first non-empty value from %TMP%, %TEMP%, %USERPROFILE%, or the Windows directory.
When overriding the default with this option, the specified path will be set as the value of the environment variable TMPDIR
on Unix systems and TMP
and TEMP
on Windows.
You can use the config paths command to see the current value.
Limit transactions per second to this number. The default is 0, which means unlimited transactions per second.
A transaction is roughly defined as an API call; its exact meaning will depend on the backend. For HTTP based backends it is an HTTP PUT/GET/POST/etc and its response. For FTP/SFTP it is a round trip transaction over TCP.
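For example (the value 10 is illustrative), to keep rclone to 10 transactions per second:
rclone copy /path/to/src remote:dst --tpslimit 10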
@@ -4303,7 +4558,7 @@ export RCLONE_CONFIG_PASS
Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.
-Every option in rclone can have its default set by environment variable.
To find the name of the environment variable, first take the long option name, strip the leading --
, change -
to _
, make upper case and prepend RCLONE_
.
For example, to always set --stats 5s
, set the environment variable RCLONE_STATS=5s
. If you set stats on the command line this will override the environment variable setting.
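A minimal shell sketch of the same idea (the copy paths are placeholders):
$ export RCLONE_STATS=5s
$ rclone copy /path/to/src remote:dst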
$ export RCLONE_CONFIG_MYS3_TYPE=s3
$ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
$ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
-$ rclone lsd MYS3:
+$ rclone lsd mys3:
-1 2016-09-21 12:54:21 -1 my-bucket
$ rclone listremotes | grep mys3
mys3:
Note that if you want to create a remote using environment variables you must create the ..._TYPE
variable as above.
Note that the name of a remote created using an environment variable is case insensitive, in contrast to regular remotes stored in the config file as documented above. You must write the name in uppercase in the environment variable, but as seen from the example above it will be listed and can be accessed in lowercase, while you can also refer to the same remote in uppercase:
+$ rclone lsd mys3:
+ -1 2016-09-21 12:54:21 -1 my-bucket
+$ rclone lsd MYS3:
+ -1 2016-09-21 12:54:21 -1 my-bucket
Note that you can only set the options of the immediate backend, so RCLONE_CONFIG_MYS3CRYPT_ACCESS_KEY_ID has no effect if myS3Crypt is a crypt remote based on an S3 remote. However RCLONE_S3_ACCESS_KEY_ID will set the access key of all remotes using S3, including myS3Crypt.
Note also that, now that rclone has connection strings, it is probably easier to use those instead, which makes the above example
rclone lsd :s3,access_key_id=XXX,secret_access_key=XXX:
@@ -4660,11 +4920,11 @@ user2/prefect
If the rclone error Command .... needs .... arguments maximum: you provided .... non flag arguments:
is encountered, the cause is commonly spaces within the name of a remote or flag value. The fix then is to quote values containing spaces.
--min-size
- Don't transfer any file smaller than thisControls the minimum size file within the scope of an rclone command. Default units are KiByte
but abbreviations K
, M
, G
, T
or P
are valid.
E.g. rclone ls remote: --min-size 50k
lists files on remote:
of 50 KiByte size or larger.
Controls the minimum size file within the scope of an rclone command. Default units are KiB
but abbreviations K
, M
, G
, T
or P
are valid.
E.g. rclone ls remote: --min-size 50k
lists files on remote:
of 50 KiB size or larger.
--max-size
- Don't transfer any file larger than thisControls the maximum size file within the scope of an rclone command. Default units are KiByte
but abbreviations K
, M
, G
, T
or P
are valid.
E.g. rclone ls remote: --max-size 1G
lists files on remote:
of 1 GiByte size or smaller.
Controls the maximum size file within the scope of an rclone command. Default units are KiB
but abbreviations K
, M
, G
, T
or P
are valid.
E.g. rclone ls remote: --max-size 1G
lists files on remote:
of 1 GiB size or smaller.
--max-age
- Don't transfer any file older than thisControls the maximum age of files within the scope of an rclone command. Default units are seconds or the following abbreviations are valid:
In conjunction with rclone sync
, --delete-excluded
deletes any files on the destination which are excluded from the command.
E.g. the scope of rclone sync -i A: B:
can be restricted:
rclone --min-size 50k --delete-excluded sync A: B:
-All files on B:
which are less than 50 KiByte are deleted because they are excluded from the rclone sync command.
All files on B:
which are less than 50 KiB are deleted because they are excluded from the rclone sync command.
--dump filters
- dump the filters to the outputDumps the defined filters to standard output in regular expression format.
Useful for debugging.
@@ -4996,18 +5256,18 @@ dir1/dir2/dir3/.ignore }
This takes the following parameters
+This takes the following parameters:
Returns
+Returns:
For example
+Example:
rclone rc backend/command command=noop fs=. -o echo=yes -o blue -a path1 -a path2
Returns
{
@@ -5044,7 +5304,7 @@ rclone rc cache/expire remote=/ withData=true
Show statistics for the cache remote.
This takes the following parameters
+This takes the following parameters:
See the listremotes command command for more information on the above.
Authentication is required for this call.
This takes the following parameters
+This takes the following parameters:
See the config providers command command for more information on the above.
Authentication is required for this call.
This takes the following parameters
+This takes the following parameters:
The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified.
In either case "rate" is returned as a human readable string, and "bytesPerSecond" is returned as a number.
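For example (the rate value is illustrative), the limit can be set, and then queried by omitting the rate parameter:
rclone rc core/bwlimit rate=1M
rclone rc core/bwlimit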
This takes the following parameters
+This takes the following parameters:
Returns
+Returns:
For example
+Example:
rclone rc core/command command=ls -a mydrive:/ -o max-depth=1
rclone rc core/command -a ls -a mydrive:/ -o max-depth=1
-Returns
+Returns:
{
"error": false,
"result": "<Raw command line output>"
@@ -5204,20 +5464,20 @@ OR
This returns the memory statistics of the running program. What the values mean are explained in the go docs: https://golang.org/pkg/runtime/#MemStats
The most interesting values for most people are:
-- HeapAlloc: This is the amount of memory rclone is actually using
-- HeapSys: This is the amount of memory rclone has obtained from the OS
-- Sys: this is the total amount of memory requested from the OS
+
- HeapAlloc - this is the amount of memory rclone is actually using
+- HeapSys - this is the amount of memory rclone has obtained from the OS
+- Sys - this is the total amount of memory requested from the OS
- It is virtual memory so may include unused memory
core/obscure: Obscures a string passed in.
Pass a clear string and rclone will obscure it for the config file: - clear - string
-Returns - obscured - string
+Returns: - obscured - string
core/pid: Return PID of current process
This returns PID of current process. Useful for stopping rclone process.
core/quit: Terminates the app.
-(optional) Pass an exit code to be used for terminating the app: - exitCode - int
+(Optional) Pass an exit code to be used for terminating the app: - exitCode - int
core/stats: Returns stats about current transfers.
This returns all available stats:
rclone rc core/stats
@@ -5261,7 +5521,7 @@ OR
}
Values for "transferring", "checking" and "lastError" are only assigned if data is available. The value for "eta" is null if an eta cannot be determined.
This deletes entire stats group
+This deletes the entire stats group.
Parameters
This shows the current version of go and the go runtime
+This shows the current version of go and the go runtime:
To include every blocking event in the profile, pass rate = 1. To turn off profiling entirely, pass rate <= 0.
After calling this you can use this to see the blocking profile:
go tool pprof http://localhost:5572/debug/pprof/block
-Parameters
+Parameters:
To turn off profiling entirely, pass rate 0. To just read the current rate, pass rate < 0. (For n>1 the details of sampling may change.)
Once this is set you can use this to profile the mutex contention:
go tool pprof http://localhost:5572/debug/pprof/mutex
-Parameters
+Parameters:
Results
+Results:
Returns - entries - number of items in the cache
Authentication is required for this call.
Parameters - None
-Results
+Parameters: None.
+Results:
Parameters
+Parameters:
Results
+Results:
Parameters
+Parameters:
This shows currently mounted points, which can be used for performing an unmount
+This shows currently mounted points, which can be used for performing an unmount.
This takes no parameters and returns
rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
If no mountType is provided, the priority is given as follows: 1. mount 2. cmount 3. mount2
-This takes the following parameters
+This takes the following parameters:
-Eg
+Example:
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'
-The vfsOpt are as described in options/get and can be seen in the the "vfs" section when running and the mountOpt can be seen in the "mount" section.
+The vfsOpt are as described in options/get and can be seen in the "vfs" section when running and the mountOpt can be seen in the "mount" section:
rclone rc options/get
Authentication is required for this call.
Authentication is required for this call.
rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
-This takes the following parameters
+This takes the following parameters:
-Eg
+Example:
rclone rc mount/unmount mountPoint=/home/<user>/mountPoint
Authentication is required for this call.
-This shows currently mounted points, which can be used for performing an unmount
+This shows currently mounted points, which can be used for performing an unmount.
This takes no parameters and returns an error if unmount does not succeed.
Example:
rclone rc mount/unmountall
Authentication is required for this call.
-This takes the following parameters
+This takes the following parameters:
See the about command for more information on the above.
Authentication is required for this call.
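A sketch of a typical operations/about call, where remote: is a placeholder for a configured remote:
rclone rc operations/about fs=remote: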
-This takes the following parameters
+This takes the following parameters:
See the cleanup command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
+This takes the following parameters:
Authentication is required for this call.
-This takes the following parameters
+This takes the following parameters:
Authentication is required for this call.
-This takes the following parameters
+This takes the following parameters:
See the delete command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
+This takes the following parameters:
See the deletefile command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
+This takes the following parameters:
This command does not have a command line equivalent so use this instead:
rclone rc --loopback operations/fsinfo fs=remote:
-This takes the following parameters
+This takes the following parameters:
-The result is
+Returns:
See the lsjson command for more information on the above and examples.
Authentication is required for this call.
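A sketch of a typical operations/list call, with placeholder remote and path:
rclone rc operations/list fs=remote: remote=path/to/directory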
-This takes the following parameters
+This takes the following parameters:
See the mkdir command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
+This takes the following parameters:
Authentication is required for this call.
-This takes the following parameters
+This takes the following parameters:
-Returns
+Returns:
See the link command for more information on the above.
Authentication is required for this call.
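For example, a sketch of requesting a link for a single file with operations/publiclink (remote and path are placeholders):
rclone rc operations/publiclink fs=remote: remote=path/to/file.txt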
-This takes the following parameters
+This takes the following parameters:
See the purge command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
+This takes the following parameters:
See the rmdir command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
+This takes the following parameters:
See the rmdirs command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
+This takes the following parameters:
-Returns
+Returns:
See the size command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
-The result is
+Note that if you are only interested in files then it is much more efficient to set the filesOnly flag in the options.
+See the lsjson command for more information on the above and examples.
+Authentication is required for this call.
+This takes the following parameters:
+Authentication is required for this call.
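A sketch of a typical operations/stat call, assuming the same fs/remote parameter shape as the other operations methods (placeholders shown):
rclone rc operations/stat fs=remote: remote=path/to/file.txt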
-Returns - options - a list of the options block names
+Returns: - options - a list of the options block names
Returns an object where keys are option block names and values are an object with the current option values in.
Note that these are the global options which are unaffected by use of the _config and _filter parameters. If you wish to read the parameters set in _config then use options/config and for _filter use options/filter.
@@ -5629,7 +5909,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"Cache
This call is mostly useful for seeing if _config and _filter passing is working.
This shows the internal names of the option within rclone which should map to the external options very easily with a few exceptions.
-Parameters
+Parameters:
And this sets NOTICE level logs (normal without -v)
rclone rc options/set --json '{"main": {"LogLevel": "NOTICE"}}'
-used for adding a plugin to the webgui
-This takes the following parameters
+Used for adding a plugin to the webgui.
+This takes the following parameters:
-Eg
+Example:
rclone rc pluginsctl/addPlugin
Authentication is required for this call.
-This shows all possible plugins by a mime type
-This takes the following parameters
+This shows all possible plugins by a mime type.
+This takes the following parameters:
-and returns
+Returns:
-Eg
+Example:
rclone rc pluginsctl/getPluginsForType type=video/mp4
Authentication is required for this call.
This allows you to get the currently enabled plugins and their details.
-This takes no parameters and returns
+This takes no parameters and returns:
-Eg
+E.g.
rclone rc pluginsctl/listPlugins
Authentication is required for this call.
-allows listing of test plugins with the rclone.test set to true in package.json of the plugin
-This takes no parameters and returns
+Allows listing of test plugins with the rclone.test set to true in package.json of the plugin.
+This takes no parameters and returns:
-Eg
+E.g.
rclone rc pluginsctl/listTestPlugins
Authentication is required for this call.
-This allows you to remove a plugin using it's name
-This takes parameters
+This allows you to remove a plugin using its name.
+This takes parameters:
author/plugin_name.
-Eg
+E.g.
rclone rc pluginsctl/removePlugin name=rclone/video-plugin
Authentication is required for this call.
-This allows you to remove a plugin using it's name
-This takes the following parameters
+This allows you to remove a plugin using its name.
+This takes the following parameters:
author/plugin_name.
-Eg
+Example:
rclone rc pluginsctl/removeTestPlugin name=rclone/rclone-webui-react
Authentication is required for this call.
This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.
Authentication is required for this call.
-This takes the following parameters
+This takes the following parameters:
See the copy command for more information on the above.
Authentication is required for this call.
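For example, a sketch of a sync/copy call, where source: and dest: are placeholder remotes:
rclone rc sync/copy srcFs=source: dstFs=dest: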
-This takes the following parameters
+This takes the following parameters:
See the move command for more information on the above.
Authentication is required for this call.
-This takes the following parameters
+This takes the following parameters:
If a cloud storage system allows duplicate files then it can have two objects with the same name.
This confuses rclone greatly when syncing - use the rclone dedupe command to rename or remove duplicates.
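For example, a sketch of resolving duplicates non-interactively by keeping the newest version (remote and path are placeholders):
rclone dedupe --dedupe-mode newest remote:path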
-Some cloud storage systems might have restrictions on the characters that are usable in file or directory names. When rclone detects such a name during a file upload, it will transparently replace the restricted characters with similar looking Unicode characters.
+Some cloud storage systems might have restrictions on the characters that are usable in file or directory names. When rclone detects such a name during a file upload, it will transparently replace the restricted characters with similar looking Unicode characters. To handle the different sets of restricted characters for different backends, rclone uses something it calls encoding.
This process is designed to avoid ambiguous file names as much as possible and allow moving files between many cloud storage systems transparently.
The name shown by rclone to the user or during log output will only contain a minimal set of replaced characters to ensure correct formatting and not necessarily the actual name used on the cloud storage.
-This transformation is reversed when downloading a file or parsing rclone arguments. For example, when uploading a file named my file?.txt to Onedrive will be displayed as my file?.txt on the console, but stored as my file？.txt (the ? gets replaced by the similar looking ？ character) to Onedrive. The reverse transformation allows to read a file unusual/name.txt from Google Drive, by passing the name unusual／name.txt (the / needs to be replaced by the similar looking ／ character) on the command line.
+This transformation is reversed when downloading a file or parsing rclone arguments. For example, when uploading a file named my file?.txt to Onedrive, it will be displayed as my file?.txt on the console, but stored as my file？.txt to Onedrive (the ? gets replaced by the similar looking ？ character, the so-called "fullwidth question mark"). The reverse transformation allows reading a file unusual/name.txt from Google Drive, by passing the name unusual／name.txt on the command line (the / needs to be replaced by the similar looking ／ character).
The filename encoding system works well in most cases, at least where file names are written in English or similar languages. You might not even notice it: It just works. In some cases it may lead to issues, though. E.g. when file names are written in Chinese, or Japanese, where it is always the Unicode fullwidth variants of the punctuation marks that are used.
+On Windows, the characters :, * and ? are examples of restricted characters. If these are used in filenames on a remote that supports it, Rclone will transparently convert them to their fullwidth Unicode variants ：, ＊ and ？ when downloading to Windows, and back again when uploading. This way files with names that are not allowed on Windows can still be stored.
However, if you have files on your Windows system originally with these same Unicode characters in their names, they will be included in the same conversion process. E.g. if you create a file in your Windows filesystem with name Test：1.jpg, where ： is the Unicode fullwidth colon symbol, and use rclone to upload it to Google Drive, which supports regular : (halfwidth colon), rclone will replace the fullwidth ： with the halfwidth : and store the file as Test:1.jpg in Google Drive. Since both Windows and Google Drive allow the name Test：1.jpg, it would probably be better if rclone just kept the name as is in this case.
Consider the opposite situation: if you have a file named Test:1.jpg in your Google Drive, e.g. uploaded from a Linux system where : is valid in file names, and you later use rclone to copy this file to your Windows computer, you will notice that on your local disk it gets renamed to Test：1.jpg. The original filename is not legal on Windows, due to the :, and rclone therefore renames it to make the copy possible. That is all good. However, this can also lead to an issue: if you already had a different file named Test：1.jpg on Windows, and then use rclone to copy either way, rclone will treat the file originally named Test:1.jpg on Google Drive and the file originally named Test：1.jpg on Windows as the same file, and replace the contents of one with the other.
It's virtually impossible to handle all cases like these correctly in all situations, but by customizing the encoding option, changing the set of characters that rclone should convert, you should be able to create a configuration that works well for your specific situation. See also the example below.
+(Windows was used as an example of a file system with many restricted characters, and Google Drive a storage system with few.)
The table below shows the characters that are replaced by default.
When a replacement character is found in a filename, this character will be escaped with the ‛ character to avoid ambiguous file names. (e.g. a file named ␀.txt would be shown as ‛␀.txt)
In this case all invalid UTF-8 bytes will be replaced with a quoted representation of the byte value to allow uploading a file to such a backend. For example, the invalid byte 0xFE will be encoded as ‛FE.
A common source of invalid UTF-8 bytes is local filesystems that store names in a different encoding than UTF-8 or UTF-16, like latin1. See the local filenames section for details.
-Most backends have an encoding options, specified as a flag --backend-encoding where backend is the name of the backend, or as a config parameter encoding (you'll need to select the Advanced config in rclone config to see it).
+Most backends have an encoding option, specified as a flag --backend-encoding where backend is the name of the backend, or as a config parameter encoding (you'll need to select the Advanced config in rclone config to see it).
This will have a default value which encodes and decodes characters in such a way as to preserve the maximum number of characters (see above).
-However this can be incorrect in some scenarios, for example if you have a Windows file system with characters such as ＊ and ？ that you want to remain as those characters on the remote rather than being translated to * and ?.
+However this can be incorrect in some scenarios, for example if you have a Windows file system with Unicode fullwidth characters ＊, ？ or ：, that you want to remain as those characters on the remote rather than being translated to regular (halfwidth) *, ? and :.
The --backend-encoding flags allow you to change that. You can disable the encoding completely with --backend-encoding None or set encoding = None in the config file.
-Encoding takes a comma separated list of encodings. You can see the list of all available characters by passing an invalid value to this flag, e.g. --local-encoding "help" and rclone help flags encoding will show you the defaults for the backends.
+Encoding takes a comma separated list of encodings. You can see the list of all possible values by passing an invalid value to this flag, e.g. --local-encoding "help". The command rclone help flags encoding will show you the defaults for the backends.
Dot | -. |
+. or .. as entire string |
DoubleQuote | " |
@@ -6563,8 +6858,13 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Slash | / |
+SquareBracket | [, ] |
+
To take a specific example, the FTP backend's default encoding is
--ftp-encoding "Slash,Del,Ctl,RightSpace,Dot"
However, let's say the FTP server is running on Windows and can't have any of the invalid Windows characters in file names. You are backing up Linux servers to this FTP server which do have those characters in file names. So you would add the Windows set which are
@@ -6572,9 +6872,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
to the existing ones, giving:
Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot,Del,RightSpace
This can be specified using the --ftp-encoding flag or using an encoding parameter in the config file.
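A sketch of the config file form, for a hypothetical FTP remote named myftp (other required options such as host are omitted):
[myftp]
type = ftp
encoding = Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot,Del,RightSpace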
-Or let's say you have a Windows server but you want to preserve ＊ and ？, you would then have this as the encoding (the Windows encoding minus Asterisk and Question).
-Slash,LtGt,DoubleQuote,Colon,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot
-This can be specified using the --local-encoding flag or using an encoding parameter in the config file.
+As another example, take a Windows system where there is a file with name Test：1.jpg, where ： is the Unicode fullwidth colon symbol. When using rclone to copy this to a remote which supports :, the regular (halfwidth) colon (such as Google Drive), you will notice that the file gets renamed to Test:1.jpg.
+To avoid this you can change the set of characters rclone should convert for the local filesystem, using command-line argument --local-encoding. Rclone's default behavior on Windows corresponds to
+--local-encoding "Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot"
+If you want to use fullwidth characters ：, ＊ and ？ in your filenames without rclone changing them when uploading to a remote, then set the same as the default value but without Colon,Question,Asterisk:
--local-encoding "Slash,LtGt,DoubleQuote,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot"
+Alternatively, you can disable the conversion of any characters with --local-encoding None.
+Instead of using command-line argument --local-encoding, you may also set it as environment variable RCLONE_LOCAL_ENCODING, or configure a remote of type local in your config, and set the encoding option there.
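As a sketch, the equivalent config file form, using a hypothetical remote of type local named mylocal:
[mylocal]
type = local
encoding = Slash,LtGt,DoubleQuote,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot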
+The risk of doing this is that if you have a filename with the regular (halfwidth) :, * and ? in your cloud storage, and you try to download it to your Windows filesystem, this will fail. These characters are not valid in filenames on Windows, and you have told rclone not to work around this by converting them to valid fullwidth variants.
MIME types (also known as media types) classify types of documents using a simple text classification, e.g. text/html or application/pdf.
Some cloud storage systems support reading (R) the MIME type of objects and some support writing (W) the MIME type of objects.
This describes the global flags available to every rclone command, split into two groups: non-backend and backend flags.
These flags are available for every command.
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --auto-confirm If enabled, do not request console confirmation.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16Mi)
- --bwlimit BwTimetable Bandwidth limit in KiByte/s, or use suffix B|K|M|G|T|P or a full timetable.
- --bwlimit-file BwTimetable Bandwidth limit per file in KiByte/s, or use suffix B|K|M|G|T|P or a full timetable.
+ --ask-password Allow prompt for password for encrypted configuration (default true)
+ --auto-confirm If enabled, do not request console confirmation
+ --backup-dir string Make backups into hierarchy based in DIR
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
+ --bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
+ --bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--ca-cert string CA certificate used to verify servers
- --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
- --check-first Do all the checks before starting transfers.
- --checkers int Number of checkers to run in parallel. (default 8)
+ --cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone")
+ --check-first Do all the checks before starting transfers
+ --checkers int Number of checkers to run in parallel (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
- --compare-dest stringArray Include additional comma separated server-side paths during comparison.
- --config string Config file. (default "$HOME/.config/rclone/rclone.conf")
+ --compare-dest stringArray Include additional comma separated server-side paths during comparison
+ --config string Config file (default "$HOME/.config/rclone/rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
- --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination.
+ --copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
--cpuprofile string Write cpu profile to file
--cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
--delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use --disable help to see a list.
- --disable-http2 Disable HTTP/2 in the global transport.
+ --disable string Disable a comma separated list of features (use --disable help to see a list)
+ --disable-http2 Disable HTTP/2 in the global transport
-n, --dry-run Do a trial run with no permanent changes
- --dscp string Set DSCP value to connections. Can be value or names, eg. CS1, LE, DF, AF21.
+ --dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
@@ -7138,544 +7444,565 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--exclude-from stringArray Read exclude patterns from file (use - to read from stdin)
--exclude-if-present string Exclude directories if filename is present
--expect-continue-timeout duration Timeout when using expect / 100-continue in HTTP (default 1s)
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --fast-list Use recursive list if available; uses more memory but fewer transactions
--files-from stringArray Read list of source-file names from file (use - to read from stdin)
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file (use - to read from stdin)
- --fs-cache-expire-duration duration cache remotes for this long (0 to disable caching) (default 5m0s)
- --fs-cache-expire-interval duration interval to check for expired remotes (default 1m0s)
+ --fs-cache-expire-duration duration Cache remotes for this long (0 to disable caching) (default 5m0s)
+ --fs-cache-expire-interval duration Interval to check for expired remotes (default 1m0s)
--header stringArray Set HTTP header for all transactions
--header-download stringArray Set HTTP header for download transactions
--header-upload stringArray Set HTTP header for upload transactions
+ --human-readable Print numbers in a human-readable format, sizes with suffix Ki|Mi|Gi|Ti|Pi
--ignore-case Ignore case in filters (case insensitive)
--ignore-case-sync Ignore case when synchronizing
- --ignore-checksum Skip post copy check of checksums.
- --ignore-errors delete even if there are I/O errors
+ --ignore-checksum Skip post copy check of checksums
+ --ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
+ --ignore-size Ignore size when skipping use mod-time or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
+ --immutable Do not modify files, fail if existing files have been modified
--include stringArray Include files matching pattern
--include-from stringArray Read include patterns from file (use - to read from stdin)
-i, --interactive Enable interactive mode
+ --kv-lock-time duration Maximum time to keep key-value database locked by process (default 1s)
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --log-systemd Activate systemd integration for the logger.
- --low-level-retries int Number of low level retries to do. (default 10)
+ --log-systemd Activate systemd integration for the logger
+ --low-level-retries int Number of low level retries to do (default 10)
--max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-delete int When synchronizing, limit the number of deletes (default -1)
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-duration duration Maximum duration rclone will transfer data for.
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-duration duration Maximum duration rclone will transfer data for
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
- --max-stats-groups int Maximum number of stats groups to keep in memory. On max oldest is discarded. (default 1000)
- --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
+ --max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
+ --max-transfer SizeSuffix Maximum size of data to transfer (default off)
--memprofile string Write memory profile to file
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
--modify-window duration Max time diff to be considered the same (default 1ns)
- --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size. (default 250Mi)
- --multi-thread-streams int Max number of streams to use for multi-thread downloads. (default 4)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-check-dest Don't check the destination, copy regardless.
- --no-console Hide console window. Supported on Windows only.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-unicode-normalization Don't normalize unicode characters in filenames.
- --no-update-modtime Don't update destination mod-time if files identical.
+ --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size (default 250Mi)
+ --multi-thread-streams int Max number of streams to use for multi-thread downloads (default 4)
+ --no-check-certificate Do not verify the server SSL certificate (insecure)
+ --no-check-dest Don't check the destination, copy regardless
+ --no-console Hide console window (supported on Windows only)
+ --no-gzip-encoding Don't set Accept-Encoding: gzip
+ --no-traverse Don't traverse destination file system on copy
+ --no-unicode-normalization Don't normalize unicode characters in filenames
+ --no-update-modtime Don't update destination mod-time if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
- --password-command SpaceSepList Command for supplying password for encrypted configuration.
- -P, --progress Show progress during transfer.
- --progress-terminal-title Show progress on the terminal title. Requires -P/--progress.
+ --password-command SpaceSepList Command for supplying password for encrypted configuration
+ -P, --progress Show progress during transfer
+ --progress-terminal-title Show progress on the terminal title (requires -P/--progress)
-q, --quiet Print as little stuff as possible
- --rc Enable the remote control server.
- --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
- --rc-allow-origin string Set the allowed origin for CORS.
- --rc-baseurl string Prefix for URLs - leave blank for root.
+ --rc Enable the remote control server
+ --rc-addr string IPaddress:Port or :Port to bind server to (default "localhost:5572")
+ --rc-allow-origin string Set the allowed origin for CORS
+ --rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
--rc-client-ca string Client certificate authority to verify clients with
--rc-enable-metrics Enable prometheus metrics on /metrics
- --rc-files string Path to local files to serve on the HTTP server.
+ --rc-files string Path to local files to serve on the HTTP server
--rc-htpasswd string htpasswd file - if not provided no authentication is done
--rc-job-expire-duration duration expire finished async jobs older than this value (default 1m0s)
--rc-job-expire-interval duration interval to check for expired async jobs (default 10s)
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-no-auth Don't require auth for certain methods.
- --rc-pass string Password for authentication.
+ --rc-no-auth Don't require auth for certain methods
+ --rc-pass string Password for authentication
--rc-realm string realm for authentication (default "rclone")
- --rc-serve Enable the serving of remote objects.
+ --rc-serve Enable the serving of remote objects
--rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
- --rc-template string User Specified Template.
- --rc-user string User name for authentication.
- --rc-web-fetch-url string URL to fetch the releases for webgui. (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
+ --rc-template string User-specified template
+ --rc-user string User name for authentication
+ --rc-web-fetch-url string URL to fetch the releases for webgui (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
--rc-web-gui Launch WebGUI on localhost
--rc-web-gui-force-update Force update to latest version of web gui
--rc-web-gui-no-open-browser Don't open the browser automatically
--rc-web-gui-update Check and update to latest version of web gui
- --refresh-times Refresh the modtime of remote files.
+ --refresh-times Refresh the modtime of remote files
--retries int Retry operations this many times if they fail (default 3)
- --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable)
--size-only Skip based on size only, not mod-time or checksum
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-one-line Make the stats fit on one line.
- --stats-one-line-date Enables --stats-one-line and add current date/time prefix.
- --stats-one-line-date-format string Enables --stats-one-line-date and uses custom formatted date. Enclose date string in double quotes ("). See https://golang.org/pkg/time/#Time.Format
+ --stats-one-line Make the stats fit on one line
+ --stats-one-line-date Enable --stats-one-line and add current date/time prefix
+ --stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format
--stats-unit string Show data rate in stats as either 'bits' or 'bytes' per second (default "bytes")
- --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100Ki)
- --suffix string Suffix to add to changed files.
- --suffix-keep-extension Preserve the extension when using --suffix.
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
+ --suffix string Suffix to add to changed files
+ --suffix-keep-extension Preserve the extension when using --suffix
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, e.g. KERN,USER,... (default "DAEMON")
+ --temp-dir string Directory rclone will use for temporary files (default "/tmp")
--timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --tpslimit float Limit HTTP transactions per second to this
+ --tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--track-renames When synchronizing, track file renames and do a server-side move if possible
--track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash")
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --use-cookies Enable session cookiejar.
- --use-json-log Use json log format.
- --use-mmap Use mmap allocator (see docs).
+ --transfers int Number of file transfers to run in parallel (default 4)
+ -u, --update Skip files that are newer on the destination
+ --use-cookies Enable session cookiejar
+ --use-json-log Use json log format
+ --use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.56.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.57.0")
-v, --verbose count Print lots more stuff (repeat for more)
Backend Flags
These flags are available for every command. They control the backends and may be set in the config file.
- --acd-auth-url string Auth server URL.
- --acd-client-id string OAuth Client Id
- --acd-client-secret string OAuth Client Secret
- --acd-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9Gi)
- --acd-token string OAuth Access Token as a JSON blob.
- --acd-token-url string Token server url.
- --acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --alias-remote string Remote or path to alias.
- --azureblob-access-tier string Access tier of blob: hot, cool or archive.
- --azureblob-account string Storage Account Name (leave blank to use SAS URL or Emulator)
- --azureblob-archive-tier-delete Delete archive tier blobs before overwriting.
- --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100 MiB). (default 4Mi)
- --azureblob-disable-checksum Don't store MD5 checksum with object metadata.
- --azureblob-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
- --azureblob-endpoint string Endpoint for the service
- --azureblob-key string Storage Account Key (leave blank to use SAS URL or Emulator)
- --azureblob-list-chunk int Size of blob list. (default 5000)
- --azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s)
- --azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool.
- --azureblob-msi-client-id string Object ID of the user-assigned MSI to use, if any. Leave blank if msi_object_id or msi_mi_res_id specified.
- --azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any. Leave blank if msi_client_id or msi_object_id specified.
- --azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any. Leave blank if msi_client_id or msi_mi_res_id specified.
- --azureblob-public-access string Public access level of a container: blob, container.
- --azureblob-sas-url string SAS URL for container level access only
- --azureblob-service-principal-file string Path to file containing credentials for use with a service principal.
- --azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB). (Deprecated)
- --azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)
- --azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
- --b2-account string Account ID or Application Key ID
- --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96Mi)
- --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
- --b2-disable-checksum Disable checksums for large (> upload cutoff) files
- --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d. (default 1w)
- --b2-download-url string Custom endpoint for downloads.
- --b2-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
- --b2-endpoint string Endpoint for the service.
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-key string Application Key
- --b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s)
- --b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
- --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200Mi)
- --b2-versions Include old versions in directory listings.
- --box-access-token string Box App Primary Access Token
- --box-auth-url string Auth server URL.
- --box-box-config-file string Box App config.json location
- --box-box-sub-type string (default "user")
- --box-client-id string OAuth Client Id
- --box-client-secret string OAuth Client Secret
- --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
- --box-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
- --box-root-folder-id string Fill in for rclone to use a non root folder as its starting point.
- --box-token string OAuth Access Token as a JSON blob.
- --box-token-url string Token server url.
- --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50 MiB). (default 50Mi)
- --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
- --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
- --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
- --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5Mi)
- --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10Gi)
- --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
- --cache-db-purge Clear all the cached data for this remote on start.
- --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
- --cache-info-age Duration How long to cache file structure information (directory listings, file size, times, etc.). (default 6h0m0s)
- --cache-plex-insecure string Skip all certificate verification when connecting to the Plex server
- --cache-plex-password string The password of the Plex user (obscured)
- --cache-plex-url string The URL of the Plex server
- --cache-plex-username string The username of the Plex user
- --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
- --cache-remote string Remote to cache.
- --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
- --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
- --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
- --cache-workers int How many workers should run in parallel to download chunks. (default 4)
- --cache-writes Cache file data on writes through the FS
- --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks. (default 2Gi)
- --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks.
- --chunker-hash-type string Choose how chunker handles hash sums. All modes but "none" require metadata. (default "md5")
- --chunker-remote string Remote to chunk/unchunk.
- --compress-level int GZIP compression level (-2 to 9). (default -1)
- --compress-mode string Compression mode. (default "gzip")
- --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size. (default 20Mi)
- --compress-remote string Remote to compress.
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
- --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
- --crypt-no-data-encryption Option to either encrypt file data or leave it unencrypted.
- --crypt-password string Password or pass phrase for encryption. (obscured)
- --crypt-password2 string Password or pass phrase for salt. Optional but recommended. (obscured)
- --crypt-remote string Remote to encrypt/decrypt.
- --crypt-server-side-across-configs Allow server-side operations (e.g. copy) to work across different crypt configs.
- --crypt-show-mapping For all files listed show how the names encrypt.
- --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
- --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-auth-url string Auth server URL.
- --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8Mi)
- --drive-client-id string Google Application Client Id
- --drive-client-secret string OAuth Client Secret
- --drive-disable-http2 Disable drive using http2 (default true)
- --drive-encoding MultiEncoder This sets the encoding for the backend. (default InvalidUtf8)
- --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-formats string Deprecated: see export_formats
- --drive-impersonate string Impersonate this user when using a service account.
- --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
- --drive-keep-revision-forever Keep new head revision of each file forever.
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
- --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
- --drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive.
- --drive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different drive configs.
- --drive-service-account-credentials string Service Account Credentials JSON blob
- --drive-service-account-file string Service Account Credentials JSON file path
- --drive-shared-with-me Only show files that are shared with me.
- --drive-size-as-quota Show sizes as storage quota usage, not actual size.
- --drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-skip-shortcuts If set skip shortcut files
- --drive-starred-only Only show files that are starred.
- --drive-stop-on-download-limit Make download limit errors be fatal
- --drive-stop-on-upload-limit Make upload limit errors be fatal
- --drive-team-drive string ID of the Shared Drive (Team Drive)
- --drive-token string OAuth Access Token as a JSON blob.
- --drive-token-url string Token server url.
- --drive-trashed-only Only show files that are in the trash.
- --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8Mi)
- --drive-use-created-date Use file created date instead of modified date.,
- --drive-use-shared-date Use date file was shared instead of modified date.
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
- --dropbox-auth-url string Auth server URL.
- --dropbox-batch-mode string Upload file batching sync|async|off. (default "sync")
- --dropbox-batch-size int Max number of files in upload batch.
- --dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
- --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150Mi). (default 48Mi)
- --dropbox-client-id string OAuth Client Id
- --dropbox-client-secret string OAuth Client Secret
- --dropbox-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
- --dropbox-impersonate string Impersonate this user when using a business account.
- --dropbox-shared-files Instructs rclone to work on individual shared files.
- --dropbox-shared-folders Instructs rclone to work on shared folders.
- --dropbox-token string OAuth Access Token as a JSON blob.
- --dropbox-token-url string Token server url.
- --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
- --fichier-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
- --fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
- --fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
- --fichier-shared-folder string If you want to download a shared folder, add this parameter
- --filefabric-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
- --filefabric-permanent-token string Permanent Authentication Token
- --filefabric-root-folder-id string ID of the root folder
- --filefabric-token string Session Token
- --filefabric-token-expiry string Token expiry time
- --filefabric-url string URL of the Enterprise File Fabric to connect to
- --filefabric-version string Version read from the file fabric
- --ftp-close-timeout Duration Maximum time to wait for a response to close. (default 1m0s)
- --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
- --ftp-disable-epsv Disable using EPSV even if server advertises support
- --ftp-disable-mlsd Disable using MLSD even if server advertises support
- --ftp-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,RightSpace,Dot)
- --ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
- --ftp-host string FTP host to connect to
- --ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
- --ftp-no-check-certificate Do not verify the TLS certificate of the server
- --ftp-pass string FTP password (obscured)
- --ftp-port string FTP port, leave blank to use default (21)
- --ftp-tls Use Implicit FTPS (FTP over TLS)
- --ftp-user string FTP username, leave blank for current username, $USER
- --gcs-anonymous Access public buckets and objects without credentials
- --gcs-auth-url string Auth server URL.
- --gcs-bucket-acl string Access Control List for new buckets.
- --gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
- --gcs-client-id string OAuth Client Id
- --gcs-client-secret string OAuth Client Secret
- --gcs-encoding MultiEncoder This sets the encoding for the backend. (default Slash,CrLf,InvalidUtf8,Dot)
- --gcs-location string Location for the newly created buckets.
- --gcs-object-acl string Access Control List for new objects.
- --gcs-project-number string Project number.
- --gcs-service-account-file string Service Account Credentials JSON file path
- --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
- --gcs-token string OAuth Access Token as a JSON blob.
- --gcs-token-url string Token server url.
- --gphotos-auth-url string Auth server URL.
- --gphotos-client-id string OAuth Client Id
- --gphotos-client-secret string OAuth Client Secret
- --gphotos-include-archived Also view and download archived media.
- --gphotos-read-only Set to make the Google Photos backend read only.
- --gphotos-read-size Set to read the size of media items.
- --gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
- --gphotos-token string OAuth Access Token as a JSON blob.
- --gphotos-token-url string Token server url.
- --hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
- --hdfs-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
- --hdfs-namenode string hadoop name node and port
- --hdfs-service-principal-name string Kerberos service principal name for the namenode
- --hdfs-username string hadoop user name
- --http-headers CommaSepList Set HTTP headers for all transactions
- --http-no-head Don't use HEAD requests to find file sizes in dir listing
- --http-no-slash Set this if the site doesn't end directories with /
- --http-url string URL of http host to connect to
- --hubic-auth-url string Auth server URL.
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5Gi)
- --hubic-client-id string OAuth Client Id
- --hubic-client-secret string OAuth Client Secret
- --hubic-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8)
- --hubic-no-chunk Don't chunk files during streaming upload.
- --hubic-token string OAuth Access Token as a JSON blob.
- --hubic-token-url string Token server url.
- --jottacloud-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
- --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
- --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10Mi)
- --jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them.
- --jottacloud-trashed-only Only show files that are in the trash.
- --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10Mi)
- --koofr-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
- --koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
- --koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
- --koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
- --koofr-setmtime Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. (default true)
- --koofr-user string Your Koofr user name
- -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
- --local-case-insensitive Force the filesystem to report itself as case insensitive
- --local-case-sensitive Force the filesystem to report itself as case sensitive.
- --local-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Dot)
- --local-no-check-updated Don't check to see if the files change during upload
- --local-no-preallocate Disable preallocation of disk space for transferred files
- --local-no-set-modtime Disable setting modtime
- --local-no-sparse Disable sparse files for multi-thread downloads
- --local-nounc string Disable UNC (long path names) conversion on Windows
- --local-unicode-normalization Apply unicode NFC normalization to paths and filenames
- --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (Deprecated)
- --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
- --mailru-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
- --mailru-pass string Password (obscured)
- --mailru-speedup-enable Skip full upload if there is another file with same data hash. (default true)
- --mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash). (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
- --mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3Gi)
- --mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk. (default 32Mi)
- --mailru-user string User name (usually email)
- --mega-debug Output more debug from Mega.
- --mega-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
- --mega-hard-delete Delete files permanently rather than putting them into the trash.
- --mega-pass string Password. (obscured)
- --mega-user string User name
- -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
- --onedrive-auth-url string Auth server URL.
- --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes). (default 10Mi)
- --onedrive-client-id string OAuth Client Id
- --onedrive-client-secret string OAuth Client Secret
- --onedrive-drive-id string The ID of the drive to use
- --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
- --onedrive-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
- --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
- --onedrive-link-password string Set the password for links created by the link command.
- --onedrive-link-scope string Set the scope of the links created by the link command. (default "anonymous")
- --onedrive-link-type string Set the type of the links created by the link command. (default "view")
- --onedrive-list-chunk int Size of listing chunk. (default 1000)
- --onedrive-no-versions Remove all versions on modifying operations
- --onedrive-region string Choose national cloud region for OneDrive. (default "global")
- --onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs.
- --onedrive-token string OAuth Access Token as a JSON blob.
- --onedrive-token-url string Token server url.
- --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size. (default 10Mi)
- --opendrive-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
- --opendrive-password string Password. (obscured)
- --opendrive-username string Username
- --pcloud-auth-url string Auth server URL.
- --pcloud-client-id string OAuth Client Id
- --pcloud-client-secret string OAuth Client Secret
- --pcloud-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
- --pcloud-hostname string Hostname to connect to. (default "api.pcloud.com")
- --pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point. (default "d0")
- --pcloud-token string OAuth Access Token as a JSON blob.
- --pcloud-token-url string Token server url.
- --premiumizeme-encoding MultiEncoder This sets the encoding for the backend. (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
- --putio-encoding MultiEncoder This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
- --qingstor-access-key-id string QingStor Access Key ID
- --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4Mi)
- --qingstor-connection-retries int Number of connection retries. (default 3)
- --qingstor-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Ctl,InvalidUtf8)
- --qingstor-endpoint string Enter an endpoint URL to connection QingStor API.
- --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
- --qingstor-secret-access-key string QingStor Secret Access Key (password)
- --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
- --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
- --qingstor-zone string Zone to connect to.
- --s3-access-key-id string AWS Access Key ID.
- --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
- --s3-bucket-acl string Canned ACL used when creating buckets.
- --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5Mi)
- --s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
- --s3-disable-checksum Don't store MD5 checksum with object metadata
- --s3-disable-http2 Disable usage of http2 for S3 backends
- --s3-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8,Dot)
- --s3-endpoint string Endpoint for S3 API.
- --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
- --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
- --s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.
- --s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request). (default 1000)
- --s3-location-constraint string Location constraint - must be set to match the Region.
- --s3-max-upload-parts int Maximum number of parts in a multipart upload. (default 10000)
- --s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed. (default 1m0s)
- --s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool.
- --s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
- --s3-no-head If set, don't HEAD uploaded objects to check integrity
- --s3-no-head-object If set, don't HEAD objects
- --s3-profile string Profile to use in the shared credentials file
- --s3-provider string Choose your S3 provider.
- --s3-region string Region to connect to.
- --s3-requester-pays Enables requester pays option when interacting with S3 bucket.
- --s3-secret-access-key string AWS Secret Access Key (password)
- --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
- --s3-session-token string An AWS session token
- --s3-shared-credentials-file string Path to the shared credentials file
- --s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3.
- --s3-sse-customer-key string If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.
- --s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional).
- --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
- --s3-storage-class string The storage class to use when storing new objects in S3.
- --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
- --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
- --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint.
- --s3-v2-auth If true use v2 authentication.
- --seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
- --seafile-create-library Should rclone create a library if it doesn't exist
- --seafile-encoding MultiEncoder This sets the encoding for the backend. (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
- --seafile-library string Name of the library. Leave blank to access all non-encrypted libraries.
- --seafile-library-key string Library password (for encrypted libraries only). Leave blank if you pass it through the command line. (obscured)
- --seafile-pass string Password (obscured)
- --seafile-url string URL of seafile host to connect to
- --seafile-user string User name (usually email address)
- --sftp-ask-password Allow asking for SFTP password when needed.
- --sftp-disable-concurrent-reads If set don't use concurrent reads
- --sftp-disable-concurrent-writes If set don't use concurrent writes
- --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
- --sftp-host string SSH host to connect to
- --sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
- --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
- --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. (obscured)
- --sftp-key-pem string Raw PEM-encoded private key, If specified, will override key_file parameter.
- --sftp-key-use-agent When set forces the usage of the ssh-agent.
- --sftp-known-hosts-file string Optional path to known_hosts file.
- --sftp-md5sum-command string The command used to read md5 hashes. Leave blank for autodetect.
- --sftp-pass string SSH password, leave blank to use ssh-agent. (obscured)
- --sftp-path-override string Override path used by SSH connection.
- --sftp-port string SSH port, leave blank to use default (22)
- --sftp-pubkey-file string Optional path to public key file.
- --sftp-server-command string Specifies the path or command to run a sftp server on the remote host.
- --sftp-set-modtime Set the modified time on the remote if set. (default true)
- --sftp-sha1sum-command string The command used to read sha1 hashes. Leave blank for autodetect.
- --sftp-skip-links Set to skip any symlinks and any other non regular files.
- --sftp-subsystem string Specifies the SSH2 subsystem on the remote host. (default "sftp")
- --sftp-use-fstat If set use fstat instead of stat
- --sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods.
- --sftp-user string SSH username, leave blank for current username, $USER
- --sharefile-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 64Mi)
- --sharefile-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
- --sharefile-endpoint string Endpoint for API calls.
- --sharefile-root-folder-id string ID of the root folder
- --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload. (default 128Mi)
- --skip-links Don't warn about skipped symlinks.
- --sugarsync-access-key-id string Sugarsync Access Key ID.
- --sugarsync-app-id string Sugarsync App ID.
- --sugarsync-authorization string Sugarsync authorization
- --sugarsync-authorization-expiry string Sugarsync authorization expiry
- --sugarsync-deleted-id string Sugarsync deleted folder id
- --sugarsync-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Ctl,InvalidUtf8,Dot)
- --sugarsync-hard-delete Permanently delete files if true
- --sugarsync-private-access-key string Sugarsync Private Access Key
- --sugarsync-refresh-token string Sugarsync refresh token
- --sugarsync-root-id string Sugarsync root id
- --sugarsync-user string Sugarsync user
- --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
- --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
- --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
- --swift-auth string Authentication URL for server (OS_AUTH_URL).
- --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
- --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
- --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5Gi)
- --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-encoding MultiEncoder This sets the encoding for the backend. (default Slash,InvalidUtf8)
- --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
- --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
- --swift-key string API key or password (OS_PASSWORD).
- --swift-leave-parts-on-error If true avoid calling abort upload on a failure. It should be set to true for resuming uploads across different sessions.
- --swift-no-chunk Don't chunk files during streaming upload.
- --swift-region string Region name - optional (OS_REGION_NAME)
- --swift-storage-policy string The storage policy to use when creating a new container
- --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
- --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
- --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
- --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
- --swift-user string User name to log in (OS_USERNAME).
- --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
- --tardigrade-access-grant string Access Grant.
- --tardigrade-api-key string API Key.
- --tardigrade-passphrase string Encryption Passphrase. To access existing objects enter passphrase used for uploading.
- --tardigrade-provider string Choose an authentication method. (default "existing")
- --tardigrade-satellite-address <nodeid>@<address>:<port> Satellite Address. Custom satellite address should match the format: <nodeid>@<address>:<port>. (default "us-central-1.tardigrade.io")
- --union-action-policy string Policy to choose upstream on ACTION category. (default "epall")
- --union-cache-time int Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used. (default 120)
- --union-create-policy string Policy to choose upstream on CREATE category. (default "epmfs")
- --union-search-policy string Policy to choose upstream on SEARCH category. (default "ff")
- --union-upstreams string List of space separated upstreams.
- --uptobox-access-token string Your access Token, get it from https://uptobox.com/my_account
- --uptobox-encoding MultiEncoder This sets the encoding for the backend. (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
- --webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
- --webdav-bearer-token-command string Command to run to get a bearer token
- --webdav-encoding string This sets the encoding for the backend.
- --webdav-headers CommaSepList Set HTTP headers for all transactions
- --webdav-pass string Password. (obscured)
- --webdav-url string URL of http host to connect to
- --webdav-user string User name. In case NTLM authentication is used, the username should be in the format 'Domain\User'.
- --webdav-vendor string Name of the Webdav site/service/software you are using
- --yandex-auth-url string Auth server URL.
- --yandex-client-id string OAuth Client Id
- --yandex-client-secret string OAuth Client Secret
- --yandex-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
- --yandex-token string OAuth Access Token as a JSON blob.
- --yandex-token-url string Token server url.
- --zoho-auth-url string Auth server URL.
- --zoho-client-id string OAuth Client Id
- --zoho-client-secret string OAuth Client Secret
- --zoho-encoding MultiEncoder This sets the encoding for the backend. (default Del,Ctl,InvalidUtf8)
- --zoho-region string Zoho region to connect to.
- --zoho-token string OAuth Access Token as a JSON blob.
- --zoho-token-url string Token server url.
+ --acd-auth-url string Auth server URL
+ --acd-client-id string OAuth Client Id
+ --acd-client-secret string OAuth Client Secret
+ --acd-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi)
+ --acd-token string OAuth Access Token as a JSON blob
+ --acd-token-url string Token server url
+ --acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s)
+ --alias-remote string Remote or path to alias
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive
+ --azureblob-account string Storage Account Name
+ --azureblob-archive-tier-delete Delete archive tier blobs before overwriting
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100 MiB) (default 4Mi)
+ --azureblob-disable-checksum Don't store MD5 checksum with object metadata
+ --azureblob-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key
+ --azureblob-list-chunk int Size of blob list (default 5000)
+ --azureblob-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
+ --azureblob-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
+ --azureblob-msi-client-id string Object ID of the user-assigned MSI to use, if any
+ --azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
+ --azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any
+ --azureblob-no-head-object If set, do not do HEAD before GET when getting objects
+ --azureblob-public-access string Public access level of a container: blob or container
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-service-principal-file string Path to file containing credentials for use with a service principal
+ --azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
+ --azureblob-use-emulator Uses local storage emulator if provided as 'true'
+ --azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size (default 96Mi)
+ --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
+ --b2-disable-checksum Disable checksums for large (> upload cutoff) files
+ --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
+ --b2-download-url string Custom endpoint for downloads
+ --b2-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --b2-endpoint string Endpoint for the service
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files
+ --b2-key string Application Key
+ --b2-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
+ --b2-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
+ --b2-versions Include old versions in directory listings
+ --box-access-token string Box App Primary Access Token
+ --box-auth-url string Auth server URL
+ --box-box-config-file string Box App config.json location
+ --box-box-sub-type string (default "user")
+ --box-client-id string OAuth Client Id
+ --box-client-secret string OAuth Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file (default 100)
+ --box-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+ --box-list-chunk int Size of listing chunk 1-1000 (default 1000)
+ --box-owned-by string Only show items owned by the login (email address) passed in
+ --box-root-folder-id string Fill in for rclone to use a non root folder as its starting point
+ --box-token string OAuth Access Token as a JSON blob
+ --box-token-url string Token server url
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50 MiB) (default 50Mi)
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "$HOME/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data) (default 5Mi)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk (default 10Gi)
+ --cache-db-path string Directory to store file structure metadata DB (default "$HOME/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times, etc.) (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verification when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user (obscured)
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-remote string Remote to cache
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks (default 2Gi)
+ --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
+ --chunker-hash-type string Choose how chunker handles hash sums (default "md5")
+ --chunker-remote string Remote to chunk/unchunk
+ --compress-level int GZIP compression level (-2 to 9) (default -1)
+ --compress-mode string Compression mode (default "gzip")
+ --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi)
+ --compress-remote string Remote to compress
+ -L, --copy-links Follow symlinks and copy the pointed to item
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact (default true)
+ --crypt-filename-encryption string How to encrypt the filenames (default "standard")
+ --crypt-no-data-encryption Option to either encrypt file data or leave it unencrypted
+ --crypt-password string Password or pass phrase for encryption (obscured)
+ --crypt-password2 string Password or pass phrase for salt (obscured)
+ --crypt-remote string Remote to encrypt/decrypt
+ --crypt-server-side-across-configs Allow server-side operations (e.g. copy) to work across different crypt configs
+ --crypt-show-mapping For all files listed show how the names encrypt
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs
+ --drive-auth-owner-only Only consider files owned by the authenticated user
+ --drive-auth-url string Auth server URL
+ --drive-chunk-size SizeSuffix Upload chunk size (default 8Mi)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string OAuth Client Secret
+ --drive-disable-http2 Disable drive using http2 (default true)
+ --drive-encoding MultiEncoder This sets the encoding for the backend (default InvalidUtf8)
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: See export_formats
+ --drive-impersonate string Impersonate this user when using a service account
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs
+ --drive-keep-revision-forever Keep new head revision of each file forever
+ --drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
+ --drive-pacer-burst int Number of API calls to allow without sleeping (default 100)
+ --drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive
+ --drive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different drive configs
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-size-as-quota Show sizes as storage quota usage, not actual size
+ --drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only
+ --drive-skip-gdocs Skip google documents in all listings
+ --drive-skip-shortcuts If set skip shortcut files
+ --drive-starred-only Only show files that are starred
+ --drive-stop-on-download-limit Make download limit errors be fatal
+ --drive-stop-on-upload-limit Make upload limit errors be fatal
+ --drive-team-drive string ID of the Shared Drive (Team Drive)
+ --drive-token string OAuth Access Token as a JSON blob
+ --drive-token-url string Token server url
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8Mi)
+ --drive-use-created-date Use file created date instead of modified date
+ --drive-use-shared-date Use date file was shared instead of modified date
+ --drive-use-trash Send files to the trash instead of deleting permanently (default true)
+ --drive-v2-download-min-size SizeSuffix If Objects are greater, use drive v2 API to download (default off)
+ --dropbox-auth-url string Auth server URL
+ --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
+ --dropbox-batch-mode string Upload file batching sync|async|off (default "sync")
+ --dropbox-batch-size int Max number of files in upload batch
+ --dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
+ --dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
+ --dropbox-client-id string OAuth Client Id
+ --dropbox-client-secret string OAuth Client Secret
+ --dropbox-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
+ --dropbox-impersonate string Impersonate this user when using a business account
+ --dropbox-shared-files Instructs rclone to work on individual shared files
+ --dropbox-shared-folders Instructs rclone to work on shared folders
+ --dropbox-token string OAuth Access Token as a JSON blob
+ --dropbox-token-url string Token server url
+ --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
+ --fichier-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
+ --fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
+ --fichier-shared-folder string If you want to download a shared folder, add this parameter
+ --filefabric-encoding MultiEncoder This sets the encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --filefabric-permanent-token string Permanent Authentication Token
+ --filefabric-root-folder-id string ID of the root folder
+ --filefabric-token string Session Token
+ --filefabric-token-expiry string Token expiry time
+ --filefabric-url string URL of the Enterprise File Fabric to connect to
+ --filefabric-version string Version read from the file fabric
+ --ftp-close-timeout Duration Maximum time to wait for a response to close (default 1m0s)
+ --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
+ --ftp-disable-epsv Disable using EPSV even if server advertises support
+ --ftp-disable-mlsd Disable using MLSD even if server advertises support
+ --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
+ --ftp-encoding MultiEncoder This sets the encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
+ --ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
+ --ftp-host string FTP host to connect to
+ --ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
+ --ftp-no-check-certificate Do not verify the TLS certificate of the server
+ --ftp-pass string FTP password (obscured)
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s)
+ --ftp-tls Use Implicit FTPS (FTP over TLS)
+ --ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
+ --ftp-user string FTP username, leave blank for current username, $USER
+ --ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk)
+ --gcs-anonymous Access public buckets and objects without credentials
+ --gcs-auth-url string Auth server URL
+ --gcs-bucket-acl string Access Control List for new buckets
+ --gcs-bucket-policy-only Access checks should use bucket-level IAM policies
+ --gcs-client-id string OAuth Client Id
+ --gcs-client-secret string OAuth Client Secret
+ --gcs-encoding MultiEncoder This sets the encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gcs-location string Location for the newly created buckets
+ --gcs-object-acl string Access Control List for new objects
+ --gcs-project-number string Project number
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage
+ --gcs-token string OAuth Access Token as a JSON blob
+ --gcs-token-url string Token server url
+ --gphotos-auth-url string Auth server URL
+ --gphotos-client-id string OAuth Client Id
+ --gphotos-client-secret string OAuth Client Secret
+ --gphotos-encoding MultiEncoder This sets the encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gphotos-include-archived Also view and download archived media
+ --gphotos-read-only Set to make the Google Photos backend read only
+ --gphotos-read-size Set to read the size of media items
+ --gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
+ --gphotos-token string OAuth Access Token as a JSON blob
+ --gphotos-token-url string Token server url
+ --hasher-auto-size SizeSuffix Auto-update checksum for files smaller than this size (disabled by default)
+ --hasher-hashes CommaSepList Comma separated list of supported checksum types (default md5,sha1)
+ --hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
+ --hasher-remote string Remote to cache checksums for (e.g. myRemote:path)
+ --hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
+ --hdfs-encoding MultiEncoder This sets the encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
+ --hdfs-namenode string Hadoop name node and port
+ --hdfs-service-principal-name string Kerberos service principal name for the namenode
+ --hdfs-username string Hadoop user name
+ --http-headers CommaSepList Set HTTP headers for all transactions
+ --http-no-head Don't use HEAD requests to find file sizes in dir listing
+ --http-no-slash Set this if the site doesn't end directories with /
+ --http-url string URL of http host to connect to
+ --hubic-auth-url string Auth server URL
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
+ --hubic-client-id string OAuth Client Id
+ --hubic-client-secret string OAuth Client Secret
+ --hubic-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8)
+ --hubic-no-chunk Don't chunk files during streaming upload
+ --hubic-token string OAuth Access Token as a JSON blob
+ --hubic-token-url string Token server url
+ --jottacloud-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
+ --jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
+ --jottacloud-trashed-only Only show files that are in the trash
+ --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails (default 10Mi)
+ --koofr-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
+ --koofr-mountid string Mount ID of the mount to use
+ --koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
+ --koofr-setmtime Does the backend support setting modification time (default true)
+ --koofr-user string Your Koofr user name
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
+ --local-case-insensitive Force the filesystem to report itself as case insensitive
+ --local-case-sensitive Force the filesystem to report itself as case sensitive
+ --local-encoding MultiEncoder This sets the encoding for the backend (default Slash,Dot)
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-preallocate Disable preallocation of disk space for transferred files
+ --local-no-set-modtime Disable setting modtime
+ --local-no-sparse Disable sparse files for multi-thread downloads
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --local-unicode-normalization Apply unicode NFC normalization to paths and filenames
+ --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
+ --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
+ --mailru-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --mailru-pass string Password (obscured)
+ --mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
+ --mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
+ --mailru-speedup-max-disk SizeSuffix This option allows you to disable speedup (put by hash) for large files (default 3Gi)
+ --mailru-speedup-max-memory SizeSuffix Files larger than the size given below will always be hashed on disk (default 32Mi)
+ --mailru-user string User name (usually email)
+ --mega-debug Output more debug from Mega
+ --mega-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --mega-hard-delete Delete files permanently rather than putting them into the trash
+ --mega-pass string Password (obscured)
+ --mega-user string User name
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only)
+ --onedrive-auth-url string Auth server URL
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
+ --onedrive-client-id string OAuth Client Id
+ --onedrive-client-secret string OAuth Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
+ --onedrive-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
+ --onedrive-link-password string Set the password for links created by the link command
+ --onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
+ --onedrive-link-type string Set the type of the links created by the link command (default "view")
+ --onedrive-list-chunk int Size of listing chunk (default 1000)
+ --onedrive-no-versions Remove all versions on modifying operations
+ --onedrive-region string Choose national cloud region for OneDrive (default "global")
+ --onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs
+ --onedrive-token string OAuth Access Token as a JSON blob
+ --onedrive-token-url string Token server url
+ --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
+ --opendrive-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
+ --opendrive-password string Password (obscured)
+ --opendrive-username string Username
+ --pcloud-auth-url string Auth server URL
+ --pcloud-client-id string OAuth Client Id
+ --pcloud-client-secret string OAuth Client Secret
+ --pcloud-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --pcloud-hostname string Hostname to connect to (default "api.pcloud.com")
+ --pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0")
+ --pcloud-token string OAuth Access Token as a JSON blob
+ --pcloud-token-url string Token server url
+ --premiumizeme-encoding MultiEncoder This sets the encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --putio-encoding MultiEncoder This sets the encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
+ --qingstor-connection-retries int Number of connection retries (default 3)
+ --qingstor-encoding MultiEncoder This sets the encoding for the backend (default Slash,Ctl,InvalidUtf8)
+ --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API
+ --qingstor-env-auth Get QingStor credentials from runtime
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-upload-concurrency int Concurrency for multipart uploads (default 1)
+ --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
+ --qingstor-zone string Zone to connect to
+ --s3-access-key-id string AWS Access Key ID
+ --s3-acl string Canned ACL used when creating buckets and storing or copying objects
+ --s3-bucket-acl string Canned ACL used when creating buckets
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
+ --s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-disable-http2 Disable usage of http2 for S3 backends
+ --s3-download-url string Custom endpoint for downloads
+ --s3-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --s3-endpoint string Endpoint for S3 API
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
+ --s3-force-path-style If true use path style access if false use virtual hosted style (default true)
+ --s3-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
+ --s3-list-chunk int Size of listing chunk (response list for each ListObject S3 request) (default 1000)
+ --s3-location-constraint string Location constraint - must be set to match the Region
+ --s3-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
+ --s3-memory-pool-flush-time Duration How often internal memory buffer pools will be flushed (default 1m0s)
+ --s3-memory-pool-use-mmap Whether to use mmap buffers in internal memory pool
+ --s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
+ --s3-no-head If set, don't HEAD uploaded objects to check integrity
+ --s3-no-head-object If set, do not do HEAD before GET when getting objects
+ --s3-profile string Profile to use in the shared credentials file
+ --s3-provider string Choose your S3 provider
+ --s3-region string Region to connect to
+ --s3-requester-pays Enables requester pays option when interacting with S3 bucket
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
+ --s3-session-token string An AWS session token
+ --s3-shared-credentials-file string Path to the shared credentials file
+ --s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
+ --s3-sse-customer-key string If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data
+ --s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
+ --s3-storage-class string The storage class to use when storing new objects in S3
+ --s3-upload-concurrency int Concurrency for multipart uploads (default 4)
+ --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
+ --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
+ --s3-v2-auth If true use v2 authentication
+ --seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
+ --seafile-create-library Should rclone create a library if it doesn't exist
+ --seafile-encoding MultiEncoder This sets the encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
+ --seafile-library string Name of the library
+ --seafile-library-key string Library password (for encrypted libraries only) (obscured)
+ --seafile-pass string Password (obscured)
+ --seafile-url string URL of seafile host to connect to
+ --seafile-user string User name (usually email address)
+ --sftp-ask-password Allow asking for SFTP password when needed
+ --sftp-disable-concurrent-reads If set don't use concurrent reads
+ --sftp-disable-concurrent-writes If set don't use concurrent writes
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
+ --sftp-host string SSH host to connect to
+ --sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
+ --sftp-key-file string Path to PEM-encoded private key file
+ --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file (obscured)
+ --sftp-key-pem string Raw PEM-encoded private key
+ --sftp-key-use-agent When set forces the usage of the ssh-agent
+ --sftp-known-hosts-file string Optional path to known_hosts file
+ --sftp-md5sum-command string The command used to read md5 hashes
+ --sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
+ --sftp-path-override string Override path used by SSH connection
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-pubkey-file string Optional path to public key file
+ --sftp-server-command string Specifies the path or command to run a sftp server on the remote host
+ --sftp-set-modtime Set the modified time on the remote if set (default true)
+ --sftp-sha1sum-command string The command used to read sha1 hashes
+ --sftp-skip-links Set to skip any symlinks and any other non regular files
+ --sftp-subsystem string Specifies the SSH2 subsystem on the remote host (default "sftp")
+ --sftp-use-fstat If set use fstat instead of stat
+ --sftp-use-insecure-cipher Enable the use of insecure ciphers and key exchange methods
+ --sftp-user string SSH username, leave blank for current username, $USER
+ --sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
+ --sharefile-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --sharefile-endpoint string Endpoint for API calls
+ --sharefile-root-folder-id string ID of the root folder
+ --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
+ --sia-api-password string Sia Daemon API Password (obscured)
+ --sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
+ --sia-encoding MultiEncoder This sets the encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
+ --sia-user-agent string Siad User Agent (default "Sia-Agent")
+ --skip-links Don't warn about skipped symlinks
+ --sugarsync-access-key-id string Sugarsync Access Key ID
+ --sugarsync-app-id string Sugarsync App ID
+ --sugarsync-authorization string Sugarsync authorization
+ --sugarsync-authorization-expiry string Sugarsync authorization expiry
+ --sugarsync-deleted-id string Sugarsync deleted folder id
+ --sugarsync-encoding MultiEncoder This sets the encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
+ --sugarsync-hard-delete Permanently delete files if true
+ --sugarsync-private-access-key string Sugarsync Private Access Key
+ --sugarsync-refresh-token string Sugarsync refresh token
+ --sugarsync-root-id string Sugarsync root id
+ --sugarsync-user string Sugarsync user
+ --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+ --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+ --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+ --swift-auth string Authentication URL for server (OS_AUTH_URL)
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-encoding MultiEncoder This sets the encoding for the backend (default Slash,InvalidUtf8)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form
+ --swift-key string API key or password (OS_PASSWORD)
+ --swift-leave-parts-on-error If true avoid calling abort upload on a failure
+ --swift-no-chunk Don't chunk files during streaming upload
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME)
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID)
+ --tardigrade-access-grant string Access grant
+ --tardigrade-api-key string API key
+ --tardigrade-passphrase string Encryption passphrase
+ --tardigrade-provider string Choose an authentication method (default "existing")
+ --tardigrade-satellite-address string Satellite address (default "us-central-1.tardigrade.io")
+ --union-action-policy string Policy to choose upstream on ACTION category (default "epall")
+ --union-cache-time int Cache time of usage and free space (in seconds) (default 120)
+ --union-create-policy string Policy to choose upstream on CREATE category (default "epmfs")
+ --union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
+ --union-upstreams string List of space separated upstreams
+ --uptobox-access-token string Your access token
+ --uptobox-encoding MultiEncoder This sets the encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
+ --webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
+ --webdav-bearer-token-command string Command to run to get a bearer token
+ --webdav-encoding string This sets the encoding for the backend
+ --webdav-headers CommaSepList Set HTTP headers for all transactions
+ --webdav-pass string Password (obscured)
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-auth-url string Auth server URL
+ --yandex-client-id string OAuth Client Id
+ --yandex-client-secret string OAuth Client Secret
+ --yandex-encoding MultiEncoder This sets the encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --yandex-token string OAuth Access Token as a JSON blob
+ --yandex-token-url string Token server url
+ --zoho-auth-url string Auth server URL
+ --zoho-client-id string OAuth Client Id
+ --zoho-client-secret string OAuth Client Secret
+ --zoho-encoding MultiEncoder This sets the encoding for the backend (default Del,Ctl,InvalidUtf8)
+ --zoho-region string Zoho region to connect to
+ --zoho-token string OAuth Access Token as a JSON blob
+ --zoho-token-url string Token server url
Docker Volume Plugin
Introduction
Docker 1.9 has added support for creating named volumes via command-line interface and mounting them in containers as a way to share data between them. Since Docker 1.10 you can create named volumes with Docker Compose by descriptions in docker-compose.yml files for use by container groups on a single host. As of Docker 1.12 volumes are supported by Docker Swarm included with Docker Engine and created from descriptions in swarm compose v3 files for use with swarm stacks across multiple cluster nodes.
@@ -7688,8 +8015,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Create two directories required by the rclone docker plugin:
sudo mkdir -p /var/lib/docker-plugins/rclone/config
sudo mkdir -p /var/lib/docker-plugins/rclone/cache
-Install the managed rclone docker plugin:
-docker plugin install rclone/docker-volume-rclone args="-v" --alias rclone --grant-all-permissions
+Install the managed rclone docker plugin for your architecture (here amd64):
+docker plugin install rclone/docker-volume-rclone:amd64 args="-v" --alias rclone --grant-all-permissions
docker plugin list
Create your SFTP volume:
docker volume create firstvolume -d rclone -o type=sftp -o sftp-host=_hostname_ -o sftp-user=_username_ -o sftp-pass=_password_ -o allow-other=true
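To verify the volume works, you can mount it into a throwaway container and list its contents. A minimal check (the alpine image is just an example, any small image will do):
docker run --rm -v firstvolume:/mnt alpine ls -l /mnt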
@@ -7774,9 +8101,14 @@ docker volume inspect vol1
Notice a few important details:
YAML prefers _ in option names instead of -.
YAML treats single and double quotes interchangeably. Simple strings and integers can be left unquoted.
Boolean values must be quoted like 'true' or "false" because these two words are reserved by YAML.
The filesystem string is keyed with remote (or with fs). Normally you can omit quotes here, but if the string ends with a colon, you must quote it like remote: "storage_box:".
YAML is picky about surrounding braces in values as this is in fact another syntax for key/value mappings. For example, JSON access tokens usually contain double quotes and surrounding braces, so you must put them in single quotes.
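As a sketch of these rules, a volume description in docker-compose.yml might look as follows (the remote name storage_box: and the token value are made-up placeholders):
volumes:
  my_vol:
    driver: rclone
    driver_opts:
      remote: "storage_box:"            # quoted because the string ends with a colon
      allow_other: 'true'               # booleans must be quoted
      vfs_cache_mode: full              # simple strings can stay unquoted
      token: '{"access_token":"xxx"}'   # braces require single quotes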
Installing as Managed Plugin
The Docker daemon can install plugins from an image registry and run them as managed plugins. We maintain the docker-volume-rclone plugin image on Docker Hub.
+The rclone volume plugin requires Docker Engine >= 19.03.15.
The plugin requires the presence of two directories on the host before it can be installed. Note that the plugin will not create them automatically. By default they must exist on the host at the following locations (though you can tweak the paths):
/var/lib/docker-plugins/rclone/config is reserved for the rclone.conf config file and must exist even if it's empty and the config file is not present.
/var/lib/docker-plugins/rclone/cache holds the plugin state file as well as optional VFS caches.
You can install the managed plugin with default settings as follows:
-docker plugin install rclone/docker-volume-rclone:latest --grant-all-permissions --alias rclone
+docker plugin install rclone/docker-volume-rclone:amd64 --grant-all-permissions --alias rclone
+The :amd64 part of the image specification after the colon is called a tag. Usually you will want to install the latest plugin for your architecture. In this case the tag will just name it, like amd64 above. The following plugin architectures are currently available: amd64, arm64, arm-v7.
+Sometimes you might want a concrete plugin version, not the latest one. Then you should use an image tag in the form :ARCHITECTURE-VERSION. For example, to install plugin version v1.56.2 on architecture arm64 you will use the tag arm64-1.56.2 (note the removed v), so the full image specification becomes rclone/docker-volume-rclone:arm64-1.56.2.
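For example, pinning that exact version, with the same alias and permissions as in the earlier examples, would look like:
docker plugin install rclone/docker-volume-rclone:arm64-1.56.2 --alias rclone --grant-all-permissions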
+We also provide the latest plugin tag, but since docker does not support multi-architecture plugins as of the time of this writing, this tag is currently an alias for amd64. By convention the latest tag is the default one and can be omitted, thus both rclone/docker-volume-rclone:latest and just rclone/docker-volume-rclone will refer to the latest plugin release for the amd64 platform.
+Also the amd64 part can be omitted from versioned rclone plugin tags. For example, the image reference rclone/docker-volume-rclone:amd64-1.56.2 can be abbreviated as rclone/docker-volume-rclone:1.56.2 for convenience. However, for non-intel architectures you still have to use the full tag, as amd64 or latest will fail to start.
The managed plugin is in fact a special container running in a namespace separate from normal docker containers. Inside it runs the rclone serve docker command. The config and cache directories are bind-mounted into the container at start. The docker daemon connects to a unix socket created by the command inside the container. The command creates on-demand remote mounts right inside, then docker machinery propagates them through kernel mount namespaces and bind-mounts into requesting user containers.
You can tweak a few plugin settings after installation when it's disabled (not in use), for instance:
docker plugin disable rclone
@@ -7784,14 +8116,15 @@ docker plugin set rclone RCLONE_VERBOSE=2 config=/etc/rclone args="--vfs-ca
docker plugin enable rclone
docker plugin inspect rclone
Note that if docker refuses to disable the plugin, you should find and remove all active volumes connected with it as well as containers and swarm services that use them. This is rather tedious so please carefully plan in advance.
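One way to hunt these down is to filter volumes by driver. A sketch, assuming the plugin was installed under the rclone alias and no container or swarm service still uses the volumes:
# list volumes served by the rclone plugin
docker volume ls --filter driver=rclone
# remove them all once nothing uses them any more
docker volume ls --filter driver=rclone -q | xargs -r docker volume rm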
-You can tweak the following settings: args, config, cache, and RCLONE_VERBOSE. It's your task to keep plugin settings in sync across swarm cluster nodes.
+You can tweak the following settings: args, config, cache, HTTP_PROXY, HTTPS_PROXY, NO_PROXY and RCLONE_VERBOSE. It's your task to keep plugin settings in sync across swarm cluster nodes.
args sets command-line arguments for the rclone serve docker command (none by default). Arguments should be separated by spaces, so you will normally want to put them in quotes on the docker plugin set command line. Both serve docker flags and generic rclone flags are supported, including backend parameters that will be used as defaults for volume creation. Note that the plugin will fail (due to this docker bug) if the args value is empty. Use e.g. args="-v" as a workaround.
config=/host/dir sets an alternative host location for the config directory. The plugin will look for rclone.conf here. It's not an error if the config file is not present but the directory must exist. Please note that the plugin can periodically rewrite the config file, for example when it renews storage access tokens. Keep this in mind and try to avoid races between the plugin and other instances of rclone on the host that might try to change the config simultaneously, resulting in a corrupted rclone.conf. You can also put stuff like private key files for SFTP remotes in this directory. Just note that it's bind-mounted inside the plugin container at the predefined path /data/config. For example, if your key file is named sftp-box1.key on the host, the corresponding volume config option should read -o sftp-key-file=/data/config/sftp-box1.key.
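Putting the key file example together with volume creation, a sketch (host name and user name are placeholders):
docker volume create sftpvol -d rclone \
    -o type=sftp -o sftp-host=my.sftp.host -o sftp-user=me \
    -o sftp-key-file=/data/config/sftp-box1.key -o allow-other=true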
cache=/host/dir sets an alternative host location for the cache directory. The plugin will keep VFS caches here. It will also create and maintain the docker-plugin.state file in this directory. When the plugin is restarted or reinstalled, it will look in this file to recreate any volumes that existed previously. However, they will not be re-mounted into consuming containers after restart. Usually this is not a problem as the docker daemon normally will restart affected user containers after failures, daemon restarts or host reboots.
RCLONE_VERBOSE sets plugin verbosity from 0 (errors only, by default) to 2 (debugging). Verbosity can also be tweaked via args="-v [-v] ...". Since arguments are more generic, you will rarely need this setting. The plugin output by default feeds the docker daemon log on the local host. Log entries are reflected as errors in the docker log but retain their actual level assigned by rclone in the encapsulated message string.
+HTTP_PROXY, HTTPS_PROXY and NO_PROXY customize the plugin proxy settings.
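For example, to point the plugin at a corporate proxy (the proxy URL here is a placeholder), change the settings while the plugin is disabled:
docker plugin disable rclone
docker plugin set rclone HTTP_PROXY=http://proxy.example.com:3128 HTTPS_PROXY=http://proxy.example.com:3128
docker plugin enable rclone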
You can set custom plugin options right when you install it, in one go:
docker plugin remove rclone
-docker plugin install rclone/docker-volume-rclone:latest \
+docker plugin install rclone/docker-volume-rclone:amd64 \
--alias rclone --grant-all-permissions \
args="-v --allow-other" config=/etc/rclone
docker plugin inspect rclone
@@ -7811,7 +8144,7 @@ docker plugin inspect rclone
First, install rclone. You can just run it (type rclone serve docker
and hit enter) for the test.
Install FUSE:
sudo apt-get -y install fuse
-Download two systemd configuration files: docker-volume-rclone.service and docker-volume-rclone.socket.
+Download two systemd configuration files: docker-volume-rclone.service and docker-volume-rclone.socket.
Put them to the /etc/systemd/system/
directory:
cp docker-volume-rclone.service /etc/systemd/system/
cp docker-volume-rclone.socket /etc/systemd/system/
@@ -7833,7 +8166,7 @@ systemctl restart docker
docker plugin inspect rclone
Note that docker (including latest 20.10.7) will not show actual values of args
, just the defaults.
Use journalctl --unit docker
to see managed plugin output as part of the docker daemon log. Note that docker reflects plugin lines as errors but their actual level can be seen from the encapsulated message string.
You will usually install the latest version of managed plugin. Use the following commands to print the actual installed version:
+You will usually install the latest version of managed plugin for your platform. Use the following commands to print the actual installed version:
PLUGID=$(docker plugin list --no-trunc | awk '/rclone/{print$1}')
sudo runc --root /run/docker/runtime-runc/plugins.moby exec $PLUGID rclone version
You can even use runc
to run a shell inside the plugin container:
though this is rarely needed.
+Finally I'd like to mention a caveat with updating volume settings. Docker CLI does not have a dedicated command like docker volume update
. It may be tempting to invoke docker volume create
with updated options on an existing volume, but there is a gotcha: the command will do nothing, and it won't even return an error. I hope that the docker maintainers will fix this some day. In the meantime, be aware that you must remove your volume before recreating it with new settings:
docker volume remove my_vol
docker volume create my_vol -d rclone -o opt1=new_val1 ...
@@ -7854,6 +8188,7 @@ docker volume inspect my_vol
This is a backend for the 1fichier cloud storage service. Note that a Premium subscription is required to use the API.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
The initial setup for 1Fichier involves getting the API key from the website which you need to do in your browser.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -7904,7 +8239,7 @@ y/e/d> y
1Fichier can have two files with exactly the same name and path (unlike a normal file system).
Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.
-In addition to the default restricted characters set the following characters are also replaced:
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the standard options specific to fichier (1Fichier).
Your API Key, get it from https://1fichier.com/console/params.pl
+Your API Key, get it from https://1fichier.com/console/params.pl.
Here are the advanced options specific to fichier (1Fichier).
If you want to download a shared folder, add this parameter
+If you want to download a shared folder, add this parameter.
If you want to download a shared file that is password protected, add this parameter
+If you want to download a shared file that is password protected, add this parameter.
NB Input to this must be obscured - see rclone obscure.
If you want to list the files in a shared folder that is password protected, add this parameter
+If you want to list the files in a shared folder that is password protected, add this parameter.
NB Input to this must be obscured - see rclone obscure.
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
rclone about
is not supported by the 1Fichier backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about. See rclone about.
During the initial setup with rclone config
you will specify the target remote. The target remote can either be a local path or another remote.
Subfolders can be used in target remote. Assume an alias remote named backup
with the target mydrive:private/backup
. Invoking rclone mkdir backup:desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop
.
There will be no special handling of paths containing ..
segments. Invoking rclone mkdir backup:../desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop
. The empty path is not allowed as a remote. To alias the current directory use .
instead.
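As a sketch, the backup alias from the example above would look like this in the config file (adjust the target remote to your own):
[backup]
type = alias
remote = mydrive:private/backup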
Here is an example of how to make an alias called remote
for local folder. First run:
rclone config
This will guide you through an interactive setup process:
@@ -8075,10 +8411,11 @@ e/n/d/r/c/s/q> qrclone ls remote:
Copy another local directory to the alias directory called source
rclone copy /home/source remote:source
-Here are the standard options specific to alias (Alias for an existing remote).
Remote or path to alias. Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
+Remote or path to alias.
+Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
Important: rclone supports Amazon Drive only if you have your own set of API keys. Unfortunately the Amazon Drive developer program is now closed to new entries so if you don't already have your own set of keys you will not be able to use rclone with Amazon Drive.
For the history on why rclone no longer has a set of Amazon Drive API keys see the forum.
If you happen to know anyone who works at Amazon then please ask them to re-instate rclone into the Amazon Drive developer program - thanks!
-The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config
walks you through it.
The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.
Since rclone doesn't currently have its own Amazon Drive credentials, you will either need to have your own client_id
and client_secret
with Amazon Drive, or use a third party oauth proxy in which case you will need to enter client_id
, client_secret
, auth_url
and token_url
.
Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.
It does store MD5SUMs so for a more accurate sync, you can use the --checksum
flag.
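For example, a checksum-based sync might look like this (the paths are illustrative):
rclone sync --checksum /home/local/dir remote:backup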
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the standard options specific to sharefile (Citrix Sharefile).
ID of the root folder
+ID of the root folder.
Leave blank to access "Personal Folders". You can use one of the standard values here or any folder ID (long hex number ID).
Here are the advanced options specific to sharefile (Citrix Sharefile).
Cutoff for switching to multipart upload.
@@ -12529,7 +12970,8 @@ y/e/d> yUpload chunk size. Must a power of 2 >= 256k.
+Upload chunk size.
+Must be a power of 2 >= 256k.
Making this larger will improve performance, but note that each chunk is buffered in memory (one per transfer).
Reducing this will reduce memory usage but decrease performance.
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
Note that ShareFile is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
+ShareFile only supports filenames up to 256 characters in length.
rclone about
is not supported by the Citrix ShareFile backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about. See rclone about.
The encryption is a secret-key encryption (also called symmetric key encryption) algorithm, where a password (or pass phrase) is used to generate the real encryption key. The password can be supplied by the user, or you may choose to let rclone generate one. It will be stored in the configuration file, in a lightly obscured form. If you are in an environment where you are not able to keep your configuration secured, you should add configuration encryption as protection. As long as you have this configuration file, you will be able to decrypt your data. Without the configuration file, as long as you remember the password (or keep it in a safe place), you can re-create the configuration and gain access to the existing data. You may also configure a corresponding remote in a different installation to access the same data. See below for guidance on changing the password.
Encryption uses a cryptographic salt to permute the encryption key so that the same string may be encrypted in different ways. When configuring the crypt remote it is optional to enter a salt, or to let rclone generate a unique salt. If omitted, rclone uses a built-in unique string. Normally in cryptography, the salt is stored together with the encrypted content, and does not have to be memorized by the user. This is not the case in rclone, because rclone does not store any additional information on the remotes. Use of a custom salt is effectively a second password that must be memorized.
File content encryption is performed using NaCl SecretBox, based on the XSalsa20 cipher and Poly1305 for integrity. Names (file and directory names) are also encrypted by default, but this has some implications and can therefore be turned off.
-Here is an example of how to make a remote called secret
.
To use crypt
, first set up the underlying remote. Follow the rclone config
instructions for the specific backend.
Before configuring the crypt remote, check the underlying remote is working. In this example the underlying remote is called remote
. We will configure a path path
within this remote to contain the encrypted content. Anything inside remote:path
will be encrypted and anything outside will not.
Crypt stores modification times using the underlying remote so support depends on that.
Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.
Use the rclone cryptcheck
command to check the integrity of a crypted remote instead of rclone check
which can't check the checksums properly.
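For example, to verify a local directory against the crypted remote configured below (the local path is illustrative):
rclone cryptcheck /home/local/files secret: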
Here are the standard options specific to crypt (Encrypt/Decrypt a remote).
Remote to encrypt/decrypt. Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).
+Remote to encrypt/decrypt.
+Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).
Password or pass phrase for salt. Optional but recommended. Should be different to the previous password.
+Password or pass phrase for salt.
+Optional but recommended. Should be different to the previous password.
NB Input to this must be obscured - see rclone obscure.
Here are the advanced options specific to crypt (Encrypt/Decrypt a remote).
Allow server-side operations (e.g. copy) to work across different crypt configs.
@@ -12865,21 +13315,21 @@ $ rclone -q ls secret: -Here are the commands specific to the crypt backend.
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
See the "rclone backend" command for more info on how to pass options and arguments.
These can be run on a running backend using the rc command backend/command.
-Encode the given filename(s)
rclone backend encode remote: [options] [<arguments>+]
This encodes the filenames given as arguments returning a list of strings of the encoded results.
Usage Example:
rclone backend encode crypt: file1 [file2...]
rclone rc backend/command command=encode fs=crypt: file1 [file2...]
-Decode the given filename(s)
rclone backend decode remote: [options] [<arguments>+]
This decodes the filenames given as arguments returning a list of strings of the decoded results. It will return an error if any of the inputs are invalid.
@@ -12950,14 +13400,15 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfileRclone uses scrypt
with parameters N=16384, r=8, p=1
with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.
scrypt
makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.
This remote is currently experimental. Things may break and data may be lost. Anything you do with this remote is at your own risk. Please understand the risks associated with using experimental code and don't use this remote in critical applications.
The Compress
remote adds compression to another remote. It is best used with remotes containing many large compressible files.
To use this remote, all you need to do is specify another remote and a compression mode to use:
Current remotes:
@@ -13007,12 +13458,12 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
Currently only gzip compression is supported, it provides a decent balance between speed and strength and is well supported by other application. Compression strength can further be configured via an advanced setting where 0 is no compression and 9 is strongest compression.
-If you open a remote wrapped by press, you will see that there are many files with an extension corresponding to the compression algorithm you chose. These files are standard files that can be opened by various archive programs, but they have some hidden metadata that allows them to be used by rclone. While you may download and decompress these files at will, do not manually delete or rename files. Files without correct metadata files will not be recognized by rclone.
+Currently only gzip compression is supported. It provides a decent balance between speed and size and is well supported by other applications. Compression strength can further be configured via an advanced setting where 0 is no compression and 9 is strongest compression.
+If you open a remote wrapped by compress, you will see that there are many files with an extension corresponding to the compression algorithm you chose. These files are standard files that can be opened by various archive programs, but they have some hidden metadata that allows them to be used by rclone. While you may download and decompress these files at will, do not manually delete or rename files. Files without correct metadata files will not be recognized by rclone.
The compressed files will be named *.###########.gz
where *
is the base file and the #
part is base64 encoded size of the uncompressed file. The file names should not be changed by anything other than the rclone compression backend.
Here are the standard options specific to compress (Compress a remote).
Remote to compress.
@@ -13037,17 +13488,12 @@ y/e/d> y -Here are the advanced options specific to compress (Compress a remote).
GZIP compression level (-2 to 9).
- Generally -1 (default, equivalent to 5) is recommended.
- Levels 1 to 9 increase compressiong at the cost of speed.. Going past 6
- generally offers very little return.
-
- Level -2 uses Huffmann encoding only. Only use if you now what you
- are doing
- Level 0 turns off compression.
+Generally -1 (default, equivalent to 5) is recommended. Levels 1 to 9 increase compression at the cost of speed. Going past 6 generally offers very little return.
+Level -2 uses Huffman encoding only. Only use if you know what you are doing. Level 0 turns off compression.
Some remotes don't allow the upload of files with unknown size. In this case the compressed file will need to be cached to determine its size.
- Files smaller than this limit will be cached in RAM, file larger than
- this limit will be cached on disk
+Files smaller than this limit will be cached in RAM, files larger than this limit will be cached on disk.
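+As a sketch, both settings can also be given as command line flags, following rclone's usual --compress-* flag naming (the remote name and values here are illustrative, not recommendations):
+rclone copy /data compressed: --compress-level 9 --compress-ram-cache-limit 20Mi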
Paths are specified as remote:path
Dropbox paths may be as deep as required, e.g. remote:directory/subdirectory
.
The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -13187,10 +13633,11 @@ y/e/d> y
This provides the maximum possible upload speed, especially with lots of small files; however, rclone can't check that the file got uploaded properly using this mode.
If you are using this mode then using "rclone check" after the transfer completes is recommended. Or you could do an initial transfer with --dropbox-batch-mode async
then do a final transfer with --dropbox-batch-mode sync
(the default).
Note that there may be a pause when quitting rclone while rclone finishes up the last batch using this mode.
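A sketch of that two-pass approach (remote and path names are illustrative):
rclone copy --dropbox-batch-mode async /home/local/dir remote:dir
rclone copy --dropbox-batch-mode sync /home/local/dir remote:dir
rclone check /home/local/dir remote:dir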
-Here are the standard options specific to dropbox (Dropbox).
OAuth Client Id Leave blank normally.
+OAuth Client Id.
+Leave blank normally.
OAuth Client Secret Leave blank normally.
+OAuth Client Secret.
+Leave blank normally.
Here are the advanced options specific to dropbox (Dropbox).
OAuth Access Token as a JSON blob.
@@ -13216,7 +13664,8 @@ y/e/d> yAuth server URL. Leave blank to use the provider defaults.
+Auth server URL.
+Leave blank to use the provider defaults.
Token server url. Leave blank to use the provider defaults.
+Token server url.
+Leave blank to use the provider defaults.
Upload chunk size. (< 150Mi).
+Upload chunk size (< 150Mi).
Any files larger than this will be uploaded in chunks of this size.
Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128 MiB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory.
Max time to allow an idle upload batch before uploading
+Max time to allow an idle upload batch before uploading.
If an upload batch is idle for more than this long then it will be uploaded.
The default for this is 0 which means rclone will choose a sensible default based on the batch_mode in use.
Default: 0s
Max time to wait for a batch to finish committing.
+This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are some file names such as thumbs.db
which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading
if it attempts to upload one of those file names, but the sync won't fail.
Some errors may occur if you try to sync copyright-protected files because Dropbox has its own copyright detector that prevents this sort of file being downloaded. This will return the error ERROR : /path/to/your/file: Failed to copy: failed to open source object: path/restricted_content/.
If you have more than 10,000 files in a directory then rclone purge dropbox:dir
will return the error Failed to purge: There are too many files involved in this operation
. As a work-around do an rclone delete dropbox:dir
followed by an rclone rmdir dropbox:dir
.
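That is:
rclone delete dropbox:dir
rclone rmdir dropbox:dir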
When using rclone link
you'll need to set --expire
if using a non-personal account otherwise the visibility may not be correct. (Note that --expire
isn't supported on personal accounts). See the forum discussion and the dropbox SDK issue.
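For example (the duration here is illustrative):
rclone link --expire 1d dropbox:path/to/file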
When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users.
Here is how to create your own Dropbox App ID for rclone:
This backend supports Storage Made Easy's Enterprise File Fabric™ which provides a software solution to integrate and unify File and Object Storage accessible through a global file system.
+The initial setup for the Enterprise File Fabric backend involves getting a token from the Enterprise File Fabric which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -13441,10 +13900,10 @@ y/e/d> y
120673757,My contacts/
120673761,S3 Storage/
The ID for "S3 Storage" would be 120673761
.
Here are the standard options specific to filefabric (Enterprise File Fabric).
URL of the Enterprise File Fabric to connect to
+URL of the Enterprise File Fabric to connect to.
ID of the root folder Leave blank normally.
+ID of the root folder.
+Leave blank normally.
Fill in to make rclone start with directory of a given ID.
Permanent Authentication Token
+Permanent Authentication Token.
A Permanent Authentication Token can be created in the Enterprise File Fabric, on the user's Dashboard under Security; there is an entry you'll see called "My Authentication Tokens". Click the Manage button to create one.
These tokens are normally valid for several years.
For more info see: https://docs.storagemadeeasy.com/organisationcloud/api-tokens
@@ -13486,10 +13946,10 @@ y/e/d> yHere are the advanced options specific to filefabric (Enterprise File Fabric).
Session Token
+Session Token.
This is a session token which rclone caches in the config file. It is usually valid for 1 hour.
Don't set this value - rclone will set it automatically.
Token expiry time
+Token expiry time.
Don't set this value - rclone will set it automatically.
Version read from the file fabric
+Version read from the file fabric.
Don't set this value - rclone will set it automatically.
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
FTP is the File Transfer Protocol. Rclone FTP support is provided using the github.com/jlaffaye/ftp package.
-Limitations of Rclone's FTP backend
+Limitations of Rclone's FTP backend
Paths are specified as remote:path
. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
refers to the user's home directory.
To create an FTP configuration named remote
, run
rclone config
Rclone config guides you through an interactive setup process. A minimal rclone FTP remote definition only requires host, username and password. For an anonymous FTP server, use anonymous
as username and your email address as password.
rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=`rclone obscure dummy`
Rclone FTP supports implicit FTP over TLS servers (FTPS). This has to be enabled in the FTP backend config for the remote, or with --ftp-tls
. The default FTPS port is 990
, not 21
and can be set with --ftp-port
.
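As a sketch, an implicit-TLS connection can be tested entirely from the command line, following the same pattern as the anonymous example above (the host and credentials are made up):
rclone lsf :ftp: --ftp-host=ftps.example.com --ftp-user=myuser --ftp-pass=`rclone obscure mypassword` --ftp-tls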
Here are the standard options specific to ftp (FTP Connection).
-FTP host to connect to
-FTP username, leave blank for current username, $USER
-FTP port, leave blank to use default (21)
-FTP password
-NB Input to this must be obscured - see rclone obscure.
-Use Implicit FTPS (FTP over TLS) When using implicit FTP over TLS the client connects using TLS right from the start which breaks compatibility with non-TLS-aware servers. This is usually served over port 990 rather than port 21. Cannot be used in combination with explicit FTP.
-Use Explicit FTPS (FTP over TLS) When using explicit FTP over TLS the client explicitly requests security from the server in order to upgrade a plain text connection to an encrypted one. Cannot be used in combination with implicit FTP.
-Here are the advanced options specific to ftp (FTP Connection).
-Maximum number of FTP simultaneous connections, 0 for unlimited
-Do not verify the TLS certificate of the server
-Disable using EPSV even if server advertises support
-Disable using MLSD even if server advertises support
-Max time before closing idle connections
-If no connections have been returned to the connection pool in the time given, rclone will empty the connection pool.
-Set to 0 to keep connections indefinitely.
-Maximum time to wait for a response to close.
-This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
-Modified times are not supported. Times you see on the FTP server through rclone are those of upload.
-Rclone's FTP backend does not support any checksums but can compare file sizes.
-rclone about
is not supported by the FTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
-The implementation of : --dump headers
, --dump bodies
, --dump auth
for debugging isn't the same as for rclone HTTP based backends - it has less fine grained control.
--timeout
isn't supported (but --contimeout
is).
--bind
isn't supported.
Rclone's FTP backend could support server-side move but does not at present.
-The ftp_proxy
environment variable is not currently supported.
FTP servers acting as rclone remotes must support 'passive' mode. Rclone's FTP implementation is not compatible with 'active' mode.
-In addition to the default restricted characters set the following characters are also replaced:
File names cannot end with the following characters. Replacement is limited to the last character in a file name:
This backend's interactive configuration wizard provides a selection of sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd. Just hit a selection number when prompted.
+Here are the standard options specific to ftp (FTP Connection).
+FTP host to connect to.
+E.g. "ftp.example.com".
+FTP username, leave blank for current username, $USER.
+FTP port, leave blank to use default (21).
+FTP password.
+NB Input to this must be obscured - see rclone obscure.
+Use Implicit FTPS (FTP over TLS).
+When using implicit FTP over TLS the client connects using TLS right from the start which breaks compatibility with non-TLS-aware servers. This is usually served over port 990 rather than port 21. Cannot be used in combination with explicit FTP.
+Use Explicit FTPS (FTP over TLS).
+When using explicit FTP over TLS the client explicitly requests security from the server in order to upgrade a plain text connection to an encrypted one. Cannot be used in combination with implicit FTP.
+Here are the advanced options specific to ftp (FTP Connection).
+Maximum number of FTP simultaneous connections, 0 for unlimited.
+Do not verify the TLS certificate of the server.
+Disable using EPSV even if server advertises support.
+Disable using MLSD even if server advertises support.
+Use MDTM to set modification time (VsFtpd quirk).
+Max time before closing idle connections.
+If no connections have been returned to the connection pool in the time given, rclone will empty the connection pool.
+Set to 0 to keep connections indefinitely.
+Maximum time to wait for a response to close.
+Size of TLS session cache for all control and data connections.
+The TLS cache allows rclone to resume TLS sessions and reuse PSK between connections. Increase if the default size is not enough, resulting in TLS resumption errors. Enabled by default. Use 0 to disable.
+Disable TLS 1.3 (workaround for FTP servers with buggy TLS).
+Maximum time to wait for data connection closing status.
+This sets the encoding for the backend.
+See the encoding section in the overview for more info.
+FTP servers acting as rclone remotes must support passive
mode. The mode cannot be configured as passive
is the only supported one. Rclone's FTP implementation is not compatible with active
mode as the library it uses doesn't support it. This will likely never be supported due to security concerns.
Rclone's FTP backend does not support any checksums but can compare file sizes.
+rclone about
is not supported by the FTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about. See rclone about.
+The implementation of --dump headers
, --dump bodies
, --dump auth
for debugging isn't the same as for rclone HTTP based backends - it has less fine grained control.
--timeout
isn't supported (but --contimeout
is).
--bind
isn't supported.
Rclone's FTP backend could support server-side move but does not at present.
+The ftp_proxy
environment variable is not currently supported.
File modification time (timestamps) is supported to 1 second resolution for major FTP servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP server. The VsFTPd
server has a non-standard implementation of time-related protocol commands and needs a special configuration setting: writing_mdtm = true
.
Support for precise file time with other FTP servers varies depending on what protocol extensions they advertise. If all the MLSD
, MDTM
and MFMT
extensions are present, rclone will use them together to provide precise time. Otherwise the times you see on the FTP server through rclone are those of the last file upload.
You can use the following command to check whether rclone can use precise time with your FTP server: rclone backend features your_ftp_remote:
(the trailing colon is important). Look for the number in the line tagged by Precision
designating the remote time precision expressed as nanoseconds. A value of 1000000000
means that file time precision of 1 second is available. A value of 3153600000000000000
(or another large number) means "unsupported".
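Since rclone backend features prints JSON, a quick way to find that line is to filter for it:
rclone backend features your_ftp_remote: | grep Precision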
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -13963,10 +14473,11 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
OAuth Client Id Leave blank normally.
+OAuth Client Id.
+Leave blank normally.
OAuth Client Secret Leave blank normally.
+OAuth Client Secret.
+Leave blank normally.
Project number. Optional - needed only for list/create/delete buckets - see your developer console.
+Project number.
+Optional - needed only for list/create/delete buckets - see your developer console.
Service Account Credentials JSON file path Leave blank normally. Needed only if you want use SA instead of interactive login.
+Service Account Credentials JSON file path.
+Leave blank normally. Needed only if you want to use SA instead of interactive login.
Leading ~
will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}
.
Service Account Credentials JSON blob Leave blank normally. Needed only if you want use SA instead of interactive login.
+Service Account Credentials JSON blob.
+Leave blank normally. Needed only if you want to use SA instead of interactive login.
Access public buckets and objects without credentials Set to 'true' if you just want to download files and don't configure credentials.
+Access public buckets and objects without credentials.
+Set to 'true' if you just want to download files and don't configure credentials.
Here are the advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
OAuth Access Token as a JSON blob.
@@ -14237,7 +14763,8 @@ y/e/d> yAuth server URL. Leave blank to use the provider defaults.
+Auth server URL.
+Leave blank to use the provider defaults.
Token server url. Leave blank to use the provider defaults.
+Token server url.
+Leave blank to use the provider defaults.
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
rclone about
is not supported by the Google Cloud Storage backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about. See rclone about.
Paths are specified as drive:path
Drive paths may be as deep as required, e.g. drive:directory/subdirectory
.
The initial setup for drive involves getting a token from Google drive which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -14409,7 +14938,7 @@ client_secret> # Can be left blank
scope> # Select your scope, 1 for example
root_folder_id> # Can be left blank
service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
-y/n> # Auto config, y
+y/n> # Auto config, n
gdrive:backup
- use the remote called gdrive, work in the folder named backup.Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using --drive-impersonate
, do this instead: - in the gdrive web interface, share your root folder with the user/email of the new Service Account you created/selected at step #1 - use rclone without specifying the --drive-impersonate
option, like this: rclone -v foo@example.com lsf gdrive:backup
Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using --drive-impersonate
, do this instead: - in the gdrive web interface, share your root folder with the user/email of the new Service Account you created/selected at step #1 - use rclone without specifying the --drive-impersonate
option, like this: rclone -v lsf gdrive:backup
If you want to configure the remote to point to a Google Shared Drive (previously known as Team Drives) then answer y
to the question Configure this as a Shared Drive (Team Drive)?
.
This will fetch the list of Shared Drives from google and allow you to configure which one you want to use. You can also type in a Shared Drive ID if you prefer.
@@ -14473,9 +15002,9 @@ trashed=false and 'c' in parents--fast-list
: 22:05 min--fast-list
: 58sGoogle drive stores modification times accurate to 1 ms.
-Only Invalid UTF-8 bytes will be replaced, as they can't be used in JSON strings.
In contrast to other backends, /
can also be used in names and .
or ..
are valid names.
Here are the standard options specific to drive (Google Drive).
Google Application Client Id Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance.
@@ -14746,7 +15275,8 @@ trashed=false and 'c' in parentsOAuth Client Secret Leave blank normally.
+OAuth Client Secret.
+Leave blank normally.
ID of the root folder Leave blank normally.
+ID of the root folder. Leave blank normally.
Fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point.
Service Account Credentials JSON file path Leave blank normally. Needed only if you want use SA instead of interactive login.
+Service Account Credentials JSON file path.
+Leave blank normally. Needed only if you want to use SA instead of interactive login.
Leading ~
will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}
.
Deprecated: no longer needed
+Deprecated: No longer needed.
Here are the advanced options specific to drive (Google Drive).
OAuth Access Token as a JSON blob.
@@ -14825,7 +15356,8 @@ trashed=false and 'c' in parentsAuth server URL. Leave blank to use the provider defaults.
+Auth server URL.
+Leave blank to use the provider defaults.
Token server url. Leave blank to use the provider defaults.
+Token server url.
+Leave blank to use the provider defaults.
Service Account Credentials JSON blob Leave blank normally. Needed only if you want use SA instead of interactive login.
+Service Account Credentials JSON blob.
+Leave blank normally. Needed only if you want to use SA instead of interactive login.
ID of the Shared Drive (Team Drive)
+ID of the Shared Drive (Team Drive).
Send files to the trash instead of deleting permanently. Defaults to true, namely sending files to the trash. Use --drive-use-trash=false
to delete files permanently instead.
Send files to the trash instead of deleting permanently.
+Defaults to true, namely sending files to the trash. Use --drive-use-trash=false
to delete files permanently instead.
Skip google documents in all listings. If given, gdocs practically become invisible to rclone.
+Skip google documents in all listings.
+If given, gdocs practically become invisible to rclone.
Only show files that are in the trash. This will show trashed files in their original directory structure.
+Only show files that are in the trash.
+This will show trashed files in their original directory structure.
Deprecated: see export_formats
+Deprecated: See export_formats.
Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+Allow the filetype to change when uploading Google docs.
+E.g. file.doc to file.docx. This will confuse sync and reupload every time.
Use file created date instead of modified date.,
+Use file created date instead of modified date.
Useful when downloading data and you want the creation date used in place of the last modified date.
WARNING: This flag may have some unexpected consequences.
When uploading to your drive, all files will be overwritten unless they haven't been modified since their creation, and the inverse will occur while downloading. This side effect can be avoided by using the "--checksum" flag.
@@ -14973,7 +15511,7 @@ trashed=false and 'c' in parentsSize of listing chunk 100-1000. 0 to disable.
+Size of listing chunk 100-1000, 0 to disable.
Cutoff for switching to chunked upload
+Cutoff for switching to chunked upload.
Upload chunk size. Must a power of 2 >= 256k.
+Upload chunk size.
+Must be a power of 2 >= 256k.
Making this larger will improve performance, but note that each chunk is buffered in memory (one per transfer).
Reducing this will reduce memory usage but decrease performance.
Disable drive using http2
+Disable drive using http2.
There is currently an unsolved issue with the google drive backend and HTTP/2. HTTP/2 is therefore disabled by default for the drive backend but can be re-enabled here. When the issue is solved this flag will be removed.
See: https://github.com/rclone/rclone/issues/3631
Make upload limit errors be fatal
+Make upload limit errors be fatal.
At the time of writing it is only possible to upload 750 GiB of data to Google Drive a day (this is an undocumented limit). When this limit is reached Google Drive produces a slightly different error message. When this flag is set it causes these errors to be fatal. These will stop the in-progress sync.
Note that this detection is relying on error message strings which Google don't document so it may break in the future.
See: https://github.com/rclone/rclone/issues/3857
@@ -15090,7 +15629,7 @@ trashed=false and 'c' in parentsMake download limit errors be fatal
+Make download limit errors be fatal.
At the time of writing it is only possible to download 10 TiB of data from Google Drive a day (this is an undocumented limit). When this limit is reached Google Drive produces a slightly different error message. When this flag is set it causes these errors to be fatal. These will stop the in-progress sync.
Note that this detection is relying on error message strings which Google don't document so it may break in the future.
If set skip shortcut files
+If set skip shortcut files.
Normally rclone dereferences shortcut files making them appear as if they are the original file (see the shortcuts section). If this flag is set then rclone will ignore shortcut files completely.
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
Here are the commands specific to the drive backend.
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
See the "rclone backend" command for more info on how to pass options and arguments.
These can be run on a running backend using the rc command backend/command.
-Get command for fetching the drive config parameters
rclone backend get remote: [options] [<arguments>+]
This is a get command which will be used to fetch the various drive config parameters
@@ -15136,7 +15675,7 @@ rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chSet command for updating the drive config parameters
rclone backend set remote: [options] [<arguments>+]
This is a set command which will be used to update the various drive config parameters
@@ -15148,7 +15687,7 @@ rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.jsonCreate shortcuts from files or directories
rclone backend shortcut remote: [options] [<arguments>+]
This command creates shortcuts from files or directories.
@@ -15161,12 +15700,12 @@ rclone backend shortcut drive: source_item -o target=drive2: destination_shortcuList the Shared Drives available to this account
rclone backend drives remote: [options] [<arguments>+]
This command lists the Shared Drives (Team Drives) available to this account.
Usage:
-rclone backend drives drive:
+rclone backend [-o config] drives drive:
This will return a JSON list of objects like this
[
{
@@ -15180,7 +15719,16 @@ rclone backend shortcut drive: source_item -o target=drive2: destination_shortcu
"name": "Test Drive"
}
]
-With the -o config parameter it will output the list in a format suitable for adding to a config file to make aliases for all the drives found.
+[My Drive]
+type = alias
+remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
+
+[Test Drive]
+type = alias
+remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
+Adding this to the rclone config file will cause those team drives to be accessible with the aliases shown. This may require manual editing of the names.
+Untrash files and directories
rclone backend untrash remote: [options] [<arguments>+]
This command untrashes all the files and directories in the directory passed in recursively.
@@ -15194,7 +15742,7 @@ rclone backend -i untrash drive:directory subdir "Untrashed": 17, "Errors": 0 } -Copy files by ID
rclone backend copyid remote: [options] [<arguments>+]
This command copies files by ID
@@ -15205,10 +15753,10 @@ rclone backend copyid drive: ID1 path1 ID2 path2The path should end with a / to indicate copy the file as named to this directory. If it doesn't end with a / then the last path component will be used as the file name.
If the destination is a drive backend then server-side copying will be attempted if possible.
Use the -i flag to see what would be copied before copying.
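For example (the ID and paths are made up):
rclone backend copyid drive: 1abcDEF234 backup/
rclone backend copyid drive: 1abcDEF234 backup/renamed.txt
The first form copies the file under its own name into backup/; the second copies it to the given file name.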
-Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MiByte/s but lots of small files can take a long time.
+Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MiB/s but lots of small files can take a long time.
Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server-side copies with --disable copy
to download and upload the files if you prefer.
Google docs will appear as size -1 in rclone ls
and as size 0 in anything which uses the VFS layer, e.g. rclone mount
, rclone serve
.
This is because rclone can't find out the size of the Google docs without downloading them.
Google docs will transfer correctly with rclone sync
, rclone copy
etc as rclone knows to ignore the size when doing the transfer.
The most likely cause of this is the duplicated file issue above - run rclone dedupe
and check your logs for duplicate object or directory messages.
This can also be caused by a delay/caching on google drive's end when comparing directory listings. Specifically with team drives used in combination with --fast-list. Files that were uploaded recently may not appear on the directory list sent to rclone when using --fast-list.
Waiting a moderate period of time between attempts (estimated to be approximately 1 hour) and/or not using --fast-list both seem to be effective in preventing the problem.
-When you use rclone with Google drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.
It is strongly recommended to use your own client ID as the default rclone ID is heavily used. If you have multiple services running, it is recommended to use an API key for each service. The default Google quota is 10 transactions per second, so it is recommended to stay under that number; exceeding it will cause rclone to be rate limited, making things slower.
Here is how to create your own Google Drive client ID for rclone:
@@ -15236,10 +15784,11 @@ rclone backend copyid drive: ID1 path1 ID2 path2(PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this has not been tested/documented so far).
Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".
Choose an application type of "Desktop app" if you using a Google account or "Other" if you using a GSuite account and click "Create". (the default name is fine)
Choose an application type of "Desktop app" and click "Create". (the default name is fine)
It will show you a client ID and client secret. Make a note of these.
Go to "Oauth consent screen" and press "Publish App"
Provide the noted client ID and client secret to rclone.
Click "OAuth consent screen", then click "PUBLISH APP" button and confirm, or add your account under "Test users".
Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for verification" and then wait a few weeks(!) for their response; in practice, you can go right ahead and use the client ID and client secret with rclone, the only issue will be a very scary confirmation screen shown when you connect via your browser for rclone to be able to get its token-id (but as this only happens during the remote configuration, it's not such a big deal).
(Thanks to @balazer on github for these instructions.)
@@ -15247,7 +15796,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos.
NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use.
-The initial setup for google cloud storage involves getting a token from Google Photos which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -15321,7 +15870,7 @@ y/e/d> y
rclone ls remote:album/newAlbum
Sync /home/local/images
to the Google Photos, removing any excess files in the album.
rclone sync -i /home/local/image remote:album/newAlbum
-As Google Photos is not a general purpose cloud storage system the backend is laid out to help you navigate it.
The directories under media
show different ways of categorizing the media. Each file will appear multiple times. So if you want to make a backup of your google photos you might choose to backup remote:media/by-month
. (NB remote:media/by-day
is rather slow at the moment so avoid for syncing.)
Note that all your photos and videos will appear somewhere under media
, but they may not appear under album
unless you've put them into albums.
This means that you can use the album
path pretty much like a normal filesystem and it is a good target for repeated syncing.
The shared-album
directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.
Only images and videos can be uploaded. If you attempt to upload non videos or images or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is put turned into a media item.
-Note that all media items uploaded to Google Photos through the API are stored in full resolution at "original quality" and will count towards your storage quota in your Google Account. The API does not offer a way to upload in "high quality" mode..
-rclone about
is not supported by the Google Photos backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
-When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.
-The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort
-When videos are downloaded they are downloaded in a really compressed version of the video compared to downloading it via the Google Photos web interface. This is covered by bug #113672044.
-If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called file.jpg
would then appear as file {123456}.jpg
and file {ABCDEF}.jpg
(the actual IDs are a lot longer alas!).
If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload
then uploaded the same image to album/my_album
the filename of the image in album/my_album
will be what it was uploaded with initially, not what you uploaded it with to album
. In practise this shouldn't cause too many problems.
The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known.
-This is not changeable by rclone and is not the modification date of the media on local disk. This means that rclone cannot use the dates from Google Photos for syncing purposes.
-The Google Photos API does not return the size of media. This means that when syncing to Google Photos, rclone can only do a file existence check.
-It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is very slow and uses up a lot of transactions. This can be enabled with the --gphotos-read-size
option or the read_size = true
config parameter.
If you want to use the backend with rclone mount
you may need to enable this flag (depending on your OS and application using the photos) otherwise you may not be able to read media off the mount. You'll need to experiment to see if it works for you without the flag.
Rclone can only upload files to albums it created. This is a limitation of the Google Photos API.
-Rclone can remove files it uploaded from albums it created only.
-Rclone can remove files from albums it created, but note that the Google Photos API does not allow media to be deleted permanently so this media will still remain. See bug #109759781.
-Rclone cannot delete files anywhere except under album
.
The Google Photos API does not support deleting albums - see bug #135714733.
-Here are the standard options specific to google photos (Google Photos).
OAuth Client Id Leave blank normally.
+OAuth Client Id.
+Leave blank normally.
OAuth Client Secret Leave blank normally.
+OAuth Client Secret.
+Leave blank normally.
Here are the advanced options specific to google photos (Google Photos).
OAuth Access Token as a JSON blob.
@@ -15466,7 +15989,8 @@ y/e/d> yAuth server URL. Leave blank to use the provider defaults.
+Auth server URL.
+Leave blank to use the provider defaults.
Token server url. Leave blank to use the provider defaults.
+Token server url.
+Leave blank to use the provider defaults.
Year limits the photos to be downloaded to those which are uploaded after the given year
+Year limits the photos to be downloaded to those which are uploaded after the given year.
This sets the encoding for the backend.
+See the encoding section in the overview for more info.
+Only images and videos can be uploaded. If you attempt to upload non videos or images or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is turned into a media item.
+Note that all media items uploaded to Google Photos through the API are stored in full resolution at "original quality" and will count towards your storage quota in your Google Account. The API does not offer a way to upload in "high quality" mode.
+rclone about
is not supported by the Google Photos backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about. See rclone about.
+When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.
+The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use Google Takeout to recover the original photos as a last resort.
+When videos are downloaded they are downloaded in a really compressed version of the video compared to downloading it via the Google Photos web interface. This is covered by bug #113672044.
+If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called file.jpg
would then appear as file {123456}.jpg
and file {ABCDEF}.jpg
(the actual IDs are a lot longer alas!).
If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload
then uploaded the same image to album/my_album
the filename of the image in album/my_album
will be what it was uploaded with initially, not what you uploaded it with to album
. In practice this shouldn't cause too many problems.
The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known.
+This is not changeable by rclone and is not the modification date of the media on local disk. This means that rclone cannot use the dates from Google Photos for syncing purposes.
+The Google Photos API does not return the size of media. This means that when syncing to Google Photos, rclone can only do a file existence check.
+It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is very slow and uses up a lot of transactions. This can be enabled with the --gphotos-read-size
option or the read_size = true
config parameter.
If you want to use the backend with rclone mount
you may need to enable this flag (depending on your OS and application using the photos) otherwise you may not be able to read media off the mount. You'll need to experiment to see if it works for you without the flag.
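+A sketch of a mount with the flag enabled (the mount point is illustrative):
+rclone mount remote: ~/gphotos --gphotos-read-size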
Rclone can only upload files to albums it created. This is a limitation of the Google Photos API.
+Rclone can only remove files it uploaded from albums it created.
+Rclone can remove files from albums it created, but note that the Google Photos API does not allow media to be deleted permanently so this media will still remain. See bug #109759781.
+Rclone cannot delete files anywhere except under album
.
The Google Photos API does not support deleting albums - see bug #135714733.
+Hasher is a special overlay backend to create remotes which handle checksums for other remotes. Its main functions include:
- Emulate hash types unimplemented by backends
- Cache checksums to help with slow hashing of large local or (S)FTP files
- Warm up checksum cache from external SUM files
+To use Hasher, first set up the underlying remote following the configuration instructions for that remote. You can also use a local pathname instead of a remote. Check that your base remote is working.
+Let's call the base remote myRemote:path
here. Note that anything inside myRemote:path
will be handled by hasher and anything outside won't. This means that if you are using a bucket based remote (S3, B2, Swift) then you should put the bucket in the remote s3:bucket
.
Now proceed to interactive or manual configuration.
+Run rclone config
:
No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> Hasher1
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Handle checksums for other remotes
+ \ "hasher"
+[snip]
+Storage> hasher
+Remote to cache checksums for, like myremote:mypath.
+Enter a string value. Press Enter for the default ("").
+remote> myRemote:path
+Comma separated list of supported checksum types.
+Enter a string value. Press Enter for the default ("md5,sha1").
+hashsums> md5
+Maximum time to keep checksums in cache. 0 = no cache, off = cache forever.
+max_age> off
+Edit advanced config? (y/n)
+y) Yes
+n) No
+y/n> n
+Remote config
+--------------------
+[Hasher1]
+type = hasher
+remote = myRemote:path
+hashsums = md5
+max_age = off
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Run rclone config path
to see the path of current active config file, usually YOURHOME/.config/rclone/rclone.conf
. Open it in your favorite text editor, find section for the base remote and create new section for hasher like in the following examples:
[Hasher1]
+type = hasher
+remote = myRemote:path
+hashes = md5
+max_age = off
+
+[Hasher2]
+type = hasher
+remote = /local/path
+hashes = dropbox,sha1
+max_age = 24h
+Hasher takes basically the following parameters:
- remote is required,
- hashes is a comma separated list of supported checksums (by default md5,sha1),
- max_age - maximum time to keep a checksum value in the cache, 0 will disable caching completely, off will cache "forever" (that is until the files get changed).
Make sure the remote
has :
(colon) in. If you specify the remote without a colon then rclone will use a local directory of that name. So if you use a remote of /local/path
then rclone will handle hashes for that directory. If you use remote = name
literally then rclone will put files in a directory called name
located under the current directory.
Now you can use it as Hasher2:subdir/file
instead of base remote. Hasher will transparently update cache with new checksums when a file is fully read or overwritten, like:
rclone copy External:path/file Hasher:dest/path
+
+rclone cat Hasher:path/to/file > /dev/null
+The way to refresh all cached checksums (even those unsupported by the base backend) for a subtree is to re-download all files in the subtree. For example, use hashsum --download
using any supported hashsum on the command line (we just care to re-read):
rclone hashsum MD5 --download Hasher:path/to/subtree > /dev/null
+
+rclone backend dump Hasher:path/to/subtree
+You can print or drop hashsum cache using custom backend commands:
+rclone backend dump Hasher:dir/subdir
+
+rclone backend drop Hasher:
+Hasher supports two backend commands: generic SUM file import
and faster but less consistent stickyimport
.
rclone backend import Hasher:dir/subdir SHA1 /path/to/SHA1SUM [--checkers 4]
+Instead of SHA1 it can be any hash supported by the remote. The last argument can point to either a local or an other-remote:path
text file in SUM format. The command will parse the SUM file, then walk down the path given by the first argument, snapshot current fingerprints and fill in the cache entries correspondingly.
- Paths in the SUM file are treated as relative to hasher:dir/subdir.
- The command will not check that supplied values are correct. You must know what you are doing.
- This is a one-time action. The SUM file will not get "attached" to the remote. Cache entries can still be overwritten later, should the object's fingerprint change.
- The tree walk can take a long time depending on the tree size. You can increase --checkers to make it faster. Or use stickyimport if you don't care about fingerprints and consistency.
rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1
+stickyimport
is similar to import
but works much faster because it does not need to stat existing files and skips initial tree walk. Instead of binding cache entries to file fingerprints it creates sticky entries bound to the file name alone ignoring size, modification time etc. Such hash entries can be replaced only by purge
, delete
, backend drop
or by full re-read/re-write of the files.
Here are the standard options specific to hasher (Better checksums for other remotes).
+Remote to cache checksums for (e.g. myRemote:path).
+Comma separated list of supported checksum types.
+Maximum time to keep checksums in cache (0 = no cache, off = cache forever).
+Here are the advanced options specific to hasher (Better checksums for other remotes).
+Auto-update checksum for files smaller than this size (disabled by default).
+Here are the commands specific to the hasher backend.
+Run them with
+rclone backend COMMAND remote:
+The help below will explain what arguments each command takes.
+See the "rclone backend" command for more info on how to pass options and arguments.
+These can be run on a running backend using the rc command backend/command.
+Drop cache
+rclone backend drop remote: [options] [<arguments>+]
+Completely drop checksum cache. Usage Example: rclone backend drop hasher:
+Dump the database
+rclone backend dump remote: [options] [<arguments>+]
+Dump cache records covered by the current remote
+Full dump of the database
+rclone backend fulldump remote: [options] [<arguments>+]
+Dump all cache records in the database
+Import a SUM file
+rclone backend import remote: [options] [<arguments>+]
+Amend hash cache from a SUM file and bind checksums to files by size/time. Usage Example: rclone backend import hasher:subdir md5 /path/to/sum.md5
+Perform fast import of a SUM file
+rclone backend stickyimport remote: [options] [<arguments>+]
+Fill hash cache from a SUM file without verifying file fingerprints. Usage Example: rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5
+This section explains how various rclone operations work on a hasher remote.
+Disclaimer. This section describes the current implementation, which can change in future rclone versions!
+The rclone hashsum
(or md5sum
or sha1sum
) command will:
auto_size
then download object and calculate requested hashes on the fly.fingerprint
(including size, modtime if supported, first-found other hash if any).move
will update keys of existing cache entriesdeletefile
will remove a single cache entrypurge
will remove all cache entries under the purged pathNote that setting max_age = 0
will disable checksum caching completely.
If you set max_age = off
, checksums in cache will never age, unless you fully rewrite or delete the file.
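As an illustration, a hypothetical config section that disables the cache entirely might look like this (the names are placeholders):
[HasherNoCache]
type = hasher
remote = myRemote:path
max_age = 0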
Cached checksums are stored as bolt
database files under rclone cache directory, usually ~/.cache/rclone/kv/
. Databases are maintained one per base backend, named like BaseRemote~hasher.bolt
. Checksums for multiple alias
-es into a single base backend will be stored in a single database. All local paths are treated as aliases into the local
backend (unless crypted or chunked) and stored in ~/.cache/rclone/kv/local~hasher.bolt
. Databases can be shared between multiple rclone processes.
HDFS is a distributed file-system, part of the Apache Hadoop framework.
Paths are specified as remote:
or remote:path/to/dir
.
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -15595,7 +16326,7 @@
type = hdfs
namenode = 127.0.0.1:8020
username = root
You can stop this image with docker kill rclone-hdfs
(NB it does not use volumes, so all data uploaded will be lost.)
Time accurate to 1 second is stored.
No checksums are implemented.
@@ -15620,30 +16351,19 @@
username = root
Invalid UTF-8 bytes will also be replaced.
-Move
or DirMove
.Here are the standard options specific to hdfs (Hadoop distributed file system).
hadoop name node and port
+Hadoop name node and port.
+E.g. "namenode:8020" to connect to host namenode at port 8020.
hadoop user name
+Hadoop user name.
Here are the advanced options specific to hdfs (Hadoop distributed file system).
Kerberos service principal name for the namenode
-Enables KERBEROS authentication. Specifies the Service Principal Name (SERVICE/FQDN) for the namenode.
+Kerberos service principal name for the namenode.
+Enables KERBEROS authentication. Specifies the Service Principal Name (SERVICE/FQDN) for the namenode. E.g. "hdfs/namenode.hadoop.docker" for namenode running as service 'hdfs' with FQDN 'namenode.hadoop.docker'.
Kerberos data transfer protection: authentication|integrity|privacy
+Kerberos data transfer protection: authentication|integrity|privacy.
Specifies whether or not authentication, data signature integrity checks, and wire encryption are required when communicating with the datanodes. Possible values are 'authentication', 'integrity' and 'privacy'. Used only with KERBEROS enabled.
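As a rough sketch, a Kerberos-enabled remote might combine these options in the config file like this (the hostnames and the remote name are placeholders, not from the original docs):
[hadoop]
type = hdfs
namenode = namenode.hadoop.docker:8020
username = root
service_principal_name = hdfs/namenode.hadoop.docker
data_transfer_protection = privacy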
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
Move
or DirMove
.The HTTP remote is a read only remote for reading files of a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn't then please file an issue, or send a pull request!)
Paths are specified as remote:
or remote:path/to/dir
.
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -15756,39 +16475,29 @@ e/n/d/r/c/s/q> q
rclone sync -i remote:directory /home/local/directory
This remote is read only - you can't upload files to an HTTP server.
-Most HTTP servers store time accurate to 1 second.
No checksums are stored.
Since the http remote only has one config parameter it is easy to use without a config file:
rclone lsd --http-url https://beta.rclone.org :http:
-Here are the standard options specific to http (http Connection).
URL of http host to connect to
+URL of http host to connect to.
+E.g. "https://example.com", or "https://user:pass@example.com" to use a username and password.
Here are the advanced options specific to http (http Connection).
Set HTTP headers for all transactions
-Use this to set additional HTTP headers for all transactions
+Set HTTP headers for all transactions.
+Use this to set additional HTTP headers for all transactions.
The input format is comma separated list of key,value pairs. Standard CSV encoding may be used.
For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.
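For instance, a one-off example combining this option with the no-config syntax shown above (the URL and header values are illustrative):
rclone lsd --http-url https://example.com --http-headers "Cookie,name=value" :http: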
@@ -15799,7 +16508,7 @@ e/n/d/r/c/s/q> q
Set this if the site doesn't end directories with /
+Set this if the site doesn't end directories with /.
Use this if your target website does not use / on the end of directories.
A / on the end of a path is how rclone normally tells the difference between files and directories. If this flag is set, then rclone will treat all files with Content-Type: text/html as directories and read URLs from them rather than downloading them.
Note that this may cause rclone to confuse genuine HTML files with directories.
@@ -15810,7 +16519,7 @@ e/n/d/r/c/s/q> q
Don't use HEAD requests to find file sizes in dir listing
+Don't use HEAD requests to find file sizes in dir listing.
If your site is being very slow to load then you can try this option. Normally rclone does a HEAD request for each potential file in a directory listing to:
Default: false
rclone about
is not supported by the HTTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about. See rclone about.
Paths are specified as remote:path
Paths are specified as remote:container
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:container/path/to/dir
.
The initial setup for Hubic involves getting a token from Hubic which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -15886,14 +16596,15 @@ y/e/d> y
rclone copy /home/source remote:default/backup
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
The modified time is stored as metadata on the object as X-Object-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
Note that Hubic wraps the Swift backend, so most of the properties of are the same.
-Here are the standard options specific to hubic (Hubic).
OAuth Client Id Leave blank normally.
+OAuth Client Id.
+Leave blank normally.
OAuth Client Secret Leave blank normally.
+OAuth Client Secret.
+Leave blank normally.
Here are the advanced options specific to hubic (Hubic).
OAuth Access Token as a JSON blob.
@@ -15919,7 +16631,8 @@ y/e/d> y
Auth server URL. Leave blank to use the provider defaults.
+Auth server URL.
+Leave blank to use the provider defaults.
Token server url. Leave blank to use the provider defaults.
+Token server url.
+Leave blank to use the provider defaults.
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.
The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway.
-In addition to the official service at jottacloud.com, there are also several whitelabel versions which should work with this backend.
+Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway. In addition to the official service at jottacloud.com, it also provides white-label solutions to different companies, such as:
* Telia
  * Telia Cloud (cloud.telia.se)
  * Telia Sky (sky.telia.no)
* Tele2
  * Tele2 Cloud (mittcloud.tele2.se)
* Elkjøp (with subsidiaries):
  * Elkjøp Cloud (cloud.elkjop.no)
  * Elgiganten Sweden (cloud.elgiganten.se)
  * Elgiganten Denmark (cloud.elgiganten.dk)
  * Giganti Cloud (cloud.gigantti.fi)
  * ELKO Cloud (cloud.elko.is)
+Most of the white-label versions are supported by this backend, although they may require a different authentication setup, described below.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
Some of the whitelabel versions uses a different authentication method than the official service, and you have to choose the correct one when setting up the remote.
+To configure Jottacloud you will need to generate a personal security token in the Jottacloud web interface. You will find the option to do this in your account security settings (for a white-label version you need to find this page in its web interface). Note that the web interface may refer to this token as a JottaCli token.
-If you are using one of the whitelabel versions (Elgiganten, Com Hem Cloud) you may not have the option to generate a CLI token. In this case you'll have to use the legacy authentication. To to this select yes when the setup asks for legacy authentication and enter your username and password. The rest of the setup is identical to the default setup.
+If you are using one of the whitelabel versions (e.g. from Elkjøp or Tele2) you may not have the option to generate a CLI token. In this case you'll have to use the legacy authentication. To do this select yes when the setup asks for legacy authentication and enter your username and password. The rest of the setup is identical to the default setup.
+Similar to other whitelabel versions, Telia Cloud doesn't offer the option of creating a CLI token, and additionally uses a separate authentication flow where the username is generated internally. To set up rclone to use Telia Cloud, choose Telia Cloud authentication in the setup. The rest of the setup is identical to the default setup.
-Here is an example of how to make a remote called remote
with the default setup. First run:
rclone config
This will guide you through an interactive setup process:
@@ -16058,7 +16773,7 @@ y/e/d> y
Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
Jottacloud supports MD5 type hashes, so you can use the --checksum
flag.
Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (wherever the TMPDIR
environment variable points to) before it is uploaded. Small files will be cached in memory - see the --jottacloud-md5-memory-limit flag. When uploading from local disk the source checksum is always available, so this does not apply. Starting with rclone version 1.52 the same is true for crypted remotes (in older versions the crypt backend would not calculate hashes for uploads from local disk, so the Jottacloud backend had to do it as described above).
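For example, if the default temporary directory is too small for a large upload from a non-local source, you could point TMPDIR elsewhere for the duration of the transfer (the paths are illustrative):
TMPDIR=/mnt/scratch rclone copy sftp:src remote:backup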
In addition to the default restricted characters set the following characters are also replaced:
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
-Here are the standard options specific to koofr (Koofr).
Your Koofr user name
+Your Koofr user name.
Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
+Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).
NB Input to this must be obscured - see rclone obscure.
Here are the advanced options specific to koofr (Koofr).
The Koofr API endpoint to use
+The Koofr API endpoint to use.
Mount ID of the mount to use. If omitted, the primary mount is used.
+Mount ID of the mount to use.
+If omitted, the primary mount is used.
Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.
+Does the backend support setting modification time.
+Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
Note that Koofr is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Mail.ru Cloud is a cloud storage service provided by the Russian internet company Mail.Ru Group. The official desktop client is Disk-O:, available on Windows and Mac OS.
Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclone until support for it is eventually implemented.
-remote:directory/subdirectory
last modified time
property, directories don'tHere is an example of making a mailru configuration. First create a Mail.ru Cloud account and choose a tariff, then run
rclone config
This will guide you through an interactive setup process:
@@ -16384,7 +17103,7 @@ y/e/d> y
rclone ls remote:directory
Sync /home/local/directory
to the remote path, deleting any excess files in the path.
rclone sync -i /home/local/directory remote:directory
-Files support a modification time attribute with up to 1 second precision. Directories do not have a modification time, which is shown as "Jan 1 1970".
Hash sums use a custom Mail.ru algorithm based on SHA1. If file size is less than or equal to the SHA1 block size (20 bytes), its hash is simply its data right-padded with zero bytes. Hash sum of a larger file is computed as a SHA1 sum of the file data bytes concatenated with a decimal representation of the data length.
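As an illustration of the scheme just described, the hash of a file larger than 20 bytes can be reproduced with standard tools; a sketch assuming GNU coreutils and a placeholder filename:
( cat bigfile; printf '%s' "$(stat -c%s bigfile)" ) | sha1sum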
@@ -16392,7 +17111,7 @@ y/e/d> y
Removing a file or directory actually moves it to the trash, which is not visible to rclone but can be seen in a web browser. The trashed file still occupies part of total quota. If you wish to empty your trash and free some quota, you can use the rclone cleanup remote:
command, which will permanently delete all your trashed files. This command does not take any path arguments.
To view your current quota you can use the rclone about remote:
command which will display your usage limit (quota) and the current usage.
In addition to the default restricted characters set the following characters are also replaced:
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-File size limits depend on your account. A single file size is limited by 2G for a free account and unlimited for paid tariffs. Please refer to the Mail.ru site for the total uploaded size limits.
-Note that Mailru is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
-Here are the standard options specific to mailru (Mail.ru Cloud).
User name (usually email)
+User name (usually email).
Password
+Password.
NB Input to this must be obscured - see rclone obscure.
Skip full upload if there is another file with same data hash. This feature is called "speedup" or "put by hash". It is especially efficient in case of generally available files like popular books, video or audio clips, because files are searched by hash in all accounts of all mailru users. It is meaningless and ineffective if source file is unique or encrypted. Please note that rclone may need local memory and disk space to calculate content hash in advance and decide whether full upload is required. Also, if rclone does not know file size in advance (e.g. in case of streaming or partial uploads), it will not even try this optimization.
+Skip full upload if there is another file with same data hash.
+This feature is called "speedup" or "put by hash". It is especially efficient in case of generally available files like popular books, video or audio clips, because files are searched by hash in all accounts of all mailru users. It is meaningless and ineffective if source file is unique or encrypted. Please note that rclone may need local memory and disk space to calculate content hash in advance and decide whether full upload is required. Also, if rclone does not know file size in advance (e.g. in case of streaming or partial uploads), it will not even try this optimization.
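For example, to skip this optimization for a one-off transfer of unique files (the paths are illustrative):
rclone copy --mailru-speedup-enable=false /backup/unique-photos remote:backup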
Here are the advanced options specific to mailru (Mail.ru Cloud).
Comma separated list of file name patterns eligible for speedup (put by hash). Patterns are case insensitive and can contain '*' or '?' meta characters.
+Comma separated list of file name patterns eligible for speedup (put by hash).
+Patterns are case insensitive and can contain '*' or '?' meta characters.
This option allows you to disable speedup (put by hash) for large files (because preliminary hashing can exhaust you RAM or disk space)
+This option allows you to disable speedup (put by hash) for large files.
+The reason is that preliminary hashing can exhaust your RAM or disk space.
What should copy do if file checksum is mismatched or invalid
+What should copy do if file checksum is mismatched or invalid.
HTTP user agent used internally by client. Defaults to "rclone/VERSION" or "--user-agent" provided on command line.
+HTTP user agent used internally by client.
+Defaults to "rclone/VERSION" or "--user-agent" provided on command line.
Comma separated list of internal maintenance flags. This option must not be used by an ordinary user. It is intended only to facilitate remote troubleshooting of backend issues. Strict meaning of flags is not documented and not guaranteed to persist between releases. Quirks will be removed when the backend grows stable. Supported quirks: atomicmkdir binlist unknowndirs
+Comma separated list of internal maintenance flags.
+This option must not be used by an ordinary user. It is intended only to facilitate remote troubleshooting of backend issues. Strict meaning of flags is not documented and not guaranteed to persist between releases. Quirks will be removed when the backend grows stable. Supported quirks: atomicmkdir binlist unknowndirs
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
File size limits depend on your account. A single file size is limited by 2G for a free account and unlimited for paid tariffs. Please refer to the Mail.ru site for the total uploaded size limits.
+Note that Mailru is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Mega is a cloud storage and file hosting service known for its security feature where all files are encrypted locally before they are uploaded. This prevents anyone (including employees of Mega) from accessing the files without knowledge of the key used for encryption.
This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -16659,7 +17384,7 @@ y/e/d> y
rclone copy /home/source remote:backup
Mega does not support modification times or hashes yet.
-Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the standard options specific to opendrive (OpenDrive).
Username
+Username.
Here are the advanced options specific to opendrive (OpenDrive).
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ?
in it, it will be mapped to ？
instead.
rclone about
is not supported by the OpenDrive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about. See rclone about.
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
Here is an example of making an QingStor configuration. First run
rclone config
This will guide you through an interactive setup process.
@@ -17842,10 +18599,11 @@ y/e/d> y
The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the standard options specific to qingstor (QingCloud Object Storage).
Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+Get QingStor credentials from runtime.
+Only applies if access_key_id and secret_access_key are blank.
QingStor Access Key ID Leave blank for anonymous access or runtime credentials.
+QingStor Access Key ID.
+Leave blank for anonymous access or runtime credentials.
QingStor Secret Access Key (password) Leave blank for anonymous access or runtime credentials.
+QingStor Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
Enter an endpoint URL to connection QingStor API. Leave blank will use the default value "https://qingstor.com:443"
+Enter an endpoint URL to connect to the QingStor API.
+Leave blank to use the default value "https://qingstor.com:443".
Zone to connect to. Default is "pek3a".
+Zone to connect to.
+Default is "pek3a".
Here are the advanced options specific to qingstor (QingCloud Object Storage).
Number of connection retries.
@@ -17924,7 +18686,7 @@ y/e/d> yCutoff for switching to chunked upload
+Cutoff for switching to chunked upload.
Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB.
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
rclone about
is not supported by the qingstor backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about. See rclone about.
+Sia (sia.tech) is a decentralized cloud storage platform based on blockchain technology. With rclone you can use it like any other remote filesystem or mount Sia folders locally. The technology behind it involves a number of new concepts such as Siacoins and Wallet, Blockchain and Consensus, Renting and Hosting, and so on. If you are new to it, you should first familiarize yourself with their excellent support documentation.
+Before you can use rclone with Sia, you will need to have a running copy of Sia-UI
or siad
(the Sia daemon) locally on your computer or on a local network (e.g. a NAS). Please follow the Get started guide and install one.
rclone interacts with the Sia network by talking to the Sia daemon via the HTTP API, which is usually available on port 9980. By default you will run the daemon locally on the same computer so it's safe to leave the API password blank (the API URL will be http://127.0.0.1:9980
making external access impossible).
However, if you want to access a Sia daemon running on another node, for example due to memory constraints or because you want to share a single daemon between several rclone and Sia-UI instances, you'll need to make a few more provisions (see the sketch after this list):
- Ensure you have the Sia daemon installed directly or in a docker container because Sia-UI does not support this mode natively.
- Run it on an externally accessible port, for example provide --api-addr :9980 and --disable-api-security arguments on the daemon command line.
- Enforce an API password for the siad daemon via environment variable SIA_API_PASSWORD or a text file named apipassword in the daemon directory.
- Set the rclone backend option api_password taking it from the above locations.
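A rough sketch of such an externally accessible daemon invocation, using only the arguments and variables mentioned above (the password value is a placeholder):
SIA_API_PASSWORD=mysecret siad --api-addr :9980 --disable-api-security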
Notes:
1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via command line siac wallet unlock. Alternatively you can make siad unlock your wallet automatically upon startup by running it with environment variable SIA_WALLET_PASSWORD.
2. If siad cannot find the SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR directory, it will generate a random password and store it in the text file named apipassword under the YOUR_HOME/.sia/ directory on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows. Remember this when you configure the password in rclone.
3. The only way to use siad without an API password is to run it on localhost with command line argument --authorize-api=false, but this is insecure and strongly discouraged.
Here is an example of how to make a sia
remote called mySia
. First, run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> mySia
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+...
+29 / Sia Decentralized Cloud
+ \ "sia"
+...
+Storage> sia
+Sia daemon API URL, like http://sia.daemon.host:9980.
+Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
+Keep default if Sia daemon runs on localhost.
+Enter a string value. Press Enter for the default ("http://127.0.0.1:9980").
+api_url> http://127.0.0.1:9980
+Sia Daemon API Password.
+Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+--------------------
+[mySia]
+type = sia
+api_url = http://127.0.0.1:9980
+api_password = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Once configured, you can then use rclone
like this:
rclone lsd mySia:
+rclone ls mySia:
+rclone copy /home/source mySia:backup
+Here are the standard options specific to sia (Sia Decentralized Cloud).
+Sia daemon API URL, like http://sia.daemon.host:9980.
+Note that siad must run with --disable-api-security to open API port for other hosts (not recommended). Keep default if Sia daemon runs on localhost.
+Sia Daemon API Password.
+Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
+NB Input to this must be obscured - see rclone obscure.
+Here are the advanced options specific to sia (Sia Decentralized Cloud).
+Siad User Agent
+Sia daemon requires the 'Sia-Agent' user agent by default for security.
+This sets the encoding for the backend.
+See the encoding section in the overview for more info.
+rclone about
not supportedSwift refers to OpenStack Object Storage. Commercial implementations of that being:
Paths are specified as remote:container
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:container/path/to/dir
.
Here is an example of making a swift configuration. First run
rclone config
This will guide you through an interactive setup process.
@@ -18112,7 +18991,33 @@ rclone lsd myremote:
As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.
For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update
along with --use-server-modtime
, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
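For example, a sketch of such a sync (the container name is a placeholder):
rclone sync --update --use-server-modtime /home/local/directory remote:container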
The modified time is stored as metadata on the object as X-Object-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
+Character | Value | Replacement |
+---|---|---|
+NUL | 0x00 | ␀ |
+/ | 0x2F | ／ |
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
+Here are the standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
Get swift credentials from environment variables in standard OpenStack form.
@@ -18125,11 +19030,12 @@ rclone lsd myremote:Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME).
Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID).
Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME).
Region name - optional (OS_REGION_NAME)
+Region name - optional (OS_REGION_NAME).
Storage URL - optional (OS_STORAGE_URL)
+Storage URL - optional (OS_STORAGE_URL).
Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+Auth Token from alternate authentication - optional (OS_AUTH_TOKEN).
Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+Application Credential ID (OS_APPLICATION_CREDENTIAL_ID).
Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME).
Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET).
AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION).
Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
+Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE).
The storage policy to use when creating a new container
+The storage policy to use when creating a new container.
This applies the specified storage policy when creating a new container. The policy cannot be changed afterwards. The allowed configuration values and their meaning depend on your Swift storage provider.
Here are the advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
If true avoid calling abort upload on a failure. It should be set to true for resuming uploads across different sessions.
+If true avoid calling abort upload on a failure.
+It should be set to true for resuming uploads across different sessions.
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
The modified time is stored as metadata on the object as X-Object-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
-Character | Value | Replacement |
---|---|---|
-NUL | 0x00 | ␀ |
-/ | 0x2F | ／ |
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
-Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift.
So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies
flag.
This may also be caused by specifying the region when you shouldn't have (e.g. OVH).
-This is most likely caused by forgetting to specify your tenant when setting up a swift remote.
+To use rclone with OVH cloud archive, first use rclone config
to set up a swift
backend with OVH, choosing pca
as the storage_policy
.
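A minimal sketch of the resulting config section (the remote name is a placeholder; your usual OVH Swift credentials go alongside these lines):
[ovh-archive]
type = swift
storage_policy = pca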
Uploading objects to OVH cloud archive is no different to object storage; you simply run the command you like (move, copy or sync) to upload the objects. Once uploaded the objects will show in a "Frozen" state within the OVH control panel.
+To retrieve objects use rclone copy
as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following:
2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)
Rclone will wait for the time specified then retry the copy.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -18457,9 +19347,8 @@ y/e/d> y
rclone copy /home/source remote:backup
pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a modification time pCloud requires the object be re-uploaded.
-pCloud supports MD5 and SHA1 type hashes in the US region but and SHA1 only in the EU region, so you can use the --checksum
flag.
(Note that pCloud also support SHA256 in the EU region, but rclone does not have support for that yet.)
-pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256 hashes in the EU region, so you can use the --checksum
flag.
In addition to the default restricted characters set the following characters are also replaced:
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the standard options specific to premiumizeme (premiumize.me).
API Key.
@@ -18656,24 +19550,25 @@ y/e/d>Here are the advanced options specific to premiumizeme (premiumize.me).
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
premiumize.me file names can't have the \
or "
characters in. rclone maps these to and from identical looking unicode equivalents ＼
and ＂
premiumize.me only supports filenames up to 255 characters in length.
Paths are specified as remote:path
put.io paths may be as deep as required, e.g. remote:directory/subdirectory
.
The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -18736,7 +19631,7 @@ e/n/d/r/c/s/q> q
rclone ls remote:
To copy a local directory to a put.io directory called backup
rclone copy /home/source remote:backup
-In addition to the default restricted characters set the following characters are also replaced:
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the advanced options specific to putio (Put.io).
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
This is a backend for the Seafile storage service:
- It works with both the free community edition and the professional edition.
- Seafile versions 6.x and 7.x are all supported.
- Encrypted libraries are also supported.
- It supports 2FA enabled users.
-There are two distinct modes you can set up your remote:
- You point your remote to the root of the server, meaning you don't specify a library during the configuration. Paths are specified as remote:library. You may put subdirectories in too, e.g. remote:library/path/to/dir.
- You point your remote to a specific library during the configuration. Paths are specified as remote:path/to/dir. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode)
Here is an example of making a seafile configuration for a user with no two-factor authentication. First run
@@ -18927,7 +19822,7 @@ y/e/d> y
rclone sync -i /home/local/directory seafile:
Seafile version 7+ supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Please note this is not supported on seafile server version 6.x.
In addition to the default restricted characters set the following characters are also replaced:
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
-Here are the standard options specific to uptobox (Uptobox).
Your access Token, get it from https://uptobox.com/my_account
+Your access token.
+Get it from https://uptobox.com/my_account.
Here are the advanced options specific to uptobox (Uptobox).
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
Uptobox will delete inactive files that have not been accessed in 60 days.
rclone about
is not supported by this backend. An overview of used space can, however, be seen in the uptobox web interface.
The attributes :ro
and :nc
can be attached to the end of the path to tag the remote as read only or no create, e.g. remote:directory/subdirectory:ro
or remote:directory/subdirectory:nc
.
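For instance, a hypothetical union where one upstream is tagged read only and another no create might look like this (all remote names are placeholders):
[backup]
type = union
upstreams = mydrive:private/backup mirror:backup:ro scratch:tmp:nc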
Subfolders can be used in upstream remotes. Assume a union remote named backup
with the remotes mydrive:private/backup
. Invoking rclone mkdir backup:desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop
.
There will be no special handling of paths containing ..
segments. Invoking rclone mkdir backup:../desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop
.
Here is an example of how to make a union called remote
for local folders. First run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Union merges the contents of several remotes
+ \ "union"
+[snip]
+Storage> union
+List of space separated upstreams.
+Can be 'upstreama:test/dir upstreamb:', '\"upstreama:test/space:ro dir\" upstreamb:', etc.
+Enter a string value. Press Enter for the default ("").
+upstreams> remote1:dir1 remote2:dir2 remote3:dir3
+Policy to choose upstream on ACTION class.
+Enter a string value. Press Enter for the default ("epall").
+action_policy>
+Policy to choose upstream on CREATE class.
+Enter a string value. Press Enter for the default ("epmfs").
+create_policy>
+Policy to choose upstream on SEARCH class.
+Enter a string value. Press Enter for the default ("ff").
+search_policy>
+Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used.
+Enter a signed integer. Press Enter for the default ("120").
+cache_time>
+Remote config
+--------------------
+[remote]
+type = union
+upstreams = remote1:dir1 remote2:dir2 remote3:dir3
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name Type
+==== ====
+remote union
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q
+Once configured you can then use rclone
like this,
List directories in top level in remote1:dir1
, remote2:dir2
and remote3:dir3
rclone lsd remote:
+List all the files in remote1:dir1
, remote2:dir2
and remote3:dir3
rclone ls remote:
+Copy another local directory to the union directory called source, which will be placed into remote3:dir3
rclone copy C:\source remote:source
The behavior of union backend is inspired by trapexit/mergerfs. All functions are grouped into 3 categories: action, create and search. These functions and categories can be assigned a policy which dictates what file or directory is chosen when performing that behavior. Any policy can be assigned to a function or category though some may not be very useful in practice. For instance: rand (random) may be useful for file creation (create) but could lead to very odd behavior if used for delete
if there were more than one copy of the file.
Policies, as described below, are of two basic types. path preserving
and non-path preserving
.
All policies which start with ep
(epff, eplfs, eplus, epmfs, eprand) are path preserving
. ep
stands for existing path
.
A path preserving policy will only consider upstreams where the relative path being accessed already exists.
When using non-path preserving policies paths will be created in target upstreams as necessary.
-Some policies rely on quota information. These policies should be used only if your upstreams support the respective quota fields.
To check if your upstream supports the field, run rclone about remote: [flags]
and see if the required field exists.
Policies basically search upstream remotes and create a list of files / paths for functions to work on. The policy is responsible for filtering and sorting. The policy type defines the sorting but filtering is mostly uniform as described below.
If all remotes are filtered an error will be returned.
-The policies definition are inspired by trapexit/mergerfs but not exactly the same. Some policy definition could be different due to the much larger latency of remote file systems.
Here is an example of how to make a union called remote
for local folders. First run:
rclone config
-This will guide you through an interactive setup process:
-No remotes found - make a new one
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> remote
-Type of storage to configure.
-Choose a number from below, or type in your own value
-[snip]
-XX / Union merges the contents of several remotes
- \ "union"
-[snip]
-Storage> union
-List of space separated upstreams.
-Can be 'upstreama:test/dir upstreamb:', '\"upstreama:test/space:ro dir\" upstreamb:', etc.
-Enter a string value. Press Enter for the default ("").
-upstreams> remote1:dir1 remote2:dir2 remote3:dir3
-Policy to choose upstream on ACTION class.
-Enter a string value. Press Enter for the default ("epall").
-action_policy>
-Policy to choose upstream on CREATE class.
-Enter a string value. Press Enter for the default ("epmfs").
-create_policy>
-Policy to choose upstream on SEARCH class.
-Enter a string value. Press Enter for the default ("ff").
-search_policy>
-Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used.
-Enter a signed integer. Press Enter for the default ("120").
-cache_time>
-Remote config
---------------------
-[remote]
-type = union
-upstreams = remote1:dir1 remote2:dir2 remote3:dir3
---------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-Current remotes:
-
-Name Type
-==== ====
-remote union
-
-e) Edit existing remote
-n) New remote
-d) Delete remote
-r) Rename remote
-c) Copy remote
-s) Set configuration password
-q) Quit config
-e/n/d/r/c/s/q> q
-Once configured you can then use rclone
like this,
List directories in top level in remote1:dir1
, remote2:dir2
and remote3:dir3
rclone lsd remote:
-List all the files in remote1:dir1
, remote2:dir2
and remote3:dir3
rclone ls remote:
-Copy another local directory to the union directory called source, which will be placed into remote3:dir3
rclone copy C:\source remote:source
-Here are the standard options specific to union (Union merges the contents of several upstream fs).
List of space separated upstreams. Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc.
+List of space separated upstreams.
+Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc.
Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used.
+Cache time of usage and free space (in seconds).
+This option is only useful when a path preserving policy is used.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -20315,25 +21221,19 @@ y/e/d> y
Plain WebDAV does not support modified times. However when used with Owncloud or Nextcloud rclone will support modified times.
Likewise plain WebDAV does not support hashes, however when used with Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.
-Here are the standard options specific to webdav (Webdav).
URL of http host to connect to
+URL of http host to connect to.
+E.g. https://example.com.
Name of the Webdav site/service/software you are using
+Name of the Webdav site/service/software you are using.
User name. In case NTLM authentication is used, the username should be in the format 'Domain'.
+User name.
+In case NTLM authentication is used, the username should be in the format 'Domain\User'.
Bearer token instead of user/pass (e.g. a Macaroon)
+Bearer token instead of user/pass (e.g. a Macaroon).
Here are the advanced options specific to webdav (Webdav).
Command to run to get a bearer token
+Command to run to get a bearer token.
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
Default encoding is Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8 for sharepoint-ntlm or identity otherwise.
Set HTTP headers for all transactions
+Set HTTP headers for all transactions.
Use this to set additional HTTP headers for all transactions.
The input format is comma separated list of key,value pairs. Standard CSV encoding may be used.
For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
@@ -20492,6 +21393,7 @@
vendor = other
bearer_token_command = oidc-token XDC
Yandex Disk is a cloud storage solution created by Yandex.
+Here is an example of making a yandex configuration. First run
rclone config
This will guide you through an interactive setup process:
@@ -20544,7 +21446,7 @@ y/e/d> y
Sync /home/local/directory
to the remote path, deleting any excess files in the path.
rclone sync -i /home/local/directory remote:directory
Yandex paths may be as deep as required, e.g. remote:directory/subdirectory
.
Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified
in RFC3339 with nanoseconds format.
MD5 checksums are natively supported by Yandex Disk.
@@ -20552,15 +21454,14 @@ y/e/d> y
If you wish to empty your trash you can use the rclone cleanup remote:
command which will permanently delete all your trashed files. This command does not take any path arguments.
To view your current quota you can use the rclone about remote:
command which will display your usage limit (quota) and the current usage.
The default restricted characters set are replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout
parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers
errors in the logs if this is happening. Setting the timeout to twice the max size of file in GiB should be enough, so if you want to upload a 30 GiB file set a timeout of 2 * 30 = 60m
, that is --timeout 60m
.
Here are the standard options specific to yandex (Yandex Disk).
OAuth Client Id Leave blank normally.
+OAuth Client Id.
+Leave blank normally.
OAuth Client Secret Leave blank normally.
+OAuth Client Secret.
+Leave blank normally.
Here are the advanced options specific to yandex (Yandex Disk).
OAuth Access Token as a JSON blob.
@@ -20586,7 +21488,8 @@ y/e/d> yAuth server URL. Leave blank to use the provider defaults.
+Auth server URL.
+Leave blank to use the provider defaults.
-Token server url. Leave blank to use the provider defaults.
+Token server url.
+Leave blank to use the provider defaults.
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
+When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GiB should be enough, so if you want to upload a 30 GiB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
+Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription. Token generation will work without a mail account, but Rclone won't be able to complete any actions.
+[403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.
Zoho WorkDrive is a cloud storage solution created by Zoho.
+Here is an example of making a zoho configuration. First run
rclone config
This will guide you through an interactive setup process:
@@ -20683,18 +21592,19 @@ y/e/d>
Sync /home/local/directory to the remote path, deleting any excess files in the path.
rclone sync -i /home/local/directory remote:directory
Zoho paths may be as deep as required, e.g. remote:directory/subdirectory.
Modified times are currently not supported for Zoho WorkDrive.
No checksums are supported.
To view your current quota you can use the rclone about remote: command which will display your current usage.
Only control characters and invalid UTF-8 are replaced. In addition, most Unicode full-width characters are not supported at all and will be removed from filenames during upload.
-Here are the standard options specific to zoho (Zoho).
-OAuth Client Id Leave blank normally.
+OAuth Client Id.
+Leave blank normally.
-OAuth Client Secret Leave blank normally.
+OAuth Client Secret.
+Leave blank normally.
Here are the advanced options specific to zoho (Zoho).
OAuth Access Token as a JSON blob.
@@ -20748,7 +21659,8 @@ y/e/d>Auth server URL. Leave blank to use the provider defaults.
+Auth server URL.
+Leave blank to use the provider defaults.
-Token server url. Leave blank to use the provider defaults.
+Token server url.
+Leave blank to use the provider defaults.
This sets the encoding for the backend.
-See: the encoding section in the overview for more info.
+See the encoding section in the overview for more info.
Local paths are specified as normal filesystem paths, e.g. /path/to/wherever, so
rclone sync -i /home/source /tmp/destination
Will sync /home/source to /tmp/destination.
For consistency's sake one can also configure a remote of type local in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever, but it is probably easier not to.
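Should you want such a remote anyway, a minimal sketch of the config section looks like this (the name local-disk is arbitrary):
[local-disk]
type = local
after which rclone ls local-disk:/path/to/wherever behaves like rclone ls /path/to/wherever.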
Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
Filenames should be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.
@@ -20785,6 +21699,7 @@ y/e/d>If an invalid (non-UTF8) filename is read, the invalid characters will be replaced with a quoted representation of the invalid bytes. The name gro\xdf
will be transferred as gro‛DF
. rclone
will emit a debug message in this case (use -v
to see), e.g.
Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"
With the local backend, restrictions on the characters that are usable in file or directory names depend on the operating system. To check what rclone will replace by default on your system, run rclone help flags local-encoding.
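As an illustrative sketch (paths are hypothetical; the encoding names are from the standard set, and note that setting the option replaces the default list rather than appending to it), the encoding can be overridden per invocation, e.g. to quote colons as well as invalid UTF-8 in local file names:
rclone copy --local-encoding "Colon,InvalidUtf8" /home/source remote:backup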
On non-Windows platforms the following characters are replaced when handling file names.