diff --git a/MANUAL.html b/MANUAL.html index eb2a7d197..4c0d99e61 100644 --- a/MANUAL.html +++ b/MANUAL.html @@ -13,75 +13,13 @@
Mar 18, 2022
+Jul 09, 2022
Links
+These backends adapt or modify other storage providers:
+An already installed rclone can be easily updated to the latest version using the rclone selfupdate command.
To install rclone on Linux/macOS/BSD systems, run:
-curl https://rclone.org/install.sh | sudo bash
+sudo -v ; curl https://rclone.org/install.sh | sudo bash
For beta installation, run:
-curl https://rclone.org/install.sh | sudo bash -s beta
+sudo -v ; curl https://rclone.org/install.sh | sudo bash -s beta
Note that this script checks the version of rclone installed first and won't re-download if not needed.
Fetch and unpack
@@ -256,7 +213,7 @@ sudo mv rclone /usr/local/bin/
rclone config
When downloading a binary with a web browser, the browser will set the macOS gatekeeper quarantine attribute. Starting from Catalina, when attempting to run rclone, a pop-up will appear saying:
“rclone” cannot be opened because the developer cannot be verified.
+"rclone" cannot be opened because the developer cannot be verified.
macOS cannot verify that this app is free from malware.
The simplest fix is to run
xattr -d com.apple.quarantine rclone
@@ -308,18 +265,26 @@ docker run --rm \
ls ~/data/mount
kill %1
Make sure you have at least Go 1.15 installed. Download Go if necessary. The latest release is recommended. Then
-git clone https://github.com/rclone/rclone.git
-cd rclone
-go build
-# If on macOS and mount is wanted, instead run: make GOTAGS=cmount
-./rclone version
This will leave you a checked out version of rclone you can modify and send pull requests with. If you use make instead of go build then the rclone build will have the correct version information in it.
You can also build the latest stable rclone with:
+Make sure you have git and Go installed. Go version 1.16 or newer is required; the latest release is recommended. You can get it from your package manager, or download it from golang.org/dl. Then you can run the following:
+git clone https://github.com/rclone/rclone.git
+cd rclone
+go build
+This will check out the rclone source in subfolder rclone, which you can later modify and send pull requests with. Then it will build the rclone executable in the same folder. As an initial check you can now run ./rclone version (.\rclone version on Windows).
Note that on macOS and Windows the mount command will not be available unless you specify the additional build tag cmount.
go build -tags cmount
+This assumes you have a GCC compatible C compiler (GCC or Clang) in your PATH, as it uses cgo. On Windows, however, the cgofuse library that the cmount implementation is based on also supports building without cgo, by setting the environment variable CGO_ENABLED to 0 (static linking). This is how the official Windows releases of rclone are built, starting with version 1.59. It is still possible to build with cgo on Windows, by using the MinGW port of GCC, e.g. by installing it in an MSYS2 distribution (make sure you install it in the classic mingw64 subsystem; the ucrt64 version is not compatible).
+Additionally, on Windows, you must install the third-party utility WinFsp, with the "Developer" feature selected. If building with cgo, you must also set the environment variable CPATH to point to the fuse include directory within the WinFsp installation (normally C:\Program Files (x86)\WinFsp\inc\fuse).
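As an illustrative sketch (assumptions: MSYS2/bash syntax is shown; use set or $env: equivalents in cmd or PowerShell), the environment for a cgo cmount build might be prepared like this:

```shell
# Hypothetical environment setup (MSYS2/bash syntax) before building with cgo.
export CGO_ENABLED=1
# Default WinFsp include path from the text above, in MSYS2 path notation:
export CPATH="/c/Program Files (x86)/WinFsp/inc/fuse"
# Then: go build -tags cmount

# For a static build without cgo instead (how the official Windows releases
# are built, starting with 1.59):
# CGO_ENABLED=0 go build -tags cmount
```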
You may also add the arguments -ldflags -s (with or without -tags cmount) to omit the symbol table and debug information, making the executable file smaller, and -trimpath to remove references to local file system paths. This is how the official rclone releases are built.
go build -trimpath -ldflags -s -tags cmount
+Instead of executing the go build command directly, you can run it via the Makefile, which also sets version information and copies the resulting rclone executable into your GOPATH bin folder ($(go env GOPATH)/bin, which corresponds to ~/go/bin/rclone by default).
make
+To include mount command on macOS and Windows with Makefile build:
+make GOTAGS=cmount
+As an alternative you can download the source, build and install rclone in one operation, as a regular Go package. The source will be stored in the Go module cache, and the resulting executable will be in your GOPATH bin folder ($(go env GOPATH)/bin, which corresponds to ~/go/bin/rclone by default).
With Go version 1.17 or newer:
+go install github.com/rclone/rclone@latest
+With Go versions older than 1.17 (do not use the -u flag; it causes Go to try to update the dependencies that rclone uses, and sometimes these don't work with the current version):
go get github.com/rclone/rclone
-or the latest version (equivalent to the beta) with
-go get github.com/rclone/rclone@master
-These will build the binary in $(go env GOPATH)/bin (~/go/bin/rclone by default) after downloading the source to the go module cache. Note - do not use the -u flag here. This causes go to try to update the dependencies that rclone uses and sometimes these don't work with the current version of rclone.
This can be done with Stefan Weichinger's ansible role.
Instructions
@@ -354,7 +319,7 @@ kill %1
For running rclone at system startup, you can create a Windows service that executes your rclone command, as an alternative to a scheduled task configured to run at startup.
For mount commands, rclone has a built-in Windows service integration via the third-party WinFsp library it uses. Registering as a regular Windows service is easy, as you just have to execute the built-in PowerShell command New-Service (requires administrative privileges).
Example of a PowerShell command that creates a Windows service for mounting some remote:/files as drive letter X:, for all users (the service will be running as the local system account):
New-Service -Name Rclone -BinaryPathName 'c:\rclone\rclone.exe mount remote:/files X: --config c:\rclone\config\rclone.conf --log-file c:\rclone\logs\mount.txt'
The WinFsp service infrastructure supports incorporating services for file system implementations, such as rclone, into its own launcher service, as a kind of "child service". This has the additional advantage that it also implements a network provider that integrates into Windows standard methods for managing network drives. This is currently not officially supported by rclone, but with WinFsp version 2019.3 B2 / v1.5B2 or later it should be possible through path rewriting as described here.
@@ -384,6 +349,7 @@ kill %1
Copy files from source to dest, skipping identical files.
-Copy the source to the destination. Does not transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Doesn't delete files from the destination.
-Note that it is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents.
+Copy the source to the destination. Does not transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Doesn't delete files from the destination. If you want to also delete files from destination, to make it match source, use the sync command instead.
+Note that it is always the contents of the directory that is synced, not the directory itself. So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents.
+To copy single files, use the copyto command instead.
If dest:path doesn't exist, it is created and the source:path contents go there.
For example
rclone copy source:sourcepath dest:destpath
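The contents-versus-directory distinction noted above can be illustrated with plain cp (an analogy of ours, not rclone itself): copying a directory's contents behaves like rclone copy, while copying the directory itself also recreates the directory name at the destination.

```shell
# Demonstrate "contents of the directory" vs "the directory itself".
d=$(mktemp -d)
mkdir -p "$d/src" "$d/dst1" "$d/dst2"
echo hi > "$d/src/a.txt"

cp -r "$d/src/." "$d/dst1/"   # contents only: creates dst1/a.txt (like rclone copy)
cp -r "$d/src" "$d/dst2/"     # directory itself: creates dst2/src/a.txt

ls "$d/dst1"                  # → a.txt
ls "$d/dst2"                  # → src
```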
@@ -494,11 +463,11 @@ destpath/sourcepath/two.txt
Make source and dest identical, modifying destination only.
-Sync the source to the destination, changing the destination only. Doesn't transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary (except duplicate objects, see below).
+Sync the source to the destination, changing the destination only. Doesn't transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary (except duplicate objects, see below). If you don't want to delete files from destination, use the copy command instead.
Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.
rclone sync -i SOURCE remote:DESTINATION
Note that files in the destination won't be deleted if there were any errors at any point. Duplicate objects (files with the same name, on those providers that support it) are also not yet handled.
-It is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy command above if unsure.
It is always the contents of the directory that is synced, not the directory itself. So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy command if unsure.
If dest:path doesn't exist, it is created and the source:path contents go there.
Note: Use the -P/--progress flag to view real-time transfer statistics.
Note: Use the rclone dedupe command to deal with "Duplicate object/directory found in source/destination - ignoring" errors. See this forum post for more info.
Move files from source to dest.
Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server-side directory move operation.
+To move single files, use the moveto command instead.
If no filters are in use and if possible this will server-side move source:path into dest:path. After this source:path will no longer exist.
Otherwise for each file in source:path selected by the filters (if any) this will move it into dest:path. If possible a server-side move will be used, otherwise it will copy it (server-side if possible) into dest:path then delete the original (if no errors on copy) in source:path.
If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.
See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly.
Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.
Note: Use the -P/--progress flag to view real-time transfer statistics.
Remove the files in path.
Remove the files in path. Unlike purge it obeys include/exclude filters so can be used to selectively delete files.
rclone delete only deletes files but leaves the directory structure alone. If you want to delete a directory and all of its contents use the purge command.
If you supply the --rmdirs flag, it will remove all empty directories along with it. You can also use the separate command rmdir or rmdirs to delete empty directories only.
For example, to delete all files bigger than 100 MiB, you may first want to check what would be deleted (use either):
rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path
@@ -556,7 +526,7 @@ rclone --dry-run --min-size 100M delete remote:path
Remove the path and all of its contents.
Remove the path and all of its contents. Note that this does not obey include/exclude filters - everything will be removed. Use the delete command if you want to selectively delete files. To delete empty directories only, use command rmdir or rmdirs.
Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.
rclone purge remote:path [flags]
Remove the empty directory at path.
This removes the empty directory given by path. It will not remove the path if it has any objects in it, not even empty subdirectories. Use command rmdirs (or delete with option --rmdirs) to do that.
To delete a path and any objects in it, use the purge command.
rclone rmdir remote:path [flags]
-h, --help help for rmdir
@@ -593,6 +563,7 @@ rclone --dry-run --min-size 100M delete remote:path
Checks the files in the source and destination match.
Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files that don't match. It doesn't alter the source or destination.
+For the crypt remote there is a dedicated command, cryptcheck, that is able to check the checksums of the crypted files.
If you supply the --size-only flag, it will only compare the sizes, not the hashes as well. Use this for a quick check.
If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
If you supply the --checkfile HASH flag with a valid hash name, the source:path must point to a text file in the SUM format.
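For illustration, a SUM file uses the same layout that md5sum-style tools produce: the hash, whitespace, then the file name, one entry per line (the hashes and file names below are made up):

```
0bee89b07a248e27c83fc3d5951213c1  file1.txt
3d8e577bddb17db339eae0b3d9bcf180  subdir/file2.txt
```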
List all directories/containers/buckets in the path.
Lists the directories in the source path to standard output. Does not recurse by default. Use the -R flag to recurse.
This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the number of objects in the directory (if known, -1 if not) and the name of the directory, e.g.
$ rclone lsd swift:
494000 2018-04-26 08:43:20 10000 10000files
@@ -667,7 +638,7 @@ rclone --dry-run --min-size 100M delete remote:path
-1 2016-10-17 17:41:53 -1 1000files
-1 2017-01-03 14:40:54 -1 2500files
-1 2017-07-08 14:39:28 -1 4000files
-If you just want the directory names use "rclone lsf --dirs-only".
+If you just want the directory names use rclone lsf --dirs-only.
Any of the filtering options can be applied to this command.
There are several related list commands
Produces an md5sum file for all the objects in the path. This is in the same format as the standard md5sum tool produces.
By default, the hash is requested from the remote. If MD5 is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling MD5 for any remote.
+For other algorithms, see the hashsum command. Running rclone md5sum remote:path is equivalent to running rclone hashsum MD5 remote:path.
This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).
rclone md5sum remote:path [flags]
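For comparison, the standard md5sum tool hashes stdin the same way; rclone md5sum - should produce the same digest for the same input (this example uses coreutils md5sum, not rclone):

```shell
# Hash data read from standard input with the classic md5sum tool.
printf 'hello\n' | md5sum
# → b1946ac92492d2347c6235b4d2611184  -
```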
Produces an sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces.
By default, the hash is requested from the remote. If SHA-1 is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling SHA-1 for any remote.
+For other algorithms, see the hashsum command. Running rclone sha1sum remote:path is equivalent to running rclone hashsum SHA1 remote:path.
This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).
rclone sha1sum remote:path [flags]
@@ -760,6 +733,11 @@ rclone --dry-run --min-size 100M delete remote:path
Prints the total size and number of objects in remote:path.
+Counts objects in the path and calculates the total size. Prints the result to standard output.
+The output shows values in both human-readable format and as raw numbers (the global option --human-readable is not considered). Use option --json to format the output as JSON instead.
Recurses by default; use --max-depth 1 to stop the recursion.
Some backends do not always provide file sizes, see for example Google Photos and Google Drive. Rclone will then show a notice in the log indicating how many such files were encountered, and count them in as empty files in the output of the size command.
rclone size remote:path [flags]
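With --json the result is a single JSON object; a sketch of its shape (the field names are taken from typical rclone output and may vary by version, and the numbers are made up):

```
{
  "count": 36075,
  "bytes": 154752845,
  "sizeless": 0
}
```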
-h, --help help for size
@@ -771,7 +749,7 @@ rclone --dry-run --min-size 100M delete remote:path
Show the version number.
-Show the rclone version number, the go version, the build target OS and architecture, the runtime OS and kernel version and bitness, build tags and the type of executable (static or dynamic).
For example:
$ rclone version
@@ -807,7 +785,7 @@ beta: 1.42.0.5 (released 2018-06-17)
rclone cleanup
Clean up the remote if possible.
Synopsis
Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes.
rclone cleanup remote:path [flags]
Options
@@ -819,10 +797,10 @@ beta: 1.42.0.5 (released 2018-06-17)
rclone dedupe
Interactively find duplicate filenames and delete/rename them.
Synopsis
By default dedupe interactively finds files with duplicate names and offers to delete all but one or rename them to be different. This is known as deduping by name.
Deduping by name is only useful with a small group of backends (e.g. Google Drive, Opendrive) that can have duplicate file names. It can be run on wrapping backends (e.g. crypt) if they wrap a backend which supports duplicate file names.
However if --by-hash is passed in then dedupe will find files with duplicate hashes instead, which will work on any backend which supports at least one hash. This can be used to find files with duplicate content. This is known as deduping by hash.
If deduping by name, first rclone will merge directories with the same name. It will do this iteratively until all the identically named directories have been merged.
Next, if deduping by name, for every group of duplicate file names / hashes, it will delete all but one identical file it finds without confirmation. This means that for most duplicated files the dedupe command will not be interactive.
dedupe considers files to be identical if they have the same file path and the same hash. If the backend does not support hashes (e.g. crypt wrapping Google Drive) then they will never be found to be identical. If you use the --size-only flag then files will be considered identical if they have the same size (any hash will be ignored). This can be useful on crypt backends which do not support hashes.
@@ -898,7 +876,7 @@ two-3.txt: renamed from: two.txt
Get quota information from the remote.
-rclone about prints quota information about a remote to standard output. The output is typically used, free, quota and trash contents.
E.g. Typical output from rclone about remote:
is:
Total: 17 GiB
@@ -944,7 +922,7 @@ Other: 8849156022
Remote authorization.
-Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.
Use the --auth-no-open-browser flag to prevent rclone from opening the auth link in the default browser automatically.
rclone authorize [flags]
@@ -958,7 +936,7 @@ Other: 8849156022
Run a backend-specific command.
-This runs a backend-specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions.
You can discover what commands a backend implements by using
rclone backend help remote:
@@ -982,7 +960,7 @@ rclone backend help <backendname>
Perform bidirectional synchronization between two paths.
-Perform bidirectional synchronization between two paths.
Bisync provides a bidirectional cloud sync solution in rclone. It retains the Path1 and Path2 filesystem listings from the prior run. On each successive run it will:
- list files on Path1 and Path2, and check for changes on each side. Changes include New, Newer, Older, and Deleted files.
- Propagate changes on Path1 to Path2, and vice-versa.
See full bisync description for details.
@@ -1006,7 +984,7 @@ rclone backend help <backendname>
Concatenates any files and sends them to stdout.
-rclone cat sends any files to standard output.
You can use it like this to output a single file
rclone cat remote:path/to/file
@@ -1030,7 +1008,7 @@ rclone backend help <backendname>
Checks the files in the source against a SUM file.
-Checks that hashsums of source files match the SUM file. It compares hashes (MD5, SHA1, etc) and logs a report of files which don't match. It doesn't alter the file system.
If you supply the --download flag, it will download the data from remote and calculate the contents hash on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
Note that hash values in the SUM file are treated as case insensitive.
@@ -1061,8 +1039,8 @@ rclone backend help <backendname>
generate the autocompletion script for the specified shell
-Generate the autocompletion script for the specified shell
+Generate the autocompletion script for rclone for the specified shell. See each sub-command's help for details on how to use the generated script.
-h, --help help for completion
@@ -1070,18 +1048,23 @@ rclone backend help <backendname>
generate the autocompletion script for bash
-Generate the autocompletion script for bash
+Generate the autocompletion script for the bash shell.
This script depends on the 'bash-completion' package. If it is not installed already, you can install it via your OS's package manager.
-To load completions in your current shell session: $ source <(rclone completion bash)
-To load completions for every new session, execute once: Linux: $ rclone completion bash > /etc/bash_completion.d/rclone MacOS: $ rclone completion bash > /usr/local/etc/bash_completion.d/rclone
+To load completions in your current shell session:
+source <(rclone completion bash)
+To load completions for every new session, execute once:
+rclone completion bash > /etc/bash_completion.d/rclone # Linux
+rclone completion bash > /usr/local/etc/bash_completion.d/rclone # macOS
You will need to start a new shell for this setup to take effect.
rclone completion bash
See the global flags page for global options not listed here.
generate the autocompletion script for fish
-Generate the autocompletion script for fish
+Generate the autocompletion script for the fish shell.
-To load completions in your current shell session: $ rclone completion fish | source
-To load completions for every new session, execute once: $ rclone completion fish > ~/.config/fish/completions/rclone.fish
+To load completions in your current shell session:
+rclone completion fish | source
+To load completions for every new session, execute once:
+rclone completion fish > ~/.config/fish/completions/rclone.fish
You will need to start a new shell for this setup to take effect.
rclone completion fish [flags]
See the global flags page for global options not listed here.
generate the autocompletion script for powershell
-Generate the autocompletion script for powershell
+Generate the autocompletion script for powershell.
-To load completions in your current shell session: PS C:> rclone completion powershell | Out-String | Invoke-Expression
+To load completions in your current shell session:
+rclone completion powershell | Out-String | Invoke-Expression
To load completions for every new session, add the output of the above command to your powershell profile.
rclone completion powershell [flags]
See the global flags page for global options not listed here.
generate the autocompletion script for zsh
-Generate the autocompletion script for zsh
+Generate the autocompletion script for the zsh shell.
If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once:
-$ echo "autoload -U compinit; compinit" >> ~/.zshrc
-To load completions for every new session, execute once: # Linux: $ rclone completion zsh > "${fpath[1]}/_rclone" # macOS: $ rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone
+echo "autoload -U compinit; compinit" >> ~/.zshrc
+To load completions for every new session, execute once:
+rclone completion zsh > "${fpath[1]}/_rclone" # Linux
+rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone # macOS
You will need to start a new shell for this setup to take effect.
rclone completion zsh [flags]
See the global flags page for global options not listed here.
Create a new remote with name, type and options.
-Create a new remote of name with type and options. The options should be passed in pairs of key value or as key=value.
For example, to make a swift remote of name myremote using auto config you would do:
rclone config create myremote swift env_auth true
@@ -1223,7 +1213,7 @@ rclone config create myremote swift env_auth=true
Disconnects user from remote
-This disconnects the remote: passed in to the cloud storage system.
This normally means revoking the oauth token.
To reconnect use "rclone config reconnect".
@@ -1247,7 +1237,7 @@ rclone config create myremote swift env_auth=true
Enter an interactive configuration session.
-Enter an interactive configuration session where you can set up new remotes and manage existing ones. You may also set or remove a password to protect your configuration.
rclone config edit [flags]
Update password in an existing remote.
-Update an existing remote's password. The password should be passed in pairs of key password or as key=password. The password should be passed in clear (unobscured).
For example, to set password of a remote of name myremote you would do:
rclone config password myremote fieldname mypassword
@@ -1305,7 +1295,7 @@ rclone config password myremote fieldname=mypassword
Re-authenticates user with remote.
-This reconnects remote: passed in to the cloud storage system.
To disconnect the remote use "rclone config disconnect".
This normally means going through the interactive oauth flow again.
@@ -1339,7 +1329,7 @@ rclone config password myremote fieldname=mypassword
Update options in an existing remote.
-Update an existing remote's options. The options should be passed in pairs of key value or as key=value.
For example, to update the env_auth field of a remote of name myremote you would do:
rclone config update myremote env_auth true
@@ -1410,7 +1400,7 @@ rclone config update myremote env_auth=true
Prints info about logged in user of remote.
-This prints the details of the person logged in to the cloud storage system.
rclone config userinfo remote: [flags]
Copy files from source to dest, skipping identical files.
-If source:path is a file or directory then it copies it to a file or directory named dest:path.
-This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command.
+This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command.
So
rclone copyto src dst
where src and dst are rclone paths, either remote:path or /path/to/local or C:.
@@ -1447,18 +1437,19 @@ if src is directoryCopy url content to dest.
-Download a URL's content and copy it to the destination without saving it in temporary storage.
-Setting --auto-filename
will cause the file name to be retrieved from the URL (after any redirections) and used in the destination path. With --print-filename
in addition, the resulting file name will be printed.
Setting --auto-filename
will attempt to automatically determine the filename from the URL (after any redirections) and used in the destination path. With --auto-filename-header
in addition, if a specific filename is set in HTTP headers, it will be used instead of the name from the URL. With --print-filename
in addition, the resulting file name will be printed.
Setting --no-clobber
will prevent overwriting file on the destination if there is one with the same name.
Setting --stdout
or making the output file name -
will cause the output to be written to standard output.
rclone copyurl https://example.com dest:path [flags]
-a, --auto-filename Get the file name from the URL and use it for destination file path
- -h, --help help for copyurl
- --no-clobber Prevent overwriting file with same name
- -p, --print-filename Print the resulting name from --auto-filename
- --stdout Write the output to stdout rather than a file
+ -a, --auto-filename Get the file name from the URL and use it for destination file path
+ --header-filename Get the file name from the Content-Disposition header
+ -h, --help help for copyurl
+ --no-clobber Prevent overwriting file with same name
+ -p, --print-filename Print the resulting name from --auto-filename
+ --stdout Write the output to stdout rather than a file
See the global flags page for global options not listed here.
Cryptcheck checks the integrity of a crypted remote.
-rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote.
+rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote.
For it to work the underlying remote of the cryptedremote must support some kind of checksum.
It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.
Use it like this
@@ -1502,14 +1493,14 @@ if src is directoryCryptdecode returns unencrypted file names.
-rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.
-If you supply the --reverse flag, it will return encrypted file names.
+If you supply the --reverse
flag, it will return encrypted file names.
Use it like this
rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
rclone cryptdecode --reverse encryptedremote: filename1 filename2
-Another way to accomplish this is by using the rclone backend encode
(or decode
)command. See the documentation on the crypt
overlay for more info.
Another way to accomplish this is by using the rclone backend encode
(or decode
) command. See the documentation on the crypt overlay for more info.
rclone cryptdecode encryptedremote: encryptedfilename [flags]
-h, --help help for cryptdecode
@@ -1521,7 +1512,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
Remove a single file from remote.
-Remove a single file from remote. Unlike delete
it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.
rclone deletefile remote:path [flags]
Output completion script for a given shell.
-Generates a shell completion script for rclone. Run with --help to list the supported shells.
+Generates a shell completion script for rclone. Run with --help
to list the supported shells.
-h, --help help for genautocomplete
See the global flags page for global options not listed here.
@@ -1547,7 +1538,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2Output bash completion script for rclone.
-Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete bash
@@ -1565,7 +1556,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
Output fish completion script for rclone.
-Generates a fish autocompletion script for rclone.
This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete fish
@@ -1583,7 +1574,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
Output zsh completion script for rclone.
-Generates a zsh autocompletion script for rclone.
This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, e.g.
sudo rclone genautocomplete zsh
@@ -1601,7 +1592,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
Output markdown docs for rclone to the directory supplied.
-This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.
rclone gendocs output_directory [flags]
Produces a hashsum file for all the objects in the path.
-Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.
By default, the hash is requested from the remote. If the hash is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling any hash for any remote.
+For the MD5 and SHA1 algorithms there are also dedicated commands, md5sum and sha1sum.
This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).
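The output format described above (the standard md5sum/sha1sum style) can be sketched in Python; this is an illustrative model of the line format, not rclone's implementation, and the `hashsum_line` helper is hypothetical:

```python
import hashlib

def hashsum_line(data: bytes, path: str, algo: str = "md5") -> str:
    """One line in the md5sum/sha1sum style: "<hex digest>  <path>"
    (two spaces between digest and path, as the standard tools use)."""
    return f"{hashlib.new(algo, data).hexdigest()}  {path}"

# Hashing data "from stdin": in the md5sum convention the path is
# shown as "-", e.g. as with `echo -n hello | rclone hashsum MD5 -`.
print(hashsum_line(b"hello", "-"))
# 5d41402abc4b2a76b9719d911017c592  -
```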
Run without a hash to see the list of all supported hashes, e.g.
$ rclone hashsum
@@ -1626,6 +1618,7 @@ Supported hashes are:
* crc32
* sha256
* dropbox
+ * hidrive
* mailru
* quickxor
Then
@@ -1645,7 +1638,7 @@ Supported hashes are:Generate public link to file/folder.
-rclone link will create, retrieve or remove a public link to the given file or folder.
rclone link remote:path/to/file
rclone link remote:path/to/folder/
@@ -1666,9 +1659,9 @@ rclone link --expire 1d remote:path/to/file
List all the remotes in the config file.
-rclone listremotes lists all the available remotes from the config file.
-When uses with the -l flag it lists the types too.
+When used with the --long
flag it lists the types too.
rclone listremotes [flags]
-h, --help help for listremotes
@@ -1680,7 +1673,7 @@ rclone link --expire 1d remote:path/to/file
List directories and objects in remote:path formatted for parsing.
-List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix.
Eg
$ rclone lsf swift:bucket
@@ -1689,7 +1682,7 @@ canole
diwogej7
ferejej3gux/
fubuwic
-Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:
+Use the --format
option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:
p - path
s - size
t - modification time
@@ -1698,8 +1691,9 @@ i - ID of object
o - Original ID of underlying object
m - MimeType of object if known
e - encrypted name
-T - tier of storage if known, e.g. "Hot" or "Cool"
-So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last.
T - tier of storage if known, e.g. "Hot" or "Cool"
+M - Metadata of object in JSON blob format, eg {"key":"value"}
+So if you wanted the path, size and modification time, you would use --format "pst"
, or maybe --format "tsp"
to put the path last.
Eg
$ rclone lsf --format "tsp" swift:bucket
2016-06-25 18:55:41;60295;bevajer5jef
@@ -1707,7 +1701,7 @@ T - tier of storage if known, e.g. "Hot" or "Cool"
-If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type.
+If you specify "h" in the format you will get the MD5 hash by default, use the --hash
flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type.
For example, to emulate the md5sum command you can use
rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
Eg
@@ -1718,7 +1712,7 @@ cd65ac234e6fea5925974a51cdd865cc canole 8fd37c3810dd660778137ac3a66cc06d fubuwic 99713e14a4c4ff553acaf1930fad985b gixacuh7ku(Though "rclone md5sum ." is an easier way of typing this.)
-By default the separator is ";" this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy.
+By default the separator is ";" this can be changed with the --separator
flag. Note that separators aren't escaped in the path so putting it last is a good strategy.
Eg
$ rclone lsf --separator "," --format "tshp" swift:bucket
2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
@@ -1732,7 +1726,7 @@ cd65ac234e6fea5925974a51cdd865cc canole
test.log,22355
test.sh,449
"this file contains a comma, in the file name.txt",6
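Because the CSV output quotes fields containing the separator, it can be parsed with any standard CSV reader. A minimal sketch using Python's csv module, with the sample output above pasted in as a literal (a real run would capture `rclone lsf --csv --format ps remote:path` from stdout):

```python
import csv
import io

# Sample output in the style of `rclone lsf --csv --format ps remote:path`;
# the file name containing a comma arrives quoted, as shown above.
lsf_output = """\
test.log,22355
test.sh,449
"this file contains a comma, in the file name.txt",6
"""

for path, size in csv.reader(io.StringIO(lsf_output)):
    print(f"{path!r} is {int(size)} bytes")
```

The csv module undoes the quoting, so the comma inside the third file name does not split the field.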
-Note that the --absolute parameter is useful for making lists of files to pass to an rclone copy with the --files-from-raw flag.
+Note that the --absolute
parameter is useful for making lists of files to pass to an rclone copy with the --files-from-raw
flag.
For example, to find all the files modified within one day and copy those only (without traversing the whole directory structure):
rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
rclone copy --files-from-raw new_files /path/to/local remote:path
@@ -1768,18 +1762,37 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
List directories and objects in the path in JSON format.
-List directories and objects in the path in JSON format.
The output is an array of Items, where each Item looks like this
-{ "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "ID": "y2djkhiujf83u33", "OrigID": "UYOJVTUW00Q1RzTDA", "IsBucket" : false, "IsDir" : false, "MimeType" : "application/octet-stream", "ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338", "Path" : "full/path/goes/here/file.txt", "Size" : 6, "Tier" : "hot", }
-If --hash is not specified the Hashes property won't be emitted. The types of hash can be specified with the --hash-type parameter (which may be repeated). If --hash-type is set then it implies --hash.
-If --no-modtime is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra request (e.g. s3, swift).
-If --no-mimetype is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra request (e.g. s3, swift).
-If --encrypted is not specified the Encrypted won't be emitted.
-If --dirs-only is not specified files in addition to directories are returned
-If --files-only is not specified directories in addition to the files will be returned.
-if --stat is set then a single JSON blob will be returned about the item pointed to. This will return an error if the item isn't found. However on bucket based backends (like s3, gcs, b2, azureblob etc) if the item isn't found it will return an empty directory as it isn't possible to tell empty directories from missing directories there.
-The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name.
+{
+ "Hashes" : {
+ "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
+ "MD5" : "b1946ac92492d2347c6235b4d2611184",
+ "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
+ },
+ "ID": "y2djkhiujf83u33",
+ "OrigID": "UYOJVTUW00Q1RzTDA",
+ "IsBucket" : false,
+ "IsDir" : false,
+ "MimeType" : "application/octet-stream",
+ "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
+ "Name" : "file.txt",
+ "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
+ "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
+ "Path" : "full/path/goes/here/file.txt",
+ "Size" : 6,
+ "Tier" : "hot",
+}
+If --hash
is not specified the Hashes property won't be emitted. The types of hash can be specified with the --hash-type
parameter (which may be repeated). If --hash-type
is set then it implies --hash
.
If --no-modtime
is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra request (e.g. s3, swift).
If --no-mimetype
is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra request (e.g. s3, swift).
If --encrypted
is not specified the Encrypted won't be emitted.
If --dirs-only
is not specified files in addition to directories are returned.
If --files-only
is not specified directories in addition to the files will be returned.
If --metadata
is set then an additional Metadata key will be returned. This will have metadata in rclone standard format as a JSON object.
if --stat
is set then a single JSON blob will be returned about the item pointed to. This will return an error if the item isn't found. However on bucket based backends (like s3, gcs, b2, azureblob etc) if the item isn't found it will return an empty directory as it isn't possible to tell empty directories from missing directories there.
The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive
the Path will always be the same as Name.
If the directory is a bucket in a bucket-based backend, then "IsBucket" will be set to true. This key won't be present unless it is "true".
The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision that the remote can hold the times, so if times are accurate to the nearest millisecond (e.g. Google Drive) then 3 digits will always be shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav, etc.) no digits will be shown ("2017-05-31T16:15:57+01:00").
The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one to a line.
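Processing the whole output as one JSON blob can be sketched as follows; the array literal is a trimmed, hypothetical sample based on the Item example above, standing in for captured `rclone lsjson remote:path` output:

```python
import json

# Trimmed lsjson-style output: one Item per line inside a JSON array.
output = """[
  {"IsDir": false, "MimeType": "application/octet-stream",
   "ModTime": "2017-05-31T16:15:57.034468261+01:00",
   "Name": "file.txt", "Path": "full/path/goes/here/file.txt", "Size": 6},
  {"IsDir": true, "ModTime": "2017-05-31T16:15:57+01:00",
   "Name": "dir", "Path": "dir", "Size": -1}
]"""

items = json.loads(output)            # parse the whole output at once
files = [i["Path"] for i in items if not i["IsDir"]]
total = sum(i["Size"] for i in items if not i["IsDir"])
print(files, total)
```

Alternatively, line-by-line processing works because each Item is written on its own line as it is produced.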
@@ -1799,7 +1812,7 @@ rclone copy --files-from-raw new_files /path/to/local remote:pathrclone lsjson remote:path [flags]
--dirs-only Show only directories in the listing
- -M, --encrypted Show the encrypted names
+ --encrypted Show the encrypted names
--files-only Show only files in the listing
--hash Include hashes in the output (may take longer)
--hash-type stringArray Show only this hash type (may be repeated)
@@ -1816,7 +1829,7 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
Mount the remote as file system on a mountpoint.
-rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
First set up your remote using rclone config
. Check it works with rclone ls
etc.
On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode. Mount runs in foreground mode by default. Use the --daemon
flag to force background mode. On Windows you can run mount in foreground only, the flag is ignored.
The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the rclone about command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not support the about feature at all, then 1 PiB is set as both the total and the free size.
To run rclone mount on Windows, you will need to download and install WinFsp.
-WinFsp is an open-source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which rclone uses combination with cgofuse. Both of these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone mount for Windows.
+WinFsp is an open-source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which rclone uses in combination with cgofuse. Both of these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone mount for Windows.
Unlike other operating systems, Microsoft Windows provides a different filesystem type for network and fixed drives. It optimises access on the assumption fixed disk drives are fast and reliable, while network drives have relatively high latency and less reliability. Some settings can also be differentiated between the two types, for example that Windows Explorer should just display icons and not create preview thumbnails for image and video files on network drives.
In most cases, rclone will mount the remote as a normal, fixed disk drive by default. However, you can also choose to mount it as a remote network drive, often described as a network share. If you mount an rclone remote using the default, fixed drive mode and experience unexpected program errors, freezes or other issues, consider mounting as a network drive instead.
@@ -1873,7 +1886,7 @@ rclone mount remote:path/to/files * --volname \\cloud\remoteDrives created as Administrator are not visible to other accounts, not even an account that was elevated to Administrator with the User Account Control (UAC) feature. A result of this is that if you mount to a drive letter from a Command Prompt run as Administrator, and then try to access the same drive from Windows Explorer (which does not run as Administrator), you will not be able to see the mounted drive.
If you don't need to access the drive from applications running with administrative privileges, the easiest way around this is to always create the mount from a non-elevated command prompt.
To make mapped drives available to the user account that created them regardless if elevated or not, there is a special Windows setting called linked connections that can be enabled.
-It is also possible to make a drive mount available to everyone on the system, by running the process creating it as the built-in SYSTEM account. There are several ways to do this: One is to use the command-line utility PsExec, from Microsoft's Sysinternals suite, which has option -s
to start processes as the SYSTEM account. Another alternative is to run the mount command from a Windows Scheduled Task, or a Windows Service, configured to run as the SYSTEM account. A third alternative is to use the WinFsp.Launcher infrastructure). Note that when running rclone as another user, it will not use the configuration file from your profile unless you tell it to with the --config
option. Read more in the install documentation.
It is also possible to make a drive mount available to everyone on the system, by running the process creating it as the built-in SYSTEM account. There are several ways to do this: One is to use the command-line utility PsExec, from Microsoft's Sysinternals suite, which has option -s
to start processes as the SYSTEM account. Another alternative is to run the mount command from a Windows Scheduled Task, or a Windows Service, configured to run as the SYSTEM account. A third alternative is to use the WinFsp.Launcher infrastructure). Note that when running rclone as another user, it will not use the configuration file from your profile unless you tell it to with the --config
option. Read more in the install documentation.
Note that mapping to a directory path, instead of a drive letter, does not suffer from the same limitations.
Without the use of --vfs-cache-mode
this can only write files sequentially, and it can only seek when reading. This means that many applications won't work with their files on an rclone mount without --vfs-cache-mode writes
or --vfs-cache-mode full
. See the VFS File Caching section for more info.
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
@@ -1999,6 +2012,19 @@ WantedBy=multi-user.targetWhen reading a file rclone will read --buffer-size
plus --vfs-read-ahead
bytes ahead. The --buffer-size
is buffered in memory whereas the --vfs-read-ahead
is buffered on disk.
When using this mode it is recommended that --buffer-size
is not set too large and --vfs-read-ahead
is set large if required.
IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.
+Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from the size, modification time and hash,
+where available on an object.
+On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).
+For example hash
is slow with the local
and sftp
backends as they have to read the entire file and hash it, and modtime
is slow with the s3
, swift
, ftp
and qingstor
backends because they need to do an extra API call to fetch it.
If you use the --vfs-fast-fingerprint
flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.
If you are running a vfs cache over local
, s3
or swift
backends then using this flag is recommended.
Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.
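The trade-off above can be sketched as follows; this is an illustrative model of the fingerprinting idea, not rclone's internal fingerprint format, and the `fingerprint` function is hypothetical:

```python
def fingerprint(size, modtime=None, hashsum=None, fast=False):
    """Combine object attributes into a change-detection string.
    With fast=True the slow attributes (which may cost an extra API
    call or a full read on some backends) are left out."""
    parts = [str(size)]
    if not fast:
        if modtime is not None:
            parts.append(modtime)
        if hashsum is not None:
            parts.append(hashsum)
    return ",".join(parts)

full = fingerprint(6, "2017-05-31T16:15:57Z", "b1946ac9...")
fast = fingerprint(6, "2017-05-31T16:15:57Z", "b1946ac9...", fast=True)
# The two modes produce different strings, which is why toggling the
# flag can invalidate existing cache entries.
print(full != fast)
```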
When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.
These flags control the chunking:
@@ -2013,20 +2039,23 @@ WantedBy=multi-user.target--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
---read-only Mount read-only.
+--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
-When using VFS write caching (--vfs-cache-mode
with value writes or full), the global flag --transfers
can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers
have no effect on mount).
When using VFS write caching (--vfs-cache-mode
with value writes or full), the global flag --transfers
can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers
has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
-The --vfs-case-insensitive
mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
-Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
+The --vfs-case-insensitive
VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.
+Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
+This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.
+--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
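The scan that --vfs-used-is-size performs can be sketched by the local-filesystem equivalent: walk the whole tree and sum the file sizes (the real flag does the same over the remote, ignoring filters, at the cost of many API calls):

```python
import os
import tempfile

def used_bytes(root: str) -> int:
    """Walk a directory tree and sum file sizes -- the same idea as the
    rclone size-style scan described above, shown on a local path."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total

with tempfile.TemporaryDirectory() as root:
    with open(os.path.join(root, "a.bin"), "wb") as f:
        f.write(b"\0" * 1024)
    print(used_bytes(root))  # 1024
```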
Move file or directory from source to dest.
-If source:path is a file or directory then it moves it to a file or directory named dest:path.
-This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.
+This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.
So
rclone moveto src dst
where src and dst are rclone paths, either remote:path or /path/to/local or C:.
@@ -2107,10 +2138,10 @@ if src is directoryExplore a remote with a text based user interface.
-This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".
To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.
-Here are the keys - press '?' to toggle the help on and off
+You can interact with the user interface using key presses, press '?' to toggle the help on and off. The supported keys are:
↑,↓ or k,j to Move
→,l to enter
←,h to return
@@ -2120,13 +2151,28 @@ if src is directory
u toggle human-readable format
n,s,C,A sort by name,size,count,average size
d delete file/directory
+ v select file/directory
+ V enter visual select mode
+ D delete selected files/directories
y copy current path to clipboard
Y display current path
- ^L refresh screen
+ ^L refresh screen (fix screen corruption)
? to toggle help on and off
- q/ESC/c-C to quit
+ q/ESC/^c to quit
+Listed files/directories may be prefixed by a one-character flag, some of them combined with a description in brackets at the end of the line. These flags have the following meaning:
+e means this is an empty directory, i.e. contains no files (but
+ may contain empty subdirectories)
+~ means this is a directory where some of the files (possibly in
+ subdirectories) have unknown size, and therefore the directory
+ size may be underestimated (and average size inaccurate, as it
+ is average of the files with known sizes).
+. means an error occurred while reading a subdirectory, and
+ therefore the directory size may be underestimated (and average
+ size inaccurate)
+! means an error occurred while reading this directory
This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment but is useful as it stands.
-Note that it might take some time to delete big files/folders. The UI won't respond in the meantime since the deletion is done synchronously.
+Note that it might take some time to delete big files/directories. The UI won't respond in the meantime since the deletion is done synchronously.
+For a non-interactive listing of the remote, see the tree command. To just get the total size of the remote you can also use the size command.
rclone ncdu remote:path [flags]
-h, --help help for ncdu
@@ -2137,11 +2183,11 @@ if src is directory
Obscure password for use in the rclone config file.
-In the rclone config file, human-readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" - namely someone seeing a password in the rclone config file by accident.
Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 character hex token.
This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline.
-echo "secretpassword" | rclone obscure -
+echo "secretpassword" | rclone obscure -
If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself.
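The stdin convention above can be illustrated with a short sketch (the function name and the isatty check are assumptions for illustration, not rclone's code):

```python
import sys

def password_from_arg(arg: str) -> str:
    # Sketch of the "-" convention described above: take the first line of
    # stdin without its trailing newline; if there is no piped data
    # (stdin is a terminal), fall back to obscuring the literal argument.
    if arg == "-" and not sys.stdin.isatty():
        line = sys.stdin.readline()
        return line[:-1] if line.endswith("\n") else line
    return arg
```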
If you want to encrypt the config file then please use config file encryption - see rclone config for more info.
rclone obscure password [flags]
@@ -2154,24 +2200,24 @@ if src is directory
Run a command against a running rclone.
-This runs a command against a running rclone. Use the --url flag to specify an non default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port"
-A username and password can be passed in with --user and --pass.
-Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, --user, --pass.
+This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port"
+A username and password can be passed in with --user and --pass.
+Note that --rc-addr, --rc-user, --rc-pass will also be read for --url, --user, --pass.
Arguments should be passed in as parameter=value.
The result will be returned as a JSON object by default.
-The --json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values.
-The -o/--opt option can be used to set a key "opt" with key, value options in the form "-o key=value" or "-o key". It can be repeated as many times as required. This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings.
+The --json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values.
+The -o/--opt option can be used to set a key "opt" with key, value options in the form -o key=value or -o key. It can be repeated as many times as required. This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings.
-o key=value -o key2
Will place this in the "opt" value
{"key":"value", "key2":""}
-The -a/--arg option can be used to set strings in the "arg" value. It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings.
+The -a/--arg option can be used to set strings in the "arg" value. It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings.
-a value -a value2
Will place this in the "arg" value
["value", "value2"]
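How the repeated -o and -a flags end up in the request body can be sketched like this (a hypothetical helper for illustration, not rclone's source):

```python
def build_rc_body(opts, args):
    # Map "-o key=value" / "-o key" onto the "opt" dictionary and
    # "-a value" onto the "arg" list, as described above.
    opt = {}
    for o in opts:
        key, _, value = o.partition("=")
        opt[key] = value  # "-o key" alone yields an empty string value
    return {"opt": opt, "arg": list(args)}

body = build_rc_body(["key=value", "key2"], ["value", "value2"])
# body["opt"] == {"key": "value", "key2": ""}
# body["arg"] == ["value", "value2"]
```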
-Use --loopback to connect to the rclone instance running "rclone rc". This is very useful for testing commands without having to run an rclone rc server, e.g.:
+Use --loopback to connect to the rclone instance running rclone rc. This is very useful for testing commands without having to run an rclone rc server, e.g.:
rclone rc --loopback operations/about fs=/
-Use "rclone rc" to see a list of all possible commands.
+Use rclone rc to see a list of all possible commands.
rclone rc commands parameter [flags]
-a, --arg stringArray Argument placed in the "arg" array
@@ -2190,14 +2236,14 @@ if src is directory
rclone rcat
Copies standard input to file on remote.
-Synopsis
+Synopsis
rclone rcat reads from standard input (stdin) and copies it to a single remote file.
echo "hello world" | rclone rcat remote:path/to/file
ffmpeg - | rclone rcat remote:path/to/file
If the remote file already exists, it will be overwritten.
rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote, please see there. Generally speaking, setting this cutoff too high will decrease your performance.
-Use the |--size| flag to preallocate the file in advance at the remote end and actually stream it, even if remote backend doesn't support streaming.
-|--size| should be the exact size of the input stream in bytes. If the size of the stream is different in length to the |--size| passed in then the transfer will likely fail.
+Use the --size flag to preallocate the file in advance at the remote end and actually stream it, even if remote backend doesn't support streaming.
+--size should be the exact size of the input stream in bytes. If the size of the stream is different in length to the --size passed in then the transfer will likely fail.
Note that the upload can also not be retried because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching locally and then rclone move it to the destination.
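The cutoff behaviour above can be sketched as follows (illustrative only; rclone's real logic lives in the backends):

```python
import io

def upload_mode(stream, cutoff: int):
    # Buffer up to cutoff bytes in RAM; if the stream ends before the
    # cutoff is reached, a single small-file upload is possible,
    # otherwise fall back to the streaming/chunked upload path.
    head = stream.read(cutoff + 1)
    return "single-request" if len(head) <= cutoff else "streaming"
```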
rclone rcat remote:path [flags]
Options
@@ -2210,7 +2256,7 @@ ffmpeg - | rclone rcat remote:path/to/file
Run rclone listening to remote control commands only.
-This runs rclone so that it only listens to remote control commands.
This is useful if you are controlling rclone via the rc API.
If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run.
@@ -2225,11 +2271,11 @@ ffmpeg - | rclone rcat remote:path/to/file
Remove empty directories under the path.
-This recursively removes any empty directories (including directories that only contain empty directories), that it finds under the path. The root path itself will also be removed if it is empty, unless you supply the --leave-root flag.
-Use command rmdir to delete just the empty directory given by path, not recurse.
-This is useful for tidying up remotes that rclone has left a lot of empty directories in. For example the delete command will delete files but leave the directory structure (unless used with option --rmdirs).
-To delete a path and any objects in it, use purge command.
Use command rmdir to delete just the empty directory given by path, not recurse.
+This is useful for tidying up remotes that rclone has left a lot of empty directories in. For example the delete command will delete files but leave the directory structure (unless used with option --rmdirs).
To delete a path and any objects in it, use purge command.
rclone rmdirs remote:path [flags]
-h, --help help for rmdirs
@@ -2241,7 +2287,7 @@ ffmpeg - | rclone rcat remote:path/to/file
Update the rclone binary.
-This command downloads the latest release of rclone and replaces the currently running binary. The download is verified with a hashsum and cryptographically signed signature.
If used without flags (or with implied --stable flag), this command will install the latest stable release. However, some issues may be fixed (or features added) only in the latest beta release. In such cases you should run the command with the --beta flag, i.e. rclone selfupdate --beta. You can check in advance what version would be installed by adding the --check flag, then repeat the command without it when you are satisfied.
Sometimes the rclone team may recommend you a concrete beta or stable rclone release to troubleshoot your issue or add a bleeding edge feature. The --version VER flag, if given, will update to the concrete version instead of the latest one. If you omit micro version from VER (for example 1.53), the latest matching micro version will be used.
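The VER matching rule can be sketched like this (hypothetical helper; the version lists are made up for illustration):

```python
def resolve_version(requested: str, available: list) -> str:
    # If the micro version is omitted (e.g. "1.53"), pick the newest
    # available release matching that major.minor prefix.
    matches = [v for v in available
               if v == requested or v.startswith(requested + ".")]
    return max(matches, key=lambda v: [int(p) for p in v.split(".")])
```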
Serve a remote over a protocol.
-rclone serve is used to serve a remote over a given protocol. This command requires the use of a subcommand to specify the protocol, e.g.
+Serve a remote over a given protocol. Requires the use of a subcommand to specify the protocol, e.g.
rclone serve http remote:
Each subcommand has its own options which you can see in their help.
rclone serve <protocol> [opts] <remote> [flags]
@@ -2283,12 +2329,12 @@ ffmpeg - | rclone rcat remote:path/to/file
Serve remote:path over DLNA
-rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.
+Run a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.
Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.
Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs.
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
-Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
+Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
@@ -2362,6 +2408,19 @@ ffmpeg - | rclone rcat remote:path/to/file
When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.
IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.
+Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:
+size
+modification time
+hash
+where available on an object.
+On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).
+For example hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.
If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.
If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.
Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.
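The idea can be sketched as follows (attribute names taken from the text above; the slow-attribute set per backend is an assumption for illustration):

```python
def fingerprint(attrs: dict, slow: set, fast: bool) -> tuple:
    # Build a fingerprint from size/modtime/hash, leaving out the
    # backend's slow-to-read attributes when fast mode is enabled.
    keys = ("size", "modtime", "hash")
    return tuple(attrs[k] for k in keys
                 if attrs.get(k) is not None and not (fast and k in slow))

# e.g. an sftp-like backend where hash needs a full file read:
fp = fingerprint({"size": 123, "modtime": 1700000000, "hash": "ab12"},
                 slow={"hash"}, fast=True)
# fp == (123, 1700000000)
```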
When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.
These flags control the chunking:
@@ -2376,20 +2435,23 @@ ffmpeg - | rclone rcat remote:path/to/file
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
---read-only Mount read-only.
+--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
-When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers have no effect on mount).
+When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
-The --vfs-case-insensitive mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
-The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
-Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
+The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
+The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.
+Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
+This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.
+--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
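What the scan amounts to can be sketched on a local directory (illustrative only; the real scan walks the remote through the backend API):

```python
import os

def used_space(root: str) -> int:
    # Walk the whole tree and sum file sizes, like rclone size would
    # (note: filters are ignored, matching the warning above).
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total
```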
Serve any remote on docker's volume plugin API.
-This command implements the Docker volume plugin API allowing docker to use rclone as a data storage mechanism for various cloud providers. rclone provides docker volume plugin based on it.
To create a docker plugin, one must create a Unix or TCP socket that Docker will look for when you use the plugin and then it listens for commands from docker daemon and runs the corresponding code when necessary. Docker plugins can run as a managed plugin under control of the docker daemon or as an independent native service. For testing, you can just run it directly from the command line, for example:
sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
@@ -2442,7 +2506,7 @@ ffmpeg - | rclone rcat remote:path/to/file
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
-Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
+Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
@@ -2505,6 +2569,19 @@ ffmpeg - | rclone rcat remote:path/to/file
When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.
IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.
+Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:
+size
+modification time
+hash
+where available on an object.
+On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).
+For example hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.
If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.
If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.
Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.
When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.
These flags control the chunking:
@@ -2519,20 +2596,23 @@ ffmpeg - | rclone rcat remote:path/to/file
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
---read-only Mount read-only.
+--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
-When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers have no effect on mount).
+When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
-The --vfs-case-insensitive mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
-The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
-Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
+The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
+The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.
+Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
+This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.
+--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
Serve remote:path over FTP.
-rclone serve ftp implements a basic ftp server to serve the remote over FTP protocol. This can be viewed with a ftp client or you can make a remote of type ftp to read and write it.
+Run a basic FTP server to serve a remote over FTP protocol. This can be viewed with an FTP client or you can make a remote of type FTP to read and write it.
Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
@@ -2606,7 +2688,7 @@ ffmpeg - | rclone rcat remote:path/to/file
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
-Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
+Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
@@ -2669,6 +2751,19 @@ ffmpeg - | rclone rcat remote:path/to/file
When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.
IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.
+Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from:
+size
+modification time
+hash
+where available on an object.
+On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).
For example hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.
If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.
If you are running a VFS cache over local, s3 or swift backends then using this flag is recommended.
Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.
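For instance, enabling fast fingerprinting on a full VFS cache over an S3 remote (bucket name and mount point are placeholders) could look like:

```shell
# Skip slow fingerprint attributes (e.g. modtime on s3)
# for faster opening of cached files.
rclone mount s3:bucket /mnt/s3 \
  --vfs-cache-mode full \
  --vfs-fast-fingerprint
```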
When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.
These flags control the chunking:
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Only allow read-only access.
Sometimes rclone receives reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
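For example, to allow up to 8 parallel uploads of modified files from the write cache (remote and mount point are placeholders):

```shell
# --transfers controls how many modified files are uploaded
# from the VFS write cache in parallel.
rclone mount remote:path /mnt/rclone \
  --vfs-cache-mode writes \
  --transfers 8
```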
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
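As a sketch, forcing case-insensitive fixups on Linux, where the default is "false" (remote and mount point are placeholders):

```shell
# Supplied without a value, the flag is treated as "true".
rclone mount remote:path /mnt/rclone --vfs-case-insensitive
```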
This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
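For example, to report a fixed 256G total size for the mounted file system (remote and mount point are placeholders):

```shell
# Override the total size reported to df and similar tools.
rclone mount remote:path /mnt/rclone --vfs-disk-space-total-size 256G
```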
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
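Given that warning, a sketch pairing the flag with a full cache to limit repeated API calls (bucket and mount point are placeholders):

```shell
# Compute used space by scanning the remote; use caching
# to avoid rescanning on every stat.
rclone mount s3:bucket /mnt/s3 --vfs-cache-mode full --vfs-used-is-size
```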
Serve the remote over HTTP.
Run a basic web server to serve a remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.
You can use the filter flags (e.g. --include, --exclude) to control what is served.
The server will log errors. Use -v to see access logs.
--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
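Putting those flags together, a server intended to sit behind a reverse proxy might be started like this (remote:path is a placeholder):

```shell
# Listen on all interfaces, port 8080, serving under /rclone/.
rclone serve http remote:path --addr :8080 --baseurl /rclone
```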
By default this will serve over HTTP. If you want you can serve over HTTPS. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
--template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to serve pages:
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard Apache format and supports MD5, SHA1 and bcrypt for basic authentication. Bcrypt is recommended.
To create an htpasswd file:
touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser
The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
Use --salt to change the password hashing salt from the default.
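Combining the authentication options above, a sketch of an authenticated server using the htpasswd file created earlier (remote:path and the realm string are placeholders):

```shell
# Basic auth from an htpasswd file, with a custom realm.
rclone serve http remote:path \
  --htpasswd /path/to/htpasswd \
  --realm "rclone"
```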
This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.
IMPORTANT: not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.
Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from the size, the modification time and the hash, where available on an object.
On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).
For example hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.
If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.
If you are running a VFS cache over local, s3 or swift backends then using this flag is recommended.
Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.
When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.
These flags control the chunking:
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
--read-only Only allow read-only access.
Sometimes rclone receives reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.
Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
Serve the remote for restic's REST API.
Run a basic web server that serves a remote over restic's REST backend API. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.
Restic is a command-line program for doing backups.
The server will log errors. Use -v to see access logs.
--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
First set up a remote for your chosen cloud provider.
Once you have set up the remote, check it is working with, for example "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions.
Now start the rclone restic server
rclone serve restic -v remote:backup
Where you can replace "backup" in the above by whatever path in the remote you wish to use.
By default this will serve on "localhost:8080". You can change this with the --addr flag.
You might wish to start this server on boot.
Adding --cache-objects=false will cause rclone to stop caching objects returned from the List call. Caching is normally desirable as it speeds up downloading objects, saves transactions and uses very little memory.
Now you can follow the restic instructions on setting up restic.
Note that you will need restic 0.8.2 or later to interoperate with rclone.
The --private-repos flag can be used to limit users to repositories starting with a path of /&lt;username&gt;/.
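As a sketch, an authenticated restic server where each user is confined to their own repository path (the remote path and htpasswd location are placeholders):

```shell
# Each authenticated user may only access /<username>/... repositories.
rclone serve restic remote:backup \
  --addr :8080 \
  --htpasswd /path/to/htpasswd \
  --private-repos
```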
Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
--template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to serve pages:
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard Apache format and supports MD5, SHA1 and bcrypt for basic authentication. Bcrypt is recommended.
To create an htpasswd file:
touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser
The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
By default this will serve over HTTP. If you want you can serve over HTTPS. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
rclone serve restic remote:path [flags]
--addr string IPaddress:Port or :Port to bind server to (default "localhost:8080")
Serve the remote over SFTP.
Run an SFTP server to serve a remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.
You can use the filter flags (e.g. --include, --exclude) to control what is served.
The server will log errors. Use -v to see access logs.
--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
You must provide some means of authentication, either with --user/--pass, an authorized keys file (specify location with --authorized-keys - the default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no authentication when logging in.
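For instance, the simplest of those authentication options, a single username and password, might be set up like this (remote:path and the credentials are placeholders):

```shell
# Serve over SFTP on the default localhost:2022 with one user.
rclone serve sftp remote:path --user sftpuser --pass secret
```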
Note that this also implements a small number of shell commands so that it can provide md5sum/sha1sum/df information for the rclone sftp backend. This means that it can support SHA1SUMs, MD5SUMs and the about command when paired with the rclone sftp backend.
If you don't supply a host --key then rclone will generate rsa, ecdsa and ed25519 variants, and cache them for later use in rclone's cache directory (see rclone help flags cache-dir) in the "serve-sftp" directory.
By default the server binds to localhost:2022 - if you want it to be reachable externally then supply --addr :2022 for example.
Note that the default of --vfs-cache-mode off is fine for the rclone sftp backend, but it may not be with other SFTP clients.
If --stdio is specified, rclone will serve SFTP over stdio, which can be used with sshd via ~/.ssh/authorized_keys, for example:
restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...
On the client you need to set --transfers 1 when using --stdio. Otherwise multiple instances of the rclone server are started by OpenSSH which can lead to "corrupted on transfer" errors. This is the case because the client chooses indiscriminately which server to send commands to while the servers all have different views of the state of the filing system.
The "restrict" in authorized_keys prevents SHA1SUMs and MD5SUMs from being used. Omitting "restrict" and using --sftp-path-override to enable checksumming is possible but less secure and you could use the SFTP server provided by OpenSSH in this case.
This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
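As an illustration, both flags might be set together on a serve command, keeping the polling interval smaller than the cache time (the values here are arbitrary examples, not recommendations):

```
rclone serve sftp remote:path --dir-cache-time 30m --poll-interval 30s
```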
@@ -3270,6 +3388,19 @@ htpasswd -B htpasswd anotherUserWhen reading a file rclone will read --buffer-size
plus --vfs-read-ahead
bytes ahead. The --buffer-size
is buffered in memory whereas the --vfs-read-ahead
is buffered on disk.
When using this mode it is recommended that --buffer-size
is not set too large and --vfs-read-ahead
is set large if required.
IMPORTANT: not all file systems support sparse files. In particular, FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files, and it will log an ERROR message if one is detected.
+Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from the size, the modification time and the hash, where available on an object.
+On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).
+For example hash
is slow with the local
and sftp
backends as they have to read the entire file and hash it, and modtime
is slow with the s3
, swift
, ftp
and qingstor
backends because they need to do an extra API call to fetch it.
If you use the --vfs-fast-fingerprint
flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.
If you are running a vfs cache over local
, s3
or swift
backends then using this flag is recommended.
Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.
When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.
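The shape of such chunked reading can be sketched as a doubling schedule, which is how the --vfs-read-chunk-size and --vfs-read-chunk-size-limit flags interact: the chunk size grows per request up to the limit. The sketch below is illustrative only, not rclone's source:

```shell
# Illustrative model: request sizes for sequential reads of a file,
# starting at an initial chunk size and doubling per request, capped at
# a limit (as with --vfs-read-chunk-size / --vfs-read-chunk-size-limit).
chunk_plan() {
  size=$1 limit=$2 filesize=$3 offset=0 out=""
  while [ "$offset" -lt "$filesize" ]; do
    out="$out $size"                 # record this request's chunk size
    offset=$((offset + size))        # advance the read position
    size=$((size * 2))               # double the next chunk...
    if [ "$size" -gt "$limit" ]; then size=$limit; fi   # ...up to the limit
  done
  echo "${out# }"
}

chunk_plan 64 256 1000   # chunk sizes requested for a 1000 byte file
```

Reading only the chunks actually needed is what reduces download quota, at the cost of more requests.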
These flags control the chunking:
@@ -3284,20 +3415,23 @@ htpasswd -B htpasswd anotherUser--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
---read-only Mount read-only.
+--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
-When using VFS write caching (--vfs-cache-mode
with value writes or full), the global flag --transfers
can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers
have no effect on mount).
When using VFS write caching (--vfs-cache-mode
with value writes or full), the global flag --transfers
can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers
has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
-The --vfs-case-insensitive
mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
-Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
+The --vfs-case-insensitive
VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.
+Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
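The "fixup" described above can be sketched as follows. This is a hypothetical model, not rclone source: given a requested name and the names that already exist, prefer an exact match, then a match differing only by case, else keep the requested name unchanged.

```shell
# Hypothetical sketch of the case-insensitive name "fixup":
# fixup WANTED EXISTING... prints the name that would be used.
fixup() {
  want=$1; shift
  for n in "$@"; do
    if [ "$n" = "$want" ]; then echo "$n"; return; fi   # exact match wins
  done
  lower_want=$(printf '%s' "$want" | tr '[:upper:]' '[:lower:]')
  for n in "$@"; do
    if [ "$(printf '%s' "$n" | tr '[:upper:]' '[:lower:]')" = "$lower_want" ]; then
      echo "$n"; return                 # differs only by case: use existing name
    fi
  done
  echo "$want"                          # no existing file: name passes through as-is
}

fixup readme.TXT README.txt notes.txt   # prints the existing README.txt
```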
+This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.
+--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
Some backends, most notably S3, do not report the number of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
Serve remote:path over webdav.
-rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client, through a web browser, or you can make a remote of type webdav to read and write it.
-Serve remote:path over WebDAV.
+Run a basic WebDAV server to serve a remote over HTTP via the WebDAV protocol. This can be viewed with a WebDAV client, through a web browser, or you can make a remote of type WebDAV to read and write it.
+This controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object.
-If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1".
-Use "rclone hashsum" to see the full list.
+If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1". Use the hashsum command to see the full list.
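The two ETag modes can be illustrated with a small sketch. This is for demonstration only and the exact encoding below is an assumption, not rclone's format: without the flag the ETag derives from ModTime and Size, with a named hash it derives from the file contents.

```shell
# Illustrative only: two ways an ETag might be derived for a file.
f=$(mktemp)
printf 'hello' > "$f"

size=$(wc -c < "$f")                     # object Size
mtime=$(stat -c %Y "$f")                 # object ModTime (seconds, GNU stat)
etag_default="$mtime-$size"              # assumed shape: ModTime+Size based

etag_md5=$(md5sum "$f" | cut -d' ' -f1)  # hash based ETag (MD5 of contents)
echo "$etag_md5"

rm -f "$f"
```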
Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
-If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
---server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
---max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
---baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
---template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to server pages:
+Use --addr
to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000
or --addr :8080
to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
If you set --addr
to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
--server-read-timeout
and --server-write-timeout
can be used to control the timeouts on the server. Note that this is the total time for a transfer.
--max-header-bytes
controls the maximum number of bytes the server will accept in the HTTP header.
--baseurl
controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone"
then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl
, so --baseurl "rclone"
, --baseurl "/rclone"
and --baseurl "/rclone/"
are all treated identically.
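The slash normalisation that makes those three forms equivalent can be sketched like this (an assumed model of the behaviour described above, not rclone source):

```shell
# Sketch: insert the leading and trailing "/" on a --baseurl value so
# "rclone", "/rclone" and "/rclone/" all normalise to "/rclone/".
normalize_baseurl() {
  b=$1
  b=${b#/}     # drop a leading slash if present
  b=${b%/}     # drop a trailing slash if present
  if [ -n "$b" ]; then printf '/%s/' "$b"; else printf '/'; fi
}

normalize_baseurl rclone   # prints /rclone/
```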
--template
allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to serve pages:
By default this will serve files without needing a login.
-You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
-Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
+You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user
and --pass
flags.
Use --htpasswd /path/to/htpasswd
to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
To create an htpasswd file:
touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser
The password file can be updated while rclone is running.
-Use --realm to set the authentication realm.
+Use --realm
to set the authentication realm.
By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
---cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
+By default this will serve over HTTP. If you want you can serve over HTTPS. You will need to supply the --cert
and --key
flags. If you wish to do client side certificate validation then you will need to supply --client-ca
also.
--cert
should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key
should be the PEM encoded private key and --client-ca
should be the PEM encoded client certificate authority certificate.
This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.
Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the mount will appear immediately or invalidate the cache.
Using the --dir-cache-time
flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
@@ -3545,6 +3680,19 @@ htpasswd -B htpasswd anotherUserWhen reading a file rclone will read --buffer-size
plus --vfs-read-ahead
bytes ahead. The --buffer-size
is buffered in memory whereas the --vfs-read-ahead
is buffered on disk.
When using this mode it is recommended that --buffer-size
is not set too large and --vfs-read-ahead
is set large if required.
IMPORTANT: not all file systems support sparse files. In particular, FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files, and it will log an ERROR message if one is detected.
+Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from the size, the modification time and the hash, where available on an object.
+On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object).
+For example hash
is slow with the local
and sftp
backends as they have to read the entire file and hash it, and modtime
is slow with the s3
, swift
, ftp
and qingstor
backends because they need to do an extra API call to fetch it.
If you use the --vfs-fast-fingerprint
flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.
If you are running a vfs cache over local
, s3
or swift
backends then using this flag is recommended.
Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.
When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.
These flags control the chunking:
@@ -3559,20 +3707,23 @@ htpasswd -B htpasswd anotherUser--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
---read-only Mount read-only.
+--read-only Only allow read-only access.
Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.
--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
-When using VFS write caching (--vfs-cache-mode
with value writes or full), the global flag --transfers
can be set to adjust the number of parallel uploads of modified files from cache (the related global flag --checkers
have no effect on mount).
When using VFS write caching (--vfs-cache-mode
with value writes or full), the global flag --transfers
can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers
has no effect on the VFS).
--transfers int Number of file transfers to run in parallel (default 4)
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.
-The --vfs-case-insensitive
mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as-is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
-Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
+The --vfs-case-insensitive
VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote.
+Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
+This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.
+--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
Some backends, most notably S3, do not report the number of bytes used. If you need this information to be available when running df
on the filesystem, then pass the flag --vfs-used-is-size
to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size
and compute the total used space itself.
WARNING. Contrary to rclone size
, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
Changes storage class/tier of objects in remote.
-rclone settier changes storage tier or class at remote if supported. Few cloud storage services provides different storage classes on objects, for example AWS S3 and Glacier, Azure Blob storage - Hot, Cool and Archive, Google Cloud Storage, Regional Storage, Nearline, Coldline etc.
Note that certain tier changes make objects unavailable for immediate access. For example, tiering to archive in Azure Blob Storage puts objects into a frozen state; the user can restore them by setting the tier back to Hot/Cool. Similarly, moving S3 objects to Glacier makes them inaccessible.
You can use it to tier single object
@@ -3674,7 +3827,7 @@ htpasswd -B htpasswd anotherUserRun a test command
-Rclone test is used to run test commands.
Select which test command you want with the subcommand, e.g.
rclone test memory remote:
@@ -3689,6 +3842,7 @@ htpasswd -B htpasswd anotherUser
Makes a histogram of file name characters.
-This command outputs JSON which shows the histogram of characters used in filenames in the remote:path specified.
The data doesn't contain any identifying information but is useful for the rclone developers when developing filename compression.
rclone test histogram [remote:path] [flags]
@@ -3718,7 +3872,7 @@ htpasswd -B htpasswd anotherUser
Discovers file name or other limitations for paths.
-rclone info discovers what filenames and upload methods are possible to write to the paths passed in and how long they can be. It can take some time. It will write test files into the remote:path passed in. It outputs a bit of go code for each one.
NB this can create undeletable files and other hazards - use with care
rclone test info [remote:path]+ [flags]
@@ -3736,36 +3890,57 @@ htpasswd -B htpasswd anotherUser
Make files with random contents of the size given
+rclone test makefile <size> [<file>]+ [flags]
+ --ascii Fill files with random ASCII printable bytes only
+ --chargen Fill files with an ASCII chargen pattern
+ -h, --help help for makefile
+ --pattern Fill files with a periodic pattern
+ --seed int Seed for the random number generator (0 for random) (default 1)
+ --sparse Make the files sparse (appear to be filled with ASCII 0x00)
+ --zero Fill files with ASCII 0x00
+See the global flags page for global options not listed here.
+Make a random file hierarchy in a directory
rclone test makefiles <dir> [flags]
- --files int Number of files to create (default 1000)
+Options
+ --ascii Fill files with random ASCII printable bytes only
+ --chargen Fill files with an ASCII chargen pattern
+ --files int Number of files to create (default 1000)
--files-per-directory int Average number of files per directory (default 10)
-h, --help help for makefiles
--max-file-size SizeSuffix Maximum size of files to create (default 100)
--max-name-length int Maximum size of file names (default 12)
--min-file-size SizeSuffix Minimum size of file to create
--min-name-length int Minimum size of file names (default 4)
- --seed int Seed for the random number generator (0 for random) (default 1)
+ --pattern Fill files with a periodic pattern
+ --seed int Seed for the random number generator (0 for random) (default 1)
+ --sparse Make the files sparse (appear to be filled with ASCII 0x00)
+ --zero Fill files with ASCII 0x00
See the global flags page for global options not listed here.
-Load all the objects at remote:path into memory and report memory stats.
rclone test memory remote:path [flags]
- -h, --help help for memory
See the global flags page for global options not listed here.
-Create new file or change file modification time.
-Set the modification time on file(s) as specified by remote:path to have the current time.
If remote:path does not exist then a zero sized file will be created, unless --no-create
or --recursive
is provided.
If --recursive
is used then recursively sets the modification time on all existing files that is found under the path. Filters are supported, and you can test with the --dry-run
or the --interactive
flag.
Note that value of --timestamp
is in UTC. If you want local time then add the --localtime
flag.
rclone touch remote:path [flags]
- -h, --help help for touch
--localtime Use localtime for timestamp, not UTC
-C, --no-create Do not create the file if it does not exist (implied with --recursive)
-R, --recursive Recursively touch all files
-t, --timestamp string Use specified time instead of the current time of day
See the global flags page for global options not listed here.
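For example, a sketch of touching a file with an explicit local timestamp (the path and time are placeholders, and the timestamp format shown is one the flag is expected to accept):

```
rclone touch remote:path/file.txt --timestamp 2006-01-02T15:04:05 --localtime
```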
-List the contents of the remote in a tree like fashion.
-rclone tree lists the contents of a remote in a similar way to the unix tree command.
For example
$ rclone tree remote:path
@@ -3803,10 +3978,11 @@ htpasswd -B htpasswd anotherUser
└── file5
1 directories, 5 files
-You can use any of the filtering options with the tree command (e.g. --include and --exclude). You can also use --fast-list.
-The tree command has many options for controlling the listing which are compatible with the tree command. Note that not all of them have short options as they conflict with rclone's short options.
+You can use any of the filtering options with the tree command (e.g. --include and --exclude). You can also use --fast-list.
The tree command has many options for controlling the listing which are compatible with the tree command, for example you can include file sizes with --size
. Note that not all of them have short options as they conflict with rclone's short options.
For a more interactive navigation of the remote see the ncdu command.
rclone tree remote:path [flags]
- -a, --all All files are listed (list . files too)
-C, --color Turn colorization on always
-d, --dirs-only List directories only
@@ -3828,7 +4004,7 @@ htpasswd -B htpasswd anotherUser
-U, --unsorted Leave files unsorted
--version Sort files alphanumerically by version
See the global flags page for global options not listed here.
-This can be used when scripting to make aged backups efficiently, e.g.
rclone sync -i remote:current-backup remote:previous-backup
rclone sync -i /path/to/files remote:current-backup
-Metadata is data about a file which isn't the contents of the file. Normally rclone only preserves the modification time and the content (MIME) type where possible.
+Rclone supports preserving all the available metadata on files (not directories) when using the --metadata
or -M
flag.
Exactly what metadata is supported and what that support means depends on the backend. Backends that support metadata have a metadata section in their docs and are listed in the features table (e.g. local, s3).
+Rclone only supports a one-time sync of metadata. This means that metadata will be synced from the source object to the destination object only when the source object has changed and needs to be re-uploaded. If the metadata subsequently changes on the source object without changing the object itself then it won't be synced to the destination object. This is in line with the way rclone syncs Content-Type
without the --metadata
flag.
Using --metadata
when syncing from local to local will preserve file attributes such as file mode, owner, extended attributes (not Windows).
Note that arbitrary metadata may be added to objects using the --metadata-set key=value
flag when the object is first uploaded. This flag can be repeated as many times as necessary.
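For instance, a sketch of uploading with arbitrary user metadata (the key names, values and paths are invented for the example):

```
rclone copy -M --metadata-set mykey=myvalue --metadata-set other-key=42 source:path dest:path
```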
Metadata is divided into two types: system metadata and user metadata.
+Metadata which the backend uses itself is called system metadata. For example on the local backend the system metadata uid
will store the user ID of the file when used on a unix based platform.
Arbitrary metadata is called user metadata and this can be set however is desired.
+When objects are copied from backend to backend, they will attempt to interpret system metadata if it is supplied. Metadata may change from being user metadata to system metadata as objects are copied between different backends. For example copying an object from s3 sets the content-type
metadata. In a backend which understands this (like azureblob
) this will become the Content-Type of the object. In a backend which doesn't understand this (like the local
backend) this will become user metadata. However should the local object be copied back to s3, the Content-Type will be set correctly.
Rclone implements a metadata framework which can read metadata from an object and write it to the object when (and only when) it is being uploaded.
+This metadata is stored as a dictionary with string keys and string values.
+There are some limits on the names of the keys (these may be clarified further in the future): keys may use a-z and 0-9, and contain ., - or _.
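A checker matching our reading of these key-name limits might look like the following sketch (the regex is an interpretation of the text above, not a rule taken from rclone's source):

```shell
# Hypothetical validator for metadata key names: lower case a-z and 0-9,
# possibly containing ".", "-" or "_".
valid_key() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9._-]+$'
}

valid_key content-type && echo ok   # a standard key name passes
```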
Each backend can provide system metadata that it understands. Some backends can also store arbitrary user metadata.
+Where possible the key names are standardized, so, for example, it is possible to copy object metadata from s3 to azureblob and the metadata will be translated appropriately.
+Some backends have limits on the size of the metadata and rclone will give errors on upload if they are exceeded.
+The goal of the implementation is to 1) preserve metadata if at all possible and 2) interpret metadata if at all possible.
+The consequence of 1 is that you can copy an S3 object to a local disk and then back to S3 losslessly. Likewise you can copy a local file with file attributes and xattrs from local disk to s3 and back again losslessly.
+The consequence of 2 is that you can copy an S3 object with metadata to Azureblob (say) and have the metadata appear on the Azureblob object also.
+Here is a table of standard system metadata which, if appropriate, a backend may implement.
key | description | example
--- | --- | ---
mode | File type and mode: octal, unix style | 0100664
uid | User ID of owner: decimal number | 500
gid | Group ID of owner: decimal number | 500
rdev | Device ID (if special file): hexadecimal | 0
atime | Time of last access: RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00
mtime | Time of last modification: RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00
btime | Time of file creation (birth): RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00
cache-control | Cache-Control header | no-cache
content-disposition | Content-Disposition header | inline
content-encoding | Content-Encoding header | gzip
content-language | Content-Language header | en-US
content-type | Content-Type header | text/plain
The metadata keys mtime
and content-type
will take precedence if supplied in the metadata over reading the Content-Type
or modification time of the source object.
Hashes are not included in system metadata as there is a well defined way of reading those already.
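Putting the two types together, the metadata dictionary for an object might look something like this illustrative example (all values invented; "my-user-key" stands in for arbitrary user metadata):

```
{
  "mode": "0100664",
  "uid": "500",
  "gid": "500",
  "mtime": "2006-01-02T15:04:05.999999999Z07:00",
  "content-type": "text/plain",
  "my-user-key": "anything"
}
```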
+Rclone has a number of options to control its behaviour.
Options that take parameters can have the values passed in two ways, --option=value
or --option value
. However boolean (true/false) options behave slightly differently to the other options in that --boolean
sets the option to true
and the absence of the flag sets it to false
. It is also possible to specify --boolean=false
or --boolean=true
. Note that --boolean false
is not valid - this is parsed as --boolean
and the false
is parsed as an extra command line argument for rclone.
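The boolean flag rules above can be sketched in Python. `parse_bool_flag` is a hypothetical helper, not rclone code; it shows why `--boolean false` leaves a stray positional argument behind while `--boolean=false` works.

```python
def parse_bool_flag(args, flag="--dry-run"):
    """Mimic boolean flag handling: bare presence means True, an
    explicit =true/=false is honoured, but a detached value is NOT
    consumed - it survives as an extra command line argument."""
    value, rest = False, []
    for arg in args:
        if arg == flag:
            value = True
        elif arg.startswith(flag + "="):
            value = arg.split("=", 1)[1] == "true"
        else:
            rest.append(arg)  # e.g. "false" after a bare flag stays behind
    return value, rest
```

For example, `parse_bool_flag(["--dry-run", "false"])` sets the flag to true and leaves `"false"` as a leftover argument.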
Options which use TIME use the Go time parser. A duration string is a possibly signed sequence of decimal numbers, each with an optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
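A rough Python equivalent of that duration syntax is sketched below; `parse_duration` is illustrative and not how rclone parses durations internally.

```python
import re

# Seconds per unit, matching the Go duration units listed above.
_UNITS = {"ns": 1e-9, "us": 1e-6, "µs": 1e-6, "ms": 1e-3, "s": 1, "m": 60, "h": 3600}

def parse_duration(text):
    """Parse a Go-style duration like '2h45m' or '-1.5h' into seconds."""
    sign = -1 if text.startswith("-") else 1
    body = text.lstrip("+-")
    parts = re.findall(r"(\d+(?:\.\d+)?)(ns|us|µs|ms|s|m|h)", body)
    # Reject strings with leftover characters the regex did not consume.
    if not parts or "".join(num + unit for num, unit in parts) != body:
        raise ValueError(f"invalid duration {text!r}")
    return sign * sum(float(num) * _UNITS[unit] for num, unit in parts)
```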
@@ -4010,8 +4298,9 @@ rclone sync -i /path/to/files remote:current-backupIt can also be useful to ensure perfect ordering when using --order-by
.
Using this flag can use more memory as it effectively sets --max-backlog
to infinite. This means that all the info on the objects to transfer is held in memory before the transfers start.
The number of checkers to run in parallel. Checkers do the equality checking of files during a sync. For some storage systems (e.g. S3, Swift, Dropbox) this can take a significant amount of time so they are run in parallel.
-The default is to run 8 checkers in parallel.
+Originally this flag controlled just the number of file checkers to run in parallel, e.g. by rclone copy
. It is now a fairly universal parallelism control used by rclone
in several places.
Note: checkers do the equality checking of files during a sync. For some storage systems (e.g. S3, Swift, Dropbox) this can take a significant amount of time so they are run in parallel.
+The default is to run 8 checkers in parallel. However, in case of slow-reacting backends you may need to lower (rather than increase) this default by setting --checkers
to 4 or fewer threads. This is especially advised if you are experiencing backend server crashes during the file checking phase (e.g. on subsequent or top-up backups where little or no file copying is done and checking takes up most of the time). Increase this setting only with utmost care, while monitoring your server health and file checking throughput.
Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal.
This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size.
@@ -4068,7 +4357,8 @@ pass = PDPcQVVjVtzFY-GTdDFozqBhTdsPg3qHThe remote in use must support server-side copy and you must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.
See --compare-dest
and --backup-dir
.
Mode to run dedupe command in. One of interactive
, skip
, first
, newest
, oldest
, rename
. The default is interactive
. See the dedupe command for more information as to what these options mean.
Mode to run dedupe command in. One of interactive
, skip
, first
, newest
, oldest
, rename
. The default is interactive
.
+See the dedupe command for more information as to what these options mean.
This disables a comma separated list of optional features. For example to disable server-side move and server-side copy use:
--disable move,copy
@@ -4118,11 +4408,11 @@ pass = PDPcQVVjVtzFY-GTdDFozqBhTdsPg3qH
Rclone commands output values for sizes (e.g. number of bytes) and counts (e.g. number of files) either as raw numbers, or in human-readable format.
In human-readable format the values are scaled to larger units, indicated with a suffix shown after the value, and rounded to three decimals. Rclone consistently uses binary units (powers of 2) for sizes and decimal units (powers of 10) for counts. The unit prefix for size is according to IEC standard notation, e.g. Ki
for kibi. Used with the byte unit, 1 KiB
means 1024 bytes. In list type output, only the unit prefix is appended to the value (e.g. 9.762Ki
), while in more textual output the full unit is shown (e.g. 9.762 KiB
). For counts the SI standard notation is used, e.g. prefix k
for kilo. Used with file counts, 1k
means 1000 files.
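The split between binary units for sizes and decimal units for counts can be sketched as follows; this is an assumed simplification for illustration, not rclone's actual formatting code.

```python
def _fmt(n, base, prefixes):
    """Scale n by base, round to three decimals, drop trailing zeros."""
    if abs(n) < base:
        return str(n)
    for prefix in prefixes:
        n /= base
        if abs(n) < base or prefix == prefixes[-1]:
            return f"{n:.3f}".rstrip("0").rstrip(".") + prefix

def human_size(n):
    """Sizes use binary (IEC) prefixes: 1 Ki == 1024."""
    return _fmt(n, 1024, ("Ki", "Mi", "Gi", "Ti", "Pi", "Ei"))

def human_count(n):
    """Counts use decimal (SI) prefixes: 1 k == 1000."""
    return _fmt(n, 1000, ("k", "M", "G", "T", "P", "E"))
```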
The various list commands output raw numbers by default. Option --human-readable
will make them output values in human-readable format instead (with the short unit prefix).
The about command outputs human-readable by default, with a command-specific option --full
to output the raw numbers instead.
Command size outputs both human-readable and raw numbers in the same output.
-The tree command also considers --human-readable
, but it will not use the exact same notation as the other commands: It rounds to one decimal, and uses single letter suffix, e.g. K
instead of Ki
. The reason for this is that it relies on an external library.
The interactive command ncdu shows human-readable by default, and responds to key u
for toggling human-readable format.
The various list commands output raw numbers by default. Option --human-readable
will make them output values in human-readable format instead (with the short unit prefix).
The about command outputs human-readable by default, with a command-specific option --full
to output the raw numbers instead.
Command size outputs both human-readable and raw numbers in the same output.
+The tree command also considers --human-readable
, but it will not use the exact same notation as the other commands: It rounds to one decimal, and uses single letter suffix, e.g. K
instead of Ki
. The reason for this is that it relies on an external library.
The interactive command ncdu shows human-readable by default, and responds to key u
for toggling human-readable format.
Using this option will cause rclone to ignore the case of the files when synchronizing so files will not be copied/synced when the existing filenames are the same, even if the casing is different.
Rclone will stop transferring when it has reached the size specified. Defaults to off.
When the limit is reached all transfers will stop immediately.
Rclone will exit with exit code 8 if the transfer limit is reached.
+Setting this flag enables rclone to copy the metadata from the source to the destination. For local backends this is ownership, permissions, xattr etc. See the metadata section for more info.
+Add metadata key
= value
when uploading. This can be repeated as many times as required. See the metadata section for more info.
This modifies the behavior of --max-transfer.
Defaults to --cutoff-mode=hard
.
Specifying --cutoff-mode=hard
will stop transferring immediately when Rclone reaches the limit.
The number of file transfers to run in parallel. It can sometimes be useful to set this to a smaller number if the remote is giving a lot of timeouts or bigger if you have lots of bandwidth and a fast remote.
The default is to run 4 file transfers in parallel.
+Look at --multi-thread-streams if you would like to control single file transfers.
This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.
This can be useful in avoiding needless transfers when transferring to a remote which doesn't support modification times directly (or when using --use-server-modtime
to avoid extra API calls) as it is more accurate than a --size-only
check and faster than using --checksum
. On such remotes (or when using --use-server-modtime
) the time checked will be the uploaded time.
With -v
rclone will tell you about each file that is transferred and a small number of significant events.
With -vv
rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.
When setting verbosity as an environment variable, use RCLONE_VERBOSE=1
or RCLONE_VERBOSE=2
for -v
and -vv
respectively.
Prints the version number
--filter-from
--exclude
--exclude-from
--exclude-if-present
--include
--include-from
--files-from
Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.
-Every option in rclone can have its default set by environment variable.
To find the name of the environment variable, first, take the long option name, strip the leading --
, change -
to _
, make upper case and prepend RCLONE_
.
For example, to always set --stats 5s
, set the environment variable RCLONE_STATS=5s
. If you set stats on the command line this will override the environment variable setting.
Or to always use the trash in drive --drive-use-trash
, set RCLONE_DRIVE_USE_TRASH=true
.
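The naming rule above is mechanical, so it can be expressed as a one-line helper; `option_to_env` is a hypothetical illustration, not part of rclone.

```python
def option_to_env(opt):
    """Long option name -> rclone environment variable name:
    strip the leading --, change - to _, upper case, prepend RCLONE_."""
    return "RCLONE_" + opt.lstrip("-").replace("-", "_").upper()
```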
Verbosity is slightly different, the environment variable equivalent of --verbose
or -v
is RCLONE_VERBOSE=1
, or for -vv
, RCLONE_VERBOSE=2
.
The same parser is used for the options and the environment variables so they take exactly the same form.
The options set by environment variables can be seen with the -vv
flag, e.g. rclone version -vv
.
Now transfer it to the remote box (scp, cut paste, ftp, sftp, etc.) and place it in the correct place (use rclone config file
on the remote box to find out where).
Linux and macOS users can utilize an SSH tunnel to redirect the headless box port 53682 to the local machine by using the following command:
+ssh -L localhost:53682:localhost:53682 username@remote_server
+Then on the headless box run rclone
config and answer Y
to the Use auto config?
question.
...
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes (default)
+n) No
+y/n> y
+Then copy and paste the auth url http://127.0.0.1:53682/auth?state=xxxxxxxxxxxx
to the browser on your local machine, complete the auth and it is done.
Filter flags determine which files rclone sync
, move
, ls
, lsl
, md5sum
, sha1sum
, size
, delete
, check
and similar commands apply to.
They are specified in terms of path/file name patterns; path/file lists; file age and size, or presence of a file in a directory. Bucket based remotes without the concept of directory apply filters to object key, age and size in an analogous way.
@@ -4780,7 +5091,7 @@ ASCII character classes (e.g. [[:alnum:]], [[:alpha:]], [[:punct:]], [[:xdigit:] - matches "POTATO"The syntax of filter patterns is glob style matching (like bash
uses) to make things easy for users. However this does not provide absolute control over the matching, so for advanced users rclone also provides a regular expression syntax.
The regular expressions used are as defined in the Go regular expression reference. Regular expressions should be enclosed in {{
}}
. They will match only the last path segment if the glob doesn't start with /
or the whole path name if it does.
The regular expressions used are as defined in the Go regular expression reference. Regular expressions should be enclosed in {{
}}
. They will match only the last path segment if the glob doesn't start with /
or the whole path name if it does. Note that rclone does not attempt to parse the supplied regular expression, meaning that using any regular expression filter will prevent rclone from using directory filter rules, as it will instead check every path against the supplied regular expression(s).
Here is how the {{regexp}}
is transformed into a full regular expression to match the entire path:
{{regexp}} becomes (^|/)(regexp)$
/{{regexp}} becomes ^(regexp)$
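The two anchoring rules above can be sketched as a small helper; `expand_filter_regexp` is illustrative only and does not reproduce rclone's internal filter machinery.

```python
import re

def expand_filter_regexp(pattern):
    """Anchor a {{regexp}} filter per the two rules above:
    a leading / matches the whole path, otherwise only the
    last path segment is matched."""
    m = re.fullmatch(r"(/?)\{\{(.*)\}\}", pattern, re.S)
    if not m:
        raise ValueError("not a {{regexp}} filter")
    anchored, inner = m.group(1) == "/", m.group(2)
    return f"^({inner})$" if anchored else f"(^|/)({inner})$"
```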
@@ -4949,14 +5260,15 @@ ASCII character classes (e.g. [[:alnum:]], [[:alpha:]], [[:punct:]], [[:xdigit:]
Any path/file included at that stage is processed by the rclone command.
--files-from
and --files-from-raw
flags over-ride and cannot be combined with other filter options.
To see the internal combined rule list, in regular expression form, for a command add the --dump filters
flag. Running an rclone command with --dump filters
and -vv
flags lists the internal filter elements and shows how they are applied to each source path/file. There is not currently a means provided to pass regular expression filter options into rclone directly, though character class filter rules contain character classes. Go regular expression reference
Rclone commands are applied to path/file names not directories. The entire contents of a directory can be matched to a filter by the pattern directory/*
or recursively by directory/**
.
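The difference between `directory/*` and `directory/**` can be sketched with a minimal glob-to-regex translation; `match_filter` is an assumed simplification of rclone's glob semantics, covering only `*` and `**`.

```python
import re

def match_filter(pattern, path):
    """Minimal sketch of glob matching: * stays within one path
    segment, ** crosses segments; a leading / anchors at the root,
    otherwise the pattern matches the trailing path segments."""
    rx = re.escape(pattern).replace(r"\*\*", ".*").replace(r"\*", "[^/]*")
    rx = ("^" + rx.lstrip("/") if pattern.startswith("/") else "(^|/)" + rx) + "$"
    return re.search(rx, path) is not None
```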
Directory filter rules are defined with a closing /
separator.
E.g. /directory/subdirectory/
is an rclone directory filter rule.
Rclone commands can use directory filter rules to determine whether they recurse into subdirectories. This potentially optimises access to a remote by avoiding listing unnecessary directories. Whether optimisation is desirable depends on the specific filter rules and source remote content.
+If any regular expression filters are in use, then no directory recursion optimisation is possible, as rclone must check every path against the supplied regular expression(s).
Directory recursion optimisation occurs if either:
A source remote does not support the rclone ListR
primitive. local, sftp, Microsoft OneDrive and WebDav do not support ListR
. Google Drive and most bucket type storage do. Full list
A source remote does not support the rclone ListR
primitive. local, sftp, Microsoft OneDrive and WebDAV do not support ListR
. Google Drive and most bucket type storage do. Full list
On other remotes (those that support ListR
), if the rclone command is not naturally recursive, and provided it is not run with the --fast-list
flag. ls
, lsf -R
and size
are naturally recursive but sync
, copy
and move
are not.
Whenever the --disable ListR
flag is applied to an rclone command.
Dumps the defined filters to standard output in regular expression format.
Useful for debugging.
The --exclude-if-present
flag controls whether a directory is within the scope of an rclone command based on the presence of a named file within it.
The --exclude-if-present
flag controls whether a directory is within the scope of an rclone command based on the presence of a named file within it. The flag can be repeated to check for multiple file names, presence of any of them will exclude the directory.
This flag has priority over other filter flags.
E.g. for the following directory structure:
dir1/file1
@@ -5165,7 +5477,6 @@ dir1/dir2/file2
dir1/dir2/dir3/file3
dir1/dir2/dir3/.ignore
The command rclone ls --exclude-if-present .ignore dir1
does not list dir3
, file3
or .ignore
.
--exclude-if-present
can only be used once in an rclone command.
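The pruning behaviour in the example above can be sketched with `os.walk`; `list_files` is a hypothetical helper, not rclone code, and `.ignore` is just the marker name from the example.

```python
import os

def list_files(root, marker=".ignore"):
    """Walk root, skipping any directory that contains the marker
    file - neither its files nor its subdirectories are listed."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        if marker in filenames:
            dirnames[:] = []  # prune subdirectories of the excluded dir
            continue          # and skip this directory's own files
        found += [os.path.join(dirpath, name) for name in filenames]
    return sorted(found)
```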
The most frequent filter support issues on the rclone forum are:
If you have questions then please ask them on the rclone forum.
If rclone is run with the --rc
flag then it starts an HTTP server which can be used to remote control rclone using its API.
You can either use the rclone rc command to access the API or use HTTP directly.
-If you just want to run a remote control then see the rcd command.
+You can either use the rc command to access the API or use HTTP directly.
+If you just want to run a remote control then see the rcd command.
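Using HTTP directly amounts to POSTing JSON to the rc endpoint. This sketch assumes the default listen address `localhost:5572` and `--rc-no-auth`; adjust the address and add credentials for your setup.

```python
import json
from urllib import request

def rc_request(command, params=None, addr="http://localhost:5572"):
    """Build a JSON POST for an rc command such as core/version.
    Passing data to Request makes it a POST automatically."""
    body = json.dumps(params or {}).encode()
    return request.Request(addr + "/" + command, data=body,
                           headers={"Content-Type": "application/json"})

# To actually send it (requires a running rclone rc server):
# with request.urlopen(rc_request("core/version")) as resp:
#     print(json.load(resp))
```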
Flag to start the HTTP server to listen on remote requests
@@ -5310,6 +5621,11 @@ dir1/dir2/dir3/.ignoreBy default rclone will require authorisation to have been set up on the rc interface in order to use any methods which access any rclone remotes. Eg operations/list
is denied as it involved creating a remote as is sync/copy
.
If this is set then no authorisation will be required on the server to use these methods. The alternative is to use --rc-user
and --rc-pass
and use these credentials in the request.
Default Off.
+Prefix for URLs.
+Default is root.
+User-specified template.
Rclone itself implements the remote control protocol in its rclone rc
command.
You can use it like this
@@ -5516,30 +5832,30 @@ rclone rc cache/expire remote=/ withData=trueSee the config create command command for more information on the above.
+See the config create command for more information on the above.
Authentication is required for this call.
Parameters:
See the config delete command command for more information on the above.
+See the config delete command for more information on the above.
Authentication is required for this call.
Returns a JSON object: - key: value
Where keys are remote names and values are the config parameters.
-See the config dump command command for more information on the above.
+See the config dump command for more information on the above.
Authentication is required for this call.
Parameters:
See the config dump command command for more information on the above.
+See the config dump command for more information on the above.
Authentication is required for this call.
Returns - remotes - array of remote names
-See the listremotes command command for more information on the above.
+See the listremotes command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
@@ -5547,11 +5863,11 @@ rclone rc cache/expire remote=/ withData=trueSee the config password command command for more information on the above.
+See the config password command for more information on the above.
Authentication is required for this call.
Returns a JSON object: - providers - array of objects
-See the config providers command command for more information on the above.
+See the config providers command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
@@ -5569,7 +5885,7 @@ rclone rc cache/expire remote=/ withData=trueSee the config update command command for more information on the above.
+See the config update command for more information on the above.
Authentication is required for this call.
This sets the bandwidth limit to the string passed in. This should be a single bandwidth limit entry or a pair of upload:download bandwidth.
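Splitting the upload:download pair is straightforward; `parse_bwlimit` is a sketch of the pair syntax only, not of rclone's bandwidth parsing.

```python
def parse_bwlimit(limit):
    """Split 'upload:download'; a single value applies to both
    directions. Values are left as strings (e.g. '10M', 'off')."""
    up, sep, down = limit.partition(":")
    return up, (down if sep else up)
```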
@@ -5882,14 +6198,14 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheThe result is as returned from rclone about --json
-See the about command command for more information on the above.
+See the about command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
See the cleanup command command for more information on the above.
+See the cleanup command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
@@ -5906,15 +6222,16 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheSee the copyurl command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
See the delete command command for more information on the above.
+See the delete command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
@@ -5922,7 +6239,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheSee the deletefile command command for more information on the above.
+See the deletefile command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
@@ -5931,46 +6248,103 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheThis returns info about the remote passed in;
{
- // optional features and whether they are available or not
- "Features": {
- "About": true,
- "BucketBased": false,
- "CanHaveEmptyDirectories": true,
- "CaseInsensitive": false,
- "ChangeNotify": false,
- "CleanUp": false,
- "Copy": false,
- "DirCacheFlush": false,
- "DirMove": true,
- "DuplicateFiles": false,
- "GetTier": false,
- "ListR": false,
- "MergeDirs": false,
- "Move": true,
- "OpenWriterAt": true,
- "PublicLink": false,
- "Purge": true,
- "PutStream": true,
- "PutUnchecked": false,
- "ReadMimeType": false,
- "ServerSideAcrossConfigs": false,
- "SetTier": false,
- "SetWrapper": false,
- "UnWrap": false,
- "WrapFs": false,
- "WriteMimeType": false
- },
- // Names of hashes available
- "Hashes": [
- "MD5",
- "SHA-1",
- "DropboxHash",
- "QuickXorHash"
- ],
- "Name": "local", // Name as created
- "Precision": 1, // Precision of timestamps in ns
- "Root": "/", // Path as created
- "String": "Local file system at /" // how the remote will appear in logs
+ // optional features and whether they are available or not
+ "Features": {
+ "About": true,
+ "BucketBased": false,
+ "BucketBasedRootOK": false,
+ "CanHaveEmptyDirectories": true,
+ "CaseInsensitive": false,
+ "ChangeNotify": false,
+ "CleanUp": false,
+ "Command": true,
+ "Copy": false,
+ "DirCacheFlush": false,
+ "DirMove": true,
+ "Disconnect": false,
+ "DuplicateFiles": false,
+ "GetTier": false,
+ "IsLocal": true,
+ "ListR": false,
+ "MergeDirs": false,
+ "MetadataInfo": true,
+ "Move": true,
+ "OpenWriterAt": true,
+ "PublicLink": false,
+ "Purge": true,
+ "PutStream": true,
+ "PutUnchecked": false,
+ "ReadMetadata": true,
+ "ReadMimeType": false,
+ "ServerSideAcrossConfigs": false,
+ "SetTier": false,
+ "SetWrapper": false,
+ "Shutdown": false,
+ "SlowHash": true,
+ "SlowModTime": false,
+ "UnWrap": false,
+ "UserInfo": false,
+ "UserMetadata": true,
+ "WrapFs": false,
+ "WriteMetadata": true,
+ "WriteMimeType": false
+ },
+ // Names of hashes available
+ "Hashes": [
+ "md5",
+ "sha1",
+ "whirlpool",
+ "crc32",
+ "sha256",
+ "dropbox",
+ "mailru",
+ "quickxor"
+ ],
+ "Name": "local", // Name as created
+ "Precision": 1, // Precision of timestamps in ns
+ "Root": "/", // Path as created
+ "String": "Local file system at /", // how the remote will appear in logs
+ // Information about the system metadata for this backend
+ "MetadataInfo": {
+ "System": {
+ "atime": {
+ "Help": "Time of last access",
+ "Type": "RFC 3339",
+ "Example": "2006-01-02T15:04:05.999999999Z07:00"
+ },
+ "btime": {
+ "Help": "Time of file birth (creation)",
+ "Type": "RFC 3339",
+ "Example": "2006-01-02T15:04:05.999999999Z07:00"
+ },
+ "gid": {
+ "Help": "Group ID of owner",
+ "Type": "decimal number",
+ "Example": "500"
+ },
+ "mode": {
+ "Help": "File type and mode",
+ "Type": "octal, unix style",
+ "Example": "0100664"
+ },
+ "mtime": {
+ "Help": "Time of last modification",
+ "Type": "RFC 3339",
+ "Example": "2006-01-02T15:04:05.999999999Z07:00"
+ },
+ "rdev": {
+ "Help": "Device ID (if special file)",
+ "Type": "hexadecimal",
+ "Example": "1abc"
+ },
+ "uid": {
+ "Help": "User ID of owner",
+ "Type": "decimal number",
+ "Example": "500"
+ }
+ },
+ "Help": "Textual help string\n"
+ }
}
This command does not have a command line equivalent so use this instead:
rclone rc --loopback operations/fsinfo fs=remote:
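A caller would typically use the `Features` map in the result to decide what a remote can do; this helper and the sample dict are illustrative, with the sample trimmed from the output shown above.

```python
def supports(fsinfo, feature):
    """Check one optional feature flag in an operations/fsinfo result."""
    return bool(fsinfo.get("Features", {}).get(feature, False))

# Trimmed sample of an operations/fsinfo result, as shown above.
sample = {"Name": "local", "Features": {"Move": True, "Copy": False}}
```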
@@ -5989,6 +6363,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"Cache
See the lsjson command for more information on the above and examples.
+See the lsjson command for more information on the above and examples.
Authentication is required for this call.
This takes the following parameters:
@@ -6007,7 +6382,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheSee the mkdir command command for more information on the above.
+See the mkdir command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
@@ -6030,7 +6405,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheSee the link command command for more information on the above.
+See the link command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
@@ -6038,7 +6413,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheSee the purge command command for more information on the above.
+See the purge command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
@@ -6046,15 +6421,16 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheSee the rmdir command command for more information on the above.
+See the rmdir command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
See the rmdirs command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
@@ -6066,7 +6442,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheSee the size command command for more information on the above.
+See the size command for more information on the above.
Authentication is required for this call.
This takes the following parameters
@@ -6083,15 +6459,16 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheNote that if you are only interested in files then it is much more efficient to set the filesOnly flag in the options.
-See the lsjson command for more information on the above and examples.
+See the lsjson command for more information on the above and examples.
Authentication is required for this call.
This takes the following parameters:
See the uploadfile command for more information on the above.
Authentication is required for this call.
Returns: - options - a list of the options block names
@@ -6218,7 +6595,7 @@ rclone rc options/set --json '{"main": {"LogLevel": 8}}&See the copy command command for more information on the above.
+See the copy command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
@@ -6228,7 +6605,7 @@ rclone rc options/set --json '{"main": {"LogLevel": 8}}&See the move command command for more information on the above.
+See the move command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
@@ -6237,7 +6614,7 @@ rclone rc options/set --json '{"main": {"LogLevel": 8}}&See the sync command command for more information on the above.
+See the sync command for more information on the above.
Authentication is required for this call.
This forgets the paths in the directory cache causing them to be re-read from the remote when needed.
@@ -6428,320 +6805,378 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total⁶ Mail.ru uses its own modified SHA1 hash
⁷ pCloud only supports SHA1 (not MD5) in its EU region
⁸ Opendrive does not support creation of duplicate files using their web client interface or other stock clients, but the underlying storage platform has been determined to allow duplicate files, and it is possible to create them with rclone
. It may be that this is a mistake or an unsupported feature.
⁹ QingStor does not support SetModTime for objects bigger than 5 GiB.
+¹⁰ FTP supports modtimes for the major FTP servers, and also others if they advertise the required protocol extensions. See this for more details.
+¹¹ Internet Archive requires option wait_archive
to be set to a non-zero value for full modtime support.
¹² HiDrive supports its own custom hash. It combines SHA1 sums for each 4 KiB block hierarchically to a single top-level sum.
The cloud storage system supports various hash types of the objects. The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum
flag in syncs and in the check
command.
To verify checksums when transferring between cloud storage systems they must support a common hash type.
The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the --checksum
flag.
All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.
+Almost all cloud storage systems store some sort of timestamp on objects, but for several of them it is not something that is appropriate to use for syncing. E.g. some backends will only write a timestamp that represents the time of the upload. To be relevant for syncing the backend should be able to store the modification time of the source object. If this is not the case, rclone will only check the file size by default, though it can be configured to check the file hash (with the --checksum
flag). Ideally it should also be possible to change the timestamp of an existing file without having to re-upload it.
Storage systems with a -
in the ModTime column means the modification time read on objects is not the modification time of the file when uploaded. It is most likely the time the file was uploaded, or possibly something else (like the time the picture was taken in Google Photos).
Storage systems with a R
(for read-only) in the ModTime column means it keeps modification times on objects, and updates them when uploading objects, but it does not support changing only the modification time (SetModTime
operation) without re-uploading, possibly not even without deleting existing first. Some operations in rclone, such as copy
and sync
commands, will automatically check for SetModTime
support and re-upload if necessary to keep the modification times in sync. Other commands will not work without SetModTime
support, e.g. touch
command on an existing file will fail, and changes to modification time only on files in a mount
will be silently ignored.
Storage systems with R/W
(for read/write) in the ModTime column, means they do also support modtime-only operations.
If a cloud storage system is case sensitive then it is possible to have two files which differ only in case, e.g. file.txt
and FILE.txt
. If a cloud storage system is case insensitive then that isn't possible.
This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully.
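Such never-converging syncs come from names that collide once case is folded; this sketch shows how to spot them in a listing before syncing to a case-insensitive destination (illustrative only).

```python
from collections import defaultdict

def case_collisions(names):
    """Group names that differ only in case - the entries that a
    case-insensitive destination cannot hold side by side."""
    groups = defaultdict(list)
    for name in names:
        groups[name.lower()].append(name)
    return [group for group in groups.values() if len(group) > 1]
```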
@@ -7000,120 +7441,158 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB totalThe --backend-encoding
flags allow you to change that. You can disable the encoding completely with --backend-encoding None
or set encoding = None
in the config file.
Encoding takes a comma separated list of encodings. You can see the list of all possible values by passing an invalid value to this flag, e.g. --local-encoding "help"
. The command rclone help flags encoding
will show you the defaults for the backends.
Encoding | Characters | +Encoded as | ||
---|---|---|---|---|
Asterisk | * |
+* |
||
BackQuote | ` |
+` |
||
BackSlash | \ |
+\ |
||
Colon | : |
+: |
||
CrLf | CR 0x0D, LF 0x0A | +␍ , ␊ |
||
Ctl | All control characters 0x00-0x1F | +␀␁␂␃␄␅␆␇␈␉␊␋␌␍␎␏␐␑␒␓␔␕␖␗␘␙␚␛␜␝␞␟ |
||
Del | DEL 0x7F | +␡ |
||
Dollar | $ |
+$ |
||
Dot | . or .. as entire string |
+. , .. |
||
DoubleQuote | " |
+" |
||
Hash | # |
+# |
||
InvalidUtf8 | An invalid UTF-8 character (e.g. latin1) | +� |
||
LeftCrLfHtVt | -CR 0x0D, LF 0x0A,HT 0x09, VT 0x0B on the left of a string | +CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the left of a string | +␍ , ␊ , ␉ , ␋ |
|
LeftPeriod | . on the left of a string |
+. |
||
LeftSpace | SPACE on the left of a string | +␠ |
||
LeftTilde | ~ on the left of a string |
+~ |
||
LtGt | < , > |
+< , > |
||
None | No characters are encoded | +|||
Percent | % |
+% |
||
Pipe | | | +| |
||
Question | ? |
+? |
||
RightCrLfHtVt | CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the right of a string | +␍ , ␊ , ␉ , ␋ |
||
RightPeriod | . on the right of a string |
+. |
||
RightSpace | SPACE on the right of a string | +␠ |
||
SingleQuote | -' |
+Semicolon | +; |
+; |
Slash | -/ |
+SingleQuote | +' |
+' |
Slash | +/ |
+/ |
+||
SquareBracket | [ , ] |
+[ , ] |
Some cloud storage systems support reading (R
) the MIME type of objects and some support writing (W
) the MIME type of objects.
The MIME type can be important if you are serving files directly to HTTP from the storage system.
If you are copying from a remote which supports reading (R
) to a remote which supports writing (W
) then rclone will preserve the MIME types. Otherwise they will be guessed from the extension, or the remote itself may assign the MIME type.
Backends may or may not support reading or writing metadata. They may support reading and writing system metadata (metadata intrinsic to that backend) and/or user metadata (general purpose metadata).
The levels of metadata support are:

Key | Explanation
---|---
R | Read only System Metadata
RW | Read and write System Metadata
RWU | Read and write System Metadata and read and write User Metadata
See the metadata docs for more info.
All rclone remotes support a base command set. Other features depend upon backend-specific capabilities.
Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | LinkSharing | About | EmptyDir
---|---|---|---|---|---|---|---|---|---|---
Akamai Netstorage | Yes | No | No | No | No | Yes | Yes | No | No | Yes
Amazon Drive | Yes | No | … | … | … | … | … | … | No | Yes
Amazon S3 (or S3 compatible) | No | Yes | No | … | … | … | … | … | No | No
Backblaze B2 | No | Yes | … | … | … | … | … | … | No | No
Box | Yes | Yes | … | … | … | … | … | … | Yes | Yes
Citrix ShareFile | Yes | Yes | … | … | … | … | … | … | No | Yes
Dropbox | Yes | Yes | … | … | … | … | … | … | Yes | Yes
Enterprise File Fabric | Yes | Yes | … | … | … | … | … | … | No | Yes
FTP | No | No | … | … | … | … | … | … | No | Yes
Google Cloud Storage | Yes | Yes | … | … | … | … | … | … | No | No
Google Drive | Yes | Yes | … | … | … | … | … | … | Yes | Yes
Google Photos | No | No | … | … | … | … | … | … | No | No
HDFS | Yes | No | … | … | … | … | … | … | Yes | Yes
HiDrive | Yes | Yes | Yes | Yes | No | No | Yes | No | No | Yes
HTTP | No | … | … | … | … | … | … | … | No | …
Internet Archive | No | Yes | No | No | Yes | Yes | No | Yes | Yes | No
Jottacloud | Yes | Yes | … | … | … | … | … | … | Yes | Yes
Koofr | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | Yes
Mail.ru Cloud | Yes | … | … | … | … | … | … | … | Yes | …
Sia | No | No | No | No | No | No | Yes | No | No | Yes
SugarSync | Yes | Yes | … | … | … | … | … | … | No | Yes
Storj | Yes † | No | … | … | … | … | … | … | No | No
Uptobox | No | Yes | … | … | … | … | … | … | No | No
WebDAV | Yes | Yes | … | … | … | … | … | … | Yes | Yes
Yandex Disk | Yes | Yes | … | … | … | … | … | … | Yes | Yes
Zoho WorkDrive | Yes | Yes | … | … | … | … | … | … | Yes | Yes

(… = value not shown)
The local filesystem | Yes | No | … | … | … | … | … | … | … | …

    --delete-during                      When synchronizing, delete files during transfer
    --delete-excluded                    Delete files on dest excluded from sync
    --disable string                     Disable a comma separated list of features (use --disable help to see a list)
    --disable-http-keep-alives           Disable HTTP keep-alives and use each connection once.
    --disable-http2                      Disable HTTP/2 in the global transport
-n, --dry-run                            Do a trial run with no permanent changes
    --dscp string                        Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21
    --error-on-no-transfer               Sets exit code 9 if no files are transferred, useful in scripts
    --exclude stringArray                Exclude files matching pattern
    --exclude-from stringArray           Read exclude patterns from file (use - to read from stdin)
    --exclude-if-present stringArray     Exclude directories if filename is present
    --expect-continue-timeout duration   Timeout when using expect / 100-continue in HTTP (default 1s)
    --fast-list                          Use recursive list if available; uses more memory but fewer transactions
    --files-from stringArray             Read list of source-file names from file (use - to read from stdin)
    --max-stats-groups int               Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
    --max-transfer SizeSuffix            Maximum size of data to transfer (default off)
    --memprofile string                  Write memory profile to file
-M, --metadata                           If set, preserve metadata when copying objects
    --metadata-set stringArray           Add metadata key=value when uploading
    --min-age Duration                   Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
    --min-size SizeSuffix                Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    --modify-window duration             Max time diff to be considered the same (default 1ns)
    --use-json-log                       Use json log format
    --use-mmap                           Use mmap allocator (see docs)
    --use-server-modtime                 Use server modified time instead of object metadata
    --user-agent string                  Set the user-agent to a specified string (default "rclone/v1.59.0")
-v, --verbose count                      Print lots more stuff (repeat for more)
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
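This mirrors the standard Unicode replacement character behaviour (U+FFFD); for example, in Python:

```python
# Bytes that are not valid UTF-8 (e.g. a bare latin1 0xE9 for 'é') cannot
# appear in a JSON string, so they get replaced with U+FFFD '�'.
raw = b"caf\xe9"                           # latin1 bytes, invalid as UTF-8
fixed = raw.decode("utf-8", errors="replace")   # -> "caf�"
```

The decoded string is then safe to embed in JSON output.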
Here are the Standard options specific to fichier (1Fichier).
Your API Key, get it from https://1fichier.com/console/params.pl.
Properties:
Here are the Advanced options specific to fichier (1Fichier).
If you want to download a shared folder, add this parameter.
Properties:
rclone about is not supported by the 1Fichier backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
The alias
remote provides a new name for another remote.
Paths may be as deep as required or a local path, e.g. remote:directory/subdirectory
or /directory/subdirectory
.
Copy another local directory to the alias directory called source
rclone copy /home/source remote:source
Here are the Standard options specific to alias (Alias for an existing remote).
Remote or path to alias.
Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
Using with non .com Amazon accounts
Let's say you usually use amazon.co.uk
Amazon accountsLet's say you usually use amazon.co.uk
. When you authenticate with rclone it will take you to an amazon.com
page to log in. Your amazon.co.uk
email and password should work here just fine.
Here are the Standard options specific to amazon cloud drive (Amazon Drive).
OAuth Client Id.
Leave blank normally.
Here are the Advanced options specific to amazon cloud drive (Amazon Drive).
OAuth Access Token as a JSON blob.
Properties:
At the time of writing (Jan 2016) the file size limit is in the area of 50 GiB per file. This means that larger files are likely to fail.
Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as with any other failure. To avoid this problem, use the --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size.
rclone about
is not supported by the Amazon Drive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
The S3 backend can be used with a number of different providers:
As mentioned in the Hashes section, small files that are not uploaded as multipart use a different tag, causing the upload to fail. A simple solution is to set --s3-upload-cutoff 0 and force all files to be uploaded as multipart.
Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).
Choose your S3 provider.
Properties:
Region to connect to - the location where your bucket will be created and your data stored. Needs to be the same as your endpoint.
Properties:
Region to connect to.
Properties:
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Properties:
Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
Properties:
Endpoint for Arvan Cloud Object Storage (AOS) API.
Properties:
+Endpoint for IBM COS S3 API.
Specify if using an IBM COS On Premise.
Properties:
Endpoint for OSS API.
Properties:
Endpoint for OBS API.
Properties:
Endpoint for Scaleway Object Storage.
Properties:
Endpoint for StackPath Object Storage.
Properties:
Endpoint of the Shared Gateway.
Properties:
Endpoint for Tencent COS API.
Properties:
Endpoint for RackCorp Object Storage.
Properties:
Endpoint for S3 API.
Required when using an S3 clone.
Properties:
Location constraint - must match endpoint.
Used when creating buckets only.
Properties:
Location constraint - must match endpoint.
Used when creating buckets only.
Properties:
Location constraint - must match endpoint when using IBM Cloud Public.
For on-prem COS, do not make a selection from this list, hit enter.
Properties:
Location constraint - the location where your bucket will be located and your data stored.
Properties:
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Properties:
The storage class to use when storing new objects in ChinaMobile.
Properties:
The storage class to use when storing new objects in ArvanCloud.
Properties:
The storage class to use when storing new objects in Tencent COS.
Properties:
The storage class to use when storing new objects in S3.
Properties:
Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).
Canned ACL used when creating buckets.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
@@ -11742,7 +12908,7 @@ y/e/d>If you are transferring large files over high-speed links and you have enough memory, then increasing this will speed up the transfers.
Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit.
Files of unknown size are uploaded with the configured chunk_size. Since the default chunk size is 5 MiB and there can be at most 10,000 chunks, this means that by default the maximum size of a file you can stream upload is 48 GiB. If you wish to stream upload larger files then you will need to increase chunk_size.
Increasing the chunk size decreases the accuracy of the progress statistics displayed with the "-P" flag. Rclone treats a chunk as sent when it has been buffered by the AWS SDK, when in fact it may still be uploading. A bigger chunk size means a bigger AWS SDK buffer and progress reporting that deviates further from the truth.
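The 48 GiB figure follows from multiplying the two limits; a quick check of the arithmetic:

```python
# Maximum size of a streamed (unknown-length) upload: every part is the
# configured chunk_size, and S3 allows at most 10,000 parts per upload.
MAX_PARTS = 10_000
chunk_size = 5 * 1024 ** 2                      # rclone's default: 5 MiB

max_stream_bytes = MAX_PARTS * chunk_size       # 52,428,800,000 bytes
max_stream_gib = max_stream_bytes / 1024 ** 3   # ~48.8 GiB, the "48 GiB" above

# Chunk size needed to stream a 1 TiB file (~105 MiB per part):
needed_chunk = (1024 ** 4) // MAX_PARTS
```

So to stream upload files near 1 TiB you would need a chunk_size of roughly 105 MiB or more.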
Properties:
Whether to use a presigned request or PutObject for single part uploads.
If this is false rclone will use PutObject from the AWS SDK to upload an object.
Versions of rclone < 1.59 use presigned requests to upload a single part object and setting this flag to true will re-enable that functionality. This shouldn't be necessary except in exceptional circumstances or for testing.
Properties:
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.
Here are the possible system metadata items for the s3 backend.

Name | Help | Type | Example | Read Only
---|---|---|---|---
btime | Time of file birth (creation) read from Last-Modified header | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | Y
cache-control | Cache-Control header | string | no-cache | N
content-disposition | Content-Disposition header | string | inline | N
content-encoding | Content-Encoding header | string | gzip | N
content-language | Content-Language header | string | en-US | N
content-type | Content-Type header | string | text/plain | N
mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N
tier | Tier of the object | string | GLACIER | Y
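Because keys come back lower-cased, any tool comparing user metadata has to normalise key case first; a sketch in Python (normalize_user_metadata is a hypothetical helper, not part of rclone):

```python
# S3 stores user metadata under x-amz-meta-* headers and returns the key
# part lower-cased, so "My-Key" set at upload time reads back as "my-key".
def normalize_user_metadata(headers: dict) -> dict:
    """Extract user metadata from response headers, keys lower-cased."""
    prefix = "x-amz-meta-"
    return {
        k.lower().removeprefix(prefix): v
        for k, v in headers.items()
        if k.lower().startswith(prefix)
    }
```

For example, a response containing the header X-Amz-Meta-My-Key yields the user metadata key my-key.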
See the metadata docs for more info.
Here are the commands specific to the s3 backend.
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
See the backend command for more info on how to pass options and arguments.
These can be run on a running backend using the rc command backend/command.
Restore objects from GLACIER to normal storage
This is the provider used as main example and described in the configuration section above.
AWS Snowball is a hardware appliance used for transferring bulk data back to AWS. Its main software interface is S3 object storage.
To use rclone with AWS Snowball Edge devices, configure as standard for an 'S3 Compatible Service'.
If using rclone pre v1.59 be sure to set upload_cutoff = 0 otherwise you will run into authentication header issues as the snowball device does not support query parameter based authentication.
With rclone v1.59 or later setting upload_cutoff should not be necessary.
e.g.
[snowball]
type = s3
location_constraint =
acl =
server_side_encryption =
storage_class =
If you are using an older version of CEPH (e.g. 10.2.x Jewel) and a version of rclone before v1.59 then you may need to supply the parameter --s3-upload-cutoff 0 or put this in the config file as upload_cutoff 0 to work around a bug which causes uploading of small files to fail.
Note also that Ceph sometimes puts /
in the passwords it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the /
escaped as \/
. Make sure you only write /
in the secret access key.
E.g. the dump from Ceph looks something like this (irrelevant keys removed).
{
],
}
Because this is a json dump, it is encoding the /
as \/
, so if you use the secret key as xxxxxx/xxxx
it will work fine.
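A JSON parser already unescapes the \/ for you, which is why the plain / is what you should write in the config; for example:

```python
import json

# JSON permits escaping '/' as '\/'; any compliant parser returns a plain '/'.
blob = '{"secret_key": "xxxxxx\\/xxxx"}'   # as it appears in the Ceph dump
secret = json.loads(blob)["secret_key"]    # -> "xxxxxx/xxxx"
```

So copying the parsed value, not the raw escaped text, gives the correct secret access key.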
Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
Here is an example of making a Cloudflare R2 configuration. First run:
rclone config
This will guide you through an interactive setup process.
Note that all buckets are private, and all are stored in the same "auto" region. It is necessary to use Cloudflare workers to share the content of a bucket publicly.
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> r2
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
...
XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi
   \ (s3)
...
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
...
XX / Cloudflare R2 Storage
   \ (Cloudflare)
...
provider> Cloudflare
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> ACCESS_KEY
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> SECRET_ACCESS_KEY
Option region.
Region to connect to.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / R2 buckets are automatically distributed across Cloudflare's data centers for low latency.
   \ (auto)
region> 1
Option endpoint.
Endpoint for S3 API.
Required when using an S3 clone.
Enter a value. Press Enter to leave empty.
endpoint> https://ACCOUNT_ID.r2.cloudflarestorage.com
Edit advanced config?
y) Yes
n) No (default)
y/n> n
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave your config looking something like:
[r2]
type = s3
provider = Cloudflare
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
region = auto
endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
acl = private
+Now run rclone lsf r2:
to see your buckets and rclone lsf r2:bucket
to look within a bucket.
Dreamhost DreamObjects is an object storage system based on CEPH.
To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:
@@ -12254,6 +13594,126 @@ storage_class =Once configured, you can create a new Space and begin copying files. For example:
rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use cloud storage that lets you store virtually any volume of unstructured data in any format and access it from anywhere.
OBS provides an S3 interface; you can copy and modify the following configuration and add it to your rclone configuration file.
[obs]
type = s3
provider = HuaweiOBS
access_key_id = your-access-key-id
secret_access_key = your-secret-access-key
region = af-south-1
endpoint = obs.af-south-1.myhuaweicloud.com
acl = private
Or you can also configure via the interactive command line:
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> obs
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi
   \ (s3)
[snip]
Storage> 5
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
 9 / Huawei Object Storage Service
   \ (HuaweiOBS)
[snip]
provider> 9
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> your-access-key-id
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> your-secret-access-key
Option region.
Region to connect to.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / AF-Johannesburg
   \ (af-south-1)
 2 / AP-Bangkok
   \ (ap-southeast-2)
[snip]
region> 1
Option endpoint.
Endpoint for OBS API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / AF-Johannesburg
   \ (obs.af-south-1.myhuaweicloud.com)
 2 / AP-Bangkok
   \ (obs.ap-southeast-2.myhuaweicloud.com)
[snip]
endpoint> 1
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
[snip]
acl> 1
Edit advanced config?
y) Yes
n) No (default)
y/n>
--------------------
[obs]
type = s3
provider = HuaweiOBS
access_key_id = your-access-key-id
secret_access_key = your-secret-access-key
region = af-south-1
endpoint = obs.af-south-1.myhuaweicloud.com
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
obs                  s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: (http://www.ibm.com/cloud/object-storage)
To configure access to IBM COS S3, follow the steps below:
   \ "alias"
 2 / Amazon Drive
   \ "amazon cloud drive"
 3 / Amazon S3 Compliant Storage Providers (Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio, IBM COS)
   \ "s3"
 4 / Backblaze B2
   \ "b2"
[snip]
23 / HTTP
   \ "http"
Storage> 3
Here is an example of making an IDrive e2 configuration. First run:
rclone config
This will guide you through an interactive setup process.
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

Enter name for new remote.
name> e2

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi
   \ (s3)
[snip]
Storage> s3

Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / IDrive e2
   \ (IDrive)
[snip]
provider> IDrive

Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth>

Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> YOUR_ACCESS_KEY

Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> YOUR_SECRET_KEY

Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
   / Owner gets FULL_CONTROL.
 3 | The AllUsers group gets READ and WRITE access.
   | Granting this on a bucket is generally not recommended.
   \ (public-read-write)
   / Owner gets FULL_CONTROL.
 4 | The AuthenticatedUsers group gets READ access.
   \ (authenticated-read)
   / Object owner gets FULL_CONTROL.
 5 | Bucket owner gets READ access.
   | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-read)
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-full-control)
acl>

Edit advanced config?
y) Yes
n) No (default)
y/n>

Configuration complete.
Options:
- type: s3
- provider: IDrive
- access_key_id: YOUR_ACCESS_KEY
- secret_access_key: YOUR_SECRET_KEY
- endpoint: q9d9.la12.idrivee2-5.com
Keep this "e2" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Minio is an object storage server built for cloud application developers and devops.
It is very easy to install and provides an S3 compatible server which can be used by rclone.
location_constraint =
acl = private
server_side_encryption =
storage_class =
C14 Cold Storage is the low-cost S3 Glacier alternative from Scaleway, and it works the same way as on S3 by accepting the "GLACIER" storage_class. So you can configure your remote with the storage_class = GLACIER option to upload directly to C14. Don't forget that in this state you can't read files back; you will need to restore them to the "STANDARD" storage_class first before being able to read them (see the "restore" section above).
Seagate Lyve Cloud is an S3 compatible object storage platform from Seagate intended for enterprise use.
Here is a config run through for a remote called remote
- you may choose a different name of course. Note that to create an access key and secret key you will need to create a service account first.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
\ (s3)
[snip]
Storage> s3
name> wasabi
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio)
\ "s3"
[snip]
Storage> s3
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Minio, and Tencent COS
\ "s3"
[snip]
Storage> s3
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
+Here is an example of making an China Mobile Ecloud Elastic Object Storage (EOS) configuration. First run:
+rclone config
+This will guide you through an interactive setup process.
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> ChinaMobile
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+ ...
+ 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
+ \ (s3)
+ ...
+Storage> s3
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ ...
+ 4 / China Mobile Ecloud Elastic Object Storage (EOS)
+ \ (ChinaMobile)
+ ...
+provider> ChinaMobile
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth>
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> accesskeyid
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> secretaccesskey
+Option endpoint.
+Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ / The default endpoint - a good choice if you are unsure.
+ 1 | East China (Suzhou)
+ \ (eos-wuxi-1.cmecloud.cn)
+ 2 / East China (Jinan)
+ \ (eos-jinan-1.cmecloud.cn)
+ 3 / East China (Hangzhou)
+ \ (eos-ningbo-1.cmecloud.cn)
+ 4 / East China (Shanghai-1)
+ \ (eos-shanghai-1.cmecloud.cn)
+ 5 / Central China (Zhengzhou)
+ \ (eos-zhengzhou-1.cmecloud.cn)
+ 6 / Central China (Changsha-1)
+ \ (eos-hunan-1.cmecloud.cn)
+ 7 / Central China (Changsha-2)
+ \ (eos-zhuzhou-1.cmecloud.cn)
+ 8 / South China (Guangzhou-2)
+ \ (eos-guangzhou-1.cmecloud.cn)
+ 9 / South China (Guangzhou-3)
+ \ (eos-dongguan-1.cmecloud.cn)
+10 / North China (Beijing-1)
+ \ (eos-beijing-1.cmecloud.cn)
+11 / North China (Beijing-2)
+ \ (eos-beijing-2.cmecloud.cn)
+12 / North China (Beijing-3)
+ \ (eos-beijing-4.cmecloud.cn)
+13 / North China (Huhehaote)
+ \ (eos-huhehaote-1.cmecloud.cn)
+14 / Southwest China (Chengdu)
+ \ (eos-chengdu-1.cmecloud.cn)
+15 / Southwest China (Chongqing)
+ \ (eos-chongqing-1.cmecloud.cn)
+16 / Southwest China (Guiyang)
+ \ (eos-guiyang-1.cmecloud.cn)
+17 / Northwest China (Xian)
+ \ (eos-xian-1.cmecloud.cn)
+18 / Yunnan China (Kunming)
+ \ (eos-yunnan.cmecloud.cn)
+19 / Yunnan China (Kunming-2)
+ \ (eos-yunnan-2.cmecloud.cn)
+20 / Tianjin China (Tianjin)
+ \ (eos-tianjin-1.cmecloud.cn)
+21 / Jilin China (Changchun)
+ \ (eos-jilin-1.cmecloud.cn)
+22 / Hubei China (Xiangyan)
+ \ (eos-hubei-1.cmecloud.cn)
+23 / Jiangxi China (Nanchang)
+ \ (eos-jiangxi-1.cmecloud.cn)
+24 / Gansu China (Lanzhou)
+ \ (eos-gansu-1.cmecloud.cn)
+25 / Shanxi China (Taiyuan)
+ \ (eos-shanxi-1.cmecloud.cn)
+26 / Liaoning China (Shenyang)
+ \ (eos-liaoning-1.cmecloud.cn)
+27 / Hebei China (Shijiazhuang)
+ \ (eos-hebei-1.cmecloud.cn)
+28 / Fujian China (Xiamen)
+ \ (eos-fujian-1.cmecloud.cn)
+29 / Guangxi China (Nanning)
+ \ (eos-guangxi-1.cmecloud.cn)
+30 / Anhui China (Huainan)
+ \ (eos-anhui-1.cmecloud.cn)
+endpoint> 1
+Option location_constraint.
+Location constraint - must match endpoint.
+Used when creating buckets only.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / East China (Suzhou)
+ \ (wuxi1)
+ 2 / East China (Jinan)
+ \ (jinan1)
+ 3 / East China (Hangzhou)
+ \ (ningbo1)
+ 4 / East China (Shanghai-1)
+ \ (shanghai1)
+ 5 / Central China (Zhengzhou)
+ \ (zhengzhou1)
+ 6 / Central China (Changsha-1)
+ \ (hunan1)
+ 7 / Central China (Changsha-2)
+ \ (zhuzhou1)
+ 8 / South China (Guangzhou-2)
+ \ (guangzhou1)
+ 9 / South China (Guangzhou-3)
+ \ (dongguan1)
+10 / North China (Beijing-1)
+ \ (beijing1)
+11 / North China (Beijing-2)
+ \ (beijing2)
+12 / North China (Beijing-3)
+ \ (beijing4)
+13 / North China (Huhehaote)
+ \ (huhehaote1)
+14 / Southwest China (Chengdu)
+ \ (chengdu1)
+15 / Southwest China (Chongqing)
+ \ (chongqing1)
+16 / Southwest China (Guiyang)
+ \ (guiyang1)
+17 / Northwest China (Xian)
+ \ (xian1)
+18 / Yunnan China (Kunming)
+ \ (yunnan)
+19 / Yunnan China (Kunming-2)
+ \ (yunnan2)
+20 / Tianjin China (Tianjin)
+ \ (tianjin1)
+21 / Jilin China (Changchun)
+ \ (jilin1)
+22 / Hubei China (Xiangyan)
+ \ (hubei1)
+23 / Jiangxi China (Nanchang)
+ \ (jiangxi1)
+24 / Gansu China (Lanzhou)
+ \ (gansu1)
+25 / Shanxi China (Taiyuan)
+ \ (shanxi1)
+26 / Liaoning China (Shenyang)
+ \ (liaoning1)
+27 / Hebei China (Shijiazhuang)
+ \ (hebei1)
+28 / Fujian China (Xiamen)
+ \ (fujian1)
+29 / Guangxi China (Nanning)
+ \ (guangxi1)
+30 / Anhui China (Huainan)
+ \ (anhui1)
+location_constraint> 1
+Option acl.
+Canned ACL used when creating buckets and storing or copying objects.
+This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Note that this ACL is applied when server-side copying objects as S3
+doesn't copy the ACL from the source but rather writes a fresh one.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+ / Owner gets FULL_CONTROL.
+ 2 | The AllUsers group gets READ access.
+ \ (public-read)
+ / Owner gets FULL_CONTROL.
+ 3 | The AllUsers group gets READ and WRITE access.
+ | Granting this on a bucket is generally not recommended.
+ \ (public-read-write)
+ / Owner gets FULL_CONTROL.
+ 4 | The AuthenticatedUsers group gets READ access.
+ \ (authenticated-read)
+ / Object owner gets FULL_CONTROL.
+acl> private
+Option server_side_encryption.
+The server-side encryption algorithm used when storing this object in S3.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / None
+ \ ()
+ 2 / AES256
+ \ (AES256)
+server_side_encryption>
+Option storage_class.
+The storage class to use when storing new objects in ChinaMobile.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Default
+ \ ()
+ 2 / Standard storage class
+ \ (STANDARD)
+ 3 / Archive storage mode
+ \ (GLACIER)
+ 4 / Infrequent access storage mode
+ \ (STANDARD_IA)
+storage_class>
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+--------------------
+[ChinaMobile]
+type = s3
+provider = ChinaMobile
+access_key_id = accesskeyid
+secret_access_key = secretaccesskey
+endpoint = eos-wuxi-1.cmecloud.cn
+location_constraint = wuxi1
+acl = private
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+ArvanCloud Object Storage goes beyond the limits of traditional file storage: it gives you access to backup and archived files and allows sharing. Files such as a profile image in an app, images sent by users, or scanned documents can be stored securely and easily in the Object Storage service.
+ArvanCloud provides an S3 interface which can be configured for use with rclone like this.
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+n/s> n
+name> ArvanCloud
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio)
+ \ "s3"
+[snip]
+Storage> s3
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own value
+ 1 / Enter AWS credentials in the next step
+ \ "false"
+ 2 / Get AWS credentials from the environment (env vars or IAM)
+ \ "true"
+env_auth> 1
+AWS Access Key ID - leave blank for anonymous access or runtime credentials.
+access_key_id> YOURACCESSKEY
+AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
+secret_access_key> YOURSECRETACCESSKEY
+Region to connect to.
+Choose a number from below, or type in your own value
+ / The default endpoint - a good choice if you are unsure.
+ 1 | US Region, Northern Virginia, or Pacific Northwest.
+ | Leave location constraint empty.
+ \ "us-east-1"
+[snip]
+region>
+Endpoint for S3 API.
+Leave blank if using ArvanCloud to use the default endpoint for the region.
+Specify if using an S3 clone such as Ceph.
+endpoint> s3.arvanstorage.com
+Location constraint - must be set to match the Region. Used when creating buckets only.
+Choose a number from below, or type in your own value
+ 1 / Empty for Iran-Tehran Region.
+ \ ""
+[snip]
+location_constraint>
+Canned ACL used when creating buckets and/or storing objects in S3.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Choose a number from below, or type in your own value
+ 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
+ \ "private"
+[snip]
+acl>
+The server-side encryption algorithm used when storing this object in S3.
+Choose a number from below, or type in your own value
+ 1 / None
+ \ ""
+ 2 / AES256
+ \ "AES256"
+server_side_encryption>
+The storage class to use when storing objects in S3.
+Choose a number from below, or type in your own value
+ 1 / Default
+ \ ""
+ 2 / Standard storage class
+ \ "STANDARD"
+storage_class>
+Remote config
+--------------------
+[ArvanCloud]
+env_auth = false
+access_key_id = YOURACCESSKEY
+secret_access_key = YOURSECRETACCESSKEY
+region = ir-thr-at1
+endpoint = s3.arvanstorage.com
+location_constraint =
+acl =
+server_side_encryption =
+storage_class =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+This will leave the config file looking like this.
+[ArvanCloud]
+type = s3
+provider = ArvanCloud
+env_auth = false
+access_key_id = YOURACCESSKEY
+secret_access_key = YOURSECRETACCESSKEY
+region =
+endpoint = s3.arvanstorage.com
+location_constraint =
+acl =
+server_side_encryption =
+storage_class =
Tencent Cloud Object Storage (COS) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost.
To configure access to Tencent COS, follow the steps below:
@@ -12840,7 +14740,7 @@ n/s/q> n
\ "alias"
3 / Amazon Drive
\ "amazon cloud drive"
- 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS
+ 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, Minio, and Tencent COS
\ "s3"
[snip]
Storage> s3
@@ -13001,7 +14901,7 @@ y/n> n
For more detailed comparison please check the documentation of the storj backend.
rclone about
is not supported by the S3 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
-See List of backends that do not support rclone about See rclone about
+See List of backends that do not support rclone about and rclone about
B2 is Backblaze's cloud storage system.
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a "hard delete" of files with the --b2-hard-delete
flag which would permanently remove the file instead of hiding it.
Old versions of files, where available, are visible using the --b2-versions
flag.
It is also possible to view a bucket as it was at a certain point in time, using the --b2-version-at
flag. This will show the file versions as they were at that time, showing files that have been deleted afterwards, and hiding files that were created since.
If you wish to remove all the old versions then you can use the rclone cleanup remote:bucket
command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, e.g. rclone cleanup remote:bucket/path/to/stuff
.
Note that cleanup
will remove partially uploaded files from the bucket if they are more than a day old.
When you purge
a bucket, the current and the old versions will be deleted then the bucket will be deleted.
-Here are the standard options specific to b2 (Backblaze B2).
+Here are the Standard options specific to b2 (Backblaze B2).
Account ID or Application Key ID.
Properties:
@@ -13188,7 +15089,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
-Here are the advanced options specific to b2 (Backblaze B2).
+Here are the Advanced options specific to b2 (Backblaze B2).
Endpoint for the service.
Leave blank normally.
@@ -13225,6 +15126,16 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
+Show file versions as they were at the specified time.
+Note that when using this no file write operations are permitted, so you can't upload files or delete them.
+Properties:
+Cutoff for switching to chunked upload.
Files above this size will be uploaded in chunks of "--b2-chunk-size".
@@ -13321,7 +15232,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
rclone about
is not supported by the B2 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
-See List of backends that do not support rclone about See rclone about
+See List of backends that do not support rclone about and rclone about
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
In order to do this you will have to find the Folder ID
of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the Box web interface.
So if the folder you want rclone to use has a URL which looks like https://app.box.com/folder/11xxxxxxxxx8
in the browser, then you use 11xxxxxxxxx8
as the root_folder_id
in the config.
-Here are the standard options specific to box (Box).
+Here are the Standard options specific to box (Box).
OAuth Client Id.
Leave blank normally.
@@ -13581,7 +15492,7 @@ y/e/d> y
-Here are the advanced options specific to box (Box).
+Here are the Advanced options specific to box (Box).
OAuth Access Token as a JSON blob.
Properties:
@@ -13671,7 +15582,7 @@ y/e/d> yBox file names can't have the \
character in. rclone maps this to and from an identical looking unicode equivalent \
(U+FF3C Fullwidth Reverse Solidus).
Box only supports filenames up to 255 characters in length.
rclone about
is not supported by the Box backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
-See List of backends that do not support rclone about See rclone about
+See List of backends that do not support rclone about and rclone about
The cache
remote wraps another existing remote and stores file structure and its data for long running tasks like rclone mount
.
Purge a remote from the cache backend. Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt.
Params: - remote = path to remote (required) - withData = true/false to delete cached data (chunks) as well (optional, false by default)
-Here are the standard options specific to cache (Cache a remote).
+Here are the Standard options specific to cache (Cache a remote).
Remote to cache.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).
@@ -13947,7 +15858,7 @@ chunk_total_size = 10G
-Here are the advanced options specific to cache (Cache a remote).
+Here are the Advanced options specific to cache (Cache a remote).
The plex token for authentication - auto set normally.
Properties:
@@ -14101,7 +16012,7 @@ chunk_total_size = 10G
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
-See the "rclone backend" command for more info on how to pass options and arguments.
+See the backend command for more info on how to pass options and arguments.
These can be run on a running backend using the rc command backend/command.
Print stats on the cache backend in JSON format.
@@ -14180,7 +16091,7 @@ y/e/d> y
For example, if name format is big_*-##.part
and original file name is data.txt
and numbering starts from 0, then the first chunk will be named big_data.txt-00.part
, the 99th chunk will be big_data.txt-98.part
and the 302nd chunk will become big_data.txt-301.part
.
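The naming rule above can be sketched in a few lines. This is an illustrative helper, not rclone's actual implementation; the function name `chunk_name` is made up for the example:

```python
import re

def chunk_name(fmt: str, base: str, n: int) -> str:
    """Expand a chunker name format: '*' stands for the base file name,
    the run of '#' for the chunk number, left-padded with zeros; numbers
    with more digits than hashes are kept as-is."""
    hashes = re.search(r"#+", fmt).group(0)
    return fmt.replace("*", base).replace(hashes, str(n).zfill(len(hashes)))

print(chunk_name("big_*-##.part", "data.txt", 0))    # big_data.txt-00.part
print(chunk_name("big_*-##.part", "data.txt", 301))  # big_data.txt-301.part
```

The same sketch also reproduces the default format, e.g. `chunk_name("*.rclone_chunk.###", "video.mp4", 4)` gives `video.mp4.rclone_chunk.004`.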
Note that list
assembles composite directory entries only when chunk names match the configured format and treats non-conforming file names as normal non-chunked files.
When using norename
transactions, chunk names will additionally have a unique file version suffix. For example, BIG_FILE_NAME.rclone_chunk.001_bp562k
.
Besides data chunks chunker will by default create metadata object for a composite file. The object is named after the original file. Chunker allows user to disable metadata completely (the none
format). Note that metadata is normally not created for files smaller than the configured chunk size. This may change in future rclone releases.
This is the default format. It supports hash sums and chunk validation for composite files. Meta objects carry the following fields:
@@ -14221,7 +16132,7 @@ y/e/d> y
Chunker included in rclone releases up to v1.54
can sometimes fail to detect metadata produced by recent versions of rclone. We recommend users to keep rclone up-to-date to avoid data corruption.
Changing transactions
is dangerous and requires explicit migration.
-Here are the standard options specific to chunker (Transparently chunk/split large files).
+Here are the Standard options specific to chunker (Transparently chunk/split large files).
Remote to chunk/unchunk.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).
@@ -14285,7 +16196,7 @@ y/e/d> y
-Here are the advanced options specific to chunker (Transparently chunk/split large files).
+Here are the Advanced options specific to chunker (Transparently chunk/split large files).
String format of chunk file names.
The two placeholders are: base file name (*) and chunk number (#...). There must be one and only one asterisk and one or more consecutive hash characters. If chunk number has less digits than the number of hashes, it is left-padded by zeros. If there are more digits in the number, they are left as is. Possible chunk files are ignored if their name does not match given format.
@@ -14536,7 +16447,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the standard options specific to sharefile (Citrix Sharefile).
+Here are the Standard options specific to sharefile (Citrix Sharefile).
ID of the root folder.
Leave blank to access "Personal Folders". You can use one of the standard values here or any folder ID (long hex number ID).
@@ -14571,7 +16482,7 @@ y/e/d> y
-Here are the advanced options specific to sharefile (Citrix Sharefile).
+Here are the Advanced options specific to sharefile (Citrix Sharefile).
Cutoff for switching to multipart upload.
Properties:
@@ -14617,7 +16528,7 @@ y/e/d> y
Note that ShareFile is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
ShareFile only supports filenames up to 256 characters in length.
rclone about
is not supported by the Citrix ShareFile backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
-See List of backends that do not support rclone about See rclone about
+See List of backends that do not support rclone about and rclone about
Rclone crypt
remotes encrypt and decrypt other remotes.
A remote of type crypt
does not access a storage system directly, but instead wraps another remote, which in turn accesses the storage system. This is similar to how alias, union, chunker and a few others work. It makes the usage very flexible, as you can add a layer, in this case an encryption layer, on top of any other backend, even in multiple layers. Rclone's functionality can be used as with any other remote, for example you can mount a crypt remote.
Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.
Use the rclone cryptcheck
command to check the integrity of a crypted remote instead of rclone check
which can't check the checksums properly.
-Here are the standard options specific to crypt (Encrypt/Decrypt a remote).
+Here are the Standard options specific to crypt (Encrypt/Decrypt a remote).
Remote to encrypt/decrypt.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).
@@ -14896,7 +16807,7 @@ $ rclone -q ls secret:
-Here are the advanced options specific to crypt (Encrypt/Decrypt a remote).
+Here are the Advanced options specific to crypt (Encrypt/Decrypt a remote).
Allow server-side operations (e.g. copy) to work across different crypt configs.
Normally this option is not what you want, but if you have two crypts pointing to the same backend you can use it.
@@ -14965,12 +16876,15 @@ $ rclone -q ls secret:
+Any metadata supported by the underlying remote is read and written.
+See the metadata docs for more info.
Here are the commands specific to the crypt backend.
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
-See the "rclone backend" command for more info on how to pass options and arguments.
+See the backend command for more info on how to pass options and arguments.
These can be run on a running backend using the rc command backend/command.
Encode the given filename(s)
@@ -15050,7 +16964,7 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile
Rclone uses scrypt
with parameters N=16384, r=8, p=1
with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.
scrypt
makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.
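The derivation described above can be reproduced with any standard scrypt implementation. A minimal sketch in Python's `hashlib`; the example password and salt, and the names given to the three slices of key material, are illustrative assumptions, and rclone substitutes an internal salt when password2 is not set:

```python
import hashlib

password = b"correct horse battery staple"  # example only
salt = b"my-password2-salt"                 # stands in for the optional password2

# N=16384, r=8, p=1 as quoted above; 32+32+16 = 80 bytes of key material.
key = hashlib.scrypt(password, salt=salt, n=16384, r=8, p=1, dklen=80)

# Illustrative split into two 32-byte keys and a 16-byte tweak.
key_a, key_b, tweak = key[:32], key[32:64], key[64:]
print(len(key_a), len(key_b), len(tweak))  # 32 32 16
```

Because scrypt is deterministic for a given password and salt, the same 80 bytes are derived on every machine, which is what lets independent rclone installs decrypt the same remote.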
The compressed files will be named *.###########.gz
where *
is the base file and the #
part is base64 encoded size of the uncompressed file. The file names should not be changed by anything other than the rclone compression backend.
-Here are the standard options specific to compress (Compress a remote).
+Here are the Standard options specific to compress (Compress a remote).
Remote to compress.
Properties:
@@ -15141,7 +17055,7 @@ y/e/d> y
-Here are the advanced options specific to compress (Compress a remote).
+Here are the Advanced options specific to compress (Compress a remote).
GZIP compression level (-2 to 9).
Generally -1 (default, equivalent to 5) is recommended. Levels 1 to 9 increase compression at the cost of speed. Going past 6 generally offers very little return.
@@ -15163,10 +17077,111 @@ y/e/d> y
+Any metadata supported by the underlying remote is read and written.
+See the metadata docs for more info.
+The combine
backend joins remotes together into a single directory tree.
For example you might have a remote for images on one provider:
+$ rclone tree s3:imagesbucket
+/
+├── image1.jpg
+└── image2.jpg
+And a remote for files on another:
+$ rclone tree drive:important/files
+/
+├── file1.txt
+└── file2.txt
+The combine
backend can join these together into a synthetic directory structure like this:
$ rclone tree combined:
+/
+├── files
+│ ├── file1.txt
+│ └── file2.txt
+└── images
+ ├── image1.jpg
+ └── image2.jpg
+You'd do this by specifying an upstreams
parameter in the config like this
upstreams = images=s3:imagesbucket files=drive:important/files
+During the initial setup with rclone config
you will specify the upstream remotes as a space separated list. The upstream remotes can either be local paths or other remotes.
Here is an example of how to make a combine called remote
for the example above. First run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+...
+XX / Combine several remotes into one
+ \ (combine)
+...
+Storage> combine
+Option upstreams.
+Upstreams for combining
+These should be in the form
+ dir=remote:path dir2=remote2:path
+Where before the = is specified the root directory and after is the remote to
+put there.
+Embedded spaces can be added using quotes
+ "dir=remote:path with space" "dir2=remote2:path with space"
+Enter a fs.SpaceSepList value.
+upstreams> images=s3:imagesbucket files=drive:important/files
+--------------------
+[remote]
+type = combine
+upstreams = images=s3:imagesbucket files=drive:important/files
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
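The upstreams value entered above is a space separated list of dir=remote:path pairs, with quotes around pairs containing embedded spaces. A rough sketch of how such a list maps combined paths onto upstreams; this is illustrative Python, not rclone's Go implementation, and `parse_upstreams` and `route` are made-up names:

```python
import shlex

def parse_upstreams(spec: str) -> dict:
    # Split on spaces, honouring quotes, then split each item at the first '='.
    return dict(item.split("=", 1) for item in shlex.split(spec))

def route(path: str, upstreams: dict) -> str:
    # Map a combined path such as 'images/image1.jpg' to its upstream remote.
    top, _, rest = path.partition("/")
    return upstreams[top] + "/" + rest

ups = parse_upstreams('images=s3:imagesbucket files=drive:important/files')
print(route("images/image1.jpg", ups))  # s3:imagesbucket/image1.jpg
```

The quoting rule works the same way: `parse_upstreams('"dir=remote:path with space"')` yields a single `dir` entry whose remote path contains the spaces.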
+Rclone has a convenience feature for making a combine backend for all the shared drives you have access to.
+Assuming your main (non shared drive) Google drive remote is called drive:
you would run
rclone backend -o config drives drive:
+This would produce something like this:
+[My Drive]
+type = alias
+remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
+
+[Test Drive]
+type = alias
+remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
+
+[AllDrives]
+type = combine
+upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
+If you then add that config to your config file (find it with rclone config file
) then you can access all the shared drives in one place with the AllDrives:
remote.
See the Google Drive docs for full info.
+Here are the Standard options specific to combine (Combine several remotes into one).
+Upstreams for combining
+These should be in the form
+dir=remote:path dir2=remote2:path
+Where before the = is specified the root directory and after is the remote to put there.
+Embedded spaces can be added using quotes
+"dir=remote:path with space" "dir2=remote2:path with space"
+Properties:
+Any metadata supported by the underlying remote is read and written.
+See the metadata docs for more info.
Paths are specified as remote:path
Dropbox paths may be as deep as required, e.g. remote:directory/subdirectory
.
The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -15287,8 +17302,8 @@ y/e/d> y
This provides the maximum possible upload speed, especially with lots of small files; however, rclone can't check that the file was uploaded properly in this mode.
If you are using this mode then using "rclone check" after the transfer completes is recommended. Or you could do an initial transfer with --dropbox-batch-mode async
then do a final transfer with --dropbox-batch-mode sync
(the default).
Note that there may be a pause when quitting rclone while rclone finishes up the last batch using this mode.
-Here are the standard options specific to dropbox (Dropbox).
+Here are the Standard options specific to dropbox (Dropbox).
OAuth Client Id.
Leave blank normally.
@@ -15310,7 +17325,7 @@ y/e/d> y
-Here are the advanced options specific to dropbox (Dropbox).
+Here are the Advanced options specific to dropbox (Dropbox).
OAuth Access Token as a JSON blob.
Properties:
@@ -15476,7 +17491,7 @@ y/e/d> y
This backend supports Storage Made Easy's Enterprise File Fabric™ which provides a software solution to integrate and unify File and Object Storage accessible through a global file system.
-The initial setup for the Enterprise File Fabric backend involves getting a token from the the Enterprise File Fabric which you need to do in your browser. rclone config
+The initial setup for the Enterprise File Fabric backend involves getting a token from the Enterprise File Fabric which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -15570,8 +17585,8 @@ y/e/d> y
120673757,My contacts/
120673761,S3 Storage/
The ID for "S3 Storage" would be 120673761
.
-Here are the standard options specific to filefabric (Enterprise File Fabric).
+Here are the Standard options specific to filefabric (Enterprise File Fabric).
URL of the Enterprise File Fabric to connect to.
Properties:
@@ -15620,7 +17635,7 @@ y/e/d> y
-Here are the advanced options specific to filefabric (Enterprise File Fabric).
+Here are the Advanced options specific to filefabric (Enterprise File Fabric).
Session Token.
This is a session token which rclone caches in the config file. It is usually valid for 1 hour.
@@ -15666,7 +17681,7 @@ y/e/d> yFTP is the File Transfer Protocol. Rclone FTP support is provided using the github.com/jlaffaye/ftp package.
Limitations of Rclone's FTP backend
Paths are specified as remote:path
. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
refers to the user's home directory.
To create an FTP configuration named remote
, run
rclone config
Rclone config guides you through an interactive setup process. A minimal rclone FTP remote definition only requires host, username and password. For an anonymous FTP server, use anonymous
as username and your email address as password.
This backend's interactive configuration wizard provides a selection of sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd. Just hit a selection number when prompted.
-Here are the standard options specific to ftp (FTP Connection).
+Here are the Standard options specific to ftp (FTP).
FTP host to connect to.
E.g. "ftp.example.com".
@@ -15837,7 +17852,7 @@ y/e/d> y
-Here are the advanced options specific to ftp (FTP Connection).
+Here are the Advanced options specific to ftp (FTP).
Maximum number of FTP simultaneous connections, 0 for unlimited.
Properties:
@@ -15874,6 +17889,15 @@ y/e/d> y
+Disable using UTF-8 even if server advertises support.
+Properties:
+Use MDTM to set modification time (VsFtpd quirk)
Properties:
@@ -15970,7 +17994,7 @@ y/e/d> y
FTP servers acting as rclone remotes must support passive
mode. The mode cannot be configured as passive
is the only supported one. Rclone's FTP implementation is not compatible with active
mode as the library it uses doesn't support it. This will likely never be supported due to security concerns.
Rclone's FTP backend does not support any checksums but can compare file sizes.
rclone about
is not supported by the FTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
-See List of backends that do not support rclone about See rclone about
+See List of backends that do not support rclone about and rclone about
The implementation of --dump headers
, --dump bodies
, --dump auth
for debugging isn't the same as for rclone HTTP based backends - it has less fine grained control.
--timeout
isn't supported (but --contimeout
is).
--bind
isn't supported.
You can use the following command to check whether rclone can use precise time with your FTP server: rclone backend features your_ftp_remote:
(the trailing colon is important). Look for the number in the line tagged by Precision
designating the remote time precision expressed as nanoseconds. A value of 1000000000
means that file time precision of 1 second is available. A value of 3153600000000000000
(or another large number) means "unsupported".
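As a worked example of reading that number; the 100-year cutoff used here to detect "unsupported" is an assumption for illustration, not a value rclone documents:

```python
def describe_precision(ns: int) -> str:
    # Precision is reported in nanoseconds; an absurdly large value
    # (the one above is roughly 100 years) signals that precise
    # modification times are unsupported.
    if ns >= 10**15:  # illustrative cutoff
        return "unsupported"
    return f"{ns / 1e9:g}s"

print(describe_precision(1000000000))          # 1s
print(describe_precision(3153600000000000000)) # unsupported
```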
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -16177,8 +18201,8 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
+Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
OAuth Client Id.
Leave blank normally.
@@ -16532,7 +18556,7 @@ y/e/d> y
-Here are the advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
+Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
OAuth Access Token as a JSON blob.
Properties:
@@ -16562,6 +18586,27 @@ y/e/d> y
+If set, don't attempt to check the bucket exists or create it.
+This can be useful when trying to minimise the number of transactions rclone does if you know the bucket exists already.
+Properties:
+If set this will decompress gzip encoded objects.
+It is possible to upload objects to GCS with "Content-Encoding: gzip" set. Normally rclone will download these files as compressed objects.
+If this flag is set then rclone will decompress these files with "Content-Encoding: gzip" as they are received. This means that rclone can't check the size and hash but the file contents will be decompressed.
+Properties:
+The encoding for the backend.
See the encoding section in the overview for more info.
@@ -16574,11 +18619,11 @@ y/e/d> y
rclone about
is not supported by the Google Cloud Storage backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
-See List of backends that do not support rclone about See rclone about
+See List of backends that do not support rclone about and rclone about
Paths are specified as drive:path
Drive paths may be as deep as required, e.g. drive:directory/subdirectory
.
The initial setup for drive involves getting a token from Google drive which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -16619,8 +18664,6 @@ Choose a number from below, or type in your own value
5 | does not allow any access to read or download file content.
\ "drive.metadata.readonly"
scope> 1
-ID of the root folder - leave blank normally. Fill in to access "Computers" folders. (see docs).
-root_folder_id>
Service Account Credentials JSON file path - needed only if you want use SA instead of interactive login.
service_account_file>
Remote config
@@ -16677,7 +18720,7 @@ y/e/d> y
This allows read only access to file names only. It does not allow rclone to download or upload data, or rename or delete files or directories.
This option has been moved to the advanced section. You can set the root_folder_id
for rclone. This is the directory (identified by its Folder ID
) that rclone considers to be the root of your drive.
Normally you will leave this blank and rclone will determine the correct root to use itself.
However you can set this to restrict rclone to a specific folder hierarchy or to access data within the "Computers" tab on the drive web interface (where files from Google's Backup and Sync desktop program go).
In order to do this you will have to find the Folder ID
of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the drive web interface.
Here are the standard options specific to drive (Google Drive).
+Here are the Standard options specific to drive (Google Drive).
Google Application Client Id. Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance.
Properties:
@@ -17104,16 +19172,6 @@ trashed=false and 'c' in parents
-ID of the root folder. Leave blank normally.
-Fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point.
-Properties:
-Service Account Credentials JSON file path.
Leave blank normally. Needed only if you want use SA instead of interactive login.
@@ -17135,7 +19193,7 @@ trashed=false and 'c' in parents
-Here are the advanced options specific to drive (Google Drive).
+Here are the Advanced options specific to drive (Google Drive).
OAuth Access Token as a JSON blob.
Properties:
@@ -17165,6 +19223,16 @@ trashed=false and 'c' in parents
+ID of the root folder. Leave blank normally.
+Fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point.
+Properties:
+Service Account Credentials JSON blob.
Leave blank normally. Needed only if you want use SA instead of interactive login.
@@ -17490,6 +19558,21 @@ trashed=false and 'c' in parents
+Resource key for accessing a link-shared file.
+If you need to access files shared with a link like this
+https://drive.google.com/drive/folders/XXX?resourcekey=YYY&usp=sharing
+Then you will need to use the first part "XXX" as the "root_folder_id" and the second part "YYY" as the "resource_key" otherwise you will get 404 not found errors when trying to access the directory.
+See: https://developers.google.com/drive/api/guides/resource-keys
+This resource key requirement only applies to a subset of old files.
+Note also that opening the folder once in the web interface (with the user you've authenticated rclone with) seems to be enough so that the resource key is no longer needed.
+Properties:
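Assuming a recent rclone that supports connection-string remotes, a one-off way to supply both values might look like this (XXX and YYY are the placeholders from the shared link above):

```shell
# XXX and YYY are placeholders taken from the link-shared URL;
# "remote" is your configured drive remote.
rclone lsd "remote,root_folder_id=XXX,resource_key=YYY:"
```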
+The encoding for the backend.
See the encoding section in the overview for more info.
@@ -17505,7 +19588,7 @@ trashed=false and 'c' in parents
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
-See the "rclone backend" command for more info on how to pass options and arguments.
+See the backend command for more info on how to pass options and arguments.
These can be run on a running backend using the rc command backend/command.
Get command for fetching the drive config parameters
@@ -17563,15 +19646,19 @@ rclone backend shortcut drive: source_item -o target=drive2: destination_shortcu
    "name": "Test Drive"
  }
]
-With the -o config parameter it will output the list in a format suitable for adding to a config file to make aliases for all the drives found.
+With the -o config parameter it will output the list in a format suitable for adding to a config file to make aliases for all the drives found and a combined drive.
[My Drive]
type = alias
remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
[Test Drive]
type = alias
-remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
-Adding this to the rclone config file will cause those team drives to be accessible with the aliases shown. This may require manual editing of the names.
+remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
+
+[AllDrives]
+type = combine
+remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
+Adding this to the rclone config file will cause those team drives to be accessible with the aliases shown. Any illegal characters will be substituted with "_" and duplicate names will have numbers suffixed. It will also add a remote called AllDrives which shows all the shared drives combined into one directory tree.
Untrash files and directories
rclone backend untrash remote: [options] [<arguments>+]
@@ -17597,11 +19684,17 @@ rclone backend copyid drive: ID1 path1 ID2 path2
The path should end with a / to indicate copy the file as named to this directory. If it doesn't end with a / then the last path component will be used as the file name.
If the destination is a drive backend then server-side copying will be attempted if possible.
Use the -i flag to see what would be copied before copying.
+Dump the export formats for debug purposes
+rclone backend exportformats remote: [options] [<arguments>+]
+Dump the import formats for debug purposes
+rclone backend importformats remote: [options] [<arguments>+]
Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MiB/s but lots of small files can take a long time.
Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server-side copies with --disable copy
to download and upload the files if you prefer.
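For example, to force rclone to download and re-upload instead of copying server-side (remote names are illustrative):

```shell
# --disable copy turns off the server-side Copy feature,
# so rclone transfers the data through the local machine instead.
rclone copy --disable copy drive:source drive:dest
```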
Google docs will appear as size -1 in rclone ls
, rclone ncdu
etc, and as size 0 in anything which uses the VFS layer, e.g. rclone mount
and rclone serve
. When calculating directory totals, e.g. in rclone size
and rclone ncdu
, they will be counted in as empty files.
This is because rclone can't find out the size of the Google docs without downloading them.
Google docs will transfer correctly with rclone sync
, rclone copy
etc as rclone knows to ignore the size when doing the transfer.
However an unfortunate consequence of this is that you may not be able to download Google docs using rclone mount
. If it doesn't work you will get a 0 sized file. If you try again the doc may gain its correct size and be downloadable. Whether it will work or not depends on the application accessing the mount and the OS you are running - experiment to find out if it does work for you!
Select a project or create a new project.
Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the "Google Drive API".
Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials"
If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then select "External" and click on "CREATE"; on the next screen, enter an "Application name" ("rclone" is OK); enter "User Support Email" (your own email is OK); enter "Developer Contact Email" (your own email is OK); then click on "Save" (all other data is optional). Click again on "Credentials" on the left panel to go back to the "Credentials" screen.
+(PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this will restrict API use to Google Workspace users in your organisation).
Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".
Choose an application type of "Desktop app" and click "Create". (the default name is fine)
It will show you a client ID and client secret. Make a note of these.
+(If you selected "External" at Step 5 continue to "Publish App" in the Steps 9 and 10. If you chose "Internal" you don't need to publish and can skip straight to Step 11.)
Click "OAuth consent screen", then click "PUBLISH APP" button and confirm, or add your account under "Test users".
Provide the noted client ID and client secret to rclone.
Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for verification" and then wait a few weeks(!) for their response; in practice, you can go right ahead and use the client ID and client secret with rclone, the only issue will be a very scary confirmation screen shown when you connect via your browser for rclone to be able to get its token-id (but as this only happens during the remote configuration, it's not such a big deal).
(Thanks to @balazer on github for these instructions.)
@@ -17640,7 +19732,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2
The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos.
NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use.
The initial setup for google photos involves getting a token from Google Photos which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -17793,8 +19885,8 @@ y/e/d> y
This means that you can use the album
path pretty much like a normal filesystem and it is a good target for repeated syncing.
The shared-album
directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.
Here are the standard options specific to google photos (Google Photos).
+Here are the Standard options specific to google photos (Google Photos).
OAuth Client Id.
Leave blank normally.
@@ -17826,7 +19918,7 @@ y/e/d> yHere are the advanced options specific to google photos (Google Photos).
+Here are the Advanced options specific to google photos (Google Photos).
OAuth Access Token as a JSON blob.
Properties:
@@ -18007,8 +20099,8 @@ rclone backend drop Hasher:
rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1
stickyimport
is similar to import
but works much faster because it does not need to stat existing files and skips initial tree walk. Instead of binding cache entries to file fingerprints it creates sticky entries bound to the file name alone ignoring size, modification time etc. Such hash entries can be replaced only by purge
, delete
, backend drop
or by full re-read/re-write of the files.
Here are the standard options specific to hasher (Better checksums for other remotes).
+Here are the Standard options specific to hasher (Better checksums for other remotes).
Remote to cache checksums for (e.g. myRemote:path).
Properties:
@@ -18037,7 +20129,7 @@ rclone backend drop Hasher:
-Here are the advanced options specific to hasher (Better checksums for other remotes).
+Here are the Advanced options specific to hasher (Better checksums for other remotes).
Auto-update checksum for files smaller than this size (disabled by default).
Properties:
@@ -18047,12 +20139,15 @@ rclone backend drop Hasher:
+Any metadata supported by the underlying remote is read and written.
+See the metadata docs for more info.
Here are the commands specific to the hasher backend.
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
-See the "rclone backend" command for more info on how to pass options and arguments.
+See the backend command for more info on how to pass options and arguments.
These can be run on a running backend using the rc command backend/command.
Drop cache
@@ -18101,7 +20196,7 @@ rclone backend drop Hasher:
HDFS is a distributed file-system, part of the Apache Hadoop framework.
Paths are specified as remote:
or remote:path/to/dir
.
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -18209,8 +20304,8 @@ username = root
Invalid UTF-8 bytes will also be replaced.
-Here are the standard options specific to hdfs (Hadoop distributed file system).
+Here are the Standard options specific to hdfs (Hadoop distributed file system).
Hadoop name node and port.
E.g. "namenode:8020" to connect to host namenode at port 8020.
@@ -18238,7 +20333,7 @@ username = root
-Here are the advanced options specific to hdfs (Hadoop distributed file system).
+Here are the Advanced options specific to hdfs (Hadoop distributed file system).
Kerberos service principal name for the namenode.
Enables KERBEROS authentication. Specifies the Service Principal Name (SERVICE/FQDN) for the namenode. E.g. "hdfs/namenode.hadoop.docker" for namenode running as service 'hdfs' with FQDN 'namenode.hadoop.docker'.
@@ -18281,13 +20376,309 @@ username = root
Move
or DirMove
.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
The initial setup for hidrive involves getting a token from HiDrive which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / HiDrive
+ \ "hidrive"
+[snip]
+Storage> hidrive
+OAuth Client Id - Leave blank normally.
+client_id>
+OAuth Client Secret - Leave blank normally.
+client_secret>
+Access permissions that rclone should use when requesting access from HiDrive.
+Leave blank normally.
+scope_access>
+Edit advanced config?
+y/n> n
+Use auto config?
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+type = hidrive
+token = {"access_token":"xxxxxxxxxxxxxxxxxxxx","token_type":"Bearer","refresh_token":"xxxxxxxxxxxxxxxxxxxxxxx","expiry":"xxxxxxxxxxxxxxxxxxxxxxx"}
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+You should be aware that OAuth-tokens can be used to access your account and hence should not be shared with other persons. See the below section for more information.
+See the remote setup docs for how to set it up on a machine with no Internet browser available.
+Note that rclone runs a webserver on your local machine to collect the token as returned from HiDrive. This only runs from the moment it opens your browser to the moment you get back the verification code. The webserver runs on http://127.0.0.1:53682/
. If local port 53682
is protected by a firewall you may need to temporarily unblock the firewall to complete authorization.
Once configured you can then use rclone
like this,
List directories in top level of your HiDrive root folder
+rclone lsd remote:
+List all the files in your HiDrive filesystem
+rclone ls remote:
+To copy a local directory to a HiDrive directory called backup
+rclone copy /home/source remote:backup
+Any OAuth-tokens will be stored by rclone in the remote's configuration file as unencrypted text. Anyone can use a valid refresh-token to access your HiDrive filesystem without knowing your password. Therefore you should make sure no one else can access your configuration.
+It is possible to encrypt rclone's configuration file. You can find information on securing your configuration file by viewing the configuration encryption docs.
+As can be verified here, each refresh_token
(for Native Applications) is valid for 60 days. If used to access HiDrive, its validity will be automatically extended.
This means that if you
+then rclone will return an error message indicating that the refresh token is invalid or expired.
+To fix this you will need to authorize rclone to access your HiDrive account again.
+Using
+rclone config reconnect remote:
+the process is very similar to the process of initial setup exemplified before.
+HiDrive allows modification times to be set on objects accurate to 1 second.
+HiDrive supports its own hash type which is used to verify the integrity of file contents after successful transfers.
+HiDrive cannot store files or folders that include /
(0x2F) or null-bytes (0x00) in their name. Any other characters can be used in the names of files or folders. Additionally, files or folders cannot be named either of the following: .
or ..
Therefore rclone will automatically replace these characters, if files or folders are stored or accessed with such names.
+You can read about how this filename encoding works in general here.
+Keep in mind that HiDrive only supports file or folder names with a length of 255 characters or less.
+HiDrive limits file sizes per single request to a maximum of 2 GiB. To allow storage of larger files and allow for better upload performance, the hidrive backend will use a chunked transfer for files larger than 96 MiB. Rclone will upload multiple parts/chunks of the file at the same time. Chunks in the process of being uploaded are buffered in memory, so you may want to restrict this behaviour on systems with limited resources.
+You can customize this behaviour using the following options:
+chunk_size
: size of file parts
+upload_cutoff
: files larger or equal to this in size will use a chunked transfer
+upload_concurrency
: number of file-parts to upload at the same time
+See the below section about configuration options for more details.
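As a hedged example of tuning these on a memory-constrained machine (the flag names below assume rclone's usual --hidrive-* convention for backend options; the values are illustrative, not recommendations):

```shell
# Smaller chunks and lower concurrency reduce memory use,
# likely at some cost in upload speed.
rclone copy /path/to/bigfile remote:backup \
  --hidrive-chunk-size 48M \
  --hidrive-upload-cutoff 96M \
  --hidrive-upload-concurrency 2
```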
+You can set the root folder for rclone. This is the directory that rclone considers to be the root of your HiDrive.
+Usually, you will leave this blank, and rclone will use the root of the account.
+However, you can set this to restrict rclone to a specific folder hierarchy.
+This works by prepending the contents of the root_prefix
option to any paths accessed by rclone. For example, the following two ways to access the home directory are equivalent:
rclone lsd --hidrive-root-prefix="/users/test/" remote:path
+
+rclone lsd remote:/users/test/path
+See the below section about configuration options for more details.
+By default, rclone will know the number of directory members contained in a directory. For example, rclone lsd
uses this information.
The acquisition of this information will result in additional time costs for HiDrive's API. When dealing with large directory structures, it may be desirable to circumvent this time cost, especially when this information is not explicitly needed. For this, the disable_fetching_member_count
option can be used.
See the below section about configuration options for more details.
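For instance, a listing that skips member counts might look like this (the flag name is assumed from rclone's backend-option naming convention):

```shell
# Skip fetching per-directory member counts to speed up
# listings of large directory structures.
rclone lsd remote: --hidrive-disable-fetching-member-count
```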
+Here are the Standard options specific to hidrive (HiDrive).
+OAuth Client Id.
+Leave blank normally.
+Properties:
+OAuth Client Secret.
+Leave blank normally.
+Properties:
+Access permissions that rclone should use when requesting access from HiDrive.
+Properties:
+Here are the Advanced options specific to hidrive (HiDrive).
+OAuth Access Token as a JSON blob.
+Properties:
+Auth server URL.
+Leave blank to use the provider defaults.
+Properties:
+Token server url.
+Leave blank to use the provider defaults.
+Properties:
+User-level that rclone should use when requesting access from HiDrive.
+Properties:
+The root/parent folder for all paths.
+Fill in to use the specified folder as the parent for all paths given to the remote. This way rclone can use any folder as its starting point.
+Properties:
+Endpoint for the service.
+This is the URL that API-calls will be made to.
+Properties:
+Do not fetch number of objects in directories unless it is absolutely necessary.
+Requests may be faster if the number of objects in subdirectories is not fetched.
+Properties:
+Chunksize for chunked uploads.
+Any files larger than the configured cutoff (or files of unknown size) will be uploaded in chunks of this size.
+The upper limit for this is 2147483647 bytes (about 2.000Gi). That is the maximum amount of bytes a single upload-operation will support. Setting this above the upper limit or to a negative value will cause uploads to fail.
+Setting this to larger values may increase the upload speed at the cost of using more memory. It can be set to smaller values to save on memory.
+Properties:
+Cutoff/Threshold for chunked uploads.
+Any files larger than this will be uploaded in chunks of the configured chunksize.
+The upper limit for this is 2147483647 bytes (about 2.000Gi). That is the maximum amount of bytes a single upload-operation will support. Setting this above the upper limit will cause uploads to fail.
+Properties:
+Concurrency for chunked uploads.
+This is the upper limit for how many transfers for the same file are running concurrently. Setting this to a value smaller than 1 will cause uploads to deadlock.
+If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
+Properties:
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+HiDrive is able to store symbolic links (symlinks) by design, for example, when unpacked from a zip archive.
+There exists no direct mechanism to manage native symlinks in remotes. As such this implementation has chosen to ignore any native symlinks present in the remote. rclone will not be able to access or show any symlinks stored in the hidrive-remote. This means symlinks cannot be individually removed, copied, or moved, except when removing, copying, or moving the parent folder.
+This does not affect the .rclonelink
-files that rclone uses to encode and store symbolic links.
It is possible to store sparse files in HiDrive.
+Note that copying a sparse file will expand the holes into null-byte (0x00) regions that will then consume disk space. Likewise, when downloading a sparse file, the resulting file will have null-byte regions in the place of file holes.
The HTTP remote is a read only remote for reading files of a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn't then please file an issue, or send a pull request!)
Paths are specified as remote:
or remote:path
.
The remote:
represents the configured url, and any path following it will be resolved relative to this url, according to the URL standard. This means with remote url https://beta.rclone.org/branch
and path fix
, the resolved URL will be https://beta.rclone.org/branch/fix
, while with path /fix
the resolved URL will be https://beta.rclone.org/fix
as the absolute path is resolved from the root of the domain.
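The resolution rules above can be sketched as (remote name illustrative):

```shell
# Assuming the remote is configured with url = https://beta.rclone.org/branch
rclone lsd remote:fix    # resolves to https://beta.rclone.org/branch/fix
rclone lsd remote:/fix   # resolves to https://beta.rclone.org/fix
```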
If the path following the remote:
ends with /
it will be assumed to point to a directory. If the path does not end with /
, then a HEAD request is sent and the response used to decide if it is treated as a file or a directory (run with -vv
to see details). When --http-no-head is specified, a path without ending /
is always assumed to be a file. If rclone incorrectly assumes the path is a file, the solution is to specify the path with ending /
. When you know the path is a directory, ending it with /
is always better as it avoids the initial HEAD request.
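A quick illustration of the trailing-slash behaviour (paths are placeholders):

```shell
# Trailing slash: treated as a directory, no initial HEAD request.
rclone ls remote:files/
# No trailing slash: a HEAD request decides file vs directory
# (run with -vv to see the details).
rclone ls remote:files
```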
To just download a single file it is easier to use copyurl.
-Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -18300,7 +20691,7 @@ name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
-XX / http Connection
+XX / HTTP
 \ "http"
[snip]
Storage> http
@@ -18350,10 +20741,10 @@ e/n/d/r/c/s/q> q
rclone lsd --http-url https://beta.rclone.org :http:
or:
rclone lsd :http,url='https://beta.rclone.org':
-Here are the standard options specific to http (http Connection).
+Here are the Standard options specific to http (HTTP).
URL of http host to connect to.
+URL of HTTP host to connect to.
E.g. "https://example.com", or "https://user:pass@example.com" to use a username and password.
Properties:
-Here are the advanced options specific to http (http Connection).
+Here are the Advanced options specific to http (HTTP).
Set HTTP headers for all transactions.
Use this to set additional HTTP headers for all transactions.
@@ -18405,13 +20796,13 @@ e/n/d/r/c/s/q> qrclone about
is not supported by the HTTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
-See List of backends that do not support rclone about See rclone about
+See List of backends that do not support rclone about and rclone about
Paths are specified as remote:path
Paths are specified as remote:container
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:container/path/to/dir
.
The initial setup for Hubic involves getting a token from Hubic which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -18469,8 +20860,8 @@ y/e/d> y
The modified time is stored as metadata on the object as X-Object-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
Note that Hubic wraps the Swift backend, so most of the properties are the same.
-Here are the standard options specific to hubic (Hubic).
+Here are the Standard options specific to hubic (Hubic).
OAuth Client Id.
Leave blank normally.
@@ -18491,8 +20882,8 @@ y/e/d> yHere are the advanced options specific to hubic (Hubic).
+Here are the Advanced options specific to hubic (Hubic).
OAuth Access Token as a JSON blob.
Properties:
@@ -18554,9 +20945,288 @@ y/e/d> y
This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.
The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
+The Internet Archive backend utilizes Items on archive.org.
+Refer to IAS3 API documentation for the API this backend uses.
+Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:item/path/to/dir
.
Once you have made a remote (see the provider specific section above) you can use it like this:
+Unlike S3, listing all items uploaded by you isn't supported.
+Make a new item
+rclone mkdir remote:item
+List the contents of an item
+rclone ls remote:item
+Sync /home/local/directory
to the remote item, deleting any excess files in the item.
rclone sync -i /home/local/directory remote:item
+Because of Internet Archive's architecture, it enqueues write operations (and extra post-processing) in a per-item queue. You can check an item's queue at https://catalogd.archive.org/history/item-name-here . Because of that, uploads and deletes will not show up immediately and take some time to become available. The per-item queue is in turn enqueued to another queue, the Item Deriver Queue. You can check the status of the Item Deriver Queue here. This queue has a limit, and it may block you from uploading, or even deleting. You should avoid uploading a lot of small files for better behavior.
+You can optionally wait for the server's processing to finish, by setting a non-zero value for the wait_archive
key. By making it wait, rclone can do normal file comparison. Make sure to set a large enough value (e.g. 30m0s
for smaller files) as it can take a long time depending on server's queue.
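A sketch of such a sync (the flag name below assumes rclone's usual --internetarchive-* convention for backend options):

```shell
# Wait up to 30 minutes for archive.org's per-item queue to finish
# processing writes, so rclone can compare files normally afterwards.
rclone sync -i /home/local/directory remote:item --internetarchive-wait-archive 30m
```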
This backend supports setting, updating and reading metadata of each file. The metadata will appear as file metadata on Internet Archive. However, some fields are reserved by both Internet Archive and rclone.
+The following are reserved by Internet Archive:
- name
- source
- size
- md5
- crc32
- sha1
- format
- old_version
- viruscheck
Trying to set values for these keys is ignored with a warning. Only setting mtime
is an exception; doing so behaves identically to setting ModTime.
rclone reserves all keys starting with rclone-
. Setting a value for these keys will produce a warning, but the values are set as requested.
If there are multiple values for a key, only the first one is returned. This is a limitation of rclone, which supports only one value per key. It can be triggered when you do a server-side copy.
+Reading metadata will also return custom keys, i.e. those that are neither standard nor reserved.
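As a sketch (the remote, item and key names are placeholders), metadata can be read and written with the general rclone metadata flags:

```sh
# List files in an item together with all their metadata, including custom keys
rclone lsjson --metadata remote:item

# Upload a file and set a custom metadata key on it (rclone 1.59+;
# -M enables metadata support, --metadata-set adds a key=value pair)
rclone copyto -M --metadata-set "subject=example" /tmp/file.txt remote:item/file.txt
```

Remember that keys reserved by Internet Archive, and keys starting with rclone-, behave as described above.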
+Here is an example of making an internetarchive configuration. Most of this applies to the other providers as well; any differences are described below.
+First run
+rclone config
+This will guide you through an interactive setup process.
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+XX / InternetArchive Items
+ \ (internetarchive)
+Storage> internetarchive
+Option access_key_id.
+IAS3 Access Key.
+Leave blank for anonymous access.
+You can find one here: https://archive.org/account/s3.php
+Enter a value. Press Enter to leave empty.
+access_key_id> XXXX
+Option secret_access_key.
+IAS3 Secret Key (password).
+Leave blank for anonymous access.
+Enter a value. Press Enter to leave empty.
+secret_access_key> XXXX
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> y
+Option endpoint.
+IAS3 Endpoint.
+Leave blank for default value.
+Enter a string value. Press Enter for the default (https://s3.us.archive.org).
+endpoint>
+Option front_endpoint.
+Host of InternetArchive Frontend.
+Leave blank for default value.
+Enter a string value. Press Enter for the default (https://archive.org).
+front_endpoint>
+Option disable_checksum.
+Don't store MD5 checksum with object metadata.
+Normally rclone will calculate the MD5 checksum of the input before
+uploading it so it can ask the server to check the object against checksum.
+This is great for data integrity checking but can cause long delays for
+large files to start uploading.
+Enter a boolean value (true or false). Press Enter for the default (true).
+disable_checksum> true
+Option encoding.
+The encoding for the backend.
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+Enter a encoder.MultiEncoder value. Press Enter for the default (Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot).
+encoding>
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+--------------------
+[remote]
+type = internetarchive
+access_key_id = XXXX
+secret_access_key = XXXX
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Here are the Standard options specific to internetarchive (Internet Archive).
+IAS3 Access Key.
+Leave blank for anonymous access. You can find one here: https://archive.org/account/s3.php
+Properties:
+IAS3 Secret Key (password).
+Leave blank for anonymous access.
+Properties:
+Here are the Advanced options specific to internetarchive (Internet Archive).
+IAS3 Endpoint.
+Leave blank for default value.
+Properties:
+Host of InternetArchive Frontend.
+Leave blank for default value.
+Properties:
+Don't ask the server to test against MD5 checksum calculated by rclone. Normally rclone will calculate the MD5 checksum of the input before uploading it so it can ask the server to check the object against checksum. This is great for data integrity checking but can cause long delays for large files to start uploading.
+Properties:
+Timeout for waiting for the server's processing tasks (specifically archive and book_op) to finish. Only enable this if you need writes to be reflected by the time rclone returns. Set to 0 to disable waiting. No error is thrown in case of timeout.
+Properties:
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+Metadata fields provided by Internet Archive. If there are multiple values for a key, only the first one is returned. This is a limitation of rclone, which supports only one value per key.
+The owner is able to add custom keys. The metadata feature grabs all keys, including custom ones.
+Here are the possible system metadata items for the internetarchive backend.
| Name | Help | Type | Example | Read Only |
|------|------|------|---------|-----------|
| crc32 | CRC32 calculated by Internet Archive | string | 01234567 | N |
| format | Name of format identified by Internet Archive | string | Comma-Separated Values | N |
| md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | N |
| mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | N |
| old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | N |
| rclone-ia-mtime | Time of last modification, managed by Internet Archive | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| rclone-mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| rclone-update-track | Random value used by Rclone for tracking changes inside Internet Archive | string | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | N |
| sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | N |
| size | File size in bytes | decimal number | 123456 | N |
| source | The source of the file | string | original | N |
| viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | N |
See the metadata docs for more info.
Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway. In addition to the official service at jottacloud.com, it also provides white-label solutions to different companies, such as:
* Telia
  * Telia Cloud (cloud.telia.se)
  * Telia Sky (sky.telia.no)
* Tele2
  * Tele2 Cloud (mittcloud.tele2.se)
* Elkjøp (with subsidiaries):
  * Elkjøp Cloud (cloud.elkjop.no)
  * Elgiganten Sweden (cloud.elgiganten.se)
  * Elgiganten Denmark (cloud.elgiganten.dk)
  * Giganti Cloud (cloud.gigantti.fi)
  * ELKO Cloud (cloud.elko.is)
Most of the white-label versions are supported by this backend, although they may require a different authentication setup - described below.
@@ -18572,7 +21242,7 @@ y/e/d> ySimilar to other whitelabel versions Telia Cloud doesn't offer the option of creating a CLI token, and additionally uses a separate authentication flow where the username is generated internally. To setup rclone to use Telia Cloud, choose Telia Cloud authentication in the setup. The rest of the setup is identical to the default setup.
As the Tele2-Com Hem merger was completed, this authentication can be used for former Com Hem Cloud and Tele2 Cloud customers, as no support for creating a CLI token exists, and it additionally uses a separate authentication flow where the username is generated internally. To set up rclone to use Tele2 Cloud, choose Tele2 Cloud authentication in the setup. The rest of the setup is identical to the default setup.
-Here is an example of how to make a remote called remote
with the default setup. First run:
rclone config
This will guide you through an interactive setup process:
@@ -18582,56 +21252,78 @@ s) Set configuration password q) Quit config n/s/q> n name> remote +Option Storage. Type of storage to configure. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value +Choose a number from below, or type in your own value. [snip] XX / Jottacloud - \ "jottacloud" + \ (jottacloud) [snip] Storage> jottacloud -** See help for jottacloud backend at: https://rclone.org/jottacloud/ ** - -Edit advanced config? (y/n) -y) Yes -n) No -y/n> n -Remote config -Use legacy authentication?. -This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. +Edit advanced config? y) Yes n) No (default) y/n> n - -Generate a personal login token here: https://www.jottacloud.com/web/secure +Option config_type. +Select authentication type. +Choose a number from below, or type in an existing string value. +Press Enter for the default (standard). + / Standard authentication. + 1 | Use this if you're a normal Jottacloud user. + \ (standard) + / Legacy authentication. + 2 | This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. + \ (legacy) + / Telia Cloud authentication. + 3 | Use this if you are using Telia Cloud. + \ (telia) + / Tele2 Cloud authentication. + 4 | Use this if you are using Tele2 Cloud. + \ (tele2) +config_type> 1 +Personal login token. +Generate here: https://www.jottacloud.com/web/secure Login Token> <your token here> - -Do you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client? - +Use a non-standard device/mountpoint? +Choosing no, the default, will let you access the storage used for the archive +section of the official Jottacloud client. If you instead want to access the +sync or the backup section, for example, you must choose yes. y) Yes -n) No +n) No (default) y/n> y -Please select the device to use. 
Normally this will be Jotta -Choose a number from below, or type in an existing value +Option config_device. +The device to use. In standard setup the built-in Jotta device is used, +which contains predefined mountpoints for archive, sync etc. All other devices +are treated as backup devices by the official Jottacloud client. You may create +a new by entering a unique name. +Choose a number from below, or type in your own string value. +Press Enter for the default (DESKTOP-3H31129). 1 > DESKTOP-3H31129 2 > Jotta -Devices> 2 -Please select the mountpoint to user. Normally this will be Archive -Choose a number from below, or type in an existing value +config_device> 2 +Option config_mountpoint. +The mountpoint to use for the built-in device Jotta. +The standard setup is to use the Archive mountpoint. Most other mountpoints +have very limited support in rclone and should generally be avoided. +Choose a number from below, or type in an existing string value. +Press Enter for the default (Archive). 1 > Archive - 2 > Links + 2 > Shared 3 > Sync - -Mountpoints> 1 +config_mountpoint> 1 -------------------- -[jotta] +[remote] type = jottacloud +configVersion = 1 +client_id = jottacli +client_secret = +tokenURL = https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token token = {........} +username = 2940e57271a93d987d6f8a21 device = Jotta mountpoint = Archive -configVersion = 1 -------------------- -y) Yes this is OK +y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y @@ -18643,18 +21335,19 @@ y/e/d> yTo copy a local directory to an Jottacloud directory called backup
rclone copy /home/source remote:backup
The official Jottacloud client registers a device for each computer you install it on, and then creates a mountpoint for each folder you select for Backup. The web interface uses a special device called Jotta for the Archive and Sync mountpoints.
-With rclone you'll want to use the Jotta/Archive device/mountpoint in most cases, however if you want to access files uploaded by any of the official clients rclone provides the option to select other devices and mountpoints during config. Note that uploading files is currently not supported to other devices than Jotta.
-The built-in Jotta device may also contain several other mountpoints, such as: Latest, Links, Shared and Trash. These are special mountpoints with a different internal representation than the "regular" mountpoints. Rclone will only to a very limited degree support them. Generally you should avoid these, unless you know what you are doing.
+The official Jottacloud client registers a device for each computer you install it on, and shows them in the backup section of the user interface. For each folder you select for backup it will create a mountpoint within this device. A built-in device called Jotta is special, and contains mountpoints Archive, Sync and some others, used for corresponding features in official clients.
+With rclone you'll want to use the standard Jotta/Archive device/mountpoint in most cases. However, you may for example want to access files from the sync or backup functionality provided by the official clients, and rclone therefore provides the option to select other devices and mountpoints during config.
+You are allowed to create new devices and mountpoints. All devices except the built-in Jotta device are treated as backup devices by official Jottacloud clients, and the mountpoints on them are individual backup sets.
+With the built-in Jotta device, only existing, built-in mountpoints can be selected. In addition to the mentioned Archive and Sync, it may contain several other mountpoints such as: Latest, Links, Shared and Trash. All of these are special mountpoints with a different internal representation than the "regular" mountpoints. Rclone supports them only to a very limited degree. Generally you should avoid these, unless you know what you are doing.
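For example, to read files from a backup set created by the official client, a remote could be configured against that device and its mountpoint. This is a sketch; the device name and mountpoint here are hypothetical:

```ini
[jotta-backup]
type = jottacloud
# A non-standard device, as registered by the official client on that machine
device = DESKTOP-3H31129
# One of the backup sets (mountpoints) on that device
mountpoint = Documents
```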
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Note that the implementation in Jottacloud always uses only a single API request to get the entire list, so for large folders this could lead to a long wait time before the first results are shown.
Note also that with rclone version 1.58 and newer, information about MIME types is not available when using --fast-list
.
Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
Jottacloud supports MD5 type hashes, so you can use the --checksum
flag.
Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (in location given by --temp-dir) before it is uploaded. Small files will be cached in memory - see the --jottacloud-md5-memory-limit flag. When uploading from local disk the source checksum is always available, so this does not apply. Starting with rclone version 1.52 the same is true for crypted remotes (in older versions the crypt backend would not calculate hashes for uploads from local disk, so the Jottacloud backend had to do it as described above).
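When copying from a source that cannot supply MD5 checksums, the temporary on-disk cache location can be pointed at a disk with enough room via --temp-dir. A sketch (the remote name and path are placeholders):

```sh
# Files lacking an MD5 are cached under --temp-dir (or in memory if small
# enough) so the hash Jottacloud requires can be computed before upload.
rclone copy --temp-dir /mnt/scratch othersource:files remote:backup
```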
-In addition to the default restricted characters set the following characters are also replaced:
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
-Here are the standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
+Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
Choose your storage provider.
Properties:
@@ -18942,8 +21635,8 @@ y/e/d> yHere are the advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
+Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
Mount ID of the mount to use.
If omitted, the primary mount is used.
@@ -18974,7 +21667,7 @@ y/e/d> yNote that Koofr is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Here is an example of making a mailru configuration. First create a Mail.ru Cloud account and choose a tariff, then run
rclone config
This will guide you through an interactive setup process:
@@ -19190,7 +21883,7 @@ y/e/d> yRemoving a file or directory actually moves it to the trash, which is not visible to rclone but can be seen in a web browser. The trashed file still occupies part of total quota. If you wish to empty your trash and free some quota, you can use the rclone cleanup remote:
command, which will permanently delete all your trashed files. This command does not take any path arguments.
To view your current quota you can use the rclone about remote:
command which will display your usage limit (quota) and the current usage.
In addition to the default restricted characters set the following characters are also replaced:
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the standard options specific to mailru (Mail.ru Cloud).
+Here are the Standard options specific to mailru (Mail.ru Cloud).
User name (usually email).
Properties:
@@ -19286,8 +21979,8 @@ y/e/d> y -Here are the advanced options specific to mailru (Mail.ru Cloud).
+Here are the Advanced options specific to mailru (Mail.ru Cloud).
Comma separated list of file name patterns eligible for speedup (put by hash).
Patterns are case insensitive and can contain '*' or '?' meta characters.
@@ -19416,7 +22109,7 @@ y/e/d> yFile size limits depend on your account. A single file size is limited by 2G for a free account and unlimited for paid tariffs. Please refer to the Mail.ru site for the total uploaded size limits.
Note that Mailru is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -19471,9 +22164,9 @@ y/e/d> yrclone ls remote:
To copy a local directory to a Mega directory called backup
rclone copy /home/source remote:backup
-Mega does not support modification times or hashes yet.
-Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the standard options specific to opendrive (OpenDrive).
+Here are the Standard options specific to opendrive (OpenDrive).
Username.
Properties:
@@ -20827,8 +23578,8 @@ y/e/d> yHere are the advanced options specific to opendrive (OpenDrive).
+Here are the Advanced options specific to opendrive (OpenDrive).
The encoding for the backend.
See the encoding section in the overview for more info.
@@ -20849,14 +23600,14 @@ y/e/d> yNote that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ?
in it, it will be mapped to ？
instead.
rclone about
is not supported by the OpenDrive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
+See List of backends that do not support rclone about and rclone about
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
Here is an example of making an QingStor configuration. First run
rclone config
This will guide you through an interactive setup process.
@@ -20948,11 +23699,11 @@ y/e/d> y -The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the standard options specific to qingstor (QingCloud Object Storage).
+Here are the Standard options specific to qingstor (QingCloud Object Storage).
Get QingStor credentials from runtime.
Only applies if access_key_id and secret_access_key is blank.
@@ -21032,8 +23783,8 @@ y/e/d> y -Here are the advanced options specific to qingstor (QingCloud Object Storage).
+Here are the Advanced options specific to qingstor (QingCloud Object Storage).
Number of connection retries.
Properties:
@@ -21087,9 +23838,9 @@ y/e/d> yrclone about
is not supported by the qingstor backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about See rclone about
+See List of backends that do not support rclone about and rclone about
Sia (sia.tech) is a decentralized cloud storage platform based on blockchain technology. With rclone you can use it like any other remote filesystem or mount Sia folders locally. The technology behind it involves a number of new concepts such as Siacoins and Wallet, Blockchain and Consensus, Renting and Hosting, and so on. If you are new to it, you should first familiarize yourself with their excellent support documentation.
rclone interacts with Sia network by talking to the Sia daemon via HTTP API which is usually available on port 9980. By default you will run the daemon locally on the same computer so it's safe to leave the API password blank (the API URL will be http://127.0.0.1:9980
making external access impossible).
However, if you want to access a Sia daemon running on another node, for example due to memory constraints or because you want to share a single daemon between several rclone and Sia-UI instances, you'll need to make a few more provisions:
- Ensure you have the Sia daemon installed directly or in a docker container, because Sia-UI does not support this mode natively.
- Run it on an externally accessible port, for example by providing the --api-addr :9980
and --disable-api-security
arguments on the daemon command line.
- Enforce an API password for the siad
daemon via the environment variable SIA_API_PASSWORD
or a text file named apipassword
in the daemon directory.
- Set the rclone backend option api_password
, taking it from the above locations.
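These provisions might be sketched as follows (the host name, port and password are placeholders):

```sh
# Run the daemon reachable from other hosts, with an enforced API password.
# --disable-api-security opens the API to the network, so protect it.
export SIA_API_PASSWORD='s3cret'
siad --api-addr :9980 --disable-api-security
```

and on the rclone side, the matching remote configuration:

```ini
[mySia]
type = sia
api_url = http://sia.host.example:9980
api_password = s3cret
```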
Notes:
1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via the command line siac wallet unlock
. Alternatively you can make siad
unlock your wallet automatically upon startup by running it with the environment variable SIA_WALLET_PASSWORD
.
2. If siad
cannot find the SIA_API_PASSWORD
variable or the apipassword
file in the SIA_DIR
directory, it will generate a random password and store it in the text file named apipassword
under the YOUR_HOME/.sia/
directory on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword
on Windows. Remember this when you configure the password in rclone.
3. The only way to use siad
without an API password is to run it on localhost with the command line argument --authorize-api=false
, but this is insecure and strongly discouraged.
Here is an example of how to make a sia
remote called mySia
. First, run:
rclone config
This will guide you through an interactive setup process:
@@ -21157,8 +23908,8 @@ y/e/d> yrclone copy /home/source mySia:backup
-Here are the standard options specific to sia (Sia Decentralized Cloud).
+Here are the Standard options specific to sia (Sia Decentralized Cloud).
Sia daemon API URL, like http://sia.daemon.host:9980.
Note that siad must run with --disable-api-security to open API port for other hosts (not recommended). Keep default if Sia daemon runs on localhost.
@@ -21180,8 +23931,8 @@ y/e/d> yHere are the advanced options specific to sia (Sia Decentralized Cloud).
+Here are the Advanced options specific to sia (Sia Decentralized Cloud).
Siad User Agent
Sia daemon requires the 'Sia-Agent' user agent by default for security
@@ -21202,7 +23953,7 @@ y/e/d> yPaths are specified as remote:container
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:container/path/to/dir
.
Here is an example of making a swift configuration. First run
rclone config
This will guide you through an interactive setup process.
@@ -21360,7 +24111,7 @@ rclone lsd myremote:The modified time is stored as metadata on the object as X-Object-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
-Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
+Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
Get swift credentials from environment variables in standard OpenStack form.
Properties:
@@ -21617,8 +24368,8 @@ rclone lsd myremote: -Here are the advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
+Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
If true avoid calling abort upload on a failure.
It should be set to true for resuming uploads across different sessions.
@@ -21661,7 +24412,7 @@ rclone lsd myremote:The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -21733,10 +24484,10 @@ y/e/d> y
rclone ls remote:
To copy a local directory to a pCloud directory called backup
rclone copy /home/source remote:backup
-pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a Modification time pCloud requires the object be re-uploaded.
pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256 hashes in the EU region, so you can use the --checksum
flag.
In addition to the default restricted characters set the following characters are also replaced:
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the standard options specific to premiumizeme (premiumize.me).
+Here are the Standard options specific to premiumizeme (premiumize.me).
API Key.
This is not normally used - use oauth instead.
@@ -21947,8 +24720,8 @@ y/e/d>Here are the advanced options specific to premiumizeme (premiumize.me).
+Here are the Advanced options specific to premiumizeme (premiumize.me).
The encoding for the backend.
See the encoding section in the overview for more info.
@@ -21959,14 +24732,14 @@ y/e/d>Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
premiumize.me file names can't have the \
or "
characters in them. rclone maps these to and from identical looking unicode equivalents ＼
and ＂
Paths are specified as remote:path
put.io paths may be as deep as required, e.g. remote:directory/subdirectory
.
The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -22029,7 +24802,7 @@ e/n/d/r/c/s/q> q
rclone ls remote:
To copy a local directory to a put.io directory called backup
rclone copy /home/source remote:backup
-In addition to the default restricted characters set the following characters are also replaced:
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the advanced options specific to putio (Put.io).
+Here are the Advanced options specific to putio (Put.io).
The encoding for the backend.
See the encoding section in the overview for more info.
@@ -22060,9 +24833,12 @@ e/n/d/r/c/s/q> qput.io has rate limiting. When you hit a limit, rclone automatically retries after waiting the amount of time requested by the server.
+If you want to avoid ever hitting these limits, you may use the --tpslimit
flag with a low number. Note that the imposed limits may be different for different operations, and may change over time.
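A sketch of such a rate-limited transfer (the remote name and the chosen limit are illustrative):

```sh
# Cap rclone at roughly one API transaction per second to stay
# clear of put.io's rate limits.
rclone copy --tpslimit 1 /home/source remote:backup
```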
This is a backend for the Seafile storage service:
- It works with both the free community edition and the professional edition.
- Seafile versions 6.x and 7.x are supported.
- Encrypted libraries are also supported.
- It supports 2FA enabled users.
-There are two distinct modes in which you can set up your remote: - you point your remote to the root of the server, meaning you don't specify a library during the configuration: Paths are specified as remote:library
. You may put subdirectories in too, e.g. remote:library/path/to/dir
. - you point your remote to a specific library during the configuration: Paths are specified as remote:path/to/dir
. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode)
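The two modes might look like this in rclone.conf (a sketch; the URL, user and library name are placeholders):

```ini
# Root mode: no library in the config, so paths are remote:library/path
[seafile-root]
type = seafile
url = https://cloud.example.com/
user = me@example.com

# Library mode: a specific (possibly encrypted) library is fixed in the
# config, so paths are remote:path/to/dir
[seafile-lib]
type = seafile
url = https://cloud.example.com/
user = me@example.com
library = My Library
```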
Here is an example of making a seafile configuration for a user with no two-factor authentication. First run
@@ -22221,7 +24997,7 @@ y/e/d> yrclone sync -i /home/local/directory seafile:
Seafile version 7+ supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Please note this is not supported on seafile server version 6.x
In addition to the default restricted characters set the following characters are also replaced:
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
-Here are the standard options specific to uptobox (Uptobox).
+Here are the Standard options specific to uptobox (Uptobox).
Your access token.
Get it from https://uptobox.com/my_account.
@@ -23374,8 +26235,8 @@ y/e/d>Here are the advanced options specific to uptobox (Uptobox).
+Here are the Advanced options specific to uptobox (Uptobox).
The encoding for the backend.
See the encoding section in the overview for more info.
@@ -23386,7 +26247,7 @@ y/e/d>Uptobox will delete inactive files that have not been accessed in 60 days.
rclone about
is not supported by this backend; an overview of used space can, however, be seen in the Uptobox web interface.
Attributes :ro
and :nc
can be attached to the end of the path to tag the remote as read only or no create, e.g. remote:directory/subdirectory:ro
or remote:directory/subdirectory:nc
.
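For example, a union where the second upstream is tagged read only might look like this in rclone.conf (the remote names are hypothetical):

```ini
[backup]
type = union
# archive:old is included in listings and reads, but :ro means
# rclone will never write to it through this union
upstreams = mydrive:private/backup archive:old:ro
```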
Subfolders can be used in upstream remotes. Assume a union remote named backup
with the remotes mydrive:private/backup
. Invoking rclone mkdir backup:desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop
.
There will be no special handling of paths containing ..
segments. Invoking rclone mkdir backup:../desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop
.
Here is an example of how to make a union called remote
for local folders. First run:
rclone config
This will guide you through an interactive setup process:
@@ -23617,8 +26478,8 @@ e/n/d/r/c/s/q> qHere are the standard options specific to union (Union merges the contents of several upstream fs).
+Here are the Standard options specific to union (Union merges the contents of several upstream fs).
List of space separated upstreams.
Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc.
Here are the Advanced options specific to union (Union merges the contents of several upstream fs).
Minimum viable free space for lfs/eplfs policies.
If a remote has less than this much free space then it won't be considered for use in lfs or eplfs policies.
Properties:
Any metadata supported by the underlying remote is read and written.
See the metadata docs for more info.
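As an illustrative sketch (the pool and disk remote names are hypothetical), min_free_space sits alongside the policies it affects in rclone.conf:

```
[pool]
type = union
upstreams = disk1: disk2:
create_policy = lfs
min_free_space = 10Gi
```

With create_policy = lfs, new files go to the upstream with the least free space; any upstream with under 10Gi free is skipped.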
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.
Here is an example of how to make a remote called remote
. First run:
rclone config
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / WebDAV
\ "webdav"
[snip]
Storage> webdav
Choose a number from below, or type in your own value
1 / Connect to example.com
\ "https://example.com"
url> https://example.com/remote.php/webdav/
Name of the WebDAV site/service/software you are using
Choose a number from below, or type in your own value
1 / Nextcloud
\ "nextcloud"
y/e/d> y
rclone ls remote:
To copy a local directory to a WebDAV directory called backup
rclone copy /home/source remote:backup
Plain WebDAV does not support modified times. However, when used with Owncloud or Nextcloud rclone will support modified times.
Likewise plain WebDAV does not support hashes; however, when used with Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.
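A finished Nextcloud entry in rclone.conf might look like the following sketch (the URL and username are placeholders):

```
[remote]
type = webdav
url = https://example.com/remote.php/webdav/
vendor = nextcloud
user = youruser
# Set the password with "rclone config" or "rclone config password",
# which stores it obscured; do not paste a plaintext password here.
```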
Here are the Standard options specific to webdav (WebDAV).
URL of http host to connect to.
E.g. https://example.com.
y/e/d> y
Name of the WebDAV site/service/software you are using.
Properties:
Here are the Advanced options specific to webdav (WebDAV).
Command to run to get a bearer token.
Properties:
vendor = other
bearer_token_command = oidc-token XDC
Yandex Disk is a cloud storage solution created by Yandex.
Here is an example of making a yandex configuration. First run
rclone config
This will guide you through an interactive setup process:
y/e/d> y
If you wish to empty your trash you can use the rclone cleanup remote:
command which will permanently delete all your trashed files. This command does not take any path arguments.
To view your current quota you can use the rclone about remote:
command which will display your usage limit (quota) and the current usage.
The default restricted characters set are replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the Standard options specific to yandex (Yandex Disk).
OAuth Client Id.
Leave blank normally.
y/e/d> y
Here are the Advanced options specific to yandex (Yandex Disk).
OAuth Access Token as a JSON blob.
Properties:
y/e/d> y
When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout
parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers
errors in the logs if this is happening. Setting the timeout to twice the max size of file in GiB should be enough, so if you want to upload a 30 GiB file set a timeout of 2 * 30 = 60m
, that is --timeout 60m
.
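For example (the file name is illustrative, and remote: is assumed to be a configured Yandex remote), uploading a 30 GiB file with the doubled timeout:

```
rclone copy --timeout 60m big-file.iso remote:backup
```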
Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription. Token generation will work without a mail account, but Rclone won't be able to complete any actions.
[403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.
Zoho WorkDrive is a cloud storage solution created by Zoho.
Here is an example of making a zoho configuration. First run
rclone config
This will guide you through an interactive setup process:
No checksums are supported.
To view your current quota you can use the rclone about remote:
command which will display your current usage.
Only control characters and invalid UTF-8 are replaced. In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload.
Here are the Standard options specific to zoho (Zoho).
OAuth Client Id.
Leave blank normally.
Here are the Advanced options specific to zoho (Zoho).
OAuth Access Token as a JSON blob.
Properties:
For Zoho we advise you to set up your own client_id. To do so you have to complete the following steps.
Log in to the Zoho API Console
Create a new client of type "Server-based Application". The name and website don't matter, but you must add the redirect URL http://localhost:53682/
.
Once the client is created, you can go to the settings tab and enable it in other regions.
The client id and client secret can now be used with rclone.
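Once obtained, they can be placed in rclone.conf along the lines of this sketch (the id and secret values below are placeholders, not real credentials, and the region value is an assumption):

```
[zoho]
type = zoho
client_id = 1000.ABC123EXAMPLE
client_secret = examplesecret
region = com
```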
Local paths are specified as normal filesystem paths, e.g. /path/to/wherever
, so
rclone sync -i /home/source /tmp/destination
Will sync /home/source
to /tmp/destination
.
For consistency's sake one can also configure a remote of type local
in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever
, but it is probably easier not to.
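Such a remote needs nothing beyond its type; a minimal rclone.conf entry would be:

```
[remote]
type = local
```

after which remote:path/to/wherever refers to /path/to/wherever on the local filesystem.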
Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.
NB This flag is only available on Unix based systems. On systems where it isn't supported (e.g. Windows) it will be ignored.
Here are the Advanced options specific to local (Local Disk).
Disable UNC (long path names) conversion on Windows.
Properties:
Depending on which OS is in use the local backend may return only some of the system metadata. Setting system metadata is supported on all OSes but setting user metadata is only supported on Linux, FreeBSD, NetBSD, macOS and Solaris. It is not supported on Windows yet (see pkg/attrs#47).
User metadata is stored as extended attributes (which may not be supported by all file systems) under the "user.*" prefix.
Here are the possible system metadata items for the local backend.
Name | Help | Type | Example | Read Only
---|---|---|---|---
atime | Time of last access | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N
btime | Time of file birth (creation) | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N
gid | Group ID of owner | decimal number | 500 | N
mode | File type and mode | octal, unix style | 0100664 | N
mtime | Time of last modification | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N
rdev | Device ID (if special file) | hexadecimal | 1abc | N
uid | User ID of owner | decimal number | 500 | N
See the metadata docs for more info.
Here are the commands specific to the local backend.
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
See the backend command for more info on how to pass options and arguments.
These can be run on a running backend using the rc command backend/command.
A null operation for testing backend commands
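For instance, noop can be invoked like the sketch below (remote: is assumed to be a configured local remote, and the option and argument names are illustrative):

```
rclone backend noop remote: -o echo=true path1 path2
```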
- local, s3 and internetarchive backends
- --metadata/-M flag to control whether metadata is copied
- --metadata-set flag to specify metadata for uploads
- linux/arm/v6 to docker images (Nick Craig-Wood)
- --no-traverse and --no-unicode-normalization (Nick Craig-Wood)
- --header-filename to honor the HTTP header filename directive (J-P Treen)
- --exclude-if-present flags (albertony)
- --disable-http-keep-alives to disable HTTP Keep Alives (Nick Craig-Wood)
- -v before calling curl. (Michael C Tiernan - MIT-Research Computing Project)
- M flag (Nick Craig-Wood)
- --metadata/-M flag (Nick Craig-Wood)
- x/crypto/openpgp package with ProtonMail/go-crypto (albertony)
- --passive-port arguments are correct (Nick Craig-Wood)
- --backup-dir can be in the root provided it is filtered (Nick)
- --sparse, --zero, --pattern, --ascii, --chargen flags to control file contents (Nick Craig-Wood)
- Shutdown method on backends (Martin Czygan)
- --fast-list --create-empty-src-dirs and --exclude (Nick Craig-Wood)
- --max-duration and --cutoff-mode soft (Nick Craig-Wood)
- windows/arm64 (may still be problems - see #5828) (Nick Craig-Wood)
- _netdev mount argument (Hugal31)
- --vfs-fast-fingerprint for less accurate but faster fingerprints (Nick Craig-Wood)
- --vfs-disk-space-total-size option to manually set the total disk space (Claudio Maradonna)
- --local-nounc flag (Nick Craig-Wood)
- --b2-version-at flag to show file versions at time specified (SwazRGB)
- backend config -o config add a combined AllDrives: remote (Nick Craig-Wood)
- --drive-shared-with-me work with shared drives (Nick Craig-Wood)
- --drive-resource-key for accessing link-shared files (Nick Craig-Wood)
- exportformats and importformats for debugging (Nick Craig-Wood)
- root_folder_id to advanced section (Abhiraj)
- disable_utf8 option (Jason Zheng)
- github.com/jlaffaye/ftp from our fork (Nick Craig-Wood)
- --gcs-no-check-bucket to minimise transactions and perms (Nick Gooding)
- --gcs-decompress flag to decompress gzip-encoded files (Nick Craig-Wood)
- --poll-interval for onedrive (Hugo Laloge)
- --sftp-chunk-size to control packet sizes for high latency links (Nick Craig-Wood)
- --sftp-concurrency to improve high latency transfers (Nick Craig-Wood)
- --sftp-set-env option to set environment variables (Nick Craig-Wood)
- min_free_space option for lfs/eplfs policies (Nick Craig-Wood)
- eplus policy to select correct entry for existing files (Nick Craig-Wood)
- --min-age/--max-age from UTC to local as documented (Nick Craig-Wood)
- --multi-thread-streams note to --transfers. (Zsolt Ero)
- --devname and fusermount: unknown option 'fsname' when mounting via rc (Nick Craig-Wood)
Rclone doesn't currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.
Rclone can sync between two remote cloud storage systems just fine.
Note that it effectively downloads the file and uploads it again, so the node running rclone would need to have lots of bandwidth.
The syncs would be incremental (on a file by file basis).
e.g.
rclone sync -i drive:Folder s3:bucket
You can use rclone from multiple places at the same time if you choose a different subdirectory for the output, e.g.
export HTTPS_PROXY=$http_proxy
e.g.
export no_proxy=localhost,127.0.0.0/8,my.host.name
export NO_PROXY=$no_proxy
Note that the FTP backend does not support ftp_proxy
yet.
This means that rclone
can't find the SSL root certificates. Likely you are running rclone
on a NAS with a cut-down Linux OS, or possibly on Solaris.
Rclone (via the Go runtime) tries to load the root certificates from these places on Linux.
THE SOFTWARE.