diff --git a/MANUAL.html b/MANUAL.html
index 4c0d99e61..4b87ec4b1 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -19,7 +19,7 @@
See below for some expanded Linux / macOS instructions.
+See below for some expanded Linux / macOS / Windows instructions.
Note that this script checks the version of rclone installed first and won't re-download if not needed.
-Note that this is a third party installer not controlled by the rclone developers so it may be out of date. Its current version is as below.
+To avoid problems with macOS gatekeeper enforcing the binary to be signed and notarized, it is enough to download with curl
.
Download the latest version of rclone.
When downloading a binary with a web browser, the browser will set the macOS gatekeeper quarantine attribute. Starting from Catalina, when attempting to run rclone
, a pop-up will appear saying:
Fetch the correct binary for your processor type by clicking on these links. If not sure, use the first link.
+Open a CMD window (or PowerShell) and run the binary. Note that rclone does not launch a GUI by default; it runs in the CMD window.
+will install that too.
+Note that this is a third party installer not controlled by the rclone developers so it may be out of date. Its current version is as below.
+Many Linux, Windows, macOS and other OS distributions package and distribute rclone.
+The distributed versions of rclone are often quite out of date and for this reason we recommend one of the other installation methods if possible.
+You can get an idea of how up to date or not your OS distribution's package is here.
+These images are built as part of the release process based on a minimal Alpine Linux.
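+For example, a quick way to try the image (the image's entrypoint is the rclone binary, so arguments are passed straight to rclone; the tag is illustrative):
+docker run --rm rclone/rclone:latest version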
This will check out the rclone source in subfolder rclone, which you can later modify and send pull requests with. Then it will build the rclone executable in the same folder. As an initial check you can now run ./rclone version
(.\rclone version
on Windows).
-This assumes you have a GCC compatible C compiler (GCC or Clang) in your PATH, as it uses cgo. But on Windows, the cgofuse library that the cmount implementation is based on, also supports building without cgo, i.e. by setting environment variable CGO_ENABLED to value 0 (static linking). This is how the official Windows release of rclone is being built, starting with version 1.59. It is still possible to build with cgo on Windows as well, by using the MinGW port of GCC, e.g. by installing it in a MSYS2 distribution (make sure you install it in the classic mingw64 subsystem, the ucrt64 version is not compatible).
-As an alternative you can download the source, build and install rclone in one operation, as a regular Go package. The source will be stored it in the Go module cache, and the resulting executable will be in your GOPATH bin folder ($(go env GOPATH)/bin
, which corresponds to ~/go/bin/rclone
by default).
+There are other make targets that can be used for more advanced builds, such as cross-compiling for all supported os/architectures, embedding icon and version info resources into the Windows executable, and packaging results into release artifacts. See Makefile and cross-compile.go for details.
+Another alternative is to download the source, build and install rclone in one operation, as a regular Go package. The source will be stored in the Go module cache, and the resulting executable will be in your GOPATH bin folder ($(go env GOPATH)/bin
, which corresponds to ~/go/bin/rclone
by default).
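For example, with a recent Go toolchain (assuming Go 1.16 or later for the @version syntax), this can be done with a single command:
go install github.com/rclone/rclone@latest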
@@ -295,12 +332,12 @@ go build
- hosts: rclone-hosts
roles:
- rclone
-Portable installation
+Portable installation
As mentioned above, rclone is a single executable (rclone
, or rclone.exe
on Windows) that you can download as a zip archive and extract into a location of your choosing. When executing different commands, it may create files in different locations, such as a configuration file and various temporary files. By default the locations for these are according to your operating system, e.g. configuration file in your user profile directory and temporary files in the standard temporary directory, but you can customize all of them, e.g. to make a completely self-contained, portable installation.
Run the config paths command to see the locations that rclone will use.
To override them, set the corresponding options (as command-line arguments, or as environment variables):
- --config
- --cache-dir
- --temp-dir
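For example, a completely self-contained, portable setup might keep everything in folders next to the executable (the paths here are illustrative):
rclone sync remote:src remote:dst --config ./rclone.conf --cache-dir ./cache --temp-dir ./temp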
Autostart
-After installing and configuring rclone, as described above, you are ready to use rclone as an interactive command line utility. If your goal is to perform periodic operations, such as a regular sync, you will probably want to configure your rclone command in your operating system's scheduler. If you need to expose service-like features, such as remote control, GUI, serve or mount, you will often want an rclone command always running in the background, and configuring it to run in a service infrastructure may be a better option. Below are some alternatives on how to achieve this on different operating systems.
+After installing and configuring rclone, as described above, you are ready to use rclone as an interactive command line utility. If your goal is to perform periodic operations, such as a regular sync, you will probably want to configure your rclone command in your operating system's scheduler. If you need to expose service-like features, such as remote control, GUI, serve or mount, you will often want an rclone command always running in the background, and configuring it to run in a service infrastructure may be a better option. Below are some alternatives on how to achieve this on different operating systems.
NOTE: Before setting up autorun it is highly recommended that you have tested your command manually from a Command Prompt first.
Autostart on Windows
The most relevant alternatives for autostart on Windows are:
- Run at user log on using the Startup folder
- Run at user log on, at system startup or at schedule using Task Scheduler
- Run at system startup using Windows service
@@ -309,8 +346,8 @@ go build
Example command to run a sync in background:
c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclone\logs\sync_files.txt
User account
-As mentioned in the mount documentation, mounted drives created as Administrator are not visible to other accounts, not even the account that was elevated as Administrator. By running the mount command as the built-in SYSTEM
user account, it will create drives accessible for everyone on the system. Both scheduled task and Windows service can be used to achieve this.
-NOTE: Remember that when rclone runs as the SYSTEM
user, the user profile that it sees will not be yours. This means that if you normally run rclone with configuration file in the default location, to be able to use the same configuration when running as the system user you must explicitely tell rclone where to find it with the --config
option, or else it will look in the system users profile path (C:\Windows\System32\config\systemprofile
). To test your command manually from a Command Prompt, you can run it with the PsExec utility from Microsoft's Sysinternals suite, which takes option -s
to execute commands as the SYSTEM
user.
+As mentioned in the mount documentation, mounted drives created as Administrator are not visible to other accounts, not even the account that was elevated as Administrator. By running the mount command as the built-in SYSTEM
user account, it will create drives accessible for everyone on the system. Both scheduled task and Windows service can be used to achieve this.
+NOTE: Remember that when rclone runs as the SYSTEM
user, the user profile that it sees will not be yours. This means that if you normally run rclone with the configuration file in the default location, to be able to use the same configuration when running as the system user you must explicitly tell rclone where to find it with the --config
option, or else it will look in the system user's profile path (C:\Windows\System32\config\systemprofile
). To test your command manually from a Command Prompt, you can run it with the PsExec utility from Microsoft's Sysinternals suite, which takes option -s
to execute commands as the SYSTEM
user.
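For example, to test a command as the SYSTEM user you could open a Command Prompt with PsExec and check which paths rclone resolves (assuming PsExec is on your PATH; the paths are illustrative):
psexec -s -i cmd.exe
c:\rclone\rclone.exe config paths --config c:\rclone\config\rclone.conf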
Start from Startup folder
To quickly execute an rclone command you can simply create a standard Windows Explorer shortcut for the complete rclone command you want to run. If you store this shortcut in the special "Startup" start-menu folder, Windows will automatically run it at login. To open this folder in Windows Explorer, enter path %APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup
, or C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp
if you want the command to start for every user that logs in.
This is the easiest approach to autostarting rclone, but it offers no functionality to set it to run as a different user, or to set conditions or actions on certain events. Setting up a scheduled task as described below will often give you better results.
@@ -324,7 +361,7 @@ go build
New-Service -Name Rclone -BinaryPathName 'c:\rclone\rclone.exe mount remote:/files X: --config c:\rclone\config\rclone.conf --log-file c:\rclone\logs\mount.txt'
The WinFsp service infrastructure supports incorporating services for file system implementations, such as rclone, into its own launcher service, as a kind of "child service". This has the additional advantage that it also implements a network provider that integrates into Windows standard methods for managing network drives. This is currently not officially supported by rclone, but with WinFsp version 2019.3 B2 / v1.5B2 or later it should be possible through path rewriting as described here.
Third-party service integration
-To Windows service running any rclone command, the excellent third-party utility NSSM, the "Non-Sucking Service Manager", can be used. It includes some advanced features such as adjusting process periority, defining process environment variables, redirect to file anything written to stdout, and customized response to different exit codes, with a GUI to configure everything from (although it can also be used from command line ).
+To run any rclone command as a Windows service, the excellent third-party utility NSSM, the "Non-Sucking Service Manager", can be used. It includes some advanced features such as adjusting process priority, defining process environment variables, redirecting anything written to stdout to a file, and customizing the response to different exit codes, with a GUI to configure everything (although it can also be used from the command line).
There are also several other alternatives. To mention one more, WinSW, "Windows Service Wrapper", is worth checking out. It requires the .NET Framework, but that is preinstalled on newer versions of Windows, and it also provides alternative standalone distributions which include the necessary runtime (.NET 5). WinSW is a command-line only utility, where you have to manually create an XML file with the service configuration. This may be a drawback for some, but it can also be an advantage as it is easy to back up and re-use the configuration settings, without having to go through manual steps in a GUI. One thing to note is that by default it does not restart the service on error; you have to explicitly enable this in the configuration file (via the "onfailure" parameter).
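As a minimal sketch, installing a mount command as an NSSM-managed service could look like this (the service name and paths are illustrative):
nssm install RcloneMount c:\rclone\rclone.exe mount remote:/files X: --config c:\rclone\config\rclone.conf --log-file c:\rclone\logs\mount.txt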
Autostart on Linux
Start as a service
@@ -363,7 +400,6 @@ go build
- HDFS
- HiDrive
- HTTP
-- Hubic
- Internet Archive
- Jottacloud
- Koofr
@@ -374,6 +410,7 @@ go build
- Microsoft OneDrive
- OpenStack Swift / Rackspace Cloudfiles / Memset Memstore
- OpenDrive
+- Oracle Object Storage
- Pcloud
- premiumize.me
- put.io
@@ -381,6 +418,7 @@ go build
- Seafile
- SFTP
- Sia
+- SMB
- Storj
- SugarSync
- Union
@@ -469,6 +507,7 @@ destpath/sourcepath/two.txt
Note that files in the destination won't be deleted if there were any errors at any point. Duplicate objects (files with the same name, on those providers that support it) are also not yet handled.
It is always the contents of the directory that is synced, not the directory itself. So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy command if unsure.
If dest:path doesn't exist, it is created and the source:path contents go there.
+It is not possible to sync overlapping remotes. However, you may exclude the destination from the sync with a filter rule or by putting an exclude-if-present file inside the destination directory and sync to a destination that is inside the source directory.
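+For example, to sync a directory to a destination inside itself, mark the destination with an exclude-if-present file (the marker file name is illustrative):
+touch /path/to/dir/backup/.rclone-ignore
+rclone sync /path/to/dir /path/to/dir/backup --exclude-if-present .rclone-ignore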
Note: Use the -P
/--progress
flag to view real-time transfer statistics
Note: Use the rclone dedupe
command to deal with "Duplicate object/directory found in source/destination - ignoring" errors. See this forum post for more info.
rclone sync source:path dest:path [flags]
@@ -616,7 +655,7 @@ rclone --dry-run --min-size 100M delete remote:path
ls
,lsl
,lsd
are designed to be human-readable. lsf
is designed to be human and machine-readable. lsjson
is designed to be machine-readable.
Note that ls
and lsl
recurse by default - use --max-depth 1
to stop the recursion.
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use -R
to make them recurse.
-Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
+Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
rclone ls remote:path [flags]
Options
-h, --help help for ls
@@ -651,7 +690,7 @@ rclone --dry-run --min-size 100M delete remote:path
ls
,lsl
,lsd
are designed to be human-readable. lsf
is designed to be human and machine-readable. lsjson
is designed to be machine-readable.
Note that ls
and lsl
recurse by default - use --max-depth 1
to stop the recursion.
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use -R
to make them recurse.
-Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
+Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
rclone lsd remote:path [flags]
Options
-h, --help help for lsd
@@ -683,7 +722,7 @@ rclone --dry-run --min-size 100M delete remote:path
ls
,lsl
,lsd
are designed to be human-readable. lsf
is designed to be human and machine-readable. lsjson
is designed to be machine-readable.
Note that ls
and lsl
recurse by default - use --max-depth 1
to stop the recursion.
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use -R
to make them recurse.
-Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
+Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
rclone lsl remote:path [flags]
Options
-h, --help help for lsl
@@ -698,7 +737,7 @@ rclone --dry-run --min-size 100M delete remote:path
Produces an md5sum file for all the objects in the path. This is in the same format as the standard md5sum tool produces.
By default, the hash is requested from the remote. If MD5 is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling MD5 for any remote.
For other algorithms, see the hashsum command. Running rclone md5sum remote:path
is equivalent to running rclone hashsum MD5 remote:path
.
-This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hypen will be treated literaly, as a relative path).
+This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).
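+For example, to hash data arriving on a pipe instead of a remote path (a sketch):
+echo "hello world" | rclone md5sum -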
rclone md5sum remote:path [flags]
Options
--base64 Output base64 encoded hashsum
@@ -717,7 +756,7 @@ rclone --dry-run --min-size 100M delete remote:path
Produces an sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces.
By default, the hash is requested from the remote. If SHA-1 is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling SHA-1 for any remote.
For other algorithms, see the hashsum command. Running rclone sha1sum remote:path
is equivalent to running rclone hashsum SHA1 remote:path
.
-This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hypen will be treated literaly, as a relative path).
+This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).
rclone sha1sum remote:path [flags]
Options
@@ -959,9 +998,9 @@ rclone backend help <backendname>
- rclone - Show help for rclone commands, flags and backends.
rclone bisync
-Perform bidirectonal synchronization between two paths.
+Perform bidirectional synchronization between two paths.
Synopsis
-Perform bidirectonal synchronization between two paths.
+Perform bidirectional synchronization between two paths.
Bisync provides a bidirectional cloud sync solution in rclone. It retains the Path1 and Path2 filesystem listings from the prior run. On each successive run it will: - list files on Path1 and Path2, and check for changes on each side. Changes include New
, Newer
, Older
, and Deleted
files. - Propagate changes on Path1 to Path2, and vice-versa.
See full bisync description for details.
rclone bisync remote1:path1 remote2:path2 [flags]
@@ -1061,10 +1100,10 @@ rclone backend help <backendname>
To load completions in your current shell session:
source <(rclone completion bash)
To load completions for every new session, execute once:
-Linux:
+Linux:
rclone completion bash > /etc/bash_completion.d/rclone
-macOS:
-rclone completion bash > /usr/local/etc/bash_completion.d/rclone
+macOS:
+rclone completion bash > $(brew --prefix)/etc/bash_completion.d/rclone
You will need to start a new shell for this setup to take effect.
rclone completion bash
Options
@@ -1115,11 +1154,13 @@ rclone backend help <backendname>
Generate the autocompletion script for the zsh shell.
If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once:
echo "autoload -U compinit; compinit" >> ~/.zshrc
+To load completions in your current shell session:
+source <(rclone completion zsh); compdef _rclone rclone
To load completions for every new session, execute once:
-Linux:
+Linux:
rclone completion zsh > "${fpath[1]}/_rclone"
-macOS:
-rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone
+macOS:
+rclone completion zsh > $(brew --prefix)/share/zsh/site-functions/_rclone
You will need to start a new shell for this setup to take effect.
rclone completion zsh [flags]
Options
@@ -1142,7 +1183,7 @@ rclone config create myremote swift env_auth=true
Note that if the config process would normally ask a question the default is taken (unless --non-interactive
is used). Each time that happens rclone will print or DEBUG a message saying how to affect the value taken.
If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file.
NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the --obscure
flag, or if you are 100% certain you are already passing obscured passwords then use --no-obscure
. You can also set obscured passwords using the rclone config password
command.
-The flag --non-interactive
is for use by applications that wish to configure rclone themeselves, rather than using rclone's text based configuration questions. If this flag is set, and rclone needs to ask the user a question, a JSON blob will be returned with the question in it.
+The flag --non-interactive
is for use by applications that wish to configure rclone themselves, rather than using rclone's text based configuration questions. If this flag is set, and rclone needs to ask the user a question, a JSON blob will be returned with the question in it.
This will look something like (some irrelevant detail removed):
{
"State": "*oauth-islocal,teamdrive,,",
@@ -1339,7 +1380,7 @@ rclone config update myremote env_auth=true
Note that if the config process would normally ask a question the default is taken (unless --non-interactive
is used). Each time that happens rclone will print or DEBUG a message saying how to affect the value taken.
If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file.
NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the --obscure
flag, or if you are 100% certain you are already passing obscured passwords then use --no-obscure
. You can also set obscured passwords using the rclone config password
command.
-The flag --non-interactive
is for use by applications that wish to configure rclone themeselves, rather than using rclone's text based configuration questions. If this flag is set, and rclone needs to ask the user a question, a JSON blob will be returned with the question in it.
+The flag --non-interactive
is for use by applications that wish to configure rclone themselves, rather than using rclone's text based configuration questions. If this flag is set, and rclone needs to ask the user a question, a JSON blob will be returned with the question in it.
This will look something like (some irrelevant detail removed):
{
"State": "*oauth-islocal,teamdrive,,",
@@ -1608,7 +1649,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.
By default, the hash is requested from the remote. If the hash is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling any hash for any remote.
For the MD5 and SHA1 algorithms there are also dedicated commands, md5sum and sha1sum.
-This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hypen will be treated literaly, as a relative path).
+This command can also hash data received on standard input (stdin), by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).
Run without a hash to see the list of all supported hashes, e.g.
$ rclone hashsum
Supported hashes are:
@@ -1742,7 +1783,7 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
ls
,lsl
,lsd
are designed to be human-readable. lsf
is designed to be human and machine-readable. lsjson
is designed to be machine-readable.
Note that ls
and lsl
recurse by default - use --max-depth 1
to stop the recursion.
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use -R
to make them recurse.
-Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
+Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
rclone lsf remote:path [flags]
Options
--absolute Put a leading / in front of path names
@@ -1790,7 +1831,7 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
If --encrypted
is not specified, the Encrypted field won't be emitted.
If --dirs-only
is not specified, files in addition to directories are returned.
If --files-only
is not specified, directories in addition to the files will be returned.
-If --metadata
is set then an additional Metadata key will be returned. This will have metdata in rclone standard format as a JSON object.
+If --metadata
is set then an additional Metadata key will be returned. This will have metadata in rclone standard format as a JSON object.
If --stat
is set then a single JSON blob will be returned about the item pointed to. This will return an error if the item isn't found. However on bucket based backends (like s3, gcs, b2, azureblob etc) if the item isn't found it will return an empty directory as it isn't possible to tell empty directories from missing directories there.
The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive
the Path will always be the same as Name.
If the directory is a bucket in a bucket-based backend, then "IsBucket" will be set to true. This key won't be present unless it is "true".
@@ -1808,7 +1849,7 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
ls
,lsl
,lsd
are designed to be human-readable. lsf
is designed to be human and machine-readable. lsjson
is designed to be machine-readable.
Note that ls
and lsl
recurse by default - use --max-depth 1
to stop the recursion.
The other list commands lsd
,lsf
,lsjson
do not recurse by default - use -R
to make them recurse.
-Listing a non-existent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
+Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).
rclone lsjson remote:path [flags]
Options
--dirs-only Show only directories in the listing
@@ -1856,7 +1897,7 @@ umount /path/to/local/mount
Mounting modes on Windows
Unlike other operating systems, Microsoft Windows provides a different filesystem type for network and fixed drives. It optimises access on the assumption fixed disk drives are fast and reliable, while network drives have relatively high latency and less reliability. Some settings can also be differentiated between the two types, for example that Windows Explorer should just display icons and not create preview thumbnails for image and video files on network drives.
In most cases, rclone will mount the remote as a normal, fixed disk drive by default. However, you can also choose to mount it as a remote network drive, often described as a network share. If you mount an rclone remote using the default, fixed drive mode and experience unexpected program errors, freezes or other issues, consider mounting as a network drive instead.
-When mounting as a fixed disk drive you can either mount to an unused drive letter, or to a path representing a non-existent subdirectory of an existing parent directory or drive. Using the special value *
will tell rclone to automatically assign the next available drive letter, starting with Z: and moving backward. Examples:
+When mounting as a fixed disk drive you can either mount to an unused drive letter, or to a path representing a nonexistent subdirectory of an existing parent directory or drive. Using the special value *
will tell rclone to automatically assign the next available drive letter, starting with Z: and moving backward. Examples:
rclone mount remote:path/to/files *
rclone mount remote:path/to/files X:
rclone mount remote:path/to/files C:\path\parent\mount
@@ -1865,10 +1906,10 @@ rclone mount remote:path/to/files X:
To mount as a network drive, you can add option --network-mode
to your mount command. Mounting to a directory path is not supported in this mode; it is a limitation Windows imposes on junctions, so the remote must always be mounted to a drive letter.
rclone mount remote:path/to/files X: --network-mode
A volume name specified with --volname
will be used to create the network share path. A complete UNC path, such as \\cloud\remote
, optionally with path \\cloud\remote\madeup\path
, will be used as is. Any other string will be used as the share part, after a default prefix \\server\
. If no volume name is specified then \\server\share
will be used. You must make sure the volume name is unique when you are mounting more than one drive, or else the mount command will fail. The share name will be treated as the volume label for the mapped drive, shown in Windows Explorer etc, while the complete \\server\share
will be reported as the remote UNC path by net use
etc, just like a normal network drive mapping.
-If you specify a full network share UNC path with --volname
, this will implicitely set the --network-mode
option, so the following two examples have same result:
+If you specify a full network share UNC path with --volname
, this will implicitly set the --network-mode
option, so the following two examples have the same result:
rclone mount remote:path/to/files X: --network-mode
rclone mount remote:path/to/files X: --volname \\server\share
-You may also specify the network share UNC path as the mountpoint itself. Then rclone will automatically assign a drive letter, same as with *
and use that as mountpoint, and instead use the UNC path specified as the volume name, as if it were specified with the --volname
option. This will also implicitely set the --network-mode
option. This means the following two examples have same result:
+You may also specify the network share UNC path as the mountpoint itself. Then rclone will automatically assign a drive letter, same as with *
and use that as mountpoint, and instead use the UNC path specified as the volume name, as if it were specified with the --volname
option. This will also implicitly set the --network-mode
option. This means the following two examples have the same result:
rclone mount remote:path/to/files \\cloud\remote
rclone mount remote:path/to/files * --volname \\cloud\remote
There is yet another way to enable network mode, and to set the share path, and that is to pass the "native" libfuse/WinFsp option directly: --fuse-flag --VolumePrefix=\server\share
. Note that the path must be with just a single backslash prefix in this case.
@@ -1878,7 +1919,7 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
Windows filesystem permissions
The FUSE emulation layer on Windows must convert between the POSIX-based permission model used in FUSE, and the permission model used in Windows, based on access-control lists (ACL).
The mounted filesystem will normally get three entries in its access-control list (ACL), representing permissions for the POSIX permission scopes: Owner, group and others. By default, the owner and group will be taken from the current user, and the built-in group "Everyone" will be used to represent others. The user/group can be customized with FUSE options "UserName" and "GroupName", e.g. -o UserName=user123 -o GroupName="Authenticated Users"
. The permissions on each entry will be set according to options --dir-perms
and --file-perms
, which take a value in traditional numeric notation.
-The default permissions corresponds to --file-perms 0666 --dir-perms 0777
, i.e. read and write permissions to everyone. This means you will not be able to start any programs from the the mount. To be able to do that you must add execute permissions, e.g. --file-perms 0777 --dir-perms 0777
to add it to everyone. If the program needs to write files, chances are you will have to enable VFS File Caching as well (see also limitations).
+The default permissions correspond to --file-perms 0666 --dir-perms 0777
, i.e. read and write permissions to everyone. This means you will not be able to start any programs from the mount. To be able to do that you must add execute permissions, e.g. --file-perms 0777 --dir-perms 0777
to add it to everyone. If the program needs to write files, chances are you will have to enable VFS File Caching as well (see also limitations).
Note that the mapping of permissions is not always trivial, and the result you see in Windows Explorer may not be exactly like you expected. For example, when setting a value that includes write access, this will be mapped to individual permissions "write attributes", "write data" and "append data", but not "write extended attributes". Windows will then show this as basic permission "Special" instead of "Write", because "Write" includes the "write extended attributes" permission.
If you set POSIX permissions for only allowing access to the owner, using --file-perms 0600 --dir-perms 0700
, the user group and the built-in "Everyone" group will still be given some special permissions, such as "read attributes" and "read permissions", in Windows. This is done for compatibility reasons, e.g. to allow users without additional permissions to be able to read basic metadata about files like in UNIX. One case that may arise is that other programs (incorrectly) interpret this as the file being accessible by everyone. For example an SSH client may warn about "unprotected private key file".
WinFsp 2021 (version 1.9) introduces a new FUSE option "FileSecurity", that allows the complete specification of file security descriptors using SDDL. With this you can work around issues such as the mentioned "unprotected private key file" by specifying -o FileSecurity="D:P(A;;FA;;;OW)"
, for file all access (FA) to the owner (OW).
@@ -1890,7 +1931,7 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
Note that mapping to a directory path, instead of a drive letter, does not suffer from the same limitations.
Limitations
Without the use of --vfs-cache-mode
this can only write files sequentially; it can only seek when reading. This means that many applications won't work with their files on an rclone mount without --vfs-cache-mode writes
or --vfs-cache-mode full
. See the VFS File Caching section for more info.
-The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
+The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
When rclone mount
is invoked on Unix with --daemon
flag, the main rclone program will wait for the background mount to become ready or until the timeout specified by the --daemon-wait
flag. On Linux it can check mount status using ProcFS, so the flag in fact sets the maximum time to wait, while the real wait can be less. On macOS / BSD the time to wait is constant and the check is performed only at the end. We advise you to set the wait time on macOS to a reasonable value.
Only supported on Linux, FreeBSD, OS X and Windows at the moment.
rclone mount vs rclone sync/copy
@@ -2159,7 +2200,7 @@ if src is directory
^L refresh screen (fix screen corruption)
? to toggle help on and off
q/ESC/^c to quit
-Listed files/directories may be prefixed by a one-character flag, some of them combined with a description in brackes at end of line. These flags have the following meaning:
+Listed files/directories may be prefixed by a one-character flag, some of them combined with a description in brackets at the end of the line. These flags have the following meaning:
e means this is an empty directory, i.e. contains no files (but
may contain empty subdirectories)
~ means this is a directory where some of the files (possibly in
@@ -2458,11 +2499,13 @@ ffmpeg - | rclone rcat remote:path/to/file
rclone serve dlna remote:path [flags]
Options
--addr string The ip:port or :port to bind the DLNA http server to (default ":7879")
+ --announce-interval duration The interval between SSDP announcements (default 12m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for dlna
+ --interface stringArray The interface to use for SSDP (repeat as necessary)
--log-trace Enable trace logging of SOAP traffic
--name string Name of DLNA server
--no-checksum Don't compare checksums on up/download
@@ -2885,6 +2928,7 @@ ffmpeg - | rclone rcat remote:path/to/file
SSL/TLS
By default this will serve over http. If you want you can serve over https. You will need to supply the --cert
and --key
flags. If you wish to do client side certificate validation then you will need to supply --client-ca
also.
--cert
should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. --key
should be the PEM encoded private key and --client-ca
should be the PEM encoded client certificate authority certificate.
+--min-tls-version is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
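+For example, to serve over HTTPS and refuse anything older than TLS 1.2 (the certificate and key filenames are illustrative):
+rclone serve http remote:path --cert server.pem --key server.key --min-tls-version tls1.2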
Template
--template
allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to serve pages:
@@ -3105,6 +3149,7 @@ htpasswd -B htpasswd anotherUser
--htpasswd string A htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
@@ -3279,6 +3324,7 @@ htpasswd -B htpasswd anotherUser
SSL/TLS
By default this will serve over HTTP. If you want you can serve over HTTPS. You will need to supply the --cert
and --key
flags. If you wish to do client side certificate validation then you will need to supply --client-ca
also.
--cert
should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key
should be the PEM encoded private key and --client-ca
should be the PEM encoded client certificate authority certificate.
+--min-tls-version is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
rclone serve restic remote:path [flags]
Options
--addr string IPaddress:Port or :Port to bind server to (default "localhost:8080")
@@ -3291,6 +3337,7 @@ htpasswd -B htpasswd anotherUser
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--pass string Password for authentication
--private-repos Users can only access their private repo
--realm string Realm for authentication (default "rclone")
@@ -3307,12 +3354,13 @@ htpasswd -B htpasswd anotherUser
rclone serve sftp
Serve the remote over SFTP.
Synopsis
-Run a SFTP server to serve a remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.
-You can use the filter flags (e.g. --include
, --exclude
) to control what is served.
+Run an SFTP server to serve a remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.
+You can use the filter flags (e.g. --include
, --exclude
) to control what is served.
+The server will respond to a small number of shell commands, mainly md5sum, sha1sum and df, which enable it to provide support for checksums and the about feature when accessed from an sftp remote.
+Note that this server uses the standard 32 KiB packet payload size, which means you must not configure the client to expect anything else, e.g. with the chunk_size option on an sftp remote.
The server will log errors. Use -v
to see access logs.
--bwlimit
will be respected for file transfers. Use --stats
to control the stats printing.
You must provide some means of authentication, either with --user
/--pass
, an authorized keys file (specify location with --authorized-keys
- the default is the same as ssh), an --auth-proxy
, or set the --no-auth
flag for no authentication when logging in.
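For example, a minimal password-protected server (the user, password and address are illustrative):
rclone serve sftp remote:path --user sftpuser --pass mysecret --addr :2022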
-Note that this also implements a small number of shell commands so that it can provide md5sum/sha1sum/df information for the rclone sftp backend. This means that is can support SHA1SUMs, MD5SUMs and the about command when paired with the rclone sftp backend.
If you don't supply a host --key
then rclone will generate rsa, ecdsa and ed25519 variants, and cache them for later use in rclone's cache directory (see rclone help flags cache-dir
) in the "serve-sftp" directory.
By default the server binds to localhost:2022 - if you want it to be reachable externally then supply --addr :2022
for example.
Note that the default of --vfs-cache-mode off
is fine for the rclone sftp backend, but it may not be with other SFTP clients.
@@ -3483,7 +3531,7 @@ htpasswd -B htpasswd anotherUser
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
- --stdio Run an sftp server on run stdin/stdout
+ --stdio Run an sftp server on stdin/stdout
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
@@ -3612,6 +3660,7 @@ htpasswd -B htpasswd anotherUser
SSL/TLS
By default this will serve over HTTP. If you want you can serve over HTTPS. You will need to supply the --cert
and --key
flags. If you wish to do client side certificate validation then you will need to supply --client-ca
also.
--cert
should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key
should be the PEM encoded private key and --client-ca
should be the PEM encoded client certificate authority certificate.
+--min-tls-version is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.
@@ -3774,6 +3823,7 @@ htpasswd -B htpasswd anotherUser
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
@@ -4093,7 +4143,7 @@ rclone copy :sftp,host=example.com:path/to/dir /tmp/dir
If you want to send a '
you will need to use "
, e.g.
rclone copy "O'Reilly Reviews" remote:backup
The rules for quoting metacharacters are complicated and if you want the full details you'll have to consult the manual page for your shell.
-Windows
+Windows
If your names have spaces in them you need to put them in "
, e.g.
rclone copy "E:\folder name\folder name\folder name" remote:backup
If you are using the root directory on its own then don't quote it (see #464 for why), e.g.
@@ -4138,7 +4188,7 @@ rclone sync -i /path/to/files remote:current-backup
- length is backend dependent
Each backend can provide system metadata that it understands. Some backends can also store arbitrary user metadata.
-Where possible the key names are standardized, so, for example, it is possible to copy object metadata from s3 to azureblob for example and metadata will be translated apropriately.
+Where possible the key names are standardized, so, for example, it is possible to copy object metadata from s3 to azureblob and the metadata will be translated appropriately.
Some backends have limits on the size of the metadata and rclone will give errors on upload if they are exceeded.
The goal of the implementation is to
@@ -4231,12 +4281,32 @@ rclone sync -i /path/to/files remote:current-backup
Options
Rclone has a number of options to control its behaviour.
Options that take parameters can have the values passed in two ways, --option=value
or --option value
. However boolean (true/false) options behave slightly differently to the other options in that --boolean
sets the option to true
and the absence of the flag sets it to false
. It is also possible to specify --boolean=false
or --boolean=true
. Note that --boolean false
is not valid - this is parsed as --boolean
and the false
is parsed as an extra command line argument for rclone.
-Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+Time or duration options
+TIME or DURATION options can be specified as a duration string or a time string.
+A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". The default unit is seconds, and the following abbreviations are valid:
+
+ms
- Milliseconds
+s
- Seconds
+m
- Minutes
+h
- Hours
+d
- Days
+w
- Weeks
+M
- Months
+y
- Years
+
+These can also be specified as an absolute time in the following formats:
+
+- RFC3339 - e.g.
2006-01-02T15:04:05Z
or 2006-01-02T15:04:05+07:00
+- ISO8601 Date and time, local timezone -
2006-01-02T15:04:05
+- ISO8601 Date and time, local timezone -
2006-01-02 15:04:05
+- ISO8601 Date -
2006-01-02
(YYYY-MM-DD)
+
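+For example, flags like --max-age accept both duration strings and absolute times (the values are illustrative):
+rclone ls remote: --max-age 36h
+rclone ls remote: --max-age 1.5d
+rclone ls remote: --max-age 2021-12-31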
+Size options
Options which use SIZE use KiB (multiples of 1024 bytes) by default. However, a suffix of B
for Byte, K
for KiB, M
for MiB, G
for GiB, T
for TiB and P
for PiB may be used. These are the binary units, e.g. 1, 2**10, 2**20, 2**30 respectively.
--backup-dir=DIR
When using sync
, copy
or move
any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.
If --suffix
is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.
-The remote in use must support server-side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory.
+The remote in use must support server-side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory without it being excluded by a filter rule.
For example
rclone sync -i /path/to/local remote:current --backup-dir remote:old
will sync /path/to/local
to remote:current
, but for any files which would have been updated or deleted will be stored in remote:old
.
@@ -4248,7 +4318,7 @@ rclone sync -i /path/to/files remote:current-backup
This option controls the bandwidth limit. For example
--bwlimit 10M
would mean limit the upload and download bandwidth to 10 MiB/s. NB this is bytes per second not bits per second. To use a single limit, specify the desired bandwidth in KiB/s, or use a suffix B|K|M|G|T|P. The default is 0
which means to not limit bandwidth.
-The upload and download bandwidth can be specified seperately, as --bwlimit UP:DOWN
, so
+The upload and download bandwidth can be specified separately, as --bwlimit UP:DOWN
, so
--bwlimit 10M:100k
would mean limit the upload bandwidth to 10 MiB/s and the download bandwidth to 100 KiB/s. Either limit can be "off" meaning no limit, so to just limit the upload bandwidth you would use
--bwlimit 10M:off
@@ -4622,6 +4692,10 @@ y/n/s/!/q> n
--retries-sleep=TIME
This sets the interval between each retry specified by --retries.
The default is 0
. Use 0
to disable.
+--server-side-across-configs
+Allow server-side operations (e.g. copy or move) to work across different configurations.
+This can be useful if you wish to do a server-side copy or move between two remotes which use the same backend but are configured differently.
+Note that this isn't enabled by default because it isn't easy for rclone to tell if it will work between any two configurations.
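+For example, a server-side copy between two differently configured remotes using the same backend (the remote names are illustrative):
+rclone copy s3-eu:bucket/path s3-us:bucket/path --server-side-across-configs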
--size-only
Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the size.
This can be useful when transferring files from Dropbox which have been modified by the desktop sync client, which doesn't set checksums or modification times in the same way as rclone.
@@ -4685,14 +4759,15 @@ y/n/s/!/q> n
This may be used to increase performance of --tpslimit
without changing the long term average number of transactions per second.
--track-renames
By default, rclone doesn't keep track of renamed files, so if you rename a file locally then sync it to a remote, rclone will delete the old file on the remote and upload a new copy.
-If you use this flag, and the remote supports server-side copy or server-side move, and the source and destination have a compatible hash, then this will track renames during sync
operations and perform renaming server-side.
-Files will be matched by size and hash - if both match then a rename will be considered.
+An rclone sync with --track-renames
runs like a normal sync, but keeps track of objects which exist in the destination but not in the source (which would normally be deleted), and which objects exist in the source but not the destination (which would normally be transferred). These objects are then candidates for renaming.
+After the sync, rclone matches up the source only and destination only objects using the --track-renames-strategy
specified and either renames the destination object or transfers the source and deletes the destination object. --track-renames
is stateless like all of rclone's syncs.
+To use this flag the destination must support server-side copy or server-side move, and to use a hash based --track-renames-strategy
(the default) the source and the destination must have a compatible hash.
If the destination does not support server-side copy or move, rclone will fall back to the default behaviour and log an error level message to the console.
Encrypted destinations are not currently supported by --track-renames
if --track-renames-strategy
includes hash
.
Note that --track-renames
is incompatible with --no-traverse
and that it uses extra memory to keep track of all the rename candidates.
Note also that --track-renames
is incompatible with --delete-before
and will select --delete-after
instead of --delete-during
.
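For example, a sync detecting renames with the default hash-based matching:
rclone sync /path/to/local remote:dest --track-renames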
--track-renames-strategy (hash,modtime,leaf,size)
-This option changes the matching criteria for --track-renames
.
+This option changes the file matching criteria for --track-renames
.
The matching is controlled by a comma separated selection of these tokens:
modtime
- the modification time of the file - not supported on all backends
@@ -4700,9 +4775,9 @@ y/n/s/!/q> n
leaf
- the name of the file not including its directory name
size
- the size of the file (this is always enabled)
-So using --track-renames-strategy modtime,leaf
would match files based on modification time, the leaf of the file name and the size only.
+The default option is hash
.
+Using --track-renames-strategy modtime,leaf
would match files based on modification time, the leaf of the file name and the size only.
Using --track-renames-strategy modtime
or leaf
can enable --track-renames
support for encrypted destinations.
-If nothing is specified, the default option is matching by hash
es.
Note that the hash
strategy is not supported with encrypted destinations.
--delete-(before,during,after)
This option allows you to specify when files on your destination are deleted when you sync folders.
@@ -4711,7 +4786,7 @@ y/n/s/!/q> n
Specifying --delete-after
(the default value) will delay deletion of files until all new/updated files have been successfully transferred. The files to be deleted are collected in the copy pass then deleted after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message not deleting files as there were IO errors
.
--fast-list
When doing anything which involves a directory listing (e.g. sync
, copy
, ls
- in fact nearly every command), rclone normally lists a directory and processes it before using more directory lists to process any subdirectories. This can be parallelised and works very quickly using the least amount of memory.
-However, some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to be the bucket-based remotes (e.g. S3, B2, GCS, Swift, Hubic).
+However, some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to be the bucket-based remotes (e.g. S3, B2, GCS, Swift).
If you use the --fast-list
flag then rclone will use this method for listing directories. This will have the following consequences for the listing:
- It will use fewer transactions (important if you pay for them)
@@ -4735,7 +4810,7 @@ y/n/s/!/q> n
If an existing destination file has a modification time older than the source file's, it will be updated if the sizes are different. If the sizes are the same, it will be updated if the checksum is different or not available.
If an existing destination file has a modification time equal (within the computed modify window) to the source file's, it will be updated if the sizes are different. The checksum will not be checked in this case unless the --checksum
flag is provided.
In all other cases the file will not be updated.
-Consider using the --modify-window
flag to compensate for time skews between the source and the backend, for backends that do not support mod times, and instead use uploaded times. However, if the backend does not support checksums, note that sync'ing or copying within the time skew window may still result in additional transfers for safety.
+Consider using the --modify-window
flag to compensate for time skews between the source and the backend, for backends that do not support mod times, and instead use uploaded times. However, if the backend does not support checksums, note that syncing or copying within the time skew window may still result in additional transfers for safety.
--use-mmap
If this flag is set then rclone will use anonymous memory allocated by mmap on Unix based platforms and VirtualAlloc on Windows for its transfer buffers (size controlled by --buffer-size
). Memory allocated like this does not go on the Go heap and can be returned to the OS immediately when it is finished with.
If this flag is not set then rclone will allocate and free the buffers using the Go memory allocator which may use more memory as memory pages are returned less aggressively to the OS.
@@ -5149,7 +5224,7 @@ ASCII character classes (e.g. [[:alnum:]], [[:alpha:]], [[:punct:]], [[:xdigit:]
|
|
-/dir/file.gif |
+/dir/file.png |
/dir/file.gif |
@@ -5430,34 +5505,21 @@ user2/prefect
--min-size
- Don't transfer any file smaller than this
Controls the minimum size file within the scope of an rclone command. Default units are KiB
but abbreviations K
, M
, G
, T
or P
are valid.
E.g. rclone ls remote: --min-size 50k
lists files on remote:
of 50 KiB size or larger.
+See the size option docs for more info.
--max-size
- Don't transfer any file larger than this
Controls the maximum size file within the scope of an rclone command. Default units are KiB
but abbreviations K
, M
, G
, T
or P
are valid.
E.g. rclone ls remote: --max-size 1G
lists files on remote:
of 1 GiB size or smaller.
+See the size option docs for more info.
--max-age
- Don't transfer any file older than this
-Controls the maximum age of files within the scope of an rclone command. Default units are seconds or the following abbreviations are valid:
-
-ms
- Milliseconds
-s
- Seconds
-m
- Minutes
-h
- Hours
-d
- Days
-w
- Weeks
-M
- Months
-y
- Years
-
---max-age
can also be specified as an absolute time in the following formats:
-
-- RFC3339 - e.g.
2006-01-02T15:04:05Z
or 2006-01-02T15:04:05+07:00
-- ISO8601 Date and time, local timezone -
2006-01-02T15:04:05
-- ISO8601 Date and time, local timezone -
2006-01-02 15:04:05
-- ISO8601 Date -
2006-01-02
(YYYY-MM-DD)
-
+Controls the maximum age of files within the scope of an rclone command.
--max-age
applies only to files and not to directories.
E.g. rclone ls remote: --max-age 2d
lists files on remote:
of 2 days old or less.
+See the time option docs for valid formats.
--min-age
- Don't transfer any file younger than this
Controls the minimum age of files within the scope of an rclone command. (see --max-age
for valid formats)
--min-age
applies only to files and not to directories.
E.g. rclone ls remote: --min-age 2d
lists files on remote:
of 2 days old or more.
+See the time option docs for valid formats.
Other flags
--delete-excluded
- Delete files on dest excluded from sync
Important this flag is dangerous to your data - use with --dry-run
and -v
first.
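Following that advice, a cautious first run might look like this sketch (the exclude pattern and paths are illustrative):
rclone sync /path/to/source remote:dest --exclude "*.tmp" --delete-excluded --dry-run -v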
@@ -5573,6 +5635,8 @@ dir1/dir2/dir3/.ignore
SSL PEM Private key
Maximum size of request header (default 4096)
+--rc-min-tls-version=VALUE
+The minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
--rc-user=VALUE
User name for authentication.
--rc-pass=VALUE
@@ -5743,7 +5807,7 @@ dir1/dir2/dir3/.ignore
Specifying remotes to work on
Remotes are specified with the fs=
, srcFs=
, dstFs=
parameters depending on the command being used.
The parameters can be a string as per the rest of rclone, eg s3:bucket/path
or :sftp:/my/dir
. They can also be specified as JSON blobs.
-If specifyng a JSON blob it should be a object mapping strings to strings. These values will be used to configure the remote. There are 3 special values which may be set:
+If specifying a JSON blob it should be an object mapping strings to strings. These values will be used to configure the remote. There are 3 special values which may be set:
type
- set to type
to specify a remote called :type:
_name
- set to name
to specify a remote called name:
@@ -6140,6 +6204,11 @@ OR
- jobid - id of the job (integer).
+job/stopgroup: Stop all running jobs in a group
+Parameters:
+
+- group - name of the group (string).
+
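+A hypothetical invocation (the group name myGroup is illustrative):
+rclone rc job/stopgroup group=myGroup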
mount/listmounts: Show current mount points
This shows currently mounted points, which can be used for performing an unmount.
This takes no parameters and returns
@@ -6186,8 +6255,8 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"Cache
Example:
rclone rc mount/unmount mountPoint=/home/<user>/mountPoint
Authentication is required for this call.
-mount/unmountall: Show current mount points
-This shows currently mounted points, which can be used for performing an unmount.
+mount/unmountall: Unmount all active mounts
+rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
This takes no parameters and returns error if unmount does not succeed.
Eg
rclone rc mount/unmountall
@@ -6569,7 +6638,7 @@ rclone rc options/set --json '{"main": {"LogLevel": 8}}&
rc/noopauth: Echo the input to the output parameters requiring auth
This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.
Authentication is required for this call.
-sync/bisync: Perform bidirectonal synchronization between two paths.
+sync/bisync: Perform bidirectional synchronization between two paths.
This takes the following parameters
- path1 - a remote directory string e.g.
drive:path1
@@ -6954,15 +7023,6 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
-Hubic |
-MD5 |
-R/W |
-No |
-No |
-R/W |
-- |
-
-
Internet Archive |
MD5, SHA1, CRC32 |
R/W ¹¹ |
@@ -6971,7 +7031,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
RWU |
-
+
Jottacloud |
MD5 |
R/W |
@@ -6980,7 +7040,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
-
+
Koofr |
MD5 |
- |
@@ -6989,7 +7049,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Mail.ru Cloud |
Mailru ⁶ |
R/W |
@@ -6998,7 +7058,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Mega |
- |
- |
@@ -7007,7 +7067,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Memory |
MD5 |
R/W |
@@ -7016,7 +7076,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Microsoft Azure Blob Storage |
MD5 |
R/W |
@@ -7025,7 +7085,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R/W |
- |
-
+
Microsoft OneDrive |
SHA1 ⁵ |
R/W |
@@ -7034,7 +7094,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
-
+
OpenDrive |
MD5 |
R/W |
@@ -7043,7 +7103,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
OpenStack Swift |
MD5 |
R/W |
@@ -7052,6 +7112,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R/W |
- |
+
+Oracle Object Storage |
+MD5 |
+R/W |
+No |
+No |
+R/W |
+- |
+
pCloud |
MD5, SHA1 ⁷ |
@@ -7116,6 +7185,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
+SMB |
+- |
+- |
+Yes |
+No |
+- |
+- |
+
+
SugarSync |
- |
- |
@@ -7124,7 +7202,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Storj |
- |
R |
@@ -7133,7 +7211,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Uptobox |
- |
- |
@@ -7142,7 +7220,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
WebDAV |
MD5, SHA1 ³ |
R ⁴ |
@@ -7151,7 +7229,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
Yandex Disk |
MD5 |
R/W |
@@ -7160,7 +7238,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R |
- |
-
+
Zoho WorkDrive |
- |
- |
@@ -7169,7 +7247,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
- |
- |
-
+
The local filesystem |
All |
R/W |
@@ -7197,7 +7275,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
The cloud storage system supports various hash types of the objects. The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum
flag in syncs and in the check
command.
To use the verify checksums when transferring between cloud storage systems they must support a common hash type.
ModTime
-Allmost all cloud storage systems store some sort of timestamp on objects, but several of them not something that is appropriate to use for syncing. E.g. some backends will only write a timestamp that represent the time of the upload. To be relevant for syncing it should be able to store the modification time of the source object. If this is not the case, rclone will only check the file size by default, though can be configured to check the file hash (with the --checksum
flag). Ideally it should also be possible to change the timestamp of an existing file without having to re-upload it.
+Almost all cloud storage systems store some sort of timestamp on objects, but for several of them it is not something that is appropriate to use for syncing. E.g. some backends will only write a timestamp that represents the time of the upload. To be relevant for syncing it should be able to store the modification time of the source object. If this is not the case, rclone will only check the file size by default, though it can be configured to check the file hash (with the --checksum
flag). Ideally it should also be possible to change the timestamp of an existing file without having to re-upload it.
Storage systems with a -
in the ModTime column mean that the modification time read on objects is not the modification time of the file when uploaded. It is most likely the time the file was uploaded, or possibly something else (like the time the picture was taken in Google Photos).
Storage systems with a R
(for read-only) in the ModTime column mean that the system keeps modification times on objects, and updates them when uploading objects, but it does not support changing only the modification time (SetModTime
operation) without re-uploading, possibly not even without deleting the existing object first. Some operations in rclone, such as the copy
and sync
commands, will automatically check for SetModTime
support and re-upload if necessary to keep the modification times in sync. Other commands will not work without SetModTime
support, e.g. the touch
command on an existing file will fail, and changes to modification time only on files in a mount
will be silently ignored.
Storage systems with R/W
(for read/write) in the ModTime column mean that they also support modtime-only operations.
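For example, when syncing to a backend without usable modification times, comparing by checksum avoids relying on them. A sketch (paths illustrative):
rclone sync /path/to/source remote:bucket --checksum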
@@ -7872,19 +7950,6 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
-Hubic |
-Yes † |
-Yes |
-No |
-No |
-No |
-Yes |
-Yes |
-No |
-Yes |
-No |
-
-
Internet Archive |
No |
Yes |
@@ -7897,7 +7962,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
No |
-
+
Jottacloud |
Yes |
Yes |
@@ -7910,7 +7975,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
Yes |
-
+
Koofr |
Yes |
Yes |
@@ -7923,7 +7988,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
Yes |
-
+
Mail.ru Cloud |
Yes |
Yes |
@@ -7936,7 +8001,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
Yes |
-
+
Mega |
Yes |
No |
@@ -7949,7 +8014,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
Yes |
-
+
Memory |
No |
Yes |
@@ -7962,7 +8027,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
No |
-
+
Microsoft Azure Blob Storage |
Yes |
Yes |
@@ -7975,7 +8040,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
No |
-
+
Microsoft OneDrive |
Yes |
Yes |
@@ -7988,7 +8053,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
Yes |
-
+
OpenDrive |
Yes |
Yes |
@@ -8001,7 +8066,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
Yes |
-
+
OpenStack Swift |
Yes † |
Yes |
@@ -8014,6 +8079,19 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
No |
+
+Oracle Object Storage |
+Yes |
+Yes |
+No |
+No |
+Yes |
+Yes |
+No |
+No |
+No |
+No |
+
pCloud |
Yes |
@@ -8106,6 +8184,19 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
+SMB |
+No |
+No |
+Yes |
+Yes |
+No |
+No |
+Yes |
+No |
+No |
+Yes |
+
+
SugarSync |
Yes |
Yes |
@@ -8118,7 +8209,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
Yes |
-
+
Storj |
Yes † |
No |
@@ -8131,7 +8222,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
No |
-
+
Uptobox |
No |
Yes |
@@ -8144,7 +8235,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No |
No |
-
+
WebDAV |
Yes |
Yes |
@@ -8157,7 +8248,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
Yes |
-
+
Yandex Disk |
Yes |
Yes |
@@ -8170,7 +8261,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
Yes |
-
+
Zoho WorkDrive |
Yes |
Yes |
@@ -8183,7 +8274,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes |
Yes |
-
+
The local filesystem |
Yes |
No |
@@ -8200,7 +8291,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Purge
This deletes a directory quicker than just deleting all the files in the directory.
-† Note Swift, Hubic, and Storj implement this in order to delete directory markers but they don't actually have a quicker way of deleting files other than deleting them individually.
+† Note Swift and Storj implement this in order to delete directory markers but they don't actually have a quicker way of deleting files other than deleting them individually.
‡ StreamUpload is not supported with Nextcloud
Copy
Used when copying an object to and from the same remote. This is known as a server-side copy so you can copy a file without downloading it and uploading it again. It is used if you use rclone copy
or rclone move
if the remote doesn't support Move
directly.
@@ -8337,6 +8428,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--rc-job-expire-interval duration Interval to check for expired async jobs (default 10s)
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--rc-no-auth Don't require auth for certain methods
--rc-pass string Password for authentication
--rc-realm string Realm for authentication (default "rclone")
@@ -8353,6 +8445,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--refresh-times Refresh the modtime of remote files
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable)
+ --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
@@ -8378,7 +8471,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.60.0")
-v, --verbose count Print lots more stuff (repeat for more)
Backend Flags
These flags are available for every command. They control the backends and may be set in the config file.
@@ -8525,7 +8618,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--drive-use-trash Send files to the trash instead of deleting permanently (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download (default off)
--dropbox-auth-url string Auth server URL
- --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish comitting (default 10m0s)
+ --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--dropbox-batch-mode string Upload file batching sync|async|off (default "sync")
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -8559,6 +8652,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--ftp-disable-utf8 Disable using UTF-8 even if server advertises support
--ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
+ --ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
@@ -8577,6 +8671,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gcs-endpoint string Endpoint for the service
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
--gcs-object-acl string Access Control List for new objects
@@ -8622,14 +8717,6 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
- --hubic-auth-url string Auth server URL
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
- --hubic-client-id string OAuth Client Id
- --hubic-client-secret string OAuth Client Secret
- --hubic-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
- --hubic-no-chunk Don't chunk files during streaming upload
- --hubic-token string OAuth Access Token as a JSON blob
- --hubic-token-url string Token server url
--internetarchive-access-key-id string IAS3 Access Key
--internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true)
--internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
@@ -8698,6 +8785,22 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
+ --oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
+ --oos-compartment string Object storage compartment OCID
+ --oos-config-file string Path to OCI config file (default "~/.oci/config")
+ --oos-config-profile string Profile name inside the oci config file (default "Default")
+ --oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
+ --oos-copy-timeout Duration Timeout for copy (default 1m0s)
+ --oos-disable-checksum Don't store MD5 checksum with object metadata
+ --oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --oos-endpoint string Endpoint for Object storage API
+ --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
+ --oos-namespace string Object storage namespace
+ --oos-no-check-bucket If set, don't attempt to check the bucket exists or create it
+ --oos-provider string Choose your Auth Provider (default "env_auth")
+ --oos-region string Object storage Region
+ --oos-upload-concurrency int Concurrency for multipart uploads (default 10)
+ --oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
@@ -8729,6 +8832,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--s3-bucket-acl string Canned ACL used when creating buckets
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
+ --s3-decompress If set this will decompress gzip encoded objects
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
@@ -8747,6 +8851,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity
--s3-no-head-object If set, do not do HEAD before GET when getting objects
+ --s3-no-system-metadata Suppress setting and reading of system metadata
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
@@ -8756,7 +8861,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
- --s3-sse-customer-key string If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data
+ --s3-sse-customer-key string To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
+ --s3-sse-customer-key-base64 string If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
@@ -8766,6 +8872,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
+ --s3-version-at Time Show file versions as they were at the specified time (default off)
+ --s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
--seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
@@ -8812,6 +8920,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent")
--skip-links Don't warn about skipped symlinks
+ --smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
+ --smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
+ --smb-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true)
+ --smb-host string SMB server hostname to connect to
+ --smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
+ --smb-pass string SMB password (obscured)
+ --smb-port int SMB port number (default 445)
+ --smb-user string SMB username (default "$USER")
--storj-access-grant string Access grant
--storj-api-key string API key
--storj-passphrase string Encryption passphrase
@@ -8842,6 +8959,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--swift-key string API key or password (OS_PASSWORD)
--swift-leave-parts-on-error If true avoid calling abort upload on a failure
--swift-no-chunk Don't chunk files during streaming upload
+ --swift-no-large-objects Disable support for static and dynamic large objects
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
@@ -9314,7 +9432,7 @@ Optional Flags:
Error handling
Certain bisync critical errors, such as file copy/move failing, will result in a bisync lockout of following runs. The lockout is asserted because the sync status and history of the Path1 and Path2 filesystems cannot be trusted, so it is safer to block any further changes until someone checks things out. The recovery is to do a --resync
again.
It is recommended to use --resync --dry-run --verbose
initially and carefully review what changes will be made before running the --resync
without --dry-run
.
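A sketch of such a preview run (the two paths are illustrative):
rclone bisync remote1:path1 remote2:path2 --resync --dry-run --verbose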
-Most of these events come up due to a error status from an internal call. On such a critical error the {...}.path1.lst
and {...}.path2.lst
listing files are renamed to extension .lst-err
, which blocks any future bisync runs (since the normal .lst
files are not found). Bisync keeps them under bisync
subdirectory of the rclone cache direcory, typically at ${HOME}/.cache/rclone/bisync/
on Linux.
+Most of these events come up due to an error status from an internal call. On such a critical error the {...}.path1.lst
and {...}.path2.lst
listing files are renamed to extension .lst-err
, which blocks any future bisync runs (since the normal .lst
files are not found). Bisync keeps them under bisync
subdirectory of the rclone cache directory, typically at ${HOME}/.cache/rclone/bisync/
on Linux.
Some errors are considered temporary and re-running the bisync is not blocked. The critical return blocks further bisync runs.
Lock file
When bisync is running, a lock file is created in the bisync working directory, typically at ~/.cache/rclone/bisync/PATH1..PATH2.lck
on Linux. If bisync should crash or hang, the lock file will remain in place and block any further runs of bisync for the same paths. Delete the lock file as part of debugging the situation. The lock file effectively blocks follow-on (e.g., scheduled by cron) runs when the prior invocation is taking a long time. The lock file contains the PID of the blocking process, which may help in debugging.
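For example, after verifying that no bisync process is still running, a stale lock could be removed on Linux with (the lock file name is illustrative):
rm ~/.cache/rclone/bisync/PATH1..PATH2.lck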
@@ -9339,7 +9457,7 @@ rclone copy PATH2 PATH2 --filter "+ */" --filter "- **" --cr
Case sensitivity
Syncing with case-insensitive filesystems, such as Windows or Box
, can result in file name conflicts. This will be fixed in a future release. The near term workaround is to make sure that files on both sides don't have spelling case differences (Smile.jpg
vs. smile.jpg
).
Windows support
-Bisync has been tested on Windows 8.1, Windows 10 Pro 64-bit and on Windows Github runners.
+Bisync has been tested on Windows 8.1, Windows 10 Pro 64-bit and on Windows GitHub runners.
Drive letters are allowed, including drive letters mapped to network drives (rclone bisync J:\localsync GDrive:
). If a drive letter is omitted, the shell current drive is the default. Drive letters are a single character followed by :
, so cloud names must be more than one character long.
Absolute paths (with or without a drive letter), and relative paths (with or without a drive letter) are supported.
Working directory is created at C:\Users\MyLogin\AppData\Local\rclone\bisync
.
@@ -9596,11 +9714,11 @@ Options:
path1
and/or path2
subdirectories are created in a temporary directory under the respective local or cloud test remote.
- By default, the Path1 and Path2 test dirs and workdir will be deleted after each test run. The
-no-cleanup
flag disables purging these directories when validating and debugging a given test. These directories will be flushed before running another test, independent of the -no-cleanup
usage.
- You will likely want to add - /testdir/
to your normal bisync --filters-file
so that normal syncs do not attempt to sync the test temporary directories, which may have RCLONE_TEST
miscompares in some testcases which would otherwise trip the --check-access
system. The --check-access
mechanism is hard-coded to ignore RCLONE_TEST
files beneath bisync/testdata
, so the test cases may reside on the synched tree even if there are check file mismatches in the test tree.
-- Some Dropbox tests can fail, notably printing the following message:
src and dst identical but can't set mod time without deleting and re-uploading
This is expected and happens due a way Dropbox handles modificaion times. You should use the -refresh-times
test flag to make up for this.
+- Some Dropbox tests can fail, notably printing the following message:
src and dst identical but can't set mod time without deleting and re-uploading
This is expected and happens due to the way Dropbox handles modification times. You should use the -refresh-times
test flag to make up for this.
- If Dropbox tests hit the request limit for you and print the error message
too_many_requests/...: Too many requests or write operations.
then follow the Dropbox App ID instructions.
Updating golden results
-Sometimes even a slight change in the bisync source can cause little changes spread around many log files. Updating them manually would be a nighmare.
+Sometimes even a slight change in the bisync source can cause little changes spread around many log files. Updating them manually would be a nightmare.
The -golden
flag will store the test.log
and *.lst
listings from each test case into respective golden directories. Golden results will automatically contain generic strings instead of local or cloud paths which means that they should match when run with a different cloud service.
Your normal workflow might be as follows:
1. Git-clone the rclone sources locally
2. Modify bisync source and check that it builds
3. Run the whole test suite go test ./cmd/bisync -remote local
4. If some tests show log difference, recheck them individually, e.g.: go test ./cmd/bisync -remote local -case basic
5. If you are convinced with the difference, goldenize all tests at once: go test ./cmd/bisync -remote local -golden
6. Use word diff: git diff --word-diff ./cmd/bisync/testdata/. Please note that normal line-level diff is generally useless here.
7. Check the difference carefully!
8. Commit the change (git commit) only if you are sure. If unsure, save your code changes then wipe the log diffs from git: git reset [--hard].
Structure of test scenarios
@@ -9891,6 +10009,7 @@ y/e/d> y
During the initial setup with rclone config
you will specify the target remote. The target remote can either be a local path or another remote.
Subfolders can be used in target remote. Assume an alias remote named backup
with the target mydrive:private/backup
. Invoking rclone mkdir backup:desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop
.
There will be no special handling of paths containing ..
segments. Invoking rclone mkdir backup:../desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop
. The empty path is not allowed as a remote. To alias the current directory use .
instead.
+The target remote can also be a connection string. This can be used to modify the config of a remote for different uses, e.g. the alias myDriveTrash
with the target remote myDrive,trashed_only:
can be used to only show the trashed files in myDrive
.
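+As a sketch, such an alias could be defined in the config file like this (assuming a remote named myDrive already exists):
+[myDriveTrash]
+type = alias
+remote = myDrive,trashed_only: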
Configuration
Here is an example of how to make an alias called remote
for local folder. First run:
rclone config
@@ -10172,7 +10291,9 @@ y/e/d> y
- Huawei OBS
- IBM COS S3
- IDrive e2
+- IONOS Cloud
- Minio
+- Qiniu Cloud Object Storage (Kodo)
- RackCorp Object Storage
- Scaleway
- Seagate Lyve Cloud
@@ -10430,7 +10551,7 @@ y/e/d>
These flags can and should be used in combination with --fast-list
- see below.
If using rclone mount
or any command using the VFS (eg rclone serve
) commands then you might want to consider using the VFS flag --no-modtime
which will stop rclone reading the modification time for every object. You could also use --use-server-modtime
if you are happy with the modification times of the objects being the time of upload.
Avoiding GET requests to read directory listings
-Rclone's default directory traversal is to process each directory individually. This takes one API call per directory. Using the --fast-list
flag will read all info about the the objects into memory first using a smaller number of API calls (one per 1000 objects). See the rclone docs for more details.
+Rclone's default directory traversal is to process each directory individually. This takes one API call per directory. Using the --fast-list
flag will read all info about the objects into memory first using a smaller number of API calls (one per 1000 objects). See the rclone docs for more details.
rclone sync --fast-list --checksum /path/to/source s3:bucket
--fast-list
trades off API transactions for memory use. As a rough guide rclone uses 1k of memory per object stored, so using --fast-list
on a sync of a million objects will use roughly 1 GiB of RAM.
If you are only copying a small number of files into a big repository then using --no-traverse
is a good idea. This finds objects directly instead of through directory listings. You can do a "top-up" sync very cheaply by using --max-age
and --no-traverse
to copy only recent files, eg
@@ -10446,6 +10567,36 @@ y/e/d>
However for objects which were uploaded as multipart uploads or with server side encryption (SSE-AWS or SSE-C) the ETag
header is no longer the MD5 sum of the data, so rclone adds an additional piece of metadata X-Amz-Meta-Md5chksum
which is a base64 encoded MD5 hash (in the same format as is required for Content-MD5
).
For large objects, calculating this hash can take some time so the addition of this hash can be disabled with --s3-disable-checksum
. This will mean that these objects do not have an MD5 checksum.
Note that reading this from the object takes an additional HEAD
request as the metadata isn't returned in object listings.
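Reading this checksum back can be done with a sketch like the following (bucket and object path are illustrative):
rclone md5sum s3:bucket/path/to/object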
+Versions
+When bucket versioning is enabled (this can be done with the rclone backend versioning
command), then when rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available.
+Old versions of files, where available, are visible using the --s3-versions
flag.
+It is also possible to view a bucket as it was at a certain point in time, using the --s3-version-at
flag. This will show the file versions as they were at that time, showing files that have been deleted afterwards, and hiding files that were created since.
+If you wish to remove all the old versions then you can use the rclone backend cleanup-hidden remote:bucket
command which will delete all the old hidden versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, e.g. rclone backend cleanup-hidden remote:bucket/path/to/stuff
.
+When you purge
a bucket, the current and the old versions will be deleted then the bucket will be deleted.
+However delete
will cause the current versions of the files to become hidden old versions.
+Here is a session showing the listing and retrieval of an old version followed by a cleanup
of the old versions.
+Show current version and all the versions with --s3-versions
flag.
+$ rclone -q ls s3:cleanup-test
+ 9 one.txt
+
+$ rclone -q --s3-versions ls s3:cleanup-test
+ 9 one.txt
+ 8 one-v2016-07-04-141032-000.txt
+ 16 one-v2016-07-04-141003-000.txt
+ 15 one-v2016-07-02-155621-000.txt
+Retrieve an old version
+$ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
+
+$ ls -l /tmp/one-v2016-07-04-141003-000.txt
+-rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
+Clean up all the old versions and show that they've gone.
+$ rclone -q backend cleanup-hidden s3:cleanup-test
+
+$ rclone -q ls s3:cleanup-test
+ 9 one.txt
+
+$ rclone -q --s3-versions ls s3:cleanup-test
+ 9 one.txt
Cleanup
If you run rclone cleanup s3:bucket
then it will remove all pending multipart uploads older than 24 hours. You can use the -i
flag to see exactly what it will do. If you want more control over the expiry date then run rclone backend cleanup s3:bucket -o max-age=1h
to expire all uploads older than one hour. You can use rclone backend list-multipart-uploads s3:bucket
to see the pending multipart uploads.
Restricted filename characters
@@ -10592,7 +10743,7 @@ y/e/d>
As mentioned in the Hashes section, small files that are not uploaded as multipart use a different tag, causing the upload to fail. A simple solution is to set the --s3-upload-cutoff 0
and force all the files to be uploaded as multipart.
Standard options
-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi).
--s3-provider
Choose your S3 provider.
Properties:
@@ -10647,6 +10798,10 @@ y/e/d>
+- "IONOS"
+
- "LyveCloud"
- Seagate Lyve Cloud
@@ -10687,6 +10842,10 @@ y/e/d>
+- "Qiniu"
+
+- Qiniu Object Storage (Kodo)
+
- "Other"
- Any other S3 compatible provider
@@ -11079,12 +11238,86 @@ y/e/d>
--s3-region
Region to connect to.
+Properties:
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+
+- "cn-east-1"
+
+- The default endpoint - a good choice if you are unsure.
+- East China Region 1.
+- Needs location constraint cn-east-1.
+
+- "cn-east-2"
+
+- East China Region 2.
+- Needs location constraint cn-east-2.
+
+- "cn-north-1"
+
+- North China Region 1.
+- Needs location constraint cn-north-1.
+
+- "cn-south-1"
+
+- South China Region 1.
+- Needs location constraint cn-south-1.
+
+- "us-north-1"
+
+- North America Region.
+- Needs location constraint us-north-1.
+
+- "ap-southeast-1"
+
+- Southeast Asia Region 1.
+- Needs location constraint ap-southeast-1.
+
+- "ap-northeast-1"
+
+- Northeast Asia Region 1.
+- Needs location constraint ap-northeast-1.
+
+
+
+--s3-region
+Region where your bucket will be created and your data stored.
+Properties:
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Provider: IONOS
+- Type: string
+- Required: false
+- Examples:
+
+- "de"
+
+- "eu-central-2"
+
+- "eu-south-2"
+
+
+
+--s3-region
+Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
-- Provider: !AWS,Alibaba,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
+- Provider: !AWS,Alibaba,ChinaMobile,Cloudflare,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
- Type: string
- Required: false
- Examples:
@@ -11531,6 +11764,32 @@ y/e/d>
--s3-endpoint
+Endpoint for IONOS S3 Object Storage.
+Specify the endpoint from the same region.
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Provider: IONOS
+- Type: string
+- Required: false
+- Examples:
+
+- "s3-eu-central-1.ionoscloud.com"
+
+- "s3-eu-central-2.ionoscloud.com"
+
+- "s3-eu-south-2.ionoscloud.com"
+
+
+
+--s3-endpoint
Endpoint for OSS API.
Properties:
@@ -11643,7 +11902,7 @@ y/e/d>
---s3-endpoint
+--s3-endpoint
Endpoint for OBS API.
Properties:
@@ -11716,7 +11975,7 @@ y/e/d>
---s3-endpoint
+--s3-endpoint
Endpoint for Scaleway Object Storage.
Properties:
@@ -11741,7 +12000,7 @@ y/e/d>
---s3-endpoint
+--s3-endpoint
Endpoint for StackPath Object Storage.
Properties:
@@ -11766,7 +12025,7 @@ y/e/d>
---s3-endpoint
+--s3-endpoint
Endpoint of the Shared Gateway.
Properties:
@@ -11791,7 +12050,7 @@ y/e/d>
---s3-endpoint
+--s3-endpoint
Endpoint for Tencent COS API.
Properties:
@@ -11880,7 +12139,7 @@ y/e/d>
---s3-endpoint
+--s3-endpoint
Endpoint for RackCorp Object Storage.
Properties:
@@ -11969,14 +12228,55 @@ y/e/d>
---s3-endpoint
+--s3-endpoint
+Endpoint for Qiniu Object Storage.
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+
+- "s3-cn-east-1.qiniucs.com"
+
+- "s3-cn-east-2.qiniucs.com"
+
+- "s3-cn-north-1.qiniucs.com"
+
+- North China Endpoint 1
+
+- "s3-cn-south-1.qiniucs.com"
+
+- South China Endpoint 1
+
+- "s3-us-north-1.qiniucs.com"
+
+- North America Endpoint 1
+
+- "s3-ap-southeast-1.qiniucs.com"
+
+- Southeast Asia Endpoint 1
+
+- "s3-ap-northeast-1.qiniucs.com"
+
+- Northeast Asia Endpoint 1
+
+
+
+--s3-endpoint
Endpoint for S3 API.
Required when using an S3 clone.
Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
-- Provider: !AWS,IBMCOS,IDrive,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp
+- Provider: !AWS,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp,Qiniu
- Type: string
- Required: false
- Examples:
@@ -12542,12 +12842,54 @@ y/e/d>
--s3-location-constraint
Location constraint - must be set to match the Region.
+Used when creating buckets only.
+Properties:
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+
+- "cn-east-1"
+
+- "cn-east-2"
+
+- "cn-north-1"
+
+- "cn-south-1"
+
+- "us-north-1"
+
+- North America Region 1
+
+- "ap-southeast-1"
+
+- Southeast Asia Region 1
+
+- "ap-northeast-1"
+
+- Northeast Asia Region 1
+
+
+
+--s3-location-constraint
+Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Properties:
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: !AWS,IBMCOS,IDrive,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,StackPath,Storj,TencentCOS
+- Provider: !AWS,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS
- Type: string
- Required: false
@@ -12855,8 +13197,37 @@ y/e/d>
+--s3-storage-class
+The storage class to use when storing new objects in Qiniu.
+Properties:
+
+- Config: storage_class
+- Env Var: RCLONE_S3_STORAGE_CLASS
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+
+- "STANDARD"
+
+- Standard storage class
+
+- "LINE"
+
+- Infrequent access storage mode
+
+- "GLACIER"
+
+- "DEEP_ARCHIVE"
+
+- Deep archive storage mode
+
+
+
Advanced options
-Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi).
--s3-bucket-acl
Canned ACL used when creating buckets.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
@@ -12924,7 +13295,8 @@ y/e/d>
--s3-sse-customer-key
-If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.
+To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data.
+Alternatively you can provide --sse-customer-key-base64.
Properties:
- Config: sse_customer_key
@@ -12940,6 +13312,24 @@ y/e/d>
+--s3-sse-customer-key-base64
+If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data.
+Alternatively you can provide --sse-customer-key.
+Properties:
+
+- Config: sse_customer_key_base64
+- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_BASE64
+- Provider: AWS,Ceph,ChinaMobile,Minio
+- Type: string
+- Required: false
+- Examples:
+
+
--s3-sse-customer-key-md5
If using SSE-C you may provide the secret encryption key MD5 checksum (optional).
If you leave it blank, this is calculated automatically from the sse_customer_key provided.
@@ -13251,6 +13641,47 @@ Windows: "%USERPROFILE%\.aws\credentials"
- Type: bool
- Default: false
+--s3-versions
+Include old versions in directory listings.
+Properties:
+
+- Config: versions
+- Env Var: RCLONE_S3_VERSIONS
+- Type: bool
+- Default: false
+
+--s3-version-at
+Show file versions as they were at the specified time.
+The parameter should be a date, "2006-01-02", datetime "2006-01-02 15:04:05" or a duration for that long ago, eg "100d" or "1h".
+Note that when using this no file write operations are permitted, so you can't upload files or delete them.
+See the time option docs for valid formats.
+Properties:
+
+- Config: version_at
+- Env Var: RCLONE_S3_VERSION_AT
+- Type: Time
+- Default: off
+
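+For example, to list a bucket as it was on a given (illustrative) date:
+rclone ls s3:bucket --s3-version-at "2006-01-02"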
+--s3-decompress
+If set this will decompress gzip encoded objects.
+It is possible to upload objects to S3 with "Content-Encoding: gzip" set. Normally rclone will download these files as compressed objects.
+If this flag is set then rclone will decompress these files with "Content-Encoding: gzip" as they are received. This means that rclone can't check the size and hash but the file contents will be decompressed.
+Properties:
+
+- Config: decompress
+- Env Var: RCLONE_S3_DECOMPRESS
+- Type: bool
+- Default: false
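+A sketch of downloading such gzip-encoded objects decompressed (paths illustrative):
+rclone copy s3:bucket/path /tmp/dir --s3-decompress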
+
+
+--s3-no-system-metadata
+Suppress setting and reading of system metadata
+Properties:
+
+- Config: no_system_metadata
+- Env Var: RCLONE_S3_NO_SYSTEM_METADATA
+- Type: bool
+- Default: false
+
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.
Here are the possible system metadata items for the s3 backend.
@@ -13406,6 +13837,20 @@ rclone backend cleanup -o max-age=7w s3:bucket/path/to/object
- "max-age": Max age of upload to delete
+cleanup-hidden
+Remove old versions of files.
+rclone backend cleanup-hidden remote: [options] [<arguments>+]
+This command removes any old hidden versions of files on a versions enabled bucket.
+Note that you can use -i/--dry-run with this command to see what it would do.
+rclone backend cleanup-hidden s3:bucket/path/to/dir
+versioning
+Set/get versioning support for a bucket.
+rclone backend versioning remote: [options] [<arguments>+]
+This command sets versioning support if a parameter is passed and then returns the current versioning status for the bucket supplied.
+rclone backend versioning s3:bucket # read status only
+rclone backend versioning s3:bucket Enabled
+rclone backend versioning s3:bucket Suspended
+It may return "Enabled", "Suspended" or "Unversioned". Note that once versioning has been enabled the status can't be set back to "Unversioned".
Anonymous access to public buckets
If you want to use rclone to access a public bucket, configure with a blank access_key_id
and secret_access_key
. Your config should end up looking like this:
[anons3]
@@ -13973,6 +14418,132 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
+IONOS Cloud
+IONOS S3 Object Storage is a service offered by IONOS for storing and accessing unstructured data. To connect to the service, you will need an access key and a secret key. These can be found in the Data Center Designer, by selecting Manager resources > Object Storage Key Manager.
+Here is an example of a configuration. First, run rclone config
. This will walk you through an interactive setup process. Type n
to add the new remote, and then enter a name:
+Enter name for new remote.
+name> ionos-fra
+Type s3
to choose the connection type:
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi
+ \ (s3)
+[snip]
+Storage> s3
+Type IONOS
:
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / IONOS Cloud
+ \ (IONOS)
+[snip]
+provider> IONOS
+Press Enter to choose the default option Enter AWS credentials in the next step
:
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth>
+Enter your Access Key and Secret key. These can be retrieved in the Data Center Designer by clicking on the menu "Manager resources" / "Object Storage Key Manager".
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> YOUR_ACCESS_KEY
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> YOUR_SECRET_KEY
+Choose the region where your bucket is located:
+Option region.
+Region where your bucket will be created and your data stored.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Frankfurt, Germany
+ \ (de)
+ 2 / Berlin, Germany
+ \ (eu-central-2)
+ 3 / Logrono, Spain
+ \ (eu-south-2)
+region> 2
+Choose the endpoint from the same region:
+Option endpoint.
+Endpoint for IONOS S3 Object Storage.
+Specify the endpoint from the same region.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Frankfurt, Germany
+ \ (s3-eu-central-1.ionoscloud.com)
+ 2 / Berlin, Germany
+ \ (s3-eu-central-2.ionoscloud.com)
+ 3 / Logrono, Spain
+ \ (s3-eu-south-2.ionoscloud.com)
+endpoint> 1
+Press Enter to choose the default option or choose the desired ACL setting:
+Option acl.
+Canned ACL used when creating buckets and storing or copying objects.
+This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Note that this ACL is applied when server-side copying objects as S3
+doesn't copy the ACL from the source but rather writes a fresh one.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+ / Owner gets FULL_CONTROL.
+[snip]
+acl>
+Press Enter to skip the advanced config:
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n>
+Press Enter to save the configuration, and then q
to quit the configuration process:
+Configuration complete.
+Options:
+- type: s3
+- provider: IONOS
+- access_key_id: YOUR_ACCESS_KEY
+- secret_access_key: YOUR_SECRET_KEY
+- endpoint: s3-eu-central-1.ionoscloud.com
+Keep this "ionos-fra" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Done! Now you can try some commands (for macOS, use ./rclone
instead of rclone
).
+
+- Create a bucket (the name must be unique within the whole IONOS S3)
+
+rclone mkdir ionos-fra:my-bucket
+
+- List available buckets
+
+rclone lsd ionos-fra:
+
+- Copy a file from local to remote
+
+rclone copy /Users/file.txt ionos-fra:my-bucket
+
+- List contents of a bucket
+
+rclone ls ionos-fra:my-bucket
+
+- Copy a file from remote to local
+
+rclone copy ionos-fra:my-bucket/file.txt .
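+
+- Sync a local folder with the bucket (a sketch; the local path is illustrative)
+
+rclone sync /Users/local-folder ionos-fra:my-bucket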
Minio
Minio is an object storage server built for cloud application developers and devops.
It is very easy to install and provides an S3 compatible server which can be used by rclone.
@@ -14019,6 +14590,191 @@ location_constraint =
server_side_encryption =
So once set up, for example, to copy files into a bucket
rclone copy /path/to/files minio:bucket
+Qiniu Cloud Object Storage (Kodo)
+Qiniu Cloud Object Storage (Kodo) is built on independently developed core technology and, proven by extensive customer experience, holds a leading position in the market. Kodo can be widely applied to mass data management.
+To configure access to Qiniu Kodo, follow the steps below:
+
+- Run
rclone config
and select n
for a new remote.
+
+rclone config
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+- Give the name of the configuration. For example, name it 'qiniu'.
+
+name> qiniu
+
+- Select
s3
storage.
+
+Choose a number from below, or type in your own value
+ 1 / 1Fichier
+ \ (fichier)
+ 2 / Akamai NetStorage
+ \ (netstorage)
+ 3 / Alias for an existing remote
+ \ (alias)
+ 4 / Amazon Drive
+ \ (amazon cloud drive)
+ 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi
+ \ (s3)
+[snip]
+Storage> s3
+
+- Select
Qiniu
provider.
+
+Choose a number from below, or type in your own value
+1 / Amazon Web Services (AWS) S3
+ \ "AWS"
+[snip]
+22 / Qiniu Object Storage (Kodo)
+ \ (Qiniu)
+[snip]
+provider> Qiniu
+
+- Enter your SecretId and SecretKey of Qiniu Kodo.
+
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Enter a boolean value (true or false). Press Enter for the default ("false").
+Choose a number from below, or type in your own value
+ 1 / Enter AWS credentials in the next step
+ \ "false"
+ 2 / Get AWS credentials from the environment (env vars or IAM)
+ \ "true"
+env_auth> 1
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a string value. Press Enter for the default ("").
+access_key_id> AKIDxxxxxxxxxx
+AWS Secret Access Key (password)
+Leave blank for anonymous access or runtime credentials.
+Enter a string value. Press Enter for the default ("").
+secret_access_key> xxxxxxxxxxx
+
+- Select the endpoint for Qiniu Kodo. This is the standard endpoint for each region.
+
+ / The default endpoint - a good choice if you are unsure.
+ 1 | East China Region 1.
+ | Needs location constraint cn-east-1.
+ \ (cn-east-1)
+ / East China Region 2.
+ 2 | Needs location constraint cn-east-2.
+ \ (cn-east-2)
+ / North China Region 1.
+ 3 | Needs location constraint cn-north-1.
+ \ (cn-north-1)
+ / South China Region 1.
+ 4 | Needs location constraint cn-south-1.
+ \ (cn-south-1)
+ / North America Region.
+ 5 | Needs location constraint us-north-1.
+ \ (us-north-1)
+ / Southeast Asia Region 1.
+ 6 | Needs location constraint ap-southeast-1.
+ \ (ap-southeast-1)
+ / Northeast Asia Region 1.
+ 7 | Needs location constraint ap-northeast-1.
+ \ (ap-northeast-1)
+[snip]
+endpoint> 1
+
+Option endpoint.
+Endpoint for Qiniu Object Storage.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / East China Endpoint 1
+ \ (s3-cn-east-1.qiniucs.com)
+ 2 / East China Endpoint 2
+ \ (s3-cn-east-2.qiniucs.com)
+ 3 / North China Endpoint 1
+ \ (s3-cn-north-1.qiniucs.com)
+ 4 / South China Endpoint 1
+ \ (s3-cn-south-1.qiniucs.com)
+ 5 / North America Endpoint 1
+ \ (s3-us-north-1.qiniucs.com)
+ 6 / Southeast Asia Endpoint 1
+ \ (s3-ap-southeast-1.qiniucs.com)
+ 7 / Northeast Asia Endpoint 1
+ \ (s3-ap-northeast-1.qiniucs.com)
+endpoint> 1
+
+Option location_constraint.
+Location constraint - must be set to match the Region.
+Used when creating buckets only.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / East China Region 1
+ \ (cn-east-1)
+ 2 / East China Region 2
+ \ (cn-east-2)
+ 3 / North China Region 1
+ \ (cn-north-1)
+ 4 / South China Region 1
+ \ (cn-south-1)
+ 5 / North America Region 1
+ \ (us-north-1)
+ 6 / Southeast Asia Region 1
+ \ (ap-southeast-1)
+ 7 / Northeast Asia Region 1
+ \ (ap-northeast-1)
+location_constraint> 1
+
+- Choose acl and storage class.
+
+Note that this ACL is applied when server-side copying objects as S3
+doesn't copy the ACL from the source but rather writes a fresh one.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+ / Owner gets FULL_CONTROL.
+ 2 | The AllUsers group gets READ access.
+ \ (public-read)
+[snip]
+acl> 2
+The storage class to use when storing new objects in Qiniu.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ 1 / Standard storage class
+ \ (STANDARD)
+ 2 / Infrequent access storage mode
+ \ (LINE)
+ 3 / Archive storage mode
+ \ (GLACIER)
+ 4 / Deep archive storage mode
+ \ (DEEP_ARCHIVE)
+[snip]
+storage_class> 1
+Edit advanced config? (y/n)
+y) Yes
+n) No (default)
+y/n> n
+Remote config
+--------------------
+[qiniu]
+- type: s3
+- provider: Qiniu
+- access_key_id: xxx
+- secret_access_key: xxx
+- region: cn-east-1
+- endpoint: s3-cn-east-1.qiniucs.com
+- location_constraint: cn-east-1
+- acl: public-read
+- storage_class: STANDARD
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name Type
+==== ====
+qiniu s3
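+Once configured you can then use the remote like this (bucket name is illustrative):
+rclone mkdir qiniu:bucket
+rclone ls qiniu:bucket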
RackCorp
RackCorp Object Storage is an S3-compatible object storage platform from your friendly cloud provider RackCorp. The service is fast, reliable, well priced and located in many strategic locations unserviced by others, to ensure you can maintain data sovereignty.
Before you can use RackCorp Object Storage, you'll need to "sign up" for an account on our "portal". Next you can create an access key
, a secret key
and buckets
, in your location of choice with ease. These details are required for the next steps of configuration, when rclone config
asks for your access_key_id
and secret_access_key
.
@@ -14986,7 +15742,7 @@ y/e/d> y
Transfers
Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about --transfers 32
though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of --transfers 4
is definitely too low for Backblaze B2 though.
Note that uploading big files (bigger than 200 MiB by default) will use a 96 MiB RAM buffer by default. There can be at most --transfers
of these in use at any moment, so this sets the upper limit on the memory used.
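For example, a copy tuned along these lines might look like this (remote and bucket names are illustrative):
rclone copy --transfers 32 /path/to/source b2:bucket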
-Versions
+Versions
When rclone uploads a new version of a file, B2 keeps the old version of the file. Likewise when you delete a file, the old version will be marked hidden and still be available. Alternatively, you may opt in to a "hard delete" of files with the --b2-hard-delete
flag which would permanently remove the file instead of hiding it.
Old versions of files, where available, are visible using the --b2-versions
flag.
It is also possible to view a bucket as it was at a certain point in time, using the --b2-version-at
flag. This will show the file versions as they were at that time, showing files that have been deleted afterwards, and hiding files that were created since.
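For example (remote and bucket names are illustrative, and the timestamp is just an example):
rclone ls --b2-versions b2:bucket
rclone ls --b2-version-at 2022-06-15T00:00:00Z b2:bucket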
@@ -15033,7 +15789,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test
/b2api/v1/b2_get_upload_part_url
/b2api/v1/b2_upload_part/
/b2api/v1/b2_finish_large_file
-Versions
+Versions
Versions can be viewed with the --b2-versions
flag. When it is set rclone will show and act on older versions of files. For example
Listing without --b2-versions
$ rclone -q ls b2:cleanup-test
@@ -16643,10 +17399,10 @@ y/e/d>
Note: A string which does not contain a :
will be treated by rclone as a relative path in the local filesystem. For example, if you enter the name remote
without the trailing :
, it will be treated as a subdirectory of the current directory with name "remote".
If a path remote:path/to/dir
is specified, rclone stores encrypted files in path/to/dir
on the remote. With file name encryption, files saved to secret:subdir/subfile
are stored in the unencrypted path path/to/dir
but the subdir/subfile
element is encrypted.
The path you specify does not have to exist, rclone will create it when needed.
-If you intend to use the wrapped remote both directly for keeping unencrypted content, as well as through a crypt remote for encrypted content, it is recommended to point the crypt remote to a separate directory within the wrapped remote. If you use a bucket-based storage system (e.g. Swift, S3, Google Compute Storage, B2, Hubic) it is generally advisable to wrap the crypt remote around a specific bucket (s3:bucket
). If wrapping around the entire root of the storage (s3:
), and use the optional file name encryption, rclone will encrypt the bucket name.
+If you intend to use the wrapped remote both directly for keeping unencrypted content, as well as through a crypt remote for encrypted content, it is recommended to point the crypt remote to a separate directory within the wrapped remote. If you use a bucket-based storage system (e.g. Swift, S3, Google Compute Storage, B2) it is generally advisable to wrap the crypt remote around a specific bucket (s3:bucket
). If wrapping around the entire root of the storage (s3:
), and use the optional file name encryption, rclone will encrypt the bucket name.
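+A minimal sketch of such a crypt remote in the config file, wrapping a dedicated bucket (all names are illustrative, and the passwords would be stored in obscured form as written by rclone config):
+[secret]
+type = crypt
+remote = s3:mybucket/encrypted
+password = ***obscured***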
Changing password
Should the password, or the configuration file containing a lightly obscured form of the password, be compromised, you need to re-encrypt your data with a new password. Since rclone uses secret-key encryption, where the encryption key is generated directly from the password kept on the client, it is not possible to change the password/key of already encrypted content. Just changing the password configured for an existing crypt remote means you will no longer able to decrypt any of the previously encrypted content. The only possibility is to re-upload everything via a crypt remote configured with your new password.
-Depending on the size of your data, your bandwith, storage quota etc, there are different approaches you can take: - If you have everything in a different location, for example on your local system, you could remove all of the prior encrypted files, change the password for your configured crypt remote (or delete and re-create the crypt configuration), and then re-upload everything from the alternative location. - If you have enough space on the storage system you can create a new crypt remote pointing to a separate directory on the same backend, and then use rclone to copy everything from the original crypt remote to the new, effectively decrypting everything on the fly using the old password and re-encrypting using the new password. When done, delete the original crypt remote directory and finally the rclone crypt configuration with the old password. All data will be streamed from the storage system and back, so you will get half the bandwith and be charged twice if you have upload and download quota on the storage system.
+Depending on the size of your data, your bandwidth, storage quota etc, there are different approaches you can take: - If you have everything in a different location, for example on your local system, you could remove all of the prior encrypted files, change the password for your configured crypt remote (or delete and re-create the crypt configuration), and then re-upload everything from the alternative location. - If you have enough space on the storage system you can create a new crypt remote pointing to a separate directory on the same backend, and then use rclone to copy everything from the original crypt remote to the new, effectively decrypting everything on the fly using the old password and re-encrypting using the new password. When done, delete the original crypt remote directory and finally the rclone crypt configuration with the old password. All data will be streamed from the storage system and back, so you will get half the bandwidth and be charged twice if you have upload and download quota on the storage system.
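+A sketch of the second approach, assuming crypt remotes oldsecret: and newsecret: are configured with the old and new passwords respectively:
+rclone copy oldsecret: newsecret:
+When the copy has completed and been verified, the directory wrapped by oldsecret: and its configuration can be deleted.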
Note: A security problem related to the random password generator was fixed in rclone version 1.53.3 (released 2020-11-19). Passwords generated by rclone config in version 1.49.0 (released 2019-08-26) to 1.53.2 (released 2020-10-26) are not considered secure and should be changed. If you made up your own password, or used rclone version older than 1.49.0 or newer than 1.53.2 to generate it, you are not affected by this issue. See issue #4783 for more details, and a tool you can use to check if you are affected.
Example
Create the following file structure using "standard" file name encryption.
@@ -16852,7 +17608,7 @@ $ rclone -q ls secret:
--crypt-filename-encoding
How to encode the encrypted filename to text string.
-This option could help with shortening the encrypted filename. The suitable option would depend on the way your remote count the filename length and if it's case sensitve.
+This option could help with shortening the encrypted filename. The suitable option would depend on the way your remote counts the filename length and whether it is case sensitive.
Properties:
- Config: filename_encoding
@@ -17059,7 +17815,7 @@ y/e/d> y
--compress-level
GZIP compression level (-2 to 9).
Generally -1 (default, equivalent to 5) is recommended. Levels 1 to 9 increase compression at the cost of speed. Going past 6 generally offers very little return.
-Level -2 uses Huffmann encoding only. Only use if you know what you are doing. Level 0 turns off compression.
+Level -2 uses Huffman encoding only. Only use if you know what you are doing. Level 0 turns off compression.
Properties:
- Config: level
@@ -17156,7 +17912,7 @@ remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
[AllDrives]
type = combine
-remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
+upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
If you then add that config to your config file (find it with rclone config file
) then you can access all the shared drives in one place with the AllDrives:
remote.
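For example, to list the combined drives:
rclone lsd AllDrives: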
See the Google Drive docs for full info.
Standard options
@@ -17452,7 +18208,7 @@ y/e/d> y
- Default: 0s
--dropbox-batch-commit-timeout
-Max time to wait for a batch to finish comitting
+Max time to wait for a batch to finish committing
Properties:
- Config: batch_commit_timeout
@@ -17492,7 +18248,7 @@ y/e/d> y
Enterprise File Fabric
This backend supports Storage Made Easy's Enterprise File Fabric™ which provides a software solution to integrate and unify File and Object Storage accessible through a global file system.
Configuration
-The initial setup for the Enterprise File Fabric backend involves getting a token from the the Enterprise File Fabric which you need to do in your browser. rclone config
walks you through it.
+The initial setup for the Enterprise File Fabric backend involves getting a token from the Enterprise File Fabric which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -17684,7 +18440,7 @@ y/e/d> y
Configuration
To create an FTP configuration named remote
, run
rclone config
-Rclone config guides you through an interactive setup process. A minimal rclone FTP remote definition only requires host, username and password. For an anonymous FTP server, use anonymous
as username and your email address as password.
+Rclone config guides you through an interactive setup process. A minimal rclone FTP remote definition only requires host, username and password. For an anonymous FTP server, see below.
No remotes found, make a new one?
n) New remote
r) Rename remote
@@ -17748,13 +18504,19 @@ y/e/d> y
rclone ls remote:path/to/directory
Sync /home/local/directory
to the remote directory, deleting any excess files in the directory.
rclone sync -i /home/local/directory remote:directory
-Example without a config file
-rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=`rclone obscure dummy`
+Anonymous FTP
+When connecting to an FTP server that allows anonymous login, you can use the special "anonymous" username. Traditionally, this user account accepts any string as a password, although it is common to use either the password "anonymous" or "guest". Some servers require the use of a valid e-mail address as password.
+Using on-the-fly or connection string remotes makes it easy to access such servers, without requiring any configuration in advance. The following are examples of that:
+rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy)
+rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy):
+The above examples work in Linux shells and in PowerShell, but not Windows Command Prompt. They execute the rclone obscure command to create a password string in the format required by the pass option. The following examples are exactly the same, except that they use an already obscured string representation of the same password "dummy", and therefore work even in Windows Command Prompt:
+rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM
+rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM:
Implicit TLS
Rclone FTP supports implicit FTP over TLS servers (FTPS). This has to be enabled in the FTP backend config for the remote, or with --ftp-tls
. The default FTPS port is 990
, not 21
and can be set with --ftp-port
.
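For example, to list an FTPS server using implicit TLS on the default port (host, user and password are illustrative):
rclone lsf :ftp: --ftp-host=ftps.example.com --ftp-user=me --ftp-pass=$(rclone obscure mypassword) --ftp-tls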
Restricted filename characters
In addition to the default restricted characters set the following characters are also replaced:
-File names cannot end with the following characters. Repacement is limited to the last character in a file name:
+File names cannot end with the following characters. Replacement is limited to the last character in a file name:
format |
Name of format identified by Internet Archive |
string |
Comma-Separated Values |
-N |
+Y |
md5 |
MD5 hash calculated by Internet Archive |
string |
01234567012345670123456701234567 |
-N |
+Y |
mtime |
Time of last modification, managed by Rclone |
RFC 3339 |
2006-01-02T15:04:05.999999999Z |
-N |
+Y |
name |
Full file path, without the bucket part |
filename |
backend/internetarchive/internetarchive.go |
-N |
+Y |
old_version |
Whether the file was replaced and moved by keep-old-version flag |
boolean |
true |
-N |
+Y |
rclone-ia-mtime |
@@ -21201,48 +21837,60 @@ y/e/d> y
SHA1 hash calculated by Internet Archive |
string |
0123456701234567012345670123456701234567 |
-N |
+Y |
size |
File size in bytes |
decimal number |
123456 |
-N |
+Y |
source |
The source of the file |
string |
original |
-N |
+Y |
+summation |
+Check https://forum.rclone.org/t/31922 for how it is used |
+string |
+md5 |
+Y |
+
+
viruscheck |
The last time viruscheck process was run for the file (?) |
unixtime |
1654191352 |
-N |
+Y |
See the metadata docs for more info.
Jottacloud
-Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway. In addition to the official service at jottacloud.com, it also provides white-label solutions to different companies, such as: * Telia * Telia Cloud (cloud.telia.se) * Telia Sky (sky.telia.no) * Tele2 * Tele2 Cloud (mittcloud.tele2.se) * Elkjøp (with subsidiaries): * Elkjøp Cloud (cloud.elkjop.no) * Elgiganten Sweden (cloud.elgiganten.se) * Elgiganten Denmark (cloud.elgiganten.dk) * Giganti Cloud (cloud.gigantti.fi) * ELKO Clouud (cloud.elko.is)
+Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway. In addition to the official service at jottacloud.com, it also provides white-label solutions to different companies, such as: * Telia * Telia Cloud (cloud.telia.se) * Telia Sky (sky.telia.no) * Tele2 * Tele2 Cloud (mittcloud.tele2.se) * Elkjøp (with subsidiaries): * Elkjøp Cloud (cloud.elkjop.no) * Elgiganten Sweden (cloud.elgiganten.se) * Elgiganten Denmark (cloud.elgiganten.dk) * Giganti Cloud (cloud.gigantti.fi) * ELKO Cloud (cloud.elko.is)
+Most of the white-label versions are supported by this backend, although they may require a different authentication setup - described below.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
Authentication types
Some of the whitelabel versions use a different authentication method than the official service, and you have to choose the correct one when setting up the remote.
Standard authentication
-To configure Jottacloud you will need to generate a personal security token in the Jottacloud web interface. You will the option to do in your account security settings (for whitelabel version you need to find this page in its web interface). Note that the web interface may refer to this token as a JottaCli token.
+The standard authentication method used by the official service (jottacloud.com), as well as some of the whitelabel services, requires you to generate a single-use personal login token from the account security settings in the service's web interface. Log in to your account, go to "Settings" and then "Security", or use the direct link presented to you by rclone when configuring the remote: https://www.jottacloud.com/web/secure. Scroll down to the section "Personal login token", and click the "Generate" button. Note that if you are using a whitelabel service you probably can't use the direct link; you need to find the same page in their dedicated web interface, and it may be in a different location than described above.
+To access your account from multiple instances of rclone, you need to configure each of them with a separate personal login token. E.g. you create a Jottacloud remote with rclone in one location, and copy the configuration file to a second location where you also want to run rclone and access the same remote. Then you need to replace the token for one of them, using the config reconnect command, which requires you to generate a new personal login token and supply it as input. If you do not do this, the token may easily end up being invalidated, resulting in both instances failing with an error message along the lines of:
+oauth2: cannot fetch token: 400 Bad Request
+Response: {"error":"invalid_grant","error_description":"Stale token"}
+When this happens, you need to replace the token as described above to be able to use your remote again.
+All personal login tokens you have taken into use will be listed in the web interface under "My logged in devices", and from the right side of that list you can click the "X" button to revoke individual tokens.
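+For example, assuming a remote named jotta, you would generate a new personal login token in the web interface and then run:
+rclone config reconnect jotta:
+supplying the new token when prompted.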
Legacy authentication
If you are using one of the whitelabel versions (e.g. from Elkjøp) you may not have the option to generate a CLI token. In this case you'll have to use the legacy authentication. To do this select yes when the setup asks for legacy authentication and enter your username and password. The rest of the setup is identical to the default setup.
Telia Cloud authentication
Similar to other whitelabel versions Telia Cloud doesn't offer the option of creating a CLI token, and additionally uses a separate authentication flow where the username is generated internally. To set up rclone to use Telia Cloud, choose Telia Cloud authentication in the setup. The rest of the setup is identical to the default setup.
Tele2 Cloud authentication
As the Tele2-Com Hem merger was completed, this authentication can be used by former Com Hem Cloud and Tele2 Cloud customers, as no support for creating a CLI token exists. It additionally uses a separate authentication flow where the username is generated internally. To set up rclone to use Tele2 Cloud, choose Tele2 Cloud authentication in the setup. The rest of the setup is identical to the default setup.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
with the default setup. First run:
rclone config
This will guide you through an interactive setup process:
@@ -21339,7 +21987,7 @@ y/e/d> y
With rclone you'll want to use the standard Jotta/Archive device/mountpoint in most cases. However, you may for example want to access files from the sync or backup functionality provided by the official clients, and rclone therefore provides the option to select other devices and mountpoints during config.
You are allowed to create new devices and mountpoints. All devices except the built-in Jotta device are treated as backup devices by official Jottacloud clients, and the mountpoints on them are individual backup sets.
With the built-in Jotta device, only existing, built-in, mountpoints can be selected. In addition to the mentioned Archive and Sync, it may contain several other mountpoints such as: Latest, Links, Shared and Trash. All of these are special mountpoints with a different internal representation than the "regular" mountpoints. Rclone will only support them to a very limited degree. Generally you should avoid these, unless you know what you are doing.
---fast-list
+--fast-list
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Note that the implementation in Jottacloud always uses only a single API request to get the entire list, so for large folders this could lead to long wait time before the first results are shown.
Note also that with rclone version 1.58 and newer, information about MIME types is not available when using --fast-list
.
@@ -21398,12 +22046,12 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
Deleting files
By default, rclone will send all files to the trash when deleting files. They will be permanently deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately by using the --jottacloud-hard-delete flag, or set the equivalent environment variable. Emptying the trash is supported by the cleanup command.
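For example (remote name is illustrative):
rclone delete --jottacloud-hard-delete jotta:path/to/dir
rclone cleanup jotta: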
-Versions
+Versions
Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it. Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website.
Versioning can be disabled by the --jottacloud-no-versions
option. This is achieved by deleting the remote file prior to uploading a new version. If the upload then fails, no version of the file will be available in the remote.
To view your current quota you can use the rclone about remote:
command which will display your usage limit (unless it is unlimited) and the current usage.
-Advanced options
+Advanced options
Here are the Advanced options specific to jottacloud (Jottacloud).
--jottacloud-md5-memory-limit
Files bigger than this will be cached on disk to calculate the MD5 if required.
@@ -21461,7 +22109,7 @@ y/e/d> y
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ? in it, it will be mapped to ? instead.
Jottacloud only supports filenames up to 255 characters in length.
@@ -21470,7 +22118,7 @@ y/e/d> y
Koofr
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
The initial setup for Koofr involves creating an application password for rclone. You can do that by opening the Koofr web application, giving the password a nice name like rclone
and clicking on generate.
Here is an example of how to make a remote called koofr
. First run:
rclone config
@@ -21557,7 +22205,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
-Standard options
+Standard options
Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
--koofr-provider
Choose your storage provider.
@@ -21635,7 +22283,7 @@ y/e/d> y
- Type: string
- Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
--koofr-mountid
Mount ID of the mount to use.
@@ -21667,7 +22315,7 @@ y/e/d> y
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
Note that Koofr is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Providers
Koofr
@@ -21809,7 +22457,7 @@ y/e/d> y
- Storage keeps hash for all files and performs transparent deduplication, the hash algorithm is a modified SHA1
- If a particular file is already present in storage, one can quickly submit file hash instead of long file upload (this optimization is supported by rclone)
-Configuration
+Configuration
Here is an example of making a mailru configuration. First create a Mail.ru Cloud account and choose a tariff, then run
rclone config
This will guide you through an interactive setup process:
@@ -21875,7 +22523,7 @@ y/e/d> y
rclone ls remote:directory
Sync /home/local/directory
to the remote path, deleting any excess files in the path.
rclone sync -i /home/local/directory remote:directory
-Modified time
+Modified time
Files support a modification time attribute with up to 1 second precision. Directories do not have a modification time, which is shown as "Jan 1 1970".
Hash checksums
Hash sums use a custom Mail.ru algorithm based on SHA1. If file size is less than or equal to the SHA1 block size (20 bytes), its hash is simply its data right-padded with zero bytes. Hash sum of a larger file is computed as a SHA1 sum of the file data bytes concatenated with a decimal representation of the data length.
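A minimal sketch of this scheme in Python, written from the description above (illustrative only, not taken from the rclone source):
import hashlib

def mailru_hash(data: bytes) -> bytes:
    # Small inputs (up to the 20-byte SHA1 block size) hash to the data
    # itself, right-padded with zero bytes to 20 bytes.
    if len(data) <= 20:
        return data.ljust(20, b"\x00")
    # Larger inputs hash to SHA1 of the data concatenated with the
    # decimal representation of its length.
    return hashlib.sha1(data + str(len(data)).encode("ascii")).digest()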
@@ -21937,7 +22585,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to mailru (Mail.ru Cloud).
--mailru-user
User name (usually email).
@@ -21979,7 +22627,7 @@ y/e/d> y
-Advanced options
+Advanced options
Here are the Advanced options specific to mailru (Mail.ru Cloud).
--mailru-speedup-file-patterns
Comma separated list of file name patterns eligible for speedup (put by hash).
@@ -22109,7 +22757,7 @@ y/e/d> y
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
File size limits depend on your account. A single file size is limited to 2G for a free account and unlimited for paid tariffs. Please refer to the Mail.ru site for the total uploaded size limits.
Note that Mailru is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Mega
@@ -22117,7 +22765,7 @@ y/e/d> y
This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -22195,7 +22843,7 @@ y/e/d> y
Use rclone dedupe
to fix duplicated files.
Failure to log-in
Object not found
-If you are connecting to your Mega remote for the first time, to test access and syncronisation, you may receive an error such as
+If you are connecting to your Mega remote for the first time, to test access and synchronization, you may receive an error such as
Failed to create file system for "my-mega-remote:":
couldn't login: Object (typically, node or user) not found
The diagnostic steps often recommended in the rclone forum start with the MEGAcmd utility. Note that this refers to the official C++ command from https://github.com/meganz/MEGAcmd and not the Go command from t3rm1n4l/megacmd, which is no longer maintained.
@@ -22218,7 +22866,7 @@ me@example.com:/$
Note that once blocked, the use of other tools (such as megacmd) is not a sure workaround: the following megacmd login times have been observed in succession for a blocked remote: 7 min, 20 min, 30 min, 30 min, 30 min. Web access looks unaffected though.
Investigation is continuing in relation to workarounds based on timeouts, pacers, retries and tpslimits - if you discover something relevant, please post on the forum.
So, if rclone was working nicely and suddenly you are unable to log in and you are sure the user and the password are correct, it is likely that the remote has been blocked for a while.
-Standard options
+Standard options
Here are the Standard options specific to mega (Mega).
--mega-user
User name.
@@ -22239,7 +22887,7 @@ me@example.com:/$
- Type: string
- Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to mega (Mega).
--mega-debug
Output more debug from Mega.
@@ -22271,13 +22919,13 @@ me@example.com:/$
- Type: MultiEncoder
- Default: Slash,InvalidUtf8,Dot
-Limitations
+Limitations
This backend uses the go-mega Go library which is an open source Go library implementing the Mega API. There doesn't appear to be any documentation for the Mega protocol beyond the Mega C++ SDK source code so there are likely quite a few errors still remaining in this library.
Mega allows duplicate files which may confuse rclone.
Memory
The memory backend is an in-RAM backend. It does not persist its data - use the local backend for that.
The memory backend behaves like a bucket-based remote (e.g. like s3). Because it has no parameters you can just use it with the :memory:
remote name.
-Configuration
+Configuration
You can configure it as a remote like this with rclone config
too if you want to:
No remotes found, make a new one?
n) New remote
@@ -22317,7 +22965,7 @@ rclone serve sftp :memory:
Paths are specified as remote:
You may put subdirectories in too, e.g. remote:/path/to/dir
. If you have a CP code you can use that as the folder after the domain such as <domain>/<cpcode>/<internal directories within cpcode>.
For example, this is commonly configured with or without a CP code: * With a CP code. [your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/
* Without a CP code. [your-domain-prefix]-nsu.akamaihd.net
See all buckets with rclone lsd remote:
The initial setup for Netstorage involves getting an account and secret. Use rclone config
to walk you through the setup process.
-Configuration
+Configuration
Here's an example of how to make a remote called ns1
.
- To begin the interactive configuration process, enter this command:
@@ -22411,7 +23059,7 @@ y/e/d> y
With NetStorage, directories can exist in one of two forms:
- Explicit Directory. This is an actual, physical directory that you have created in a storage group.
-- Implicit Directory. This refers to a directory within a path that has not been physically created. For example, during upload of a file, non-existent subdirectories can be specified in the target path. NetStorage creates these as "implicit." While the directories aren't physically created, they exist implicitly and the noted path is connected with the uploaded file.
+- Implicit Directory. This refers to a directory within a path that has not been physically created. For example, during upload of a file, nonexistent subdirectories can be specified in the target path. NetStorage creates these as "implicit." While the directories aren't physically created, they exist implicitly and the noted path is connected with the uploaded file.
Rclone will intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly issue the mkdir command for each directory in the uploading path. This helps with interoperability with other Akamai services such as SFTP and the Content Management Shell (CMShell). Rclone will not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly.
--fast-list
/ ListR support
@@ -22425,7 +23073,7 @@ y/e/d> y
Purge
NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use quick-delete action for the purge command and if this functionality is disabled then will fall back to a standard delete method.
Note: Read the NetStorage Usage API for considerations when using "quick-delete". In general, using quick-delete method will not delete the tree immediately and objects targeted for quick-delete may still be accessible.
-Standard options
+Standard options
Here are the Standard options specific to netstorage (Akamai NetStorage).
--netstorage-host
Domain+path of NetStorage host to connect to.
@@ -22457,7 +23105,7 @@ y/e/d> y
- Type: string
- Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to netstorage (Akamai NetStorage).
--netstorage-protocol
Select between HTTP or HTTPS protocol.
@@ -22497,7 +23145,7 @@ y/e/d> y
The desired path location (including applicable sub-directories) ending in the object that will be the target of the symlink (for example, /links/mylink). Include the file extension for the object, if applicable.
rclone backend symlink <src> <path>
Microsoft Azure Blob Storage
Paths are specified as remote:container
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:container/path/to/dir
.
-Configuration
+Configuration
Here is an example of making a Microsoft Azure Blob Storage configuration for a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -22539,9 +23187,9 @@ y/e/d> y
rclone ls remote:container
Sync /home/local/directory
to the remote container, deleting any excess files in the container.
rclone sync -i /home/local/directory remote:container
---fast-list
+--fast-list
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
-Modified time
+Modified time
The modified time is stored as metadata on the object with the mtime
key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no overhead to using it.
When uploading large files, increasing the value of --azureblob-upload-concurrency
will increase performance at the cost of using more memory. The default of 16 is set quite conservatively to use less memory. It may be necessary to raise it to 64 or higher to fully utilize a 1 GBit/s link with a single file transfer.
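For example, a large single-file upload over a fast link might be tuned like this (container name is illustrative):
rclone copy --azureblob-upload-concurrency 64 /path/to/bigfile remote:container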
@@ -22604,7 +23252,7 @@ container/
Note that you can't see or access any other containers - this will fail
rclone ls azureblob:othercontainer
Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment such as a CI build server.
-Standard options
+Standard options
Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage).
--azureblob-account
Storage Account Name.
@@ -22672,7 +23320,7 @@ container/
- Type: bool
- Default: false
-Advanced options
+Advanced options
Here are the Advanced options specific to azureblob (Microsoft Azure Blob Storage).
--azureblob-msi-object-id
Object ID of the user-assigned MSI to use, if any.
@@ -22852,7 +23500,7 @@ container/
- Type: bool
- Default: false
-Limitations
+Limitations
MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.
rclone about
is not supported by the Microsoft Azure Blob storage backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
@@ -22863,7 +23511,7 @@ container/
Microsoft OneDrive
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -22950,7 +23598,7 @@ y/e/d> y
rclone copy /home/source remote:backup
Getting your own Client ID and Key
rclone uses a default Client ID when talking to OneDrive, unless a custom client_id
is specified in the config. The default Client ID and Key are shared by all rclone users when performing requests.
-You may choose to create and use your own Client ID, in case the default one does not work well for you. For example, you might see throtting.
+You may choose to create and use your own Client ID, in case the default one does not work well for you. For example, you might see throttling.
Creating Client ID for OneDrive Personal
To create your own Client ID, please follow these steps:
@@ -22968,7 +23616,7 @@ y/e/d> y
You may try to verify your account, or try to limit the App to your organization only, as shown below.
- Make sure to create the App with your business account.
-- Follow the steps above to create an App. However, we need a different account type here:
Accounts in this organizational directory only (*** - Single tenant)
. Note that you can also change the account type aftering creating the App.
+- Follow the steps above to create an App. However, we need a different account type here:
Accounts in this organizational directory only (*** - Single tenant)
. Note that you can also change the account type after creating the App.
- Find the tenant ID of your organization.
- In the rclone config, set
auth_url
to https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize
.
- In the rclone config, set
token_url
to https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token
.
@@ -23078,7 +23726,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Deleting files
Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.
-Standard options
+Standard options
Here are the Standard options specific to onedrive (Microsoft OneDrive).
--onedrive-client-id
OAuth Client Id.
@@ -23124,11 +23772,11 @@ y/e/d> y
- "cn"
-- Azure and Office 365 operated by 21Vianet in China
+- Azure and Office 365 operated by Vnet Group in China
-Advanced options
+Advanced options
Here are the Advanced options specific to onedrive (Microsoft OneDrive).
--onedrive-token
OAuth Access Token as a JSON blob.
@@ -23342,7 +23990,7 @@ y/e/d> y
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
-Limitations
+Limitations
If you don't use rclone for 90 days the refresh token will expire. This will result in authorization problems. This is easy to fix by running the rclone config reconnect remote:
command to get a new token and refresh token.
Naming
Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
@@ -23354,8 +24002,8 @@ y/e/d> y
Number of files
OneDrive seems to be OK with at least 50,000 files in a folder, but at 100,000 rclone will get errors listing the directory like couldn’t list files: UnknownError:
. See #2707 for more info.
An official document about the limitations for different types of OneDrive can be found here.
-Versions
-Every change in a file OneDrive causes the service to create a new version of the the file. This counts against a users quota. For example changing the modification time of a file creates a second version, so the file apparently uses twice the space.
+Versions
+Every change in a file on OneDrive causes the service to create a new version of the file. This counts against a user's quota. For example changing the modification time of a file creates a second version, so the file apparently uses twice the space.
For example the copy
command is affected by this as rclone copies the file and then afterwards sets the modification time to match the source file which uses another version.
You can use the rclone cleanup
command (see below) to remove all old versions.
Or you can set the no_versions
parameter to true
and rclone will remove versions after operations which create new versions. This takes extra transactions so only enable it if you need it.
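For example (remote name is illustrative):
rclone cleanup remote:path/to/dir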
@@ -23414,7 +24062,7 @@ Description: Due to a configuration change made by your administrator, or becaus
OpenDrive
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -23557,7 +24205,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to opendrive (OpenDrive).
--opendrive-username
Username.
@@ -23578,7 +24226,7 @@ y/e/d> y
- Type: string
- Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to opendrive (OpenDrive).
--opendrive-encoding
The encoding for the backend.
@@ -23600,11 +24248,388 @@ y/e/d> y
- Type: SizeSuffix
- Default: 10Mi
-Limitations
+Limitations
Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ?
in it, it will be mapped to ?
instead.
rclone about
is not supported by the OpenDrive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
+Oracle Object Storage
+Oracle Object Storage Overview
+Oracle Object Storage FAQ
+Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
+Configuration
+Here is an example of making an Oracle Object Storage configuration. rclone config
walks you through it.
+To make a remote called remote
. First run:
+ rclone config
+This will guide you through an interactive setup process:
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> n
+
+Enter name for new remote.
+name> remote
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Oracle Cloud Infrastructure Object Storage
+ \ (oracleobjectstorage)
+Storage> oracleobjectstorage
+
+Option provider.
+Choose your Auth Provider
+Choose a number from below, or type in your own string value.
+Press Enter for the default (env_auth).
+ 1 / automatically pickup the credentials from runtime(env), first one to provide auth wins
+ \ (env_auth)
+ / use an OCI user and an API key for authentication.
+ 2 | you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
+ | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
+ \ (user_principal_auth)
+ / use instance principals to authorize an instance to make API calls.
+ 3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
+ | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
+ \ (instance_principal_auth)
+ 4 / use resource principals to make API calls
+ \ (resource_principal_auth)
+ 5 / no credentials needed, this is typically for reading public buckets
+ \ (no_auth)
+provider> 2
+
+Option namespace.
+Object storage namespace
+Enter a value.
+namespace> idbamagbg734
+
+Option compartment.
+Object storage compartment OCID
+Enter a value.
+compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
+
+Option region.
+Object storage Region
+Enter a value.
+region> us-ashburn-1
+
+Option endpoint.
+Endpoint for Object storage API.
+Leave blank to use the default endpoint for the region.
+Enter a value. Press Enter to leave empty.
+endpoint>
+
+Option config_file.
+Path to OCI config file
+Choose a number from below, or type in your own string value.
+Press Enter for the default (~/.oci/config).
+ 1 / oci configuration file location
+ \ (~/.oci/config)
+config_file> /etc/oci/dev.conf
+
+Option config_profile.
+Profile name inside OCI config file
+Choose a number from below, or type in your own string value.
+Press Enter for the default (Default).
+ 1 / Use the default profile
+ \ (Default)
+config_profile> Test
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: oracleobjectstorage
+- namespace: idbamagbg734
+- compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
+- region: us-ashburn-1
+- provider: user_principal_auth
+- config_file: /etc/oci/dev.conf
+- config_profile: Test
+Keep this "remote" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+See all buckets
+rclone lsd remote:
+Create a new bucket
+rclone mkdir remote:bucket
+List the contents of a bucket
+rclone ls remote:bucket
+rclone ls remote:bucket --max-depth 1
+Modified time
+The modified time is stored as metadata on the object as opc-meta-mtime
as floating point since the epoch, accurate to 1 ns.
+If the modification time needs to be updated rclone will attempt to perform a server side copy to update the modification time if the object can be copied in a single part. If the object is larger than 5 GiB, the object will be uploaded rather than copied.
+Note that reading this from the object takes an additional HEAD
request as the metadata isn't returned in object listings.
+Multipart uploads
+rclone supports multipart uploads with OOS which means that it can upload files bigger than 5 GiB.
+Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.
+rclone switches from single part uploads to multipart uploads at the point specified by --oos-upload-cutoff
+. This can be a maximum of 5 GiB and a minimum of 0 (i.e. always upload multipart files).
+The chunk sizes used in the multipart upload are specified by --oos-chunk-size
and the number of chunks uploaded concurrently is specified by --oos-upload-concurrency
.
+Multipart uploads will use --transfers
* --oos-upload-concurrency
* --oos-chunk-size
extra memory. Single part uploads do not use extra memory.
+Single part transfers can be faster than multipart transfers or slower depending on your latency from oos - the more latency, the more likely single part transfers will be faster.
+Increasing --oos-upload-concurrency
will increase throughput (8 would be a sensible value) and increasing --oos-chunk-size
also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
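+For example, with the defaults of --transfers 4, --oos-upload-concurrency 10 and --oos-chunk-size 5Mi, multipart uploads can buffer up to 4 * 10 * 5 MiB = 200 MiB. A copy tuned as suggested above might look like this (bucket name is illustrative):
+rclone copy --oos-upload-concurrency 8 --oos-chunk-size 16M /path/to/file remote:bucket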
+Standard options
+Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
+--oos-provider
+Choose your Auth Provider
+Properties:
+
+- Config: provider
+- Env Var: RCLONE_OOS_PROVIDER
+- Type: string
+- Default: "env_auth"
+- Examples:
+
+- "env_auth"
+
+- automatically pickup the credentials from runtime(env), first one to provide auth wins
+
+- "user_principal_auth"
+
+- use an OCI user and an API key for authentication.
+- you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
+- https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
+
+- "instance_principal_auth"
+
+- use instance principals to authorize an instance to make API calls.
+- each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
+- https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
+
+- "resource_principal_auth"
+
+- use resource principals to make API calls
+
+- "no_auth"
+
+- no credentials needed, this is typically for reading public buckets
+
+
+
+--oos-namespace
+Object storage namespace
+Properties:
+
+- Config: namespace
+- Env Var: RCLONE_OOS_NAMESPACE
+- Type: string
+- Required: true
+
+--oos-compartment
+Object storage compartment OCID
+Properties:
+
+- Config: compartment
+- Env Var: RCLONE_OOS_COMPARTMENT
+- Provider: !no_auth
+- Type: string
+- Required: true
+
+--oos-region
+Object storage Region
+Properties:
+
+- Config: region
+- Env Var: RCLONE_OOS_REGION
+- Type: string
+- Required: true
+
+--oos-endpoint
+Endpoint for Object storage API.
+Leave blank to use the default endpoint for the region.
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_OOS_ENDPOINT
+- Type: string
+- Required: false
+
+--oos-config-file
+Path to OCI config file
+Properties:
+
+- Config: config_file
+- Env Var: RCLONE_OOS_CONFIG_FILE
+- Provider: user_principal_auth
+- Type: string
+- Default: "~/.oci/config"
+- Examples:
+
+- "~/.oci/config"
+
+- oci configuration file location
+
+
+
+--oos-config-profile
+Profile name inside the oci config file
+Properties:
+
+- Config: config_profile
+- Env Var: RCLONE_OOS_CONFIG_PROFILE
+- Provider: user_principal_auth
+- Type: string
+- Default: "Default"
+- Examples:
+
+- "Default"
+
+- Use the default profile
+
+
+
+Advanced options
+Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
+--oos-upload-cutoff
+Cutoff for switching to chunked upload.
+Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB.
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_OOS_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 200Mi
+
+--oos-chunk-size
+Chunk size to use for uploading.
+When uploading files larger than upload_cutoff or files with unknown size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google photos or google docs) they will be uploaded as multipart uploads using this chunk size.
+Note that "upload_concurrency" chunks of this size are buffered in memory per transfer.
+If you are transferring large files over high-speed links and you have enough memory, then increasing this will speed up the transfers.
+Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit.
+Files of unknown size are uploaded with the configured chunk_size. Since the default chunk size is 5 MiB and there can be at most 10,000 chunks, this means that by default the maximum size of a file you can stream upload is 48 GiB. If you wish to stream upload larger files then you will need to increase chunk_size.
+Increasing the chunk size decreases the accuracy of the progress statistics displayed with "-P" flag.
+Properties:
+
+- Config: chunk_size
+- Env Var: RCLONE_OOS_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5Mi
+
+--oos-upload-concurrency
+Concurrency for multipart uploads.
+This is the number of chunks of the same file that are uploaded concurrently.
+If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 10
+
+--oos-copy-cutoff
+Cutoff for switching to multipart copy.
+Any files larger than this that need to be server-side copied will be copied in chunks of this size.
+The minimum is 0 and the maximum is 5 GiB.
+Properties:
+
+- Config: copy_cutoff
+- Env Var: RCLONE_OOS_COPY_CUTOFF
+- Type: SizeSuffix
+- Default: 4.656Gi
+
+--oos-copy-timeout
+Timeout for copy.
+Copy is an asynchronous operation; specify a timeout to wait for the copy to succeed
+Properties:
+
+- Config: copy_timeout
+- Env Var: RCLONE_OOS_COPY_TIMEOUT
+- Type: Duration
+- Default: 1m0s
+
+--oos-disable-checksum
+Don't store MD5 checksum with object metadata.
+Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.
+Properties:
+
+- Config: disable_checksum
+- Env Var: RCLONE_OOS_DISABLE_CHECKSUM
+- Type: bool
+- Default: false
+
+--oos-encoding
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_OOS_ENCODING
+- Type: MultiEncoder
+- Default: Slash,InvalidUtf8,Dot
+
+--oos-leave-parts-on-error
+If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.
+It should be set to true for resuming uploads across different sessions.
+WARNING: Storing parts of an incomplete multipart upload counts towards space usage on object storage and will add additional costs if not cleaned up.
+Properties:
+
+- Config: leave_parts_on_error
+- Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR
+- Type: bool
+- Default: false
+
+--oos-no-check-bucket
+If set, don't attempt to check the bucket exists or create it.
+This can be useful when trying to minimise the number of transactions rclone does if you know the bucket exists already.
+It can also be needed if the user you are using does not have bucket creation permissions.
+Properties:
+
+- Config: no_check_bucket
+- Env Var: RCLONE_OOS_NO_CHECK_BUCKET
+- Type: bool
+- Default: false
+
+Backend commands
+Here are the commands specific to the oracleobjectstorage backend.
+Run them with
+rclone backend COMMAND remote:
+The help below will explain what arguments each command takes.
+See the backend command for more info on how to pass options and arguments.
+These can be run on a running backend using the rc command backend/command.
+rename
+change the name of an object
+rclone backend rename remote: [options] [<arguments>+]
+This command can be used to rename an object.
+Usage Examples:
+rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
+list-multipart-uploads
+List the unfinished multipart uploads
+rclone backend list-multipart-uploads remote: [options] [<arguments>+]
+This command lists the unfinished multipart uploads in JSON format.
+rclone backend list-multipart-uploads oos:bucket/path/to/object
+It returns a dictionary of buckets with values as lists of unfinished multipart uploads.
+You can call it with no bucket, in which case it lists all buckets, with a bucket, or with a bucket and path.
+{
+ "test-bucket": [
+ {
+ "namespace": "test-namespace",
+ "bucket": "test-bucket",
+ "object": "600m.bin",
+ "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
+ "timeCreated": "2022-07-29T06:21:16.595Z",
+ "storageTier": "Standard"
+ }
+    ]
+}
+cleanup
+Remove unfinished multipart uploads.
+rclone backend cleanup remote: [options] [<arguments>+]
+This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours.
+Note that you can use -i/--dry-run with this command to see what it would do.
+rclone backend cleanup oos:bucket/path/to/object
+rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
+Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
+Options:
+
+- "max-age": Max age of upload to delete
+
QingStor
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir
.
Configuration
@@ -23675,9 +24700,9 @@ y/e/d> y
rclone ls remote:bucket
Sync /home/local/directory
to the remote bucket, deleting any excess files in the bucket.
rclone sync -i /home/local/directory remote:bucket
---fast-list
+--fast-list
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
-Multipart uploads
+Multipart uploads
rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5 GiB. Note that files uploaded with multipart upload don't have an MD5SUM.
Note that incomplete multipart uploads older than 24 hours can be removed with rclone cleanup remote:bucket
for just one bucket, or rclone cleanup remote:
for all buckets. QingStor does not ever remove incomplete multipart uploads, so it may be necessary to run this from time to time.
Buckets and Zone
@@ -23838,7 +24863,7 @@ y/e/d> y
- Type: MultiEncoder
- Default: Slash,Ctl,InvalidUtf8
-Limitations
+Limitations
rclone about
is not supported by the qingstor backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
Sia
@@ -23953,7 +24978,7 @@ y/e/d> y
- Type: MultiEncoder
- Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
Paths are specified as remote:container
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:container/path/to/dir
.
@@ -24103,7 +25128,7 @@ tenant = $OS_TENANT_NAME
export RCLONE_CONFIG_MYREMOTE_TYPE=swift
export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
rclone lsd myremote:
---fast-list
+--fast-list
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
--update and --use-server-modtime
As noted below, the modified time is stored in metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.
@@ -24402,6 +25427,19 @@ rclone lsd myremote:
- Type: bool
- Default: false
+--swift-no-large-objects
+Disable support for static and dynamic large objects
+Swift cannot transparently store files bigger than 5 GiB. There are two schemes for doing that, static or dynamic large objects, and the API does not allow rclone to determine whether a file is a static or dynamic large object without doing a HEAD on the object. Since these need to be treated differently, this means rclone has to issue HEAD requests for objects, for example when reading checksums.
+When no_large_objects
is set, rclone will assume that there are no static or dynamic large objects stored. This means it can stop doing the extra HEAD calls which in turn increases performance greatly, especially when doing a swift to swift transfer with --checksum
set.
+Setting this option implies no_chunk
and also that no files will be uploaded in chunks, so files bigger than 5 GiB will just fail on upload.
+If you set this option and there are static or dynamic large objects, then this will give incorrect hashes for them. Downloads will succeed, but other operations such as Remove and Copy will fail.
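+As an illustration (the remote and container names here are assumptions), a swift to swift transfer that benefits from this option might look like:
+rclone sync --checksum --swift-no-large-objects src:container dst:container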
+Properties:
+
+- Config: no_large_objects
+- Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS
+- Type: bool
+- Default: false
+
--swift-encoding
The encoding for the backend.
See the encoding section in the overview for more info.
@@ -24412,7 +25450,7 @@ rclone lsd myremote:
- Type: MultiEncoder
- Default: Slash,InvalidUtf8
-Limitations
+Limitations
The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
Troubleshooting
Rclone gives Failed to create file system for "remote:": Bad Request
@@ -24732,7 +25770,7 @@ y/e/d>
- Type: MultiEncoder
- Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
premiumize.me file names can't have the \
or "
characters in. rclone maps these to and from identical looking unicode equivalents \
and "
premiumize.me only supports filenames up to 255 characters in length.
@@ -24833,7 +25871,7 @@ e/n/d/r/c/s/q> q
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
put.io has rate limiting. When you hit a limit, rclone automatically retries after waiting the amount of time requested by the server.
If you want to avoid ever hitting these limits, you may use the --tpslimit
flag with a low number. Note that the imposed limits may be different for different operations, and may change over time.
Seafile
@@ -24995,7 +26033,7 @@ y/e/d> y
rclone ls seafile:directory
Sync /home/local/directory
to the remote library, deleting any excess files in the library.
rclone sync -i /home/local/directory seafile:
---fast-list
+--fast-list
Seafile version 7+ supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Please note this is not supported on seafile server version 6.x
Restricted filename characters
In addition to the default restricted characters set the following characters are also replaced:
@@ -25143,7 +26181,7 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
- rsync.net
SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.
-Paths are specified as remote:path
. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
refers to the user's home directory. For example, rclone lsd remote:
would list the home directory of the user cofigured in the rclone remote config (i.e /home/sftpuser
). However, rclone lsd remote:/
would list the root directory for remote machine (i.e. /
)
+Paths are specified as remote:path
. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
refers to the user's home directory. For example, rclone lsd remote:
would list the home directory of the user configured in the rclone remote config (i.e /home/sftpuser
). However, rclone lsd remote:/
would list the root directory for remote machine (i.e. /
)
Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net and Hetzner, on the other hand, require users to OMIT the leading /.
Note that by default rclone will try to execute shell commands on the server, see shell access considerations.
Configuration
@@ -25269,12 +26307,12 @@ known_hosts_file = ~/.ssh/known_hosts
These commands can be used in scripts of course.
Shell access
Some functionality of the SFTP backend relies on remote shell access, and the possibility to execute commands. This includes checksum, and in some cases also about. The shell commands that must be executed may be different on different types of shells, and also quoting/escaping of file path arguments containing special characters may be different. Rclone therefore needs to know what type of shell it is, and if shell access is available at all.
-Most servers run on some version of Unix, and then a basic Unix shell can be assumed, without further distinction. Windows 10, Server 2019, and later can also run a SSH server, which is a port of OpenSSH (see official installation guide). On a Windows server the shell handling is different: Although it can also be set up to use a Unix type shell, e.g. Cygwin bash, the default is to use Windows Command Prompt (cmd.exe), and PowerShell is a recommended alternative. All of these have bahave differently, which rclone must handle.
+Most servers run on some version of Unix, and then a basic Unix shell can be assumed, without further distinction. Windows 10, Server 2019, and later can also run a SSH server, which is a port of OpenSSH (see official installation guide). On a Windows server the shell handling is different: Although it can also be set up to use a Unix type shell, e.g. Cygwin bash, the default is to use Windows Command Prompt (cmd.exe), and PowerShell is a recommended alternative. All of these behave differently, which rclone must handle.
Rclone tries to auto-detect what type of shell is used on the server, the first time you access the SFTP remote. If a remote shell session is successfully created, it will look for indications that it is CMD or PowerShell, falling back to Unix if nothing else is detected. If unable to even create a remote shell session, then shell command execution will be disabled entirely. The result is stored in the SFTP remote configuration, in option shell_type
, so that the auto-detection only has to be performed once. If you manually set a value for this option before first run, the auto-detection will be skipped, and if you set a different value later this will override any existing value. Value none
can be set to avoid any attempts at executing shell commands, e.g. if this is not allowed on the server.
When the server is rclone serve sftp, the rclone SFTP remote will detect this as a Unix type shell - even if it is running on Windows. This server does not actually have a shell, but it accepts input commands matching the specific ones that the SFTP backend relies on for Unix shells, e.g. md5sum
and df
. It also handles the string escape rules used for Unix shells. Treating it as a Unix type shell from a SFTP remote will therefore always be correct, and support all features.
Shell access considerations
-The shell type auto-detection logic, described above, means that by default rclone will try to run a shell command the first time a new sftp remote is accessed. If you configure a sftp remote without a config file, e.g. an on the fly remote, rclone will have nowhere to store the result, and it will re-run the command on every access. To avoid this you should explicitely set the shell_type
option to the correct value, or to none
if you want to prevent rclone from executing any remote shell commands.
-It is also important to note that, since the shell type decides how quoting and escaping of file paths used as command-line arguments are performed, configuring the wrong shell type may leave you exposed to command injection exploits. Make sure to confirm the auto-detected shell type, or explicitely set the shell type you know is correct, or disable shell access until you know.
+The shell type auto-detection logic, described above, means that by default rclone will try to run a shell command the first time a new sftp remote is accessed. If you configure a sftp remote without a config file, e.g. an on the fly remote, rclone will have nowhere to store the result, and it will re-run the command on every access. To avoid this you should explicitly set the shell_type
option to the correct value, or to none
if you want to prevent rclone from executing any remote shell commands.
+It is also important to note that, since the shell type decides how quoting and escaping of file paths used as command-line arguments are performed, configuring the wrong shell type may leave you exposed to command injection exploits. Make sure to confirm the auto-detected shell type, or explicitly set the shell type you know is correct, or disable shell access until you know.
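+For example, to pin the shell type on an already configured remote (the remote name mysftp is a placeholder), something like this should work:
+rclone config update mysftp shell_type none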
Checksum
SFTP does not natively support checksums (file hash), but rclone is able to use checksumming if the same login has shell access, and can execute remote commands. If there is a command that can calculate compatible checksums on the remote system, Rclone can then be configured to execute this whenever a checksum is needed, and read back the results. Currently MD5 and SHA-1 are supported.
Normally this requires an external utility being available on the server. By default rclone will try commands md5sum
, md5
and rclone md5sum
for MD5 checksums, and the first one found usable will be picked. Same with sha1sum
, sha1
and rclone sha1sum
commands for SHA-1 checksums. These utilities normally need to be in the remote's PATH to be found.
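For example, a remote could be pointed at specific utilities with the md5sum_command and sha1sum_command options (a sketch; the remote name and host are placeholders):
[mysftp]
type = sftp
host = example.com
md5sum_command = md5sum
sha1sum_command = sha1sum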
@@ -25601,10 +26639,8 @@ known_hosts_file = ~/.ssh/known_hosts
--sftp-chunk-size
Upload and download chunk size.
-This controls the maximum packet size used in the SFTP protocol. The RFC limits this to 32768 bytes (32k), however a lot of servers support larger sizes and setting it larger will increase transfer speed dramatically on high latency links.
-Only use a setting higher than 32k if you always connect to the same server or after sufficiently broad testing.
-For example using the value of 252k with OpenSSH works well with its maximum packet size of 256k.
-If you get the error "failed to send packet header: EOF" when copying a large file, try lowering this number.
+This controls the maximum size of payload in SFTP protocol packets. The RFC limits this to 32768 bytes (32k), which is the default. However, a lot of servers support larger sizes, typically limited to a maximum total packet size of 256k, and setting it larger will increase transfer speed dramatically on high latency links. This includes OpenSSH, and, for example, using the value of 255k works well, leaving plenty of room for overhead while still being within a total packet size of 256k.
+Make sure to test thoroughly before using a value higher than 32k, and only use it if you always connect to the same server or after sufficiently broad testing. If you get errors such as "failed to send packet payload: EOF", lots of "connection lost", or "corrupted on transfer", when copying a larger file, try lowering the value. The server run by rclone serve sftp sends packets with standard 32k maximum payload so you must not set a different chunk_size when downloading files, but it accepts packets up to the 256k total size, so for uploads the chunk_size can be set as for the OpenSSH example above.
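+For example, when always talking to the same OpenSSH server over a high latency link, a hedged invocation might be (the remote name is a placeholder):
+rclone copy --sftp-chunk-size 255k /local/dir mysftp:dir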
Properties:
- Config: chunk_size
@@ -25638,7 +26674,7 @@ known_hosts_file = ~/.ssh/known_hosts
- Type: SpaceSepList
- Default:
-Limitations
+Limitations
On some SFTP servers (e.g. Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using disable_hashcheck
is a good idea.
The only ssh agent supported under Windows is Putty's pageant.
The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher
setting in the configuration file to true
. Further details on the insecurity of this cipher can be found in this paper.
@@ -25651,6 +26687,180 @@ known_hosts_file = ~/.ssh/known_hosts
Hetzner Storage Box
Hetzner Storage Boxes are supported through the SFTP backend on port 23.
See Hetzner's documentation for details
+SMB
+SMB is a communication protocol to share files over a network.
+This backend relies on the go-smb2 library to communicate using the SMB protocol.
+Paths are specified as remote:sharename
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:sharename/path/to/dir
.
+Notes
+The first path segment must be the name of the share, which you entered when you set up the share on Windows. On smbd, it's the section title in the smb.conf
file (usually in /etc/samba/
). You can find shares by querying the root if you're unsure (e.g. rclone lsd remote:
).
+You can't access shared printers from rclone, obviously.
+You can't use Anonymous access for logging in. You have to use the guest
user with an empty password instead. The rclone client tries to avoid 8.3 names when uploading files by encoding trailing spaces and periods. Alternatively, the local backend on Windows can access SMB servers using UNC paths, via \\server\share
. This doesn't apply to non-Windows OSes, such as Linux and macOS.
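+For example, to discover the available shares and then list a directory within one (the share name is a placeholder):
+rclone lsd remote:
+rclone ls remote:share/path/to/dir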
+Configuration
+Here is an example of making a SMB configuration.
+First run
+rclone config
+This will guide you through an interactive setup process.
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+XX / SMB / CIFS
+ \ (smb)
+Storage> smb
+
+Option host.
+Samba hostname to connect to.
+E.g. "example.com".
+Enter a value.
+host> localhost
+
+Option user.
+Samba username.
+Enter a string value. Press Enter for the default (lesmi).
+user> guest
+
+Option port.
+Samba port number.
+Enter a signed integer. Press Enter for the default (445).
+port>
+
+Option pass.
+Samba password.
+Choose an alternative below. Press Enter for the default (n).
+y) Yes, type in my own password
+g) Generate random password
+n) No, leave this optional password blank (default)
+y/g/n> g
+Password strength in bits.
+64 is just about memorable
+128 is secure
+1024 is the maximum
+Bits> 64
+Your password is: XXXX
+Use this password? Please note that an obscured version of this
+password (and not the password itself) will be stored under your
+configuration file, so keep this generated password in a safe place.
+y) Yes (default)
+n) No
+y/n> y
+
+Option domain.
+Domain name for NTLM authentication.
+Enter a string value. Press Enter for the default (WORKGROUP).
+domain>
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: smb
+- host: localhost
+- user: guest
+- pass: *** ENCRYPTED ***
+Keep this "remote" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Standard options
+Here are the Standard options specific to smb (SMB / CIFS).
+--smb-host
+SMB server hostname to connect to.
+E.g. "example.com".
+Properties:
+
+- Config: host
+- Env Var: RCLONE_SMB_HOST
+- Type: string
+- Required: true
+
+--smb-user
+SMB username.
+Properties:
+
+- Config: user
+- Env Var: RCLONE_SMB_USER
+- Type: string
+- Default: "$USER"
+
+--smb-port
+SMB port number.
+Properties:
+
+- Config: port
+- Env Var: RCLONE_SMB_PORT
+- Type: int
+- Default: 445
+
+--smb-pass
+SMB password.
+NB Input to this must be obscured - see rclone obscure.
+Properties:
+
+- Config: pass
+- Env Var: RCLONE_SMB_PASS
+- Type: string
+- Required: false
+
+--smb-domain
+Domain name for NTLM authentication.
+Properties:
+
+- Config: domain
+- Env Var: RCLONE_SMB_DOMAIN
+- Type: string
+- Default: "WORKGROUP"
+
+Advanced options
+Here are the Advanced options specific to smb (SMB / CIFS).
+--smb-idle-timeout
+Max time before closing idle connections.
+If no connections have been returned to the connection pool in the time given, rclone will empty the connection pool.
+Set to 0 to keep connections indefinitely.
+Properties:
+
+- Config: idle_timeout
+- Env Var: RCLONE_SMB_IDLE_TIMEOUT
+- Type: Duration
+- Default: 1m0s
+
+--smb-hide-special-share
+Hide special shares (e.g. print$) which users aren't supposed to access.
+Properties:
+
+- Config: hide_special_share
+- Env Var: RCLONE_SMB_HIDE_SPECIAL_SHARE
+- Type: bool
+- Default: true
+
+--smb-case-insensitive
+Whether the server is configured to be case-insensitive.
+Always true on Windows shares.
+Properties:
+
+- Config: case_insensitive
+- Env Var: RCLONE_SMB_CASE_INSENSITIVE
+- Type: bool
+- Default: true
+
+--smb-encoding
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_SMB_ENCODING
+- Type: MultiEncoder
+- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot
+
Storj
Storj is an encrypted, secure, and cost-effective object storage service that enables you to store, back up, and archive large amounts of data in a decentralized manner.
Backend options
@@ -25710,7 +26920,7 @@ known_hosts_file = ~/.ssh/known_hosts
- S3 backend: secret encryption key is shared with the gateway
-Configuration
+Configuration
To make a new Storj configuration you need one of the following:
- Access Grant that someone else shared with you.
- API Key of a Storj project you are a member of.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -25807,7 +27017,7 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
-Standard options
+Standard options
Here are the Standard options specific to storj (Storj Decentralized Cloud Storage).
--storj-provider
Choose an authentication method.
@@ -25940,7 +27150,7 @@ y/e/d> y
rclone sync -i --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
Or even between another cloud storage and Storj.
rclone sync -i --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
-Limitations
+Limitations
rclone about
is not supported by the rclone Storj backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
Known issues
@@ -25948,7 +27158,7 @@ y/e/d> y
To fix these, please raise your system limits. You can do this by issuing a ulimit -n 65536
just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. $HOME/.bashrc
, or change the system-wide configuration, usually /etc/sysctl.conf
and/or /etc/security/limits.conf
, but please refer to your operating system manual.
SugarSync
SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.
-Configuration
+Configuration
The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config
walks you through it.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -26021,7 +27231,7 @@ y/e/d> y
Deleting files
Deleted files will be moved to the "Deleted items" folder by default.
However you can supply the flag --sugarsync-hard-delete
or set the config parameter hard_delete = true
if you would like files to be deleted straight away.
-Standard options
+Standard options
Here are the Standard options specific to sugarsync (Sugarsync).
--sugarsync-app-id
Sugarsync App ID.
@@ -26062,7 +27272,7 @@ y/e/d> y
- Type: bool
- Default: false
-Advanced options
+Advanced options
Here are the Advanced options specific to sugarsync (Sugarsync).
--sugarsync-refresh-token
Sugarsync refresh token.
@@ -26134,7 +27344,7 @@ y/e/d> y
- Type: MultiEncoder
- Default: Slash,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
rclone about
is not supported by the SugarSync backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
Tardigrade
@@ -26143,7 +27353,7 @@ y/e/d> y
This is a backend for the Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional cloud storage provider and therefore not suitable for long term storage.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
To configure an Uptobox backend you'll need your personal api token. You'll find it in your account settings
Here is an example of how to make a remote called remote
with the default setup. First run:
rclone config
@@ -26223,7 +27433,7 @@ y/e/d>
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
-Standard options
+Standard options
Here are the Standard options specific to uptobox (Uptobox).
--uptobox-access-token
Your access token.
@@ -26235,7 +27445,7 @@ y/e/d>
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to uptobox (Uptobox).
--uptobox-encoding
The encoding for the backend.
@@ -26247,7 +27457,7 @@ y/e/d>
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
-Limitations
+Limitations
Uptobox will delete inactive files that have not been accessed in 60 days.
rclone about
is not supported by this backend; an overview of used space can, however, be seen in the Uptobox web interface.
Union
@@ -26257,7 +27467,7 @@ y/e/d>
Attributes :ro
and :nc
can be attached to the end of the path to tag the remote as read only or no create, e.g. remote:directory/subdirectory:ro
or remote:directory/subdirectory:nc
.
Subfolders can be used in upstream remotes. Assume a union remote named backup
with the remotes mydrive:private/backup
. Invoking rclone mkdir backup:desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop
.
There will be no special handling of paths containing ..
segments. Invoking rclone mkdir backup:../desktop
is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop
.
-Configuration
+Configuration
Here is an example of how to make a union called remote
for local folders. First run:
rclone config
This will guide you through an interactive setup process:
@@ -26478,7 +27688,7 @@ e/n/d/r/c/s/q> q
-Standard options
+Standard options
Here are the Standard options specific to union (Union merges the contents of several upstream fs).
--union-upstreams
List of space separated upstreams.
@@ -26527,7 +27737,7 @@ e/n/d/r/c/s/q> q
- Type: int
- Default: 120
-Advanced options
+Advanced options
Here are the Advanced options specific to union (Union merges the contents of several upstream fs).
--union-min-free-space
Minimum viable free space for lfs/eplfs policies.
@@ -26545,7 +27755,7 @@ e/n/d/r/c/s/q> q
WebDAV
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Configuration
+Configuration
To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -26618,7 +27828,7 @@ y/e/d> y
Modified time and hashes
Plain WebDAV does not support modified times. However when used with Owncloud or Nextcloud rclone will support modified times.
Likewise plain WebDAV does not support hashes, however when used with Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.
-Standard options
+Standard options
Here are the Standard options specific to webdav (WebDAV).
--webdav-url
URL of http host to connect to.
@@ -26691,7 +27901,7 @@ y/e/d> y
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to webdav (WebDAV).
--webdav-bearer-token-command
Command to run to get a bearer token.
@@ -26798,7 +28008,7 @@ vendor = other
bearer_token_command = oidc-token XDC
Yandex Disk
Yandex Disk is a cloud storage solution created by Yandex.
-Configuration
+Configuration
Here is an example of making a yandex configuration. First run
rclone config
This will guide you through an interactive setup process:
@@ -26862,7 +28072,7 @@ y/e/d> y
Restricted filename characters
The default restricted characters set are replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to yandex (Yandex Disk).
--yandex-client-id
OAuth Client Id.
@@ -26884,7 +28094,7 @@ y/e/d> y
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to yandex (Yandex Disk).
--yandex-token
OAuth Access Token as a JSON blob.
@@ -26934,13 +28144,13 @@ y/e/d> y
- Type: MultiEncoder
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout
parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers
errors in the logs if this is happening. Setting the timeout to twice the maximum file size in GiB should be enough, so if you want to upload a 30 GiB file set a timeout of 2 * 30 = 60m
, that is --timeout 60m
.
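Put together (the file and remote paths are placeholders):
rclone copy --timeout 60m /path/to/30GiB-file remote:backup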
Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription. Token generation will work without a mail account, but Rclone won't be able to complete any actions.
[403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.
Zoho Workdrive
Zoho WorkDrive is a cloud storage solution created by Zoho.
-Configuration
+Configuration
Here is an example of making a zoho configuration. First run
rclone config
This will guide you through an interactive setup process:
@@ -27020,7 +28230,7 @@ y/e/d>
To view your current quota you can use the rclone about remote:
command which will display your current usage.
Restricted filename characters
Only control characters and invalid UTF-8 are replaced. In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload.
-Standard options
+Standard options
Here are the Standard options specific to zoho (Zoho).
--zoho-client-id
OAuth Client Id.
@@ -27079,7 +28289,7 @@ y/e/d>
-Advanced options
+Advanced options
Here are the Advanced options specific to zoho (Zoho).
--zoho-token
OAuth Access Token as a JSON blob.
@@ -27132,7 +28342,7 @@ y/e/d>
Local paths are specified as normal filesystem paths, e.g. /path/to/wherever
, so
rclone sync -i /home/source /tmp/destination
Will sync /home/source
to /tmp/destination
.
-Configuration
+Configuration
For consistency's sake one can also configure a remote of type local
in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever
, but it is probably easier not to.
Modified time
Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
@@ -27503,7 +28713,7 @@ $ tree /tmp/b
0 file2
NB Rclone (like most unix tools such as du
, rsync
and tar
) treats a bind mount to the same device as being on the same filesystem.
NB This flag is only available on Unix based systems. On systems where it isn't supported (e.g. Windows) it will be ignored.
-Advanced options
+Advanced options
Here are the Advanced options specific to local (Local Disk).
--local-nounc
Disable UNC (long path names) conversion on Windows.
@@ -27740,7 +28950,7 @@ $ tree /tmp/b
See the metadata docs for more info.
-Backend commands
+Backend commands
Here are the commands specific to the local backend.
Run them with
rclone backend COMMAND remote:
@@ -27757,6 +28967,201 @@ $ tree /tmp/b
- "error": return an error based on option value
Changelog
+v1.60.0 - 2022-10-21
+See commits
+
+- New backends
+
+- New Features
+
+- build
+
+- Update to go1.19 and make go1.17 the minimum required version (Nick Craig-Wood)
+- Install.sh: fix arm-v7 download (Ole Frost)
+
+- fs: Warn the user when using an existing remote name without a colon (Nick Craig-Wood)
+- httplib: Add
--xxx-min-tls-version
option to select minimum TLS version for HTTP servers (Robert Newson)
+- librclone: Add PHP bindings and test program (Jordi Gonzalez Muñoz)
+- operations
+
+- Add
--server-side-across-configs
global flag for any backend (Nick Craig-Wood)
+- Optimise
--copy-dest
and --compare-dest
(Nick Craig-Wood)
+
+- rc: add
job/stopgroup
to stop group (Evan Spensley)
+- serve dlna
+
+- Add
--announce-interval
to control SSDP Announce Interval (YanceyChiew)
+- Add
--interface
to specify SSDP interface names (Simon Bos)
+- Add support for more external subtitles (YanceyChiew)
+- Add verification of addresses (YanceyChiew)
+
+- sync: Optimise
--copy-dest
and --compare-dest
(Nick Craig-Wood)
+- doc updates (albertony, Alexander Knorr, anonion, João Henrique Franco, Josh Soref, Lorenzo Milesi, Marco Molteni, Mark Trolley, Ole Frost, partev, Ryan Morey, Tom Mombourquette, YFdyh000)
+
+- Bug Fixes
+
+- filter
+
+- Fix incorrect filtering with
UseFilter
context flag and wrapping backends (Nick Craig-Wood)
+- Make sure we check
--files-from
when looking for a single file (Nick Craig-Wood)
+
+- rc
+
+- Fix
mount/listmounts
not returning the full Fs entered in mount/mount
(Tom Mombourquette)
+- Handle external unmount when mounting (Isaac Aymerich)
+- Validate Daemon option is not set when mounting a volume via RC (Isaac Aymerich)
+
+- sync: Update docs and error messages to reflect fixes to overlap checks (Nick Naumann)
+
+- VFS
+
+- Reduce memory use by embedding
sync.Cond
(Nick Craig-Wood)
+- Reduce memory usage by re-ordering commonly used structures (Nick Craig-Wood)
+- Fix excess CPU used by VFS cache cleaner looping (Nick Craig-Wood)
+
+- Local
+
+- Obey file filters in listing to fix errors on excluded files (Nick Craig-Wood)
+- Fix "Failed to read metadata: function not implemented" on old Linux kernels (Nick Craig-Wood)
+
+- Compress
+
+- Fix crash due to nil metadata (Nick Craig-Wood)
+- Fix error handling to not use or return nil objects (Nick Craig-Wood)
+
+- Drive
+
+- Make
--drive-stop-on-upload-limit
obey quota exceeded error (Steve Kowalik)
+
+- FTP
+
+- Add
--ftp-force-list-hidden
option to show hidden items (Øyvind Heddeland Instefjord)
+- Fix hang when using ExplicitTLS to certain servers. (Nick Craig-Wood)
+
+- Google Cloud Storage
+
+- Add
--gcs-endpoint
flag and config parameter (Nick Craig-Wood)
+
+- Hubic
+
+- Remove backend as service has now shut down (Nick Craig-Wood)
+
+- Onedrive
+
+- Rename Onedrive(cn) 21Vianet to Vnet Group (Yen Hu)
+- Disable change notify in China region since it is not supported (Nick Craig-Wood)
+
+- S3
+
+- Implement
--s3-versions
flag to show old versions of objects if enabled (Nick Craig-Wood)
+- Implement
--s3-version-at
flag to show versions of objects at a particular time (Nick Craig-Wood)
+- Implement
backend versioning
command to get/set bucket versioning (Nick Craig-Wood)
+- Implement
Purge
to purge versions and backend cleanup-hidden
(Nick Craig-Wood)
+- Add
--s3-decompress
flag to decompress gzip-encoded files (Nick Craig-Wood)
+- Add
--s3-sse-customer-key-base64
to supply keys with binary data (Richard Bateman)
+- Try to keep the maximum precision in ModTime with
--use-server-modtime
(Nick Craig-Wood)
+- Drop binary metadata with an ERROR message as it can't be stored (Nick Craig-Wood)
+- Add
--s3-no-system-metadata
to suppress read and write of system metadata (Nick Craig-Wood)
+
+- SFTP
+
+- Fix directory creation races (Lesmiscore)
+
+- Swift
+
+- Add
--swift-no-large-objects
to reduce HEAD requests (Nick Craig-Wood)
+
+- Union
+
+- Propagate SlowHash feature to fix hasher interaction (Lesmiscore)
+
+
+v1.59.2 - 2022-09-15
+See commits
+
+- Bug Fixes
+
+- config: Move locking to fix fatal error: concurrent map read and map write (Nick Craig-Wood)
+
+- Local
+
+- Disable xattr support if the filesystems indicates it is not supported (Nick Craig-Wood)
+
+- Azure Blob
+
+- Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+
+- B2
+
+- Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+
+- S3
+
+- Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+
+
+v1.59.1 - 2022-08-08
+See commits
+
+- Bug Fixes
+
+- accounting: Fix panic in core/stats-reset with unknown group (Nick Craig-Wood)
+- build: Fix android build after GitHub actions change (Nick Craig-Wood)
+- dlna: Fix SOAP action header parsing (Joram Schrijver)
+- docs: Fix links to mount command from install docs (albertony)
+- dropbox: Fix ChangeNotify was unable to decrypt errors (Nick Craig-Wood)
+- fs: Fix parsing of times and durations of the form "YYYY-MM-DD HH:MM:SS" (Nick Craig-Wood)
+- serve sftp: Fix checksum detection (Nick Craig-Wood)
+- sync: Add accidentally missed filter-sensitivity to --backup-dir option (Nick Naumann)
+
+- Combine
+
+- Fix docs showing
remote=
instead of upstreams=
(Nick Craig-Wood)
+- Throw error if duplicate directory name is specified (Nick Craig-Wood)
+- Fix errors with backends shutting down while in use (Nick Craig-Wood)
+
+- Dropbox
+
+- Fix hang on quit with --dropbox-batch-mode off (Nick Craig-Wood)
+- Fix infinite loop on uploading a corrupted file (Nick Craig-Wood)
+
+- Internetarchive
+
+- Ignore checksums for files using the different method (Lesmiscore)
+- Handle hash symbol in the middle of filename (Lesmiscore)
+
+- Jottacloud
+
+- Fix working with whitelabel Elgiganten Cloud
+- Do not store username in config when using standard auth (albertony)
+
+- Mega
+
+- Fix nil pointer exception when bad node received (Nick Craig-Wood)
+
+- S3
+
+- Fix --s3-no-head panic: reflect: Elem of invalid type s3.PutObjectInput (Nick Craig-Wood)
+
+- SFTP
+
+- Fix issue with WS_FTP by working around failing RealPath (albertony)
+
+- Union
+
+- Fix duplicated files when using directories with leading / (Nick Craig-Wood)
+- Fix multiple files being uploaded when roots don't exist (Nick Craig-Wood)
+- Fix panic due to misalignment of struct field in 32 bit architectures (r-ricci)
+
+
v1.59.0 - 2022-07-09
See commits
@@ -28109,7 +29514,7 @@ $ tree /tmp/b
- Fix ARM architecture version in .deb packages after nfpm change (Nick Craig-Wood)
- Hard fork
github.com/jlaffaye/ftp
to fix go get github.com/rclone/rclone
(Nick Craig-Wood)
-- oauthutil: Fix crash when webrowser requests
/robots.txt
(Nick Craig-Wood)
+- oauthutil: Fix crash when webbrowser requests
/robots.txt
(Nick Craig-Wood)
- operations: Fix goroutine leak in case of copy retry (Ankur Gupta)
- rc:
@@ -28253,7 +29658,7 @@ $ tree /tmp/b
- Add rclone to list of supported
md5sum
/sha1sum
commands to look for (albertony)
- Refactor so we only have one way of running remote commands (Nick Craig-Wood)
- Fix timeout on hashing large files by sending keepalives (Nick Craig-Wood)
-- Fix unecessary seeking when uploading and downloading files (Nick Craig-Wood)
+- Fix unnecessary seeking when uploading and downloading files (Nick Craig-Wood)
- Update docs on how to create
known_hosts
file (Nick Craig-Wood)
- Storj
@@ -29147,9 +30552,9 @@ $ tree /tmp/b
- Add sort by average size in directory (Adam Plánský)
- Add toggle option for average s3ize in directory - key 'a' (Adam Plánský)
- Add empty folder flag into ncdu browser (Adam Plánský)
-- Add
!
(errror) and .
(unreadable) file flags to go with e
(empty) (Nick Craig-Wood)
+- Add
!
(error) and .
(unreadable) file flags to go with e
(empty) (Nick Craig-Wood)
-- obscure: Make
rclone osbcure -
ignore newline at end of line (Nick Craig-Wood)
+- obscure: Make
rclone obscure -
ignore newline at end of line (Nick Craig-Wood)
- operations
- Add logs when need to upload files to set mod times (Nick Craig-Wood)
@@ -29187,7 +30592,7 @@ $ tree /tmp/b
- move: Fix data loss when source and destination are the same object (Nick Craig-Wood)
- operations
-- Fix
--cutof-mode
hard not cutting off immediately (Nick Craig-Wood)
+- Fix
--cutoff-mode
hard not cutting off immediately (Nick Craig-Wood)
- Fix
--immutable
error message (Nick Craig-Wood)
- sync
@@ -29258,7 +30663,7 @@ $ tree /tmp/b
- Box
- Fix NewObject for files that differ in case (Nick Craig-Wood)
-- Fix finding directories in a case insentive way (Nick Craig-Wood)
+- Fix finding directories in a case insensitive way (Nick Craig-Wood)
- Chunker
@@ -29377,7 +30782,7 @@ $ tree /tmp/b
- Sugarsync
- Fix NewObject for files that differ in case (Nick Craig-Wood)
-- Fix finding directories in a case insentive way (Nick Craig-Wood)
+- Fix finding directories in a case insensitive way (Nick Craig-Wood)
- Swift
@@ -29488,7 +30893,7 @@ $ tree /tmp/b
- Bug Fixes
Forum
diff --git a/MANUAL.md b/MANUAL.md
index 6cd4e9604..c7030525b 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
-% Jul 09, 2022
+% Oct 21, 2022
# Rclone syncs your files to cloud storage
@@ -112,7 +112,6 @@ WebDAV or S3, that work out of the box.)
- China Mobile Ecloud Elastic Object Storage (EOS)
- Arvan Cloud Object Storage (AOS)
- Citrix ShareFile
-- C14
- Cloudflare R2
- DigitalOcean Spaces
- Digi Storage
@@ -127,11 +126,11 @@ WebDAV or S3, that work out of the box.)
- Hetzner Storage Box
- HiDrive
- HTTP
-- Hubic
- Internet Archive
- Jottacloud
- IBM COS S3
- IDrive e2
+- IONOS Cloud
- Koofr
- Mail.ru Cloud
- Memset Memstore
@@ -144,12 +143,14 @@ WebDAV or S3, that work out of the box.)
- OVH
- OpenDrive
- OpenStack Swift
-- Oracle Cloud Storage
+- Oracle Cloud Storage Swift
+- Oracle Object Storage
- ownCloud
- pCloud
- premiumize.me
- put.io
- QingStor
+- Qiniu Cloud Object Storage (Kodo)
- Rackspace Cloud Files
- rsync.net
- Scaleway
@@ -158,6 +159,7 @@ WebDAV or S3, that work out of the box.)
- SeaweedFS
- SFTP
- Sia
+- SMB / CIFS
- StackPath
- Storj
- SugarSync
@@ -202,7 +204,7 @@ Rclone is a Go program and comes as a single binary file.
* Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/) for more details.
* Optionally configure [automatic execution](#autostart).
-See below for some expanded Linux / macOS instructions.
+See below for some expanded Linux / macOS / Windows instructions.
See the [usage](https://rclone.org/docs/) docs for how to use rclone, or
run `rclone -h`.
@@ -223,7 +225,9 @@ For beta installation, run:
Note that this script checks the version of rclone installed first and
won't re-download if not needed.
-## Linux installation from precompiled binary
+## Linux installation {#linux}
+
+### Precompiled binary {#linux-precompiled}
Fetch and unpack
@@ -247,7 +251,9 @@ Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/)
rclone config
-## macOS installation with brew
+## macOS installation {#macos}
+
+### Installation with brew {#macos-brew}
brew install rclone
@@ -256,7 +262,12 @@ NOTE: This version of rclone will not support `mount` any more (see
on macOS, either install a precompiled binary or enable the relevant option
when [installing from source](#install-from-source).
-## macOS installation from precompiled binary, using curl
+Note that this is a third party installer not controlled by the rclone
+developers so it may be out of date. Its current version is as below.
+
+[![Homebrew package](https://repology.org/badge/version-for-repo/homebrew/rclone.svg)](https://repology.org/project/rclone/versions)
+
+### Precompiled binary, using curl {#macos-precompiled}
To avoid problems with macOS gatekeeper enforcing the binary to be signed and
notarized it is enough to download with `curl`.
@@ -284,7 +295,7 @@ Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/)
rclone config
-## macOS installation from precompiled binary, using a web browser
+### Precompiled binary, using a web browser {#macos-precompiled-web}
When downloading a binary with a web browser, the browser will set the macOS
gatekeeper quarantine attribute. Starting from Catalina, when attempting to run
@@ -297,11 +308,73 @@ The simplest fix is to run
xattr -d com.apple.quarantine rclone
-## Install with docker
+## Windows installation {#windows}
-The rclone maintains a [docker image for rclone](https://hub.docker.com/r/rclone/rclone).
-These images are autobuilt by docker hub from the rclone source based
-on a minimal Alpine linux image.
+### Precompiled binary {#windows-precompiled}
+
+Fetch the correct binary for your processor type by clicking on these
+links. If not sure, use the first link.
+
+- [Intel/AMD - 64 Bit](https://downloads.rclone.org/rclone-current-windows-amd64.zip)
+- [Intel/AMD - 32 Bit](https://downloads.rclone.org/rclone-current-windows-386.zip)
+- [ARM - 64 Bit](https://downloads.rclone.org/rclone-current-windows-arm64.zip)
+
+Open this file in the Explorer and extract `rclone.exe`. Rclone is a
+portable executable so you can place it wherever is convenient.
+
+Open a CMD window (or powershell) and run the binary. Note that rclone
+does not launch a GUI by default, it runs in the CMD Window.
+
+- Run `rclone.exe config` to setup. See [rclone config docs](https://rclone.org/docs/) for more details.
+- Optionally configure [automatic execution](#autostart).
+
+If you are planning to use the [rclone mount](https://rclone.org/commands/rclone_mount/)
+feature then you will need to install the third party utility
+[WinFsp](https://winfsp.dev/) also.
+
+### Chocolatey package manager {#windows-chocolatey}
+
+Make sure you have [Choco](https://chocolatey.org/) installed
+
+```
+choco search rclone
+choco install rclone
+```
+
+This will install rclone on your Windows machine. If you are planning
+to use [rclone mount](https://rclone.org/commands/rclone_mount/) then
+
+```
+choco install winfsp
+```
+
+will install that too.
+
+Note that this is a third party installer not controlled by the rclone
+developers so it may be out of date. Its current version is as below.
+
+[![Chocolatey package](https://repology.org/badge/version-for-repo/chocolatey/rclone.svg)](https://repology.org/project/rclone/versions)
+
+## Package manager installation {#package-manager}
+
+Many Linux, Windows, macOS and other OS distributions package and
+distribute rclone.
+
+The distributed versions of rclone are often quite out of date and for
+this reason we recommend one of the other installation methods if
+possible.
+
+You can get an idea of how up to date or not your OS distribution's
+package is here.
+
+[![Packaging status](https://repology.org/badge/vertical-allrepos/rclone.svg?columns=3)](https://repology.org/project/rclone/versions)
+
+## Docker installation {#docker}
+
+The rclone developers maintain a [docker image for rclone](https://hub.docker.com/r/rclone/rclone).
+
+These images are built as part of the release process based on a
+minimal Alpine Linux.
The `:latest` tag will always point to the latest stable release. You
can use the `:beta` tag to get the latest build from master. You can
@@ -376,10 +449,10 @@ ls ~/data/mount
kill %1
```
-## Install from source
+## Source installation {#source}
Make sure you have git and [Go](https://golang.org/) installed.
-Go version 1.16 or newer is required, latest release is recommended.
+Go version 1.17 or newer is required, latest release is recommended.
You can get it from your package manager, or download it from
[golang.org/dl](https://golang.org/dl/). Then you can run the following:
@@ -395,7 +468,7 @@ in the same folder. As an initial check you can now run `./rclone version`
(`.\rclone version` on Windows).
Note that on macOS and Windows the [mount](https://rclone.org/commands/rclone_mount/)
-command will not be available unless you specify additional build tag `cmount`.
+command will not be available unless you specify an additional build tag `cmount`.
```
go build -tags cmount
@@ -414,7 +487,7 @@ distribution (make sure you install it in the classic mingw64 subsystem, the
ucrt64 version is not compatible).
Additionally, on Windows, you must install the third party utility
-[WinFsp](http://www.secfs.net/winfsp/), with the "Developer" feature selected.
+[WinFsp](https://winfsp.dev/), with the "Developer" feature selected.
If building with cgo, you must also set environment variable CPATH pointing to
the fuse include directory within the WinFsp installation
(normally `C:\Program Files (x86)\WinFsp\inc\fuse`).
@@ -429,9 +502,10 @@ go build -trimpath -ldflags -s -tags cmount
```
Instead of executing the `go build` command directly, you can run it via the
-Makefile, which also sets version information and copies the resulting rclone
-executable into your GOPATH bin folder (`$(go env GOPATH)/bin`, which
-corresponds to `~/go/bin/rclone` by default).
+Makefile. It changes the version number suffix from "-DEV" to "-beta" and
+appends commit details. It also copies the resulting rclone executable into
+your GOPATH bin folder (`$(go env GOPATH)/bin`, which corresponds to
+`~/go/bin/rclone` by default).
```
make
@@ -443,7 +517,15 @@ To include mount command on macOS and Windows with Makefile build:
make GOTAGS=cmount
```
-As an alternative you can download the source, build and install rclone in one
+There are other make targets that can be used for more advanced builds,
+such as cross-compiling for all supported os/architectures, embedding
+icon and version info resources into windows executable, and packaging
+results into release artifacts.
+See [Makefile](https://github.com/rclone/rclone/blob/master/Makefile)
+and [cross-compile.go](https://github.com/rclone/rclone/blob/master/bin/cross-compile.go)
+for details.
+
+Another alternative is to download the source, build and install rclone in one
+operation, as a regular Go package. The source will be stored in the Go
module cache, and the resulting executable will be in your GOPATH bin folder
(`$(go env GOPATH)/bin`, which corresponds to `~/go/bin/rclone` by default).
@@ -462,7 +544,7 @@ with the current version):
go get github.com/rclone/rclone
```
-## Installation with Ansible
+## Ansible installation {#ansible}
This can be done with [Stefan Weichinger's ansible
role](https://github.com/stefangweichinger/ansible-rclone).
@@ -478,7 +560,7 @@ Instructions
- rclone
```
-## Portable installation
+## Portable installation {#portable}
As mentioned [above](https://rclone.org/install/#quickstart), rclone is a single
executable (`rclone`, or `rclone.exe` on Windows) that you can download as a
@@ -506,7 +588,7 @@ such as a regular [sync](https://rclone.org/commands/rclone_sync/), you will pro
to configure your rclone command in your operating system's scheduler. If you need to
expose *service*-like features, such as [remote control](https://rclone.org/rc/),
[GUI](https://rclone.org/gui/), [serve](https://rclone.org/commands/rclone_serve/)
-or [mount](https://rclone.org/commands/rclone_move/), you will often want an rclone
+or [mount](https://rclone.org/commands/rclone_mount/), you will often want an rclone
command always running in the background, and configuring it to run in a service infrastructure
may be a better option. Below are some alternatives on how to achieve this on
different operating systems.
@@ -539,7 +621,7 @@ c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclo
#### User account
-As mentioned in the [mount](https://rclone.org/commands/rclone_move/) documentation,
+As mentioned in the [mount](https://rclone.org/commands/rclone_mount/) documentation,
mounted drives created as Administrator are not visible to other accounts, not even the
account that was elevated as Administrator. By running the mount command as the
built-in `SYSTEM` user account, it will create drives accessible for everyone on
@@ -548,7 +630,7 @@ the system. Both scheduled task and Windows service can be used to achieve this.
NOTE: Remember that when rclone runs as the `SYSTEM` user, the user profile
that it sees will not be yours. This means that if you normally run rclone with
configuration file in the default location, to be able to use the same configuration
-when running as the system user you must explicitely tell rclone where to find
+when running as the system user you must explicitly tell rclone where to find
it with the [`--config`](https://rclone.org/docs/#config-config-file) option,
or else it will look in the system users profile path (`C:\Windows\System32\config\systemprofile`).
To test your command manually from a Command Prompt, you can run it with
@@ -612,7 +694,7 @@ it should be possible through path rewriting as described [here](https://github.
To run any rclone command as a Windows service, the excellent third-party utility
[NSSM](http://nssm.cc), the "Non-Sucking Service Manager", can be used.
-It includes some advanced features such as adjusting process periority, defining
+It includes some advanced features such as adjusting process priority, defining
process environment variables, redirecting anything written to stdout to a file, and
customizing the response to different exit codes, with a GUI to configure everything
(although it can also be used from the command line).
@@ -690,7 +772,6 @@ See the following for detailed instructions for
* [HDFS](https://rclone.org/hdfs/)
* [HiDrive](https://rclone.org/hidrive/)
* [HTTP](https://rclone.org/http/)
- * [Hubic](https://rclone.org/hubic/)
* [Internet Archive](https://rclone.org/internetarchive/)
* [Jottacloud](https://rclone.org/jottacloud/)
* [Koofr](https://rclone.org/koofr/)
@@ -701,6 +782,7 @@ See the following for detailed instructions for
* [Microsoft OneDrive](https://rclone.org/onedrive/)
* [OpenStack Swift / Rackspace Cloudfiles / Memset Memstore](https://rclone.org/swift/)
* [OpenDrive](https://rclone.org/opendrive/)
+ * [Oracle Object Storage](https://rclone.org/oracleobjectstorage/)
* [Pcloud](https://rclone.org/pcloud/)
* [premiumize.me](https://rclone.org/premiumizeme/)
* [put.io](https://rclone.org/putio/)
@@ -708,6 +790,7 @@ See the following for detailed instructions for
* [Seafile](https://rclone.org/seafile/)
* [SFTP](https://rclone.org/sftp/)
* [Sia](https://rclone.org/sia/)
+ * [SMB](https://rclone.org/smb/)
* [Storj](https://rclone.org/storj/)
* [SugarSync](https://rclone.org/sugarsync/)
* [Union](https://rclone.org/union/)
@@ -897,6 +980,11 @@ extended explanation in the [copy](https://rclone.org/commands/rclone_copy/) com
If dest:path doesn't exist, it is created and the source:path contents
go there.
+It is not possible to sync overlapping remotes. However, you may exclude
+the destination from the sync with a filter rule or by putting an
+exclude-if-present file inside the destination directory and sync to a
+destination that is inside the source directory.
+
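+A minimal sketch of the second approach (the marker file name `.ignore` is an
+arbitrary choice): create the marker file inside the destination, then sync
+the source to that destination inside it.
+
+```
+touch dir/backup/.ignore
+rclone sync --exclude-if-present .ignore dir/ dir/backup/
+```
+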
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics
**Note**: Use the `rclone dedupe` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.
@@ -1222,7 +1310,7 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
-Listing a non-existent directory will produce an error except for
+Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
@@ -1290,7 +1378,7 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
-Listing a non-existent directory will produce an error except for
+Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
@@ -1349,7 +1437,7 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
-Listing a non-existent directory will produce an error except for
+Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
@@ -1391,7 +1479,7 @@ to running `rclone hashsum MD5 remote:path`.
This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
-when there is data to read (if not, the hypen will be treated literaly,
+when there is data to read (if not, the hyphen will be treated literally,
as a relative path).
@@ -1436,7 +1524,7 @@ to running `rclone hashsum SHA1 remote:path`.
This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
-when there is data to read (if not, the hypen will be treated literaly,
+when there is data to read (if not, the hyphen will be treated literally,
as a relative path).
This command can also hash data received on STDIN, if not passing
@@ -1884,11 +1972,11 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
# rclone bisync
-Perform bidirectonal synchronization between two paths.
+Perform bidirectional synchronization between two paths.
## Synopsis
-Perform bidirectonal synchronization between two paths.
+Perform bidirectional synchronization between two paths.
[Bisync](https://rclone.org/bisync/) provides a
bidirectional cloud sync solution in rclone.
@@ -2087,7 +2175,7 @@ To load completions for every new session, execute once:
### macOS:
- rclone completion bash > /usr/local/etc/bash_completion.d/rclone
+ rclone completion bash > $(brew --prefix)/etc/bash_completion.d/rclone
You will need to start a new shell for this setup to take effect.
@@ -2191,6 +2279,10 @@ to enable it. You can execute the following once:
echo "autoload -U compinit; compinit" >> ~/.zshrc
+To load completions in your current shell session:
+
+ source <(rclone completion zsh); compdef _rclone rclone
+
To load completions for every new session, execute once:
### Linux:
@@ -2199,7 +2291,7 @@ To load completions for every new session, execute once:
### macOS:
- rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone
+ rclone completion zsh > $(brew --prefix)/share/zsh/site-functions/_rclone
You will need to start a new shell for this setup to take effect.
@@ -2261,7 +2353,7 @@ are 100% certain you are already passing obscured passwords then use
`rclone config password` command.
The flag `--non-interactive` is for use by applications that wish to
-configure rclone themeselves, rather than using rclone's text based
+configure rclone themselves, rather than using rclone's text based
configuration questions. If this flag is set, and rclone needs to ask
the user a question, a JSON blob will be returned with the question in
it.
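For example (the remote name and backend here are illustrative), an
application might create a remote non-interactively with:

```
rclone config create mydrive drive --non-interactive
```

If a question is returned, the application can answer it by calling
the command again with `--continue` and the returned `State`.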
@@ -2656,7 +2748,7 @@ are 100% certain you are already passing obscured passwords then use
`rclone config password` command.
The flag `--non-interactive` is for use by applications that wish to
-configure rclone themeselves, rather than using rclone's text based
+configure rclone themselves, rather than using rclone's text based
configuration questions. If this flag is set, and rclone needs to ask
the user a question, a JSON blob will be returned with the question in
it.
@@ -3211,7 +3303,7 @@ For the MD5 and SHA1 algorithms there are also dedicated commands,
This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
-when there is data to read (if not, the hypen will be treated literaly,
+when there is data to read (if not, the hyphen will be treated literally,
as a relative path).
Run without a hash to see the list of all supported hashes, e.g.
@@ -3452,7 +3544,7 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
-Listing a non-existent directory will produce an error except for
+Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
@@ -3533,7 +3625,7 @@ If `--files-only` is not specified directories in addition to the files
will be returned.
If `--metadata` is set then an additional Metadata key will be returned.
-This will have metdata in rclone standard format as a JSON object.
+This will have metadata in rclone standard format as a JSON object.
If `--stat` is set then a single JSON blob will be returned about the
item pointed to. This will return an error if the item isn't found.
@@ -3579,7 +3671,7 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
-Listing a non-existent directory will produce an error except for
+Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
@@ -3703,7 +3795,7 @@ and experience unexpected program errors, freezes or other issues, consider moun
as a network drive instead.
When mounting as a fixed disk drive you can either mount to an unused drive letter,
-or to a path representing a **non-existent** subdirectory of an **existing** parent
+or to a path representing a **nonexistent** subdirectory of an **existing** parent
directory or drive. Using the special value `*` will tell rclone to
automatically assign the next available drive letter, starting with Z: and moving backward.
Examples:
@@ -3734,7 +3826,7 @@ the mapped drive, shown in Windows Explorer etc, while the complete
`\\server\share` will be reported as the remote UNC path by
`net use` etc, just like a normal network drive mapping.
-If you specify a full network share UNC path with `--volname`, this will implicitely
+If you specify a full network share UNC path with `--volname`, this will implicitly
set the `--network-mode` option, so the following two examples have same result:
rclone mount remote:path/to/files X: --network-mode
@@ -3743,7 +3835,7 @@ set the `--network-mode` option, so the following two examples have same result:
You may also specify the network share UNC path as the mountpoint itself. Then rclone
will automatically assign a drive letter, same as with `*` and use that as
mountpoint, and instead use the UNC path specified as the volume name, as if it were
-specified with the `--volname` option. This will also implicitely set
+specified with the `--volname` option. This will also implicitly set
the `--network-mode` option. This means the following two examples have same result:
rclone mount remote:path/to/files \\cloud\remote
@@ -3779,7 +3871,7 @@ The permissions on each entry will be set according to [options](#options)
The default permissions corresponds to `--file-perms 0666 --dir-perms 0777`,
i.e. read and write permissions to everyone. This means you will not be able
-to start any programs from the the mount. To be able to do that you must add
+to start any programs from the mount. To be able to do that you must add
execute permissions, e.g. `--file-perms 0777 --dir-perms 0777` to add it
to everyone. If the program needs to write files, chances are you will have
to enable [VFS File Caching](#vfs-file-caching) as well (see also [limitations](#limitations)).
@@ -3850,8 +3942,8 @@ applications won't work with their files on an rclone mount without
`--vfs-cache-mode writes` or `--vfs-cache-mode full`.
See the [VFS File Caching](#vfs-file-caching) section for more info.
-The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2,
-Hubic) do not support the concept of empty directories, so empty
+The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2)
+do not support the concept of empty directories, so empty
directories will have a tendency to disappear once they fall out of
the directory cache.
@@ -4471,7 +4563,7 @@ press '?' to toggle the help on and off. The supported keys are:
q/ESC/^c to quit
Listed files/directories may be prefixed by a one-character flag,
-some of them combined with a description in brackes at end of line.
+some of them combined with a description in brackets at end of line.
These flags have the following meaning:
e means this is an empty directory, i.e. contains no files (but
@@ -5240,11 +5332,13 @@ rclone serve dlna remote:path [flags]
```
--addr string The ip:port or :port to bind the DLNA http server to (default ":7879")
+ --announce-interval duration The interval between SSDP announcements (default 12m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for dlna
+ --interface stringArray The interface to use for SSDP (repeat as necessary)
--log-trace Enable trace logging of SOAP traffic
--name string Name of DLNA server
--no-checksum Don't compare checksums on up/download
@@ -6237,6 +6331,10 @@ of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.
+--min-tls-version is minimum TLS version that is acceptable. Valid
+ values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
+ "tls1.0").
+
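+For example (certificate file names are placeholders), a server
+restricted to TLS 1.2 or later could be started with:
+
+```
+rclone serve http remote:path --cert server.crt --key server.key --min-tls-version tls1.2
+```
+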
### Template
`--template` allows a user to specify a custom markup template for HTTP
@@ -6623,6 +6721,7 @@ rclone serve http remote:path [flags]
--htpasswd string A htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
@@ -6828,6 +6927,10 @@ of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.
+--min-tls-version is minimum TLS version that is acceptable. Valid
+ values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
+ "tls1.0").
+
```
rclone serve restic remote:path [flags]
@@ -6846,6 +6949,7 @@ rclone serve restic remote:path [flags]
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--pass string Password for authentication
--private-repos Users can only access their private repo
--realm string Realm for authentication (default "rclone")
@@ -6868,11 +6972,19 @@ Serve the remote over SFTP.
## Synopsis
-Run a SFTP server to serve a remote over SFTP. This can be used
-with an SFTP client or you can make a remote of type sftp to use with it.
+Run an SFTP server to serve a remote over SFTP. This can be used
+with an SFTP client or you can make a remote of type [sftp](/sftp) to use with it.
-You can use the filter flags (e.g. `--include`, `--exclude`) to control what
-is served.
+You can use the [filter](/filtering) flags (e.g. `--include`, `--exclude`)
+to control what is served.
+
+The server will respond to a small number of shell commands, mainly
+md5sum, sha1sum and df, which enable it to provide support for checksums
+and the about feature when accessed from an sftp remote.
+
+Note that this server uses the standard 32 KiB packet payload size, which
+means you must not configure the client to expect anything else, e.g.
+with the [chunk_size](https://rclone.org/sftp/#sftp-chunk-size) option on an sftp remote.
The server will log errors. Use `-v` to see access logs.
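For example (user, password and port are placeholders):

```
rclone serve sftp remote:path --addr :2022 --user sftpuser --pass mypassword
```

Any SFTP client, or an rclone sftp remote pointing at the server, can
then connect on port 2022.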
@@ -6885,11 +6997,6 @@ You must provide some means of authentication, either with
`--auth-proxy`, or set the `--no-auth` flag for no
authentication when logging in.
-Note that this also implements a small number of shell commands so
-that it can provide md5sum/sha1sum/df information for the rclone sftp
-backend. This means that is can support SHA1SUMs, MD5SUMs and the
-about command when paired with the rclone sftp backend.
-
If you don't supply a host `--key` then rclone will generate rsa, ecdsa
and ed25519 variants, and cache them for later use in rclone's cache
directory (see `rclone help flags cache-dir`) in the "serve-sftp"
@@ -7341,7 +7448,7 @@ rclone serve sftp remote:path [flags]
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
- --stdio Run an sftp server on run stdin/stdout
+ --stdio Run an sftp server on stdin/stdout
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
@@ -7471,6 +7578,10 @@ of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.
+--min-tls-version is minimum TLS version that is acceptable. Valid
+ values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
+ "tls1.0").
+
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -7893,6 +8004,7 @@ rclone serve webdav remote:path [flags]
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
@@ -8651,7 +8763,7 @@ backends can also store arbitrary user metadata.
Where possible the key names are standardized, so, for example, it is
possible to copy object metadata from s3 to azureblob for example and
-metadata will be translated apropriately.
+metadata will be translated appropriately.
Some backends have limits on the size of the metadata and rclone will
give errors on upload if they are exceeded.
@@ -8713,10 +8825,34 @@ it to `false`. It is also possible to specify `--boolean=false` or
parsed as `--boolean` and the `false` is parsed as an extra command
line argument for rclone.
-Options which use TIME use the go time parser. A duration string is a
-possibly signed sequence of decimal numbers, each with optional
-fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid
-time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+### Time or duration options {#time-option}
+
+TIME or DURATION options can be specified as a duration string or a
+time string.
+
+A duration string is a possibly signed sequence of decimal numbers,
+each with optional fraction and a unit suffix, such as "300ms",
+"-1.5h" or "2h45m". Default units are seconds or the following
+abbreviations are valid:
+
+ * `ms` - Milliseconds
+ * `s` - Seconds
+ * `m` - Minutes
+ * `h` - Hours
+ * `d` - Days
+ * `w` - Weeks
+ * `M` - Months
+ * `y` - Years
+
+These can also be specified as an absolute time in the following
+formats:
+
+- RFC3339 - e.g. `2006-01-02T15:04:05Z` or `2006-01-02T15:04:05+07:00`
+- ISO8601 Date and time, local timezone - `2006-01-02T15:04:05`
+- ISO8601 Date and time, local timezone - `2006-01-02 15:04:05`
+- ISO8601 Date - `2006-01-02` (YYYY-MM-DD)
+
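+For example (assuming a remote named `remote:`), a flag taking a TIME
+value accepts either form:
+
+```
+rclone ls remote: --max-age 2h45m
+rclone ls remote: --max-age 2006-01-02
+```
+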
+### Size options {#size-option}
Options which use SIZE use KiB (multiples of 1024 bytes) by default.
However, a suffix of `B` for Byte, `K` for KiB, `M` for MiB,
@@ -8735,7 +8871,8 @@ been added) in DIR, then it will be overwritten.
The remote in use must support server-side move or copy and you must
use the same remote as the destination of the sync. The backup
-directory must not overlap the destination directory.
+directory must not overlap the destination directory unless it is
+excluded by a filter rule.
For example
@@ -8769,7 +8906,7 @@ would mean limit the upload and download bandwidth to 10 MiB/s.
single limit, specify the desired bandwidth in KiB/s, or use a
suffix B|K|M|G|T|P. The default is `0` which means to not limit bandwidth.
-The upload and download bandwidth can be specified seperately, as
+The upload and download bandwidth can be specified separately, as
`--bwlimit UP:DOWN`, so
--bwlimit 10M:100k
This sets the interval between each retry specified by `--retries`.
The default is `0`. Use `0` to disable.
+### --server-side-across-configs ###
+
+Allow server-side operations (e.g. copy or move) to work across
+different configurations.
+
+This can be useful if you wish to do a server-side copy or move
+between two remotes which use the same backend but are configured
+differently.
+
+Note that this isn't enabled by default because it isn't easy for
+rclone to tell if it will work between any two configurations.
+
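+As a sketch (remote names are hypothetical), a server-side copy
+between two differently configured remotes of the same backend:
+
+```
+rclone copy --server-side-across-configs s3east:bucket/dir s3west:bucket/dir
+```
+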
### --size-only ###
Normally rclone will look at modification time and size of files to
@@ -9984,13 +10133,22 @@ By default, rclone doesn't keep track of renamed files, so if you
rename a file locally then sync it to a remote, rclone will delete the
old file on the remote and upload a new copy.
-If you use this flag, and the remote supports server-side copy or
-server-side move, and the source and destination have a compatible
-hash, then this will track renames during `sync`
-operations and perform renaming server-side.
+An rclone sync with `--track-renames` runs like a normal sync, but keeps
+track of objects which exist in the destination but not in the source
+(which would normally be deleted), and which objects exist in the
+source but not the destination (which would normally be transferred).
+These objects are then candidates for renaming.
-Files will be matched by size and hash - if both match then a rename
-will be considered.
+After the sync, rclone matches up the source-only and destination-only
+objects using the `--track-renames-strategy` specified and either
+renames the destination object or transfers the source and deletes the
+destination object. `--track-renames` is stateless like all of
+rclone's syncs.
+
+To use this flag the destination must support server-side copy or
+server-side move, and to use a hash based `--track-renames-strategy`
+(the default) the source and the destination must have a compatible
+hash.
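+
+For example (paths and remote name are placeholders):
+
+```
+rclone sync --track-renames /path/to/source remote:destination
+```
+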
If the destination does not support server-side copy or move, rclone
will fall back to the default behaviour and log an error level message
@@ -10008,7 +10166,7 @@ Note also that `--track-renames` is incompatible with
### --track-renames-strategy (hash,modtime,leaf,size) ###
-This option changes the matching criteria for `--track-renames`.
+This option changes the file matching criteria for `--track-renames`.
The matching is controlled by a comma separated selection of these tokens:
@@ -10017,15 +10175,15 @@ The matching is controlled by a comma separated selection of these tokens:
- `leaf` - the name of the file not including its directory name
- `size` - the size of the file (this is always enabled)
-So using `--track-renames-strategy modtime,leaf` would match files
+The default option is `hash`.
+
+Using `--track-renames-strategy modtime,leaf` would match files
based on modification time, the leaf of the file name and the size
only.
Using `--track-renames-strategy modtime` or `leaf` can enable
`--track-renames` support for encrypted destinations.
-If nothing is specified, the default option is matching by `hash`es.
-
Note that the `hash` strategy is not supported with encrypted destinations.
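For example, to track renames to an encrypted destination (paths and
remote name are placeholders):

```
rclone sync --track-renames --track-renames-strategy modtime,leaf /src encrypted:dst
```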
### --delete-(before,during,after) ###
@@ -10061,7 +10219,7 @@ quickly using the least amount of memory.
However, some remotes have a way of listing all files beneath a
directory in one (or a small number) of transactions. These tend to
-be the bucket-based remotes (e.g. S3, B2, GCS, Swift, Hubic).
+be the bucket-based remotes (e.g. S3, B2, GCS, Swift).
If you use the `--fast-list` flag then rclone will use this method for
listing directories. This will have the following consequences for
@@ -10127,7 +10285,7 @@ In all other cases the file will not be updated.
Consider using the `--modify-window` flag to compensate for time skews
between the source and the backend, for backends that do not support
mod times, and instead use uploaded times. However, if the backend
-does not support checksums, note that sync'ing or copying within the
+does not support checksums, note that syncing or copying within the
time skew window may still result in additional transfers for safety.
### --use-mmap ###
@@ -10920,7 +11078,7 @@ them into regular expressions.
| Rooted | `/*.jpg` | `/file.jpg` | `/file.png` |
| | | `/file2.jpg` | `/dir/file.jpg` |
| Alternates | `*.{jpg,png}` | `/file.jpg` | `/file.gif` |
-| | | `/dir/file.gif` | `/dir/file.gif` |
+| | | `/dir/file.png` | `/dir/file.gif` |
| Path Wildcard | `dir/**` | `/dir/anyfile` | `file.png` |
| | | `/subdir/dir/subsubdir/anyfile` | `/subdir/file.png` |
| Any Char | `*.t?t` | `/file.txt` | `/file.qxt` |
@@ -11420,6 +11578,8 @@ Default units are `KiB` but abbreviations `K`, `M`, `G`, `T` or `P` are valid.
E.g. `rclone ls remote: --min-size 50k` lists files on `remote:` of 50 KiB
size or larger.
+See [the size option docs](https://rclone.org/docs/#size-option) for more info.
+
### `--max-size` - Don't transfer any file larger than this
Controls the maximum size file within the scope of an rclone command.
@@ -11428,33 +11588,19 @@ Default units are `KiB` but abbreviations `K`, `M`, `G`, `T` or `P` are valid.
E.g. `rclone ls remote: --max-size 1G` lists files on `remote:` of 1 GiB
size or smaller.
+See [the size option docs](https://rclone.org/docs/#size-option) for more info.
+
### `--max-age` - Don't transfer any file older than this
Controls the maximum age of files within the scope of an rclone command.
-Default units are seconds or the following abbreviations are valid:
-
- * `ms` - Milliseconds
- * `s` - Seconds
- * `m` - Minutes
- * `h` - Hours
- * `d` - Days
- * `w` - Weeks
- * `M` - Months
- * `y` - Years
-
-`--max-age` can also be specified as an absolute time in the following
-formats:
-
-- RFC3339 - e.g. `2006-01-02T15:04:05Z` or `2006-01-02T15:04:05+07:00`
-- ISO8601 Date and time, local timezone - `2006-01-02T15:04:05`
-- ISO8601 Date and time, local timezone - `2006-01-02 15:04:05`
-- ISO8601 Date - `2006-01-02` (YYYY-MM-DD)
`--max-age` applies only to files and not to directories.
E.g. `rclone ls remote: --max-age 2d` lists files on `remote:` of 2 days
old or less.
+See [the time option docs](https://rclone.org/docs/#time-option) for valid formats.
+
### `--min-age` - Don't transfer any file younger than this
Controls the minimum age of files within the scope of an rclone command.
@@ -11465,6 +11611,8 @@ Controls the minimum age of files within the scope of an rclone command.
E.g. `rclone ls remote: --min-age 2d` lists files on `remote:` of 2 days
old or more.
+See [the time option docs](https://rclone.org/docs/#time-option) for valid formats.
+
## Other flags
### `--delete-excluded` - Delete files on dest excluded from sync
@@ -11660,6 +11808,11 @@ SSL PEM Private key
Maximum size of request header (default 4096)
+### --rc-min-tls-version=VALUE
+
+The minimum TLS version that is acceptable. Valid values are "tls1.0",
+"tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
+
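+For example (certificate file names are placeholders):
+
+```
+rclone rcd --rc-cert server.crt --rc-key server.key --rc-min-tls-version tls1.2
+```
+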
### --rc-user=VALUE
User name for authentication.
@@ -12016,7 +12169,7 @@ The parameters can be a string as per the rest of rclone, eg
`s3:bucket/path` or `:sftp:/my/dir`. They can also be specified as
JSON blobs.
-If specifyng a JSON blob it should be a object mapping strings to
+If specifying a JSON blob it should be an object mapping strings to
strings. These values will be used to configure the remote. There are
3 special values which may be set:
@@ -12596,6 +12749,12 @@ Parameters:
- jobid - id of the job (integer).
+### job/stopgroup: Stop all running jobs in a group {#job-stopgroup}
+
+Parameters:
+
+- group - name of the group (string).
+
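+For example (the group name is illustrative; rclone typically names
+job groups like `job/1`):
+
+```
+rclone rc job/stopgroup group=job/1
+```
+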
### mount/listmounts: Show current mount points {#mount-listmounts}
This shows currently mounted points, which can be used for performing an unmount.
@@ -12671,9 +12830,11 @@ Example:
**Authentication is required for this call.**
-### mount/unmountall: Show current mount points {#mount-unmountall}
+### mount/unmountall: Unmount all active mounts {#mount-unmountall}
-This shows currently mounted points, which can be used for performing an unmount.
+rclone allows Linux, FreeBSD, macOS and Windows to
+mount any of Rclone's cloud storage systems as a file system with
+FUSE.
This takes no parameters and returns an error if unmount does not succeed.
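For example:

```
rclone rc mount/unmountall
```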
@@ -13187,7 +13348,7 @@ check that parameter passing is working properly.
**Authentication is required for this call.**
-### sync/bisync: Perform bidirectonal synchronization between two paths. {#sync-bisync}
+### sync/bisync: Perform bidirectional synchronization between two paths. {#sync-bisync}
This takes the following parameters
@@ -13618,7 +13779,6 @@ Here is an overview of the major features of each cloud storage system.
| HDFS | - | R/W | No | No | - | - |
| HiDrive | HiDrive ¹² | R/W | No | No | - | - |
| HTTP | - | R | No | No | R | - |
-| Hubic | MD5 | R/W | No | No | R/W | - |
| Internet Archive | MD5, SHA1, CRC32 | R/W ¹¹ | No | No | - | RWU |
| Jottacloud | MD5 | R/W | Yes | No | R | - |
| Koofr | MD5 | - | Yes | No | - | - |
@@ -13629,6 +13789,7 @@ Here is an overview of the major features of each cloud storage system.
| Microsoft OneDrive | SHA1 ⁵ | R/W | Yes | No | R | - |
| OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - |
| OpenStack Swift | MD5 | R/W | No | No | R/W | - |
+| Oracle Object Storage | MD5 | R/W | No | No | R/W | - |
| pCloud | MD5, SHA1 ⁷ | R | No | No | W | - |
| premiumize.me | - | - | Yes | No | R | - |
| put.io | CRC-32 | R/W | No | Yes | R | - |
@@ -13636,6 +13797,7 @@ Here is an overview of the major features of each cloud storage system.
| Seafile | - | - | No | No | - | - |
| SFTP | MD5, SHA1 ² | R/W | Depends | No | - | - |
| Sia | - | - | No | No | - | - |
+| SMB | - | - | Yes | No | - | - |
| SugarSync | - | - | No | No | - | - |
| Storj | - | R | No | No | - | - |
| Uptobox | - | - | No | Yes | - | - |
@@ -13697,7 +13859,7 @@ systems they must support a common hash type.
### ModTime ###
-Allmost all cloud storage systems store some sort of timestamp
+Almost all cloud storage systems store some sort of timestamp
on objects, but several of them do not store one that is appropriate
to use for syncing. E.g. some backends will only write a timestamp
that represents the time of the upload. To be relevant for syncing
@@ -14069,7 +14231,6 @@ upon backend-specific capabilities.
| HDFS | Yes | No | Yes | Yes | No | No | Yes | No | Yes | Yes |
| HiDrive | Yes | Yes | Yes | Yes | No | No | Yes | No | No | Yes |
| HTTP | No | No | No | No | No | No | No | No | No | Yes |
-| Hubic | Yes † | Yes | No | No | No | Yes | Yes | No | Yes | No |
| Internet Archive | No | Yes | No | No | Yes | Yes | No | Yes | Yes | No |
| Jottacloud | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |
| Koofr | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes | Yes |
@@ -14080,6 +14241,7 @@ upon backend-specific capabilities.
| Microsoft OneDrive | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes |
| OpenStack Swift | Yes † | Yes | No | No | No | Yes | Yes | No | Yes | No |
+| Oracle Object Storage | Yes | Yes | No | No | Yes | Yes | No | No | No | No |
| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
| premiumize.me | Yes | No | Yes | Yes | No | No | No | Yes | Yes | Yes |
| put.io | Yes | No | Yes | Yes | Yes | No | Yes | No | Yes | Yes |
@@ -14087,6 +14249,7 @@ upon backend-specific capabilities.
| Seafile | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| SFTP | No | No | Yes | Yes | No | No | Yes | No | Yes | Yes |
| Sia | No | No | No | No | No | No | Yes | No | No | Yes |
+| SMB | No | No | Yes | Yes | No | No | Yes | No | No | Yes |
| SugarSync | Yes | Yes | Yes | Yes | No | No | Yes | Yes | No | Yes |
| Storj | Yes † | No | Yes | No | No | Yes | Yes | No | No | No |
| Uptobox | No | Yes | Yes | Yes | No | No | No | No | No | No |
@@ -14100,7 +14263,7 @@ upon backend-specific capabilities.
This deletes a directory quicker than just deleting all the files in
the directory.
-† Note Swift, Hubic, and Storj implement this in order to delete
+† Note Swift and Storj implement this in order to delete
directory markers but they don't actually have a quicker way of deleting
files other than deleting them individually.
@@ -14295,6 +14458,7 @@ These flags are available for every command.
--rc-job-expire-interval duration Interval to check for expired async jobs (default 10s)
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--rc-no-auth Don't require auth for certain methods
--rc-pass string Password for authentication
--rc-realm string Realm for authentication (default "rclone")
@@ -14311,6 +14475,7 @@ These flags are available for every command.
--refresh-times Refresh the modtime of remote files
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable)
+ --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
@@ -14336,7 +14501,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.60.0")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -14489,7 +14654,7 @@ and may be set in the config file.
--drive-use-trash Send files to the trash instead of deleting permanently (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download (default off)
--dropbox-auth-url string Auth server URL
- --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish comitting (default 10m0s)
+ --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--dropbox-batch-mode string Upload file batching sync|async|off (default "sync")
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -14523,6 +14688,7 @@ and may be set in the config file.
--ftp-disable-utf8 Disable using UTF-8 even if server advertises support
--ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
+ --ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
@@ -14541,6 +14707,7 @@ and may be set in the config file.
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gcs-endpoint string Endpoint for the service
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
--gcs-object-acl string Access Control List for new objects
@@ -14586,14 +14753,6 @@ and may be set in the config file.
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
- --hubic-auth-url string Auth server URL
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
- --hubic-client-id string OAuth Client Id
- --hubic-client-secret string OAuth Client Secret
- --hubic-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
- --hubic-no-chunk Don't chunk files during streaming upload
- --hubic-token string OAuth Access Token as a JSON blob
- --hubic-token-url string Token server url
--internetarchive-access-key-id string IAS3 Access Key
--internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true)
--internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
@@ -14662,6 +14821,22 @@ and may be set in the config file.
--onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
+ --oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
+ --oos-compartment string Object storage compartment OCID
+ --oos-config-file string Path to OCI config file (default "~/.oci/config")
+ --oos-config-profile string Profile name inside the oci config file (default "Default")
+ --oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
+ --oos-copy-timeout Duration Timeout for copy (default 1m0s)
+ --oos-disable-checksum Don't store MD5 checksum with object metadata
+ --oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --oos-endpoint string Endpoint for Object storage API
+ --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
+ --oos-namespace string Object storage namespace
+ --oos-no-check-bucket If set, don't attempt to check the bucket exists or create it
+ --oos-provider string Choose your Auth Provider (default "env_auth")
+ --oos-region string Object storage Region
+ --oos-upload-concurrency int Concurrency for multipart uploads (default 10)
+ --oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
@@ -14693,6 +14868,7 @@ and may be set in the config file.
--s3-bucket-acl string Canned ACL used when creating buckets
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
+ --s3-decompress If set this will decompress gzip encoded objects
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
@@ -14711,6 +14887,7 @@ and may be set in the config file.
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity
--s3-no-head-object If set, do not do HEAD before GET when getting objects
+ --s3-no-system-metadata Suppress setting and reading of system metadata
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
@@ -14720,7 +14897,8 @@ and may be set in the config file.
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
- --s3-sse-customer-key string If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data
+ --s3-sse-customer-key string To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
+ --s3-sse-customer-key-base64 string If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
@@ -14730,6 +14908,8 @@ and may be set in the config file.
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
+ --s3-version-at Time Show file versions as they were at the specified time (default off)
+ --s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
--seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
@@ -14776,6 +14956,15 @@ and may be set in the config file.
--sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent")
--skip-links Don't warn about skipped symlinks
+ --smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
+ --smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
+ --smb-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true)
+ --smb-host string SMB server hostname to connect to
+ --smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
+ --smb-pass string SMB password (obscured)
+ --smb-port int SMB port number (default 445)
+ --smb-user string SMB username (default "$USER")
--storj-access-grant string Access grant
--storj-api-key string API key
--storj-passphrase string Encryption passphrase
@@ -14806,6 +14995,7 @@ and may be set in the config file.
--swift-key string API key or password (OS_PASSWORD)
--swift-leave-parts-on-error If true avoid calling abort upload on a failure
--swift-no-chunk Don't chunk files during streaming upload
+ --swift-no-large-objects Disable support for static and dynamic large objects
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
@@ -15722,7 +15912,7 @@ Most of these events come up due to a error status from an internal call.
On such a critical error the `{...}.path1.lst` and `{...}.path2.lst`
listing files are renamed to extension `.lst-err`, which blocks any future
bisync runs (since the normal `.lst` files are not found).
-Bisync keeps them under `bisync` subdirectory of the rclone cache direcory,
+Bisync keeps them under `bisync` subdirectory of the rclone cache directory,
typically at `${HOME}/.cache/rclone/bisync/` on Linux.
Some errors are considered temporary and re-running the bisync is not blocked.
@@ -15820,7 +16010,7 @@ don't have spelling case differences (`Smile.jpg` vs. `smile.jpg`).
## Windows support {#windows}
Bisync has been tested on Windows 8.1, Windows 10 Pro 64-bit and on Windows
-Github runners.
+GitHub runners.
Drive letters are allowed, including drive letters mapped to network drives
(`rclone bisync J:\localsync GDrive:`).
@@ -16328,7 +16518,7 @@ test command flags can be equally prefixed by a single `-` or double dash.
synched tree even if there are check file mismatches in the test tree.
- Some Dropbox tests can fail, notably printing the following message:
`src and dst identical but can't set mod time without deleting and re-uploading`
- This is expected and happens due a way Dropbox handles modificaion times.
+ This is expected and happens due to the way Dropbox handles modification times.
You should use the `-refresh-times` test flag to make up for this.
- If Dropbox tests hit request limit for you and print error message
`too_many_requests/...: Too many requests or write operations.`
@@ -16338,7 +16528,7 @@ test command flags can be equally prefixed by a single `-` or double dash.
### Updating golden results
Sometimes even a slight change in the bisync source can cause little changes
-spread around many log files. Updating them manually would be a nighmare.
+spread around many log files. Updating them manually would be a nightmare.
The `-golden` flag will store the `test.log` and `*.lst` listings from each
test case into respective golden directories. Golden results will
@@ -16699,6 +16889,11 @@ Invoking `rclone mkdir backup:../desktop` is exactly the same as invoking
The empty path is not allowed as a remote. To alias the current directory
use `.` instead.
+The target remote can also be a [connection string](https://rclone.org/docs/#connection-strings).
+This can be used to modify the config of a remote for different uses, e.g.
+the alias `myDriveTrash` with the target remote `myDrive,trashed_only:`
+can be used to only show the trashed files in `myDrive`.
+
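+A minimal config sketch for such an alias (assuming an existing
+remote named `myDrive`):
+
+```
+[myDriveTrash]
+type = alias
+remote = myDrive,trashed_only:
+```
+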
## Configuration
Here is an example of how to make an alias called `remote` for local folder.
@@ -17133,7 +17328,9 @@ The S3 backend can be used with a number of different providers:
- Huawei OBS
- IBM COS S3
- IDrive e2
+- IONOS Cloud
- Minio
+- Qiniu Cloud Object Storage (Kodo)
- RackCorp Object Storage
- Scaleway
- Seagate Lyve Cloud
@@ -17446,7 +17643,7 @@ upload.
Rclone's default directory traversal is to process each directory
individually. This takes one API call per directory. Using the
-`--fast-list` flag will read all info about the the objects into
+`--fast-list` flag will read all info about the objects into
memory first using a smaller number of API calls (one per 1000
objects). See the [rclone docs](https://rclone.org/docs/#fast-list) for more details.
@@ -17498,6 +17695,74 @@ This will mean that these objects do not have an MD5 checksum.
Note that reading this from the object takes an additional `HEAD`
request as the metadata isn't returned in object listings.
+### Versions
+
+When bucket versioning is enabled (this can be done with rclone with
+the [`rclone backend versioning`](#versioning) command), uploading a
+new version of a file creates a
+[new version of it](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html).
+Likewise when you delete a file, the old version will be marked hidden
+and still be available.
+
+Old versions of files, where available, are visible using the
+[`--s3-versions`](#s3-versions) flag.
+
+It is also possible to view a bucket as it was at a certain point in
+time, using the [`--s3-version-at`](#s3-version-at) flag. This will
+show the file versions as they were at that time, showing files that
+have been deleted afterwards, and hiding files that were created
+since.
+
+If you wish to remove all the old versions then you can use the
+[`rclone backend cleanup-hidden remote:bucket`](#cleanup-hidden)
+command which will delete all the old hidden versions of files,
+leaving the current ones intact. You can also supply a path and only
+old versions under that path will be deleted, e.g.
+`rclone backend cleanup-hidden remote:bucket/path/to/stuff`.
+
+When you `purge` a bucket, the current and the old versions will be
+deleted then the bucket will be deleted.
+
+However `delete` will cause the current versions of the files to
+become hidden old versions.
+
+Here is a session showing the listing and retrieval of an old
+version followed by a `cleanup` of the old versions.
+
+Show current version and all the versions with `--s3-versions` flag.
+
+```
+$ rclone -q ls s3:cleanup-test
+ 9 one.txt
+
+$ rclone -q --s3-versions ls s3:cleanup-test
+ 9 one.txt
+ 8 one-v2016-07-04-141032-000.txt
+ 16 one-v2016-07-04-141003-000.txt
+ 15 one-v2016-07-02-155621-000.txt
+```
+
+Retrieve an old version
+
+```
+$ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
+
+$ ls -l /tmp/one-v2016-07-04-141003-000.txt
+-rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
+```
+
+Clean up all the old versions and show that they've gone.
+
+```
+$ rclone -q backend cleanup-hidden s3:cleanup-test
+
+$ rclone -q ls s3:cleanup-test
+ 9 one.txt
+
+$ rclone -q --s3-versions ls s3:cleanup-test
+ 9 one.txt
+```
+
### Cleanup
If you run `rclone cleanup s3:bucket` then it will remove all pending
@@ -17685,7 +17950,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t
### Standard options
-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi).
#### --s3-provider
@@ -17720,6 +17985,8 @@ Properties:
- IBM COS S3
- "IDrive"
- IDrive e2
+ - "IONOS"
+ - IONOS Cloud
- "LyveCloud"
- Seagate Lyve Cloud
- "Minio"
@@ -17740,6 +18007,8 @@ Properties:
- Tencent Cloud Object Storage (COS)
- "Wasabi"
- Wasabi Object Storage
+ - "Qiniu"
+ - Qiniu Object Storage (Kodo)
- "Other"
- Any other S3 compatible provider
@@ -18010,13 +18279,68 @@ Properties:
Region to connect to.
+Properties:
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+ - "cn-east-1"
+ - The default endpoint - a good choice if you are unsure.
+ - East China Region 1.
+ - Needs location constraint cn-east-1.
+ - "cn-east-2"
+ - East China Region 2.
+ - Needs location constraint cn-east-2.
+ - "cn-north-1"
+ - North China Region 1.
+ - Needs location constraint cn-north-1.
+ - "cn-south-1"
+ - South China Region 1.
+ - Needs location constraint cn-south-1.
+ - "us-north-1"
+ - North America Region.
+ - Needs location constraint us-north-1.
+ - "ap-southeast-1"
+ - Southeast Asia Region 1.
+ - Needs location constraint ap-southeast-1.
+ - "ap-northeast-1"
+ - Northeast Asia Region 1.
+ - Needs location constraint ap-northeast-1.
+
+#### --s3-region
+
+Region where your bucket will be created and your data stored.
+
+
+Properties:
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Provider: IONOS
+- Type: string
+- Required: false
+- Examples:
+ - "de"
+ - Frankfurt, Germany
+ - "eu-central-2"
+ - Berlin, Germany
+ - "eu-south-2"
+ - Logrono, Spain
+
+#### --s3-region
+
+Region to connect to.
+
Leave blank if you are using an S3 clone and you don't have a region.
Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
-- Provider: !AWS,Alibaba,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
+- Provider: !AWS,Alibaba,ChinaMobile,Cloudflare,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
- Type: string
- Required: false
- Examples:
@@ -18274,6 +18598,27 @@ Properties:
#### --s3-endpoint
+Endpoint for IONOS S3 Object Storage.
+
+Specify the endpoint from the same region.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Provider: IONOS
+- Type: string
+- Required: false
+- Examples:
+ - "s3-eu-central-1.ionoscloud.com"
+ - Frankfurt, Germany
+ - "s3-eu-central-2.ionoscloud.com"
+ - Berlin, Germany
+ - "s3-eu-south-2.ionoscloud.com"
+ - Logrono, Spain
+
+#### --s3-endpoint
+
Endpoint for OSS API.
Properties:
@@ -18539,6 +18884,33 @@ Properties:
#### --s3-endpoint
+Endpoint for Qiniu Object Storage.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+ - "s3-cn-east-1.qiniucs.com"
+ - East China Endpoint 1
+ - "s3-cn-east-2.qiniucs.com"
+ - East China Endpoint 2
+ - "s3-cn-north-1.qiniucs.com"
+ - North China Endpoint 1
+ - "s3-cn-south-1.qiniucs.com"
+ - South China Endpoint 1
+ - "s3-us-north-1.qiniucs.com"
+ - North America Endpoint 1
+ - "s3-ap-southeast-1.qiniucs.com"
+ - Southeast Asia Endpoint 1
+ - "s3-ap-northeast-1.qiniucs.com"
+ - Northeast Asia Endpoint 1
+
+#### --s3-endpoint
+
Endpoint for S3 API.
Required when using an S3 clone.
@@ -18547,7 +18919,7 @@ Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
-- Provider: !AWS,IBMCOS,IDrive,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp
+- Provider: !AWS,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp,Qiniu
- Type: string
- Required: false
- Examples:
@@ -18874,13 +19246,42 @@ Properties:
Location constraint - must be set to match the Region.
+Used when creating buckets only.
+
+Properties:
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+ - "cn-east-1"
+ - East China Region 1
+ - "cn-east-2"
+ - East China Region 2
+ - "cn-north-1"
+ - North China Region 1
+ - "cn-south-1"
+ - South China Region 1
+ - "us-north-1"
+ - North America Region 1
+ - "ap-southeast-1"
+ - Southeast Asia Region 1
+ - "ap-northeast-1"
+ - Northeast Asia Region 1
+
+#### --s3-location-constraint
+
+Location constraint - must be set to match the Region.
+
Leave blank if not sure. Used when creating buckets only.
Properties:
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: !AWS,IBMCOS,IDrive,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,StackPath,Storj,TencentCOS
+- Provider: !AWS,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS
- Type: string
- Required: false
@@ -19110,9 +19511,30 @@ Properties:
- Archived storage.
- Prices are lower, but it needs to be restored first to be accessed.
+#### --s3-storage-class
+
+The storage class to use when storing new objects in Qiniu.
+
+Properties:
+
+- Config: storage_class
+- Env Var: RCLONE_S3_STORAGE_CLASS
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+ - "STANDARD"
+ - Standard storage class
+ - "LINE"
+ - Infrequent access storage mode
+ - "GLACIER"
+ - Archive storage mode
+ - "DEEP_ARCHIVE"
+ - Deep archive storage mode
+
### Advanced options
-Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi).
#### --s3-bucket-acl
@@ -19175,7 +19597,9 @@ Properties:
#### --s3-sse-customer-key
-If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.
+To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data.
+
+Alternatively you can provide --sse-customer-key-base64.
Properties:
@@ -19188,6 +19612,23 @@ Properties:
- ""
- None
+#### --s3-sse-customer-key-base64
+
+If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data.
+
+Alternatively you can provide --sse-customer-key.
+
+Properties:
+
+- Config: sse_customer_key_base64
+- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_BASE64
+- Provider: AWS,Ceph,ChinaMobile,Minio
+- Type: string
+- Required: false
+- Examples:
+ - ""
+ - None
+
#### --s3-sse-customer-key-md5
If using SSE-C you may provide the secret encryption key MD5 checksum (optional).
@@ -19676,6 +20117,67 @@ Properties:
- Type: bool
- Default: false
+#### --s3-versions
+
+Include old versions in directory listings.
+
+Properties:
+
+- Config: versions
+- Env Var: RCLONE_S3_VERSIONS
+- Type: bool
+- Default: false
+
+#### --s3-version-at
+
+Show file versions as they were at the specified time.
+
+The parameter should be a date, "2006-01-02", datetime "2006-01-02
+15:04:05" or a duration for that long ago, eg "100d" or "1h".
+
+Note that when using this no file write operations are permitted,
+so you can't upload files or delete them.
+
+See [the time option docs](https://rclone.org/docs/#time-option) for valid formats.
+
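+For example (the bucket name is a placeholder):
+
+```
+rclone ls --s3-version-at 2006-01-02 s3:bucket
+rclone ls --s3-version-at 100d s3:bucket
+```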
+
+Properties:
+
+- Config: version_at
+- Env Var: RCLONE_S3_VERSION_AT
+- Type: Time
+- Default: off
+
+#### --s3-decompress
+
+If set this will decompress gzip encoded objects.
+
+It is possible to upload objects to S3 with "Content-Encoding: gzip"
+set. Normally rclone will download these files as compressed objects.
+
+If this flag is set then rclone will decompress these files with
+"Content-Encoding: gzip" as they are received. This means that rclone
+can't check the size and hash but the file contents will be decompressed.
+
+
+Properties:
+
+- Config: decompress
+- Env Var: RCLONE_S3_DECOMPRESS
+- Type: bool
+- Default: false
+
+#### --s3-no-system-metadata
+
+Suppress setting and reading of system metadata
+
+Properties:
+
+- Config: no_system_metadata
+- Env Var: RCLONE_S3_NO_SYSTEM_METADATA
+- Type: bool
+- Default: false
+
### Metadata
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.
@@ -19818,6 +20320,39 @@ Options:
- "max-age": Max age of upload to delete
+### cleanup-hidden
+
+Remove old versions of files.
+
+ rclone backend cleanup-hidden remote: [options] [+]
+
+This command removes any old hidden versions of files
+on a versioning-enabled bucket.
+
+Note that you can use -i/--dry-run with this command to see what it
+would do.
+
+ rclone backend cleanup-hidden s3:bucket/path/to/dir
+
+
+### versioning
+
+Set/get versioning support for a bucket.
+
+ rclone backend versioning remote: [options] [+]
+
+This command sets versioning support if a parameter is
+passed and then returns the current versioning status for the bucket
+supplied.
+
+ rclone backend versioning s3:bucket # read status only
+ rclone backend versioning s3:bucket Enabled
+ rclone backend versioning s3:bucket Suspended
+
+It may return "Enabled", "Suspended" or "Unversioned". Note that once versioning
+has been enabled the status can't be set back to "Unversioned".
+
+
### Anonymous access to public buckets
@@ -20515,6 +21050,169 @@ d) Delete this remote
y/e/d> y
```
+### IONOS Cloud {#ionos}
+
+[IONOS S3 Object Storage](https://cloud.ionos.com/storage/object-storage) is a service offered by IONOS for storing and accessing unstructured data.
+To connect to the service, you will need an access key and a secret key. These can be found in the [Data Center Designer](https://dcd.ionos.com/), by selecting **Manager resources** > **Object Storage Key Manager**.
+
+
+Here is an example of a configuration. First, run `rclone config`. This will walk you through an interactive setup process. Type `n` to add the new remote, and then enter a name:
+
+```
+Enter name for new remote.
+name> ionos-fra
+```
+
+Type `s3` to choose the connection type:
+```
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi
+ \ (s3)
+[snip]
+Storage> s3
+```
+
+Type `IONOS`:
+```
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / IONOS Cloud
+ \ (IONOS)
+[snip]
+provider> IONOS
+```
+
+Press Enter to choose the default option `Enter AWS credentials in the next step`:
+```
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth>
+```
+
+Enter your Access Key and Secret key. These can be retrieved in the [Data Center Designer](https://dcd.ionos.com/), by selecting **Manager resources** > **Object Storage Key Manager**.
+```
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> YOUR_ACCESS_KEY
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> YOUR_SECRET_KEY
+```
+
+Choose the region where your bucket is located:
+```
+Option region.
+Region where your bucket will be created and your data stored.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Frankfurt, Germany
+ \ (de)
+ 2 / Berlin, Germany
+ \ (eu-central-2)
+ 3 / Logrono, Spain
+ \ (eu-south-2)
+region> 2
+```
+
+Choose the endpoint from the same region:
+```
+Option endpoint.
+Endpoint for IONOS S3 Object Storage.
+Specify the endpoint from the same region.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Frankfurt, Germany
+ \ (s3-eu-central-1.ionoscloud.com)
+ 2 / Berlin, Germany
+ \ (s3-eu-central-2.ionoscloud.com)
+ 3 / Logrono, Spain
+ \ (s3-eu-south-2.ionoscloud.com)
+endpoint> 1
+```
+
+Press Enter to choose the default option or choose the desired ACL setting:
+```
+Option acl.
+Canned ACL used when creating buckets and storing or copying objects.
+This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Note that this ACL is applied when server-side copying objects as S3
+doesn't copy the ACL from the source but rather writes a fresh one.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+ / Owner gets FULL_CONTROL.
+[snip]
+acl>
+```
+
+Press Enter to skip the advanced config:
+```
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n>
+```
+
+Press Enter to save the configuration, and then `q` to quit the configuration process:
+```
+Configuration complete.
+Options:
+- type: s3
+- provider: IONOS
+- access_key_id: YOUR_ACCESS_KEY
+- secret_access_key: YOUR_SECRET_KEY
+- endpoint: s3-eu-central-1.ionoscloud.com
+Keep this "ionos-fra" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+Done! Now you can try some commands (if rclone is not in your `PATH`, for example after a manual macOS download, use `./rclone` instead of `rclone`).
+
+1) Create a bucket (the name must be unique within the whole IONOS S3)
+```
+rclone mkdir ionos-fra:my-bucket
+```
+2) List available buckets
+```
+rclone lsd ionos-fra:
+```
+3) Copy a file from local to remote
+```
+rclone copy /Users/file.txt ionos-fra:my-bucket
+```
+4) List contents of a bucket
+```
+rclone ls ionos-fra:my-bucket
+```
+5) Copy a file from remote to local
+```
+rclone copy ionos-fra:my-bucket/file.txt /Users/
+```
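+
+To keep a local directory mirrored into the bucket you can also use sync. A
+minimal sketch (paths are placeholders; `--interactive` asks before changing
+anything):
+```
+rclone sync --interactive /Users/my-folder ionos-fra:my-bucket
+```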
+
### Minio
[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops.
@@ -20582,6 +21280,207 @@ So once set up, for example, to copy files into a bucket
rclone copy /path/to/files minio:bucket
```
+### Qiniu Cloud Object Storage (Kodo) {#qiniu}
+
+[Qiniu Cloud Object Storage (Kodo)](https://www.qiniu.com/en/products/kodo) is an S3-compatible object storage service from Qiniu, designed for managing data at scale.
+
+To configure access to Qiniu Kodo, follow the steps below:
+
+1. Run `rclone config` and select `n` for a new remote.
+
+```
+rclone config
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+```
+
+2. Give the name of the configuration. For example, name it 'qiniu'.
+
+```
+name> qiniu
+```
+
+3. Select `s3` storage.
+
+```
+Choose a number from below, or type in your own value
+ 1 / 1Fichier
+ \ (fichier)
+ 2 / Akamai NetStorage
+ \ (netstorage)
+ 3 / Alias for an existing remote
+ \ (alias)
+ 4 / Amazon Drive
+ \ (amazon cloud drive)
+ 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi
+ \ (s3)
+[snip]
+Storage> s3
+```
+
+4. Select `Qiniu` provider.
+```
+Choose a number from below, or type in your own value
+1 / Amazon Web Services (AWS) S3
+ \ "AWS"
+[snip]
+22 / Qiniu Object Storage (Kodo)
+ \ (Qiniu)
+[snip]
+provider> Qiniu
+```
+
+5. Enter your SecretId and SecretKey of Qiniu Kodo.
+
+```
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Enter a boolean value (true or false). Press Enter for the default ("false").
+Choose a number from below, or type in your own value
+ 1 / Enter AWS credentials in the next step
+ \ "false"
+ 2 / Get AWS credentials from the environment (env vars or IAM)
+ \ "true"
+env_auth> 1
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a string value. Press Enter for the default ("").
+access_key_id> AKIDxxxxxxxxxx
+AWS Secret Access Key (password)
+Leave blank for anonymous access or runtime credentials.
+Enter a string value. Press Enter for the default ("").
+secret_access_key> xxxxxxxxxxx
+```
+
+6. Select the region and endpoint for Qiniu Kodo. Each endpoint serves a specific region.
+
+```
+ / The default endpoint - a good choice if you are unsure.
+ 1 | East China Region 1.
+ | Needs location constraint cn-east-1.
+ \ (cn-east-1)
+ / East China Region 2.
+ 2 | Needs location constraint cn-east-2.
+ \ (cn-east-2)
+ / North China Region 1.
+ 3 | Needs location constraint cn-north-1.
+ \ (cn-north-1)
+ / South China Region 1.
+ 4 | Needs location constraint cn-south-1.
+ \ (cn-south-1)
+ / North America Region.
+ 5 | Needs location constraint us-north-1.
+ \ (us-north-1)
+ / Southeast Asia Region 1.
+ 6 | Needs location constraint ap-southeast-1.
+ \ (ap-southeast-1)
+ / Northeast Asia Region 1.
+ 7 | Needs location constraint ap-northeast-1.
+ \ (ap-northeast-1)
+[snip]
+region> 1
+
+Option endpoint.
+Endpoint for Qiniu Object Storage.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / East China Endpoint 1
+ \ (s3-cn-east-1.qiniucs.com)
+ 2 / East China Endpoint 2
+ \ (s3-cn-east-2.qiniucs.com)
+ 3 / North China Endpoint 1
+ \ (s3-cn-north-1.qiniucs.com)
+ 4 / South China Endpoint 1
+ \ (s3-cn-south-1.qiniucs.com)
+ 5 / North America Endpoint 1
+ \ (s3-us-north-1.qiniucs.com)
+ 6 / Southeast Asia Endpoint 1
+ \ (s3-ap-southeast-1.qiniucs.com)
+ 7 / Northeast Asia Endpoint 1
+ \ (s3-ap-northeast-1.qiniucs.com)
+endpoint> 1
+
+Option location_constraint.
+Location constraint - must be set to match the Region.
+Used when creating buckets only.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / East China Region 1
+ \ (cn-east-1)
+ 2 / East China Region 2
+ \ (cn-east-2)
+ 3 / North China Region 1
+ \ (cn-north-1)
+ 4 / South China Region 1
+ \ (cn-south-1)
+ 5 / North America Region 1
+ \ (us-north-1)
+ 6 / Southeast Asia Region 1
+ \ (ap-southeast-1)
+ 7 / Northeast Asia Region 1
+ \ (ap-northeast-1)
+location_constraint> 1
+```
+
+7. Choose the ACL and storage class.
+
+```
+Note that this ACL is applied when server-side copying objects as S3
+doesn't copy the ACL from the source but rather writes a fresh one.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+ / Owner gets FULL_CONTROL.
+ 2 | The AllUsers group gets READ access.
+ \ (public-read)
+[snip]
+acl> 2
+The storage class to use when storing new objects in Qiniu Kodo.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ 1 / Standard storage class
+ \ (STANDARD)
+ 2 / Infrequent access storage mode
+ \ (LINE)
+ 3 / Archive storage mode
+ \ (GLACIER)
+ 4 / Deep archive storage mode
+ \ (DEEP_ARCHIVE)
+[snip]
+storage_class> 1
+Edit advanced config? (y/n)
+y) Yes
+n) No (default)
+y/n> n
+Remote config
+--------------------
+[qiniu]
+- type: s3
+- provider: Qiniu
+- access_key_id: xxx
+- secret_access_key: xxx
+- region: cn-east-1
+- endpoint: s3-cn-east-1.qiniucs.com
+- location_constraint: cn-east-1
+- acl: public-read
+- storage_class: STANDARD
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name Type
+==== ====
+qiniu s3
+```
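+
+Once configured, the remote can be used like any other S3 remote, for example
+(the bucket name is a placeholder):
+
+```
+rclone mkdir qiniu:my-bucket
+rclone copy /path/to/files qiniu:my-bucket
+rclone ls qiniu:my-bucket
+```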
+
### RackCorp {#RackCorp}
[RackCorp Object Storage](https://www.rackcorp.com/storage/s3storage) is an S3 compatible object storage platform from your friendly cloud provider RackCorp.
@@ -24318,7 +25217,7 @@ If you intend to use the wrapped remote both directly for keeping
unencrypted content, as well as through a crypt remote for encrypted
content, it is recommended to point the crypt remote to a separate
directory within the wrapped remote. If you use a bucket-based storage
-system (e.g. Swift, S3, Google Compute Storage, B2, Hubic) it is generally
+system (e.g. Swift, S3, Google Compute Storage, B2) it is generally
advisable to wrap the crypt remote around a specific bucket (`s3:bucket`).
If wrapping around the entire root of the storage (`s3:`), and use the
optional file name encryption, rclone will encrypt the bucket name.
@@ -24334,7 +25233,7 @@ the password configured for an existing crypt remote means you will no longer
able to decrypt any of the previously encrypted content. The only possibility
is to re-upload everything via a crypt remote configured with your new password.
-Depending on the size of your data, your bandwith, storage quota etc, there are
+Depending on the size of your data, your bandwidth, storage quota etc, there are
different approaches you can take:
- If you have everything in a different location, for example on your local system,
you could remove all of the prior encrypted files, change the password for your
@@ -24347,7 +25246,7 @@ effectively decrypting everything on the fly using the old password and
re-encrypting using the new password. When done, delete the original crypt
remote directory and finally the rclone crypt configuration with the old password.
All data will be streamed from the storage system and back, so you will
-get half the bandwith and be charged twice if you have upload and download quota
+get half the bandwidth and be charged twice if you have upload and download quota
on the storage system.
**Note**: A security problem related to the random password generator
@@ -24660,7 +25559,7 @@ How to encode the encrypted filename to text string.
This option could help with shortening the encrypted filename. The
suitable option would depend on the way your remote count the filename
-length and if it's case sensitve.
+length and if it's case sensitive.
Properties:
@@ -24988,7 +25887,7 @@ Generally -1 (default, equivalent to 5) is recommended.
Levels 1 to 9 increase compression at the cost of speed. Going past 6
generally offers very little return.
-Level -2 uses Huffmann encoding only. Only use if you know what you
+Level -2 uses Huffman encoding only. Only use if you know what you
are doing.
Level 0 turns off compression.
@@ -25136,7 +26035,7 @@ This would produce something like this:
[AllDrives]
type = combine
- remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
+ upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
If you then add that config to your config file (find it with `rclone
config file`) then you can access all the shared drives in one place
@@ -25587,7 +26486,7 @@ Properties:
#### --dropbox-batch-commit-timeout
-Max time to wait for a batch to finish comitting
+Max time to wait for a batch to finish committing
Properties:
@@ -25672,7 +26571,7 @@ through a global file system.
## Configuration
The initial setup for the Enterprise File Fabric backend involves
-getting a token from the the Enterprise File Fabric which you need to
+getting a token from the Enterprise File Fabric which you need to
do in your browser. `rclone config` walks you through it.
Here is an example of how to make a remote called `remote`. First run:
@@ -25954,8 +26853,7 @@ To create an FTP configuration named `remote`, run
Rclone config guides you through an interactive setup process. A minimal
rclone FTP remote definition only requires host, username and password.
-For an anonymous FTP server, use `anonymous` as username and your email
-address as password.
+For an anonymous FTP server, see [below](#anonymous-ftp).
```
No remotes found, make a new one?
@@ -26032,11 +26930,33 @@ excess files in the directory.
rclone sync -i /home/local/directory remote:directory
-### Example without a config file ###
+### Anonymous FTP
- rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=`rclone obscure dummy`
+When connecting to an FTP server that allows anonymous login, you can use the
+special "anonymous" username. Traditionally, this user account accepts any
+string as a password, although it is common to use either the password
+"anonymous" or "guest". Some servers require the use of a valid e-mail
+address as password.
-### Implicit TLS ###
+Using [on-the-fly](#backend-path-to-dir) or
+[connection string](https://rclone.org/docs/#connection-strings) remotes makes it easy to access
+such servers, without requiring any configuration in advance. The following
+are examples of that:
+
+ rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy)
+ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy):
+
+The above examples work in Linux shells and in PowerShell, but not Windows
+Command Prompt. They execute the [rclone obscure](https://rclone.org/commands/rclone_obscure/)
+command to create a password string in the format required by the
+[pass](#ftp-pass) option. The following examples are exactly the same, except they use
+an already obscured string representation of the same password "dummy", and
+therefore work even in Windows Command Prompt:
+
+ rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM
+ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM:
+
+### Implicit TLS
Rclone FTP supports implicit FTP over TLS servers (FTPS). This has to
be enabled in the FTP backend config for the remote, or with
@@ -26048,7 +26968,7 @@ can be set with [`--ftp-port`](#ftp-port).
In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
the following characters are also replaced:
-File names cannot end with the following characters. Repacement is
+File names cannot end with the following characters. Replacement is
limited to the last character in a file name:
| Character | Value | Replacement |
@@ -26158,6 +27078,20 @@ Here are the Advanced options specific to ftp (FTP).
Maximum number of FTP simultaneous connections, 0 for unlimited.
+Note that setting this is very likely to cause deadlocks so it should
+be used with care.
+
+If you are doing a sync or copy then make sure concurrency is one more
+than the sum of `--transfers` and `--checkers`.
+
+If you use `--check-first` then it just needs to be one more than the
+maximum of `--checkers` and `--transfers`.
+
+So for `concurrency 3` you'd use `--checkers 2 --transfers 2
+--check-first` or `--checkers 1 --transfers 1`.
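+
+For example, a sketch of a sync that obeys these rules with concurrency 3
+(paths are placeholders):
+
+    rclone sync /path/to/files remote:dir --ftp-concurrency 3 --checkers 2 --transfers 2 --check-first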
+
+
+
Properties:
- Config: concurrency
@@ -26220,6 +27154,17 @@ Properties:
- Type: bool
- Default: false
+#### --ftp-force-list-hidden
+
+Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD.
+
+Properties:
+
+- Config: force_list_hidden
+- Env Var: RCLONE_FTP_FORCE_LIST_HIDDEN
+- Type: bool
+- Default: false
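+
+For example, to include hidden files and directories in a listing (a minimal
+sketch; the remote name is a placeholder):
+
+    rclone lsf --ftp-force-list-hidden remote:path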
+
#### --ftp-idle-timeout
Max time before closing idle connections.
@@ -26976,7 +27921,7 @@ Properties:
If set this will decompress gzip encoded objects.
It is possible to upload objects to GCS with "Content-Encoding: gzip"
-set. Normally rclone will download these files files as compressed objects.
+set. Normally rclone will download these files as compressed objects.
If this flag is set then rclone will decompress these files with
"Content-Encoding: gzip" as they are received. This means that rclone
@@ -26990,6 +27935,19 @@ Properties:
- Type: bool
- Default: false
+#### --gcs-endpoint
+
+Endpoint for the service.
+
+Leave blank normally.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_GCS_ENDPOINT
+- Type: string
+- Required: false
+
#### --gcs-encoding
The encoding for the backend.
@@ -28343,10 +29301,10 @@ drives found and a combined drive.
[AllDrives]
type = combine
- remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
+ upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
Adding this to the rclone config file will cause those team drives to
-be accessible with the aliases shown. Any illegal charactes will be
+be accessible with the aliases shown. Any illegal characters will be
substituted with "_" and duplicate names will have numbers suffixed.
It will also add a remote called AllDrives which shows all the shared
drives combined into one directory tree.
@@ -29716,7 +30674,7 @@ the process is very similar to the process of initial setup exemplified before.
HiDrive allows modification times to be set on objects accurate to 1 second.
HiDrive supports [its own hash type](https://static.hidrive.com/dev/0001)
-which is used to verify the integrety of file contents after successful transfers.
+which is used to verify the integrity of file contents after successful transfers.
### Restricted filename characters
@@ -30252,240 +31210,6 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
-# Hubic
-
-Paths are specified as `remote:path`
-
-Paths are specified as `remote:container` (or `remote:` for the `lsd`
-command.) You may put subdirectories in too, e.g. `remote:container/path/to/dir`.
-
-## Configuration
-
-The initial setup for Hubic involves getting a token from Hubic which
-you need to do in your browser. `rclone config` walks you through it.
-
-Here is an example of how to make a remote called `remote`. First run:
-
- rclone config
-
-This will guide you through an interactive setup process:
-
-```
-n) New remote
-s) Set configuration password
-n/s> n
-name> remote
-Type of storage to configure.
-Choose a number from below, or type in your own value
-[snip]
-XX / Hubic
- \ "hubic"
-[snip]
-Storage> hubic
-Hubic Client Id - leave blank normally.
-client_id>
-Hubic Client Secret - leave blank normally.
-client_secret>
-Remote config
-Use auto config?
- * Say Y if not sure
- * Say N if you are working on a remote or headless machine
-y) Yes
-n) No
-y/n> y
-If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
-Log in and authorize rclone for access
-Waiting for code...
-Got code
---------------------
-[remote]
-client_id =
-client_secret =
-token = {"access_token":"XXXXXX"}
---------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-```
-
-See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
-machine with no Internet browser available.
-
-Note that rclone runs a webserver on your local machine to collect the
-token as returned from Hubic. This only runs from the moment it opens
-your browser to the moment you get back the verification code. This
-is on `http://127.0.0.1:53682/` and this it may require you to unblock
-it temporarily if you are running a host firewall.
-
-Once configured you can then use `rclone` like this,
-
-List containers in the top level of your Hubic
-
- rclone lsd remote:
-
-List all the files in your Hubic
-
- rclone ls remote:
-
-To copy a local directory to an Hubic directory called backup
-
- rclone copy /home/source remote:backup
-
-If you want the directory to be visible in the official *Hubic
-browser*, you need to copy your files to the `default` directory
-
- rclone copy /home/source remote:default/backup
-
-### --fast-list ###
-
-This remote supports `--fast-list` which allows you to use fewer
-transactions in exchange for more memory. See the [rclone
-docs](https://rclone.org/docs/#fast-list) for more details.
-
-### Modified time ###
-
-The modified time is stored as metadata on the object as
-`X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
-ns.
-
-This is a de facto standard (used in the official python-swiftclient
-amongst others) for storing the modification time for an object.
-
-Note that Hubic wraps the Swift backend, so most of the properties of
-are the same.
-
-
-### Standard options
-
-Here are the Standard options specific to hubic (Hubic).
-
-#### --hubic-client-id
-
-OAuth Client Id.
-
-Leave blank normally.
-
-Properties:
-
-- Config: client_id
-- Env Var: RCLONE_HUBIC_CLIENT_ID
-- Type: string
-- Required: false
-
-#### --hubic-client-secret
-
-OAuth Client Secret.
-
-Leave blank normally.
-
-Properties:
-
-- Config: client_secret
-- Env Var: RCLONE_HUBIC_CLIENT_SECRET
-- Type: string
-- Required: false
-
-### Advanced options
-
-Here are the Advanced options specific to hubic (Hubic).
-
-#### --hubic-token
-
-OAuth Access Token as a JSON blob.
-
-Properties:
-
-- Config: token
-- Env Var: RCLONE_HUBIC_TOKEN
-- Type: string
-- Required: false
-
-#### --hubic-auth-url
-
-Auth server URL.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: auth_url
-- Env Var: RCLONE_HUBIC_AUTH_URL
-- Type: string
-- Required: false
-
-#### --hubic-token-url
-
-Token server url.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: token_url
-- Env Var: RCLONE_HUBIC_TOKEN_URL
-- Type: string
-- Required: false
-
-#### --hubic-chunk-size
-
-Above this size files will be chunked into a _segments container.
-
-Above this size files will be chunked into a _segments container. The
-default for this is 5 GiB which is its maximum value.
-
-Properties:
-
-- Config: chunk_size
-- Env Var: RCLONE_HUBIC_CHUNK_SIZE
-- Type: SizeSuffix
-- Default: 5Gi
-
-#### --hubic-no-chunk
-
-Don't chunk files during streaming upload.
-
-When doing streaming uploads (e.g. using rcat or mount) setting this
-flag will cause the swift backend to not upload chunked files.
-
-This will limit the maximum upload size to 5 GiB. However non chunked
-files are easier to deal with and have an MD5SUM.
-
-Rclone will still chunk files bigger than chunk_size when doing normal
-copy operations.
-
-Properties:
-
-- Config: no_chunk
-- Env Var: RCLONE_HUBIC_NO_CHUNK
-- Type: bool
-- Default: false
-
-#### --hubic-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_HUBIC_ENCODING
-- Type: MultiEncoder
-- Default: Slash,InvalidUtf8
-
-
-
-## Limitations
-
-This uses the normal OpenStack Swift mechanism to refresh the Swift
-API credentials and ignores the expires field returned by the Hubic
-API.
-
-The Swift API doesn't return a correct MD5SUM for segmented files
-(Dynamic or Static Large Objects) so rclone won't check or use the
-MD5SUM for these.
-
# Internet Archive
The Internet Archive backend utilizes Items on [archive.org](https://archive.org/)
@@ -30495,11 +31219,10 @@ Refer to [IAS3 API documentation](https://archive.org/services/docs/api/ias3.htm
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:item/path/to/dir`.
-Once you have made a remote (see the provider specific section above)
-you can use it like this:
-
Unlike S3, listing all the items you have uploaded is not supported.
+Once you have made a remote, you can use it like this:
+
Make a new item
rclone mkdir remote:item
@@ -30536,6 +31259,7 @@ The following are reserved by Internet Archive:
- `format`
- `old_version`
- `viruscheck`
+- `summation`
Trying to set values to these keys is ignored with a warning.
Only setting `mtime` is an exception. Doing so make it the identical behavior as setting ModTime.
@@ -30741,19 +31465,20 @@ Here are the possible system metadata items for the internetarchive backend.
| Name | Help | Type | Example | Read Only |
|------|------|------|---------|-----------|
-| crc32 | CRC32 calculated by Internet Archive | string | 01234567 | N |
-| format | Name of format identified by Internet Archive | string | Comma-Separated Values | N |
-| md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | N |
-| mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
-| name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | N |
-| old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | N |
+| crc32 | CRC32 calculated by Internet Archive | string | 01234567 | **Y** |
+| format | Name of format identified by Internet Archive | string | Comma-Separated Values | **Y** |
+| md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | **Y** |
+| mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | **Y** |
+| name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | **Y** |
+| old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | **Y** |
| rclone-ia-mtime | Time of last modification, managed by Internet Archive | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| rclone-mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| rclone-update-track | Random value used by Rclone for tracking changes inside Internet Archive | string | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | N |
-| sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | N |
-| size | File size in bytes | decimal number | 123456 | N |
-| source | The source of the file | string | original | N |
-| viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | N |
+| sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | **Y** |
+| size | File size in bytes | decimal number | 123456 | **Y** |
+| source | The source of the file | string | original | **Y** |
+| summation | Check https://forum.rclone.org/t/31922 for how it is used | string | md5 | **Y** |
+| viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | **Y** |
See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
@@ -30774,7 +31499,7 @@ it also provides white-label solutions to different companies, such as:
* Elgiganten Sweden (cloud.elgiganten.se)
* Elgiganten Denmark (cloud.elgiganten.dk)
* Giganti Cloud (cloud.gigantti.fi)
- * ELKO Clouud (cloud.elko.is)
+ * ELKO Cloud (cloud.elko.is)
Most of the white-label versions are supported by this backend, although may require different
authentication setup - described below.
@@ -30790,10 +31515,33 @@ and you have to choose the correct one when setting up the remote.
### Standard authentication
-To configure Jottacloud you will need to generate a personal security token in the Jottacloud web interface.
-You will the option to do in your [account security settings](https://www.jottacloud.com/web/secure)
-(for whitelabel version you need to find this page in its web interface).
-Note that the web interface may refer to this token as a JottaCli token.
+The standard authentication method used by the official service (jottacloud.com), as well as
+some of the whitelabel services, requires you to generate a single-use personal login token
+from the account security settings in the service's web interface. Log in to your account,
+go to "Settings" and then "Security", or use the direct link presented to you by rclone when
+configuring the remote: <https://www.jottacloud.com/web/secure>. Scroll down to the section
+"Personal login token", and click the "Generate" button. Note that if you are using a
+whitelabel service you probably can't use the direct link; you need to find the same page in
+their dedicated web interface, and it may be in a different location than described above.
+
+To access your account from multiple instances of rclone, you need to configure each of them
+with a separate personal login token. E.g. you create a Jottacloud remote with rclone in one
+location, and copy the configuration file to a second location where you also want to run
+rclone and access the same remote. Then you need to replace the token for one of them, using
+the [config reconnect](https://rclone.org/commands/rclone_config_reconnect/) command, which
+requires you to generate a new personal login token and supply it as input. If you do not
+do this, the token may easily end up being invalidated, resulting in both instances failing
+with an error message along the lines of:
+
+ oauth2: cannot fetch token: 400 Bad Request
+ Response: {"error":"invalid_grant","error_description":"Stale token"}
+
+When this happens, you need to replace the token as described above to be able to use your
+remote again.
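+
+For example, to replace the token of a remote named `remote` (a placeholder),
+run the following and paste in a freshly generated personal login token when
+prompted:
+
+    rclone config reconnect remote: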
+
+All personal login tokens you have taken into use will be listed in the web interface under
+"My logged in devices", and from the right side of that list you can click the "X" button to
+revoke individual tokens.
### Legacy authentication
@@ -31947,7 +32695,7 @@ Use `rclone dedupe` to fix duplicated files.
#### Object not found
If you are connecting to your Mega remote for the first time,
-to test access and syncronisation, you may receive an error such as
+to test access and synchronization, you may receive an error such as
```
Failed to create file system for "my-mega-remote:":
@@ -32323,7 +33071,7 @@ Individual symlink files on the remote can be used with the commands like "cat"
With NetStorage, directories can exist in one of two forms:
1. **Explicit Directory**. This is an actual, physical directory that you have created in a storage group.
-2. **Implicit Directory**. This refers to a directory within a path that has not been physically created. For example, during upload of a file, non-existent subdirectories can be specified in the target path. NetStorage creates these as "implicit." While the directories aren't physically created, they exist implicitly and the noted path is connected with the uploaded file.
+2. **Implicit Directory**. This refers to a directory within a path that has not been physically created. For example, during upload of a file, nonexistent subdirectories can be specified in the target path. NetStorage creates these as "implicit." While the directories aren't physically created, they exist implicitly and the noted path is connected with the uploaded file.
Rclone will intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly issue the mkdir command for each directory in the uploading path. This will help with the interoperability with the other Akamai services such as SFTP and the Content Management Shell (CMShell). Rclone will not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly.
@@ -33100,7 +33848,7 @@ rclone uses a default Client ID when talking to OneDrive, unless a custom `clien
The default Client ID and Key are shared by all rclone users when performing requests.
You may choose to create and use your own Client ID, in case the default one does not work well for you.
-For example, you might see throtting.
+For example, you might see throttling.
#### Creating Client ID for OneDrive Personal
@@ -33128,7 +33876,7 @@ A common error is that the publisher of the App is not verified.
You may try to [verify you account](https://docs.microsoft.com/en-us/azure/active-directory/develop/publisher-verification-overview), or try to limit the App to your organization only, as shown below.
1. Make sure to create the App with your business account.
-2. Follow the steps above to create an App. However, we need a different account type here: `Accounts in this organizational directory only (*** - Single tenant)`. Note that you can also change the account type aftering creating the App.
+2. Follow the steps above to create an App. However, we need a different account type here: `Accounts in this organizational directory only (*** - Single tenant)`. Note that you can also change the account type after creating the App.
3. Find the [tenant ID](https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant) of your organization.
4. In the rclone config, set `auth_url` to `https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize`.
5. In the rclone config, set `token_url` to `https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token`.
@@ -33239,7 +33987,7 @@ Properties:
- "de"
- Microsoft Cloud Germany
- "cn"
- - Azure and Office 365 operated by 21Vianet in China
+ - Azure and Office 365 operated by Vnet Group in China
### Advanced options
@@ -33544,7 +34292,7 @@ An official document about the limitations for different types of OneDrive can b
## Versions
Every change in a file OneDrive causes the service to create a new
-version of the the file. This counts against a users quota. For
+version of the file. This counts against a users quota. For
example changing the modification time of a file creates a second
version, so the file apparently uses twice the space.
@@ -33865,6 +34613,535 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
+# Oracle Object Storage
+
+[Oracle Object Storage Overview](https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm)
+
+[Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/)
+
+Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
+command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
+
+## Configuration
+
+Here is an example of making an oracle object storage configuration. `rclone config` walks you
+through it.
+
+Here is an example of how to make a remote called `remote`. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+
+```
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> n
+
+Enter name for new remote.
+name> remote
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Oracle Cloud Infrastructure Object Storage
+ \ (oracleobjectstorage)
+Storage> oracleobjectstorage
+
+Option provider.
+Choose your Auth Provider
+Choose a number from below, or type in your own string value.
+Press Enter for the default (env_auth).
+ 1 / automatically pickup the credentials from runtime(env), first one to provide auth wins
+ \ (env_auth)
+ / use an OCI user and an API key for authentication.
+ 2 | you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
+ | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
+ \ (user_principal_auth)
+ / use instance principals to authorize an instance to make API calls.
+ 3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
+ | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
+ \ (instance_principal_auth)
+ 4 / use resource principals to make API calls
+ \ (resource_principal_auth)
+ 5 / no credentials needed, this is typically for reading public buckets
+ \ (no_auth)
+provider> 2
+
+Option namespace.
+Object storage namespace
+Enter a value.
+namespace> idbamagbg734
+
+Option compartment.
+Object storage compartment OCID
+Enter a value.
+compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
+
+Option region.
+Object storage Region
+Enter a value.
+region> us-ashburn-1
+
+Option endpoint.
+Endpoint for Object storage API.
+Leave blank to use the default endpoint for the region.
+Enter a value. Press Enter to leave empty.
+endpoint>
+
+Option config_file.
+Path to OCI config file
+Choose a number from below, or type in your own string value.
+Press Enter for the default (~/.oci/config).
+ 1 / oci configuration file location
+ \ (~/.oci/config)
+config_file> /etc/oci/dev.conf
+
+Option config_profile.
+Profile name inside OCI config file
+Choose a number from below, or type in your own string value.
+Press Enter for the default (Default).
+ 1 / Use the default profile
+ \ (Default)
+config_profile> Test
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: oracleobjectstorage
+- namespace: idbamagbg734
+- compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
+- region: us-ashburn-1
+- provider: user_principal_auth
+- config_file: /etc/oci/dev.conf
+- config_profile: Test
+Keep this "remote" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+See all buckets
+
+ rclone lsd remote:
+
+Create a new bucket
+
+ rclone mkdir remote:bucket
+
+List the contents of a bucket
+
+ rclone ls remote:bucket
+ rclone ls remote:bucket --max-depth 1
+
+### Modified time
+
+The modified time is stored as metadata on the object as
+`opc-meta-mtime` as floating point since the epoch, accurate to 1 ns.
+
+If the modification time needs to be updated rclone will attempt to perform a server-side
+copy to update the modification time, provided the object can be copied in a single part.
+If the object is larger than 5 GiB, it will be re-uploaded rather than copied.
+
+Note that reading this from the object takes an additional `HEAD` request as the metadata
+isn't returned in object listings.
+
+### Multipart uploads
+
+rclone supports multipart uploads with OOS which means that it can
+upload files bigger than 5 GiB.
+
+Note that files uploaded *both* with multipart upload *and* through
+crypt remotes do not have MD5 sums.
+
+rclone switches from single part uploads to multipart uploads at the
+point specified by `--oos-upload-cutoff`. This can be a maximum of 5 GiB
+and a minimum of 0 (i.e. always upload multipart files).
+
+The chunk sizes used in the multipart upload are specified by
+`--oos-chunk-size` and the number of chunks uploaded concurrently is
+specified by `--oos-upload-concurrency`.
+
+Multipart uploads will use `--transfers` * `--oos-upload-concurrency` *
+`--oos-chunk-size` extra memory. Single part uploads do not use extra
+memory.
+
+Single part transfers can be faster than multipart transfers or slower
+depending on your latency to OOS - the more latency, the more likely
+single part transfers will be faster.
+
+Increasing `--oos-upload-concurrency` will increase throughput (8 would
+be a sensible value) and increasing `--oos-chunk-size` also increases
+throughput (16M would be sensible). Increasing either of these will
+use more memory. The default values are high enough to gain most of
+the possible performance without using too much memory.
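+
+For example, a sketch of an upload tuned along these lines for a fast link
+(paths are placeholders):
+
+    rclone copy /path/to/large-files remote:bucket --oos-upload-concurrency 8 --oos-chunk-size 16M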
+
+
+### Standard options
+
+Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
+
+#### --oos-provider
+
+Choose your Auth Provider
+
+Properties:
+
+- Config: provider
+- Env Var: RCLONE_OOS_PROVIDER
+- Type: string
+- Default: "env_auth"
+- Examples:
+ - "env_auth"
+ - automatically pickup the credentials from runtime(env), first one to provide auth wins
+ - "user_principal_auth"
+ - use an OCI user and an API key for authentication.
+ - you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
+ - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
+ - "instance_principal_auth"
+ - use instance principals to authorize an instance to make API calls.
+ - each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
+ - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
+ - "resource_principal_auth"
+ - use resource principals to make API calls
+ - "no_auth"
+ - no credentials needed, this is typically for reading public buckets
+
+#### --oos-namespace
+
+Object storage namespace
+
+Properties:
+
+- Config: namespace
+- Env Var: RCLONE_OOS_NAMESPACE
+- Type: string
+- Required: true
+
+#### --oos-compartment
+
+Object storage compartment OCID
+
+Properties:
+
+- Config: compartment
+- Env Var: RCLONE_OOS_COMPARTMENT
+- Provider: !no_auth
+- Type: string
+- Required: true
+
+#### --oos-region
+
+Object storage Region
+
+Properties:
+
+- Config: region
+- Env Var: RCLONE_OOS_REGION
+- Type: string
+- Required: true
+
+#### --oos-endpoint
+
+Endpoint for Object storage API.
+
+Leave blank to use the default endpoint for the region.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_OOS_ENDPOINT
+- Type: string
+- Required: false
+
+#### --oos-config-file
+
+Path to OCI config file
+
+Properties:
+
+- Config: config_file
+- Env Var: RCLONE_OOS_CONFIG_FILE
+- Provider: user_principal_auth
+- Type: string
+- Default: "~/.oci/config"
+- Examples:
+ - "~/.oci/config"
+ - oci configuration file location
+
+#### --oos-config-profile
+
+Profile name inside the oci config file
+
+Properties:
+
+- Config: config_profile
+- Env Var: RCLONE_OOS_CONFIG_PROFILE
+- Provider: user_principal_auth
+- Type: string
+- Default: "Default"
+- Examples:
+ - "Default"
+ - Use the default profile
+
+### Advanced options
+
+Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
+
+#### --oos-upload-cutoff
+
+Cutoff for switching to chunked upload.
+
+Any files larger than this will be uploaded in chunks of chunk_size.
+The minimum is 0 and the maximum is 5 GiB.
+
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_OOS_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 200Mi
+
+#### --oos-chunk-size
+
+Chunk size to use for uploading.
+
+When uploading files larger than upload_cutoff or files with unknown
+size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google
+photos or google docs) they will be uploaded as multipart uploads
+using this chunk size.
+
+Note that "upload_concurrency" chunks of this size are buffered
+in memory per transfer.
+
+If you are transferring large files over high-speed links and you have
+enough memory, then increasing this will speed up the transfers.
+
+Rclone will automatically increase the chunk size when uploading a
+large file of known size to stay below the 10,000 chunks limit.
+
+Files of unknown size are uploaded with the configured
+chunk_size. Since the default chunk size is 5 MiB and there can be at
+most 10,000 chunks, this means that by default the maximum size of
+a file you can stream upload is 48 GiB. If you wish to stream upload
+larger files then you will need to increase chunk_size.
+
+Increasing the chunk size decreases the accuracy of the progress
+statistics displayed with "-P" flag.
+
+
+Properties:
+
+- Config: chunk_size
+- Env Var: RCLONE_OOS_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5Mi
+
+#### --oos-upload-concurrency
+
+Concurrency for multipart uploads.
+
+This is the number of chunks of the same file that are uploaded
+concurrently.
+
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 10
+
+#### --oos-copy-cutoff
+
+Cutoff for switching to multipart copy.
+
+Any files larger than this that need to be server-side copied will be
+copied in chunks of this size.
+
+The minimum is 0 and the maximum is 5 GiB.
+
+Properties:
+
+- Config: copy_cutoff
+- Env Var: RCLONE_OOS_COPY_CUTOFF
+- Type: SizeSuffix
+- Default: 4.656Gi
+
+#### --oos-copy-timeout
+
+Timeout for copy.
+
+Copy is an asynchronous operation; specify the timeout to wait for the copy to succeed.
+
+
+Properties:
+
+- Config: copy_timeout
+- Env Var: RCLONE_OOS_COPY_TIMEOUT
+- Type: Duration
+- Default: 1m0s
+
+#### --oos-disable-checksum
+
+Don't store MD5 checksum with object metadata.
+
+Normally rclone will calculate the MD5 checksum of the input before
+uploading it so it can add it to metadata on the object. This is great
+for data integrity checking but can cause long delays for large files
+to start uploading.
+
+Properties:
+
+- Config: disable_checksum
+- Env Var: RCLONE_OOS_DISABLE_CHECKSUM
+- Type: bool
+- Default: false
+
+#### --oos-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_OOS_ENCODING
+- Type: MultiEncoder
+- Default: Slash,InvalidUtf8,Dot
+
+#### --oos-leave-parts-on-error
+
+If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on object storage for manual recovery.
+
+It should be set to true for resuming uploads across different sessions.
+
+WARNING: Storing parts of an incomplete multipart upload counts towards space usage on object storage and will add
+additional costs if not cleaned up.
+
+
+Properties:
+
+- Config: leave_parts_on_error
+- Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR
+- Type: bool
+- Default: false
+
+#### --oos-no-check-bucket
+
+If set, don't attempt to check the bucket exists or create it.
+
+This can be useful when trying to minimise the number of transactions
+rclone does if you know the bucket exists already.
+
+It can also be needed if the user you are using does not have bucket
+creation permissions.
+
+
+Properties:
+
+- Config: no_check_bucket
+- Env Var: RCLONE_OOS_NO_CHECK_BUCKET
+- Type: bool
+- Default: false
+
+## Backend commands
+
+Here are the commands specific to the oracleobjectstorage backend.
+
+Run them with
+
+ rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the [backend](https://rclone.org/commands/rclone_backend/) command for more
+info on how to pass options and arguments.
+
+These can be run on a running backend using the rc command
+[backend/command](https://rclone.org/rc/#backend-command).
+
+### rename
+
+change the name of an object
+
+ rclone backend rename remote: [options] [+]
+
+This command can be used to rename an object.
+
+Usage Examples:
+
+ rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
+
+
+### list-multipart-uploads
+
+List the unfinished multipart uploads
+
+ rclone backend list-multipart-uploads remote: [options] [+]
+
+This command lists the unfinished multipart uploads in JSON format.
+
+ rclone backend list-multipart-uploads oos:bucket/path/to/object
+
+It returns a dictionary of buckets with values as lists of unfinished
+multipart uploads.
+
+You can call it with no bucket in which case it lists all buckets, with
+a bucket or with a bucket and path.
+
+ {
+ "test-bucket": [
+ {
+ "namespace": "test-namespace",
+ "bucket": "test-bucket",
+ "object": "600m.bin",
+ "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
+ "timeCreated": "2022-07-29T06:21:16.595Z",
+ "storageTier": "Standard"
+ }
+        ]
+    }
+
+
+### cleanup
+
+Remove unfinished multipart uploads.
+
+ rclone backend cleanup remote: [options] [+]
+
+This command removes unfinished multipart uploads of age greater than
+max-age which defaults to 24 hours.
+
+Note that you can use `--interactive`/`-i` or `--dry-run` with this command to see what it
+would do.
+
+ rclone backend cleanup oos:bucket/path/to/object
+ rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
+
+Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
+
+
+Options:
+
+- "max-age": Max age of upload to delete
+
+
+
# QingStor
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
@@ -34392,7 +35669,7 @@ Commercial implementations of that being:
* [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/)
* [Memset Memstore](https://www.memset.com/cloud/storage/)
* [OVH Object Storage](https://www.ovh.co.uk/public-cloud/storage/object-storage/)
- * [Oracle Cloud Storage](https://cloud.oracle.com/object-storage/buckets)
+ * [Oracle Cloud Storage](https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html)
* [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html)
Paths are specified as `remote:container` (or `remote:` for the `lsd`
@@ -34915,6 +36192,38 @@ Properties:
- Type: bool
- Default: false
+#### --swift-no-large-objects
+
+Disable support for static and dynamic large objects
+
+Swift cannot transparently store files bigger than 5 GiB. There are
+two schemes for doing that, static or dynamic large objects, and the
+API does not allow rclone to determine whether a file is a static or
+dynamic large object without doing a HEAD on the object. Since these
+need to be treated differently, this means rclone has to issue HEAD
+requests for objects for example when reading checksums.
+
+When `no_large_objects` is set, rclone will assume that there are no
+static or dynamic large objects stored. This means it can stop doing
+the extra HEAD calls which in turn increases performance greatly
+especially when doing a swift to swift transfer with `--checksum` set.
+
+Setting this option implies `no_chunk` and also that no files will be
+uploaded in chunks, so files bigger than 5 GiB will just fail on
+upload.
+
+If you set this option and there *are* static or dynamic large objects,
+then this will give incorrect hashes for them. Downloads will succeed,
+but other operations such as Remove and Copy will fail.
+
+
+Properties:
+
+- Config: no_large_objects
+- Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS
+- Type: bool
+- Default: false
+
#### --swift-encoding
The encoding for the backend.
@@ -35938,7 +37247,7 @@ SSH installations.
Paths are specified as `remote:path`. If the path does not begin with
a `/` it is relative to the home directory of the user. An empty path
`remote:` refers to the user's home directory. For example, `rclone lsd remote:`
-would list the home directory of the user cofigured in the rclone remote config
+would list the home directory of the user configured in the rclone remote config
(`i.e /home/sftpuser`). However, `rclone lsd remote:/` would list the root
directory for remote machine (i.e. `/`)
@@ -36181,7 +37490,7 @@ can also run a SSH server, which is a port of OpenSSH (see official
[installation guide](https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse)). On a Windows server the shell handling is different: Although it can also
be set up to use a Unix type shell, e.g. Cygwin bash, the default is to
use Windows Command Prompt (cmd.exe), and PowerShell is a recommended
-alternative. All of these have bahave differently, which rclone must handle.
+alternative. All of these behave differently, which rclone must handle.
Rclone tries to auto-detect what type of shell is used on the server,
first time you access the SFTP remote. If a remote shell session is
@@ -36213,7 +37522,7 @@ a new sftp remote is accessed. If you configure a sftp remote
without a config file, e.g. an [on the fly](https://rclone.org/docs/#backend-path-to-dir])
remote, rclone will have nowhere to store the result, and it
will re-run the command on every access. To avoid this you should
-explicitely set the `shell_type` option to the correct value,
+explicitly set the `shell_type` option to the correct value,
or to `none` if you want to prevent rclone from executing any
remote shell commands.
@@ -36221,7 +37530,7 @@ It is also important to note that, since the shell type decides
how quoting and escaping of file paths used as command-line arguments
are performed, configuring the wrong shell type may leave you exposed
to command injection exploits. Make sure to confirm the auto-detected
-shell type, or explicitely set the shell type you know is correct,
+shell type, or explicitly set the shell type you know is correct,
or disable shell access until you know.
### Checksum
@@ -36706,19 +38015,24 @@ Properties:
Upload and download chunk size.
-This controls the maximum packet size used in the SFTP protocol. The
-RFC limits this to 32768 bytes (32k), however a lot of servers
-support larger sizes and setting it larger will increase transfer
-speed dramatically on high latency links.
+This controls the maximum size of payload in SFTP protocol packets.
+The RFC limits this to 32768 bytes (32k), which is the default. However,
+a lot of servers support larger sizes, typically limited to a maximum
+total packet size of 256k, and setting it larger will increase transfer
+speed dramatically on high latency links. This includes OpenSSH, and,
+for example, using the value of 255k works well, leaving plenty of room
+for overhead while still being within a total packet size of 256k.
-Only use a setting higher than 32k if you always connect to the same
-server or after sufficiently broad testing.
-
-For example using the value of 252k with OpenSSH works well with its
-maximum packet size of 256k.
-
-If you get the error "failed to send packet header: EOF" when copying
-a large file, try lowering this number.
+Make sure to test thoroughly before using a value higher than 32k,
+and only use it if you always connect to the same server or after
+sufficiently broad testing. If you get errors such as
+"failed to send packet payload: EOF", lots of "connection lost",
+or "corrupted on transfer", when copying a larger file, try lowering
+the value. The server run by [rclone serve sftp](https://rclone.org/commands/rclone_serve_sftp/)
+sends packets with standard 32k maximum payload so you must not
+set a different chunk_size when downloading files, but it accepts
+packets up to the 256k total size, so for uploads the chunk_size
+can be set as for the OpenSSH example above.
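+
+For example, a sketch of an upload using the larger payload size against an
+OpenSSH server, as described above (paths are placeholders):
+
+    rclone copy /path/to/files remote:dir --sftp-chunk-size 255k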
Properties:
@@ -36808,6 +38122,233 @@ Hetzner Storage Boxes are supported through the SFTP backend on port 23.
See [Hetzner's documentation for details](https://docs.hetzner.com/robot/storage-box/access/access-ssh-rsync-borg#rclone)
+# SMB
+
+SMB is [a communication protocol to share files over network](https://en.wikipedia.org/wiki/Server_Message_Block).
+
+This relies on [go-smb2 library](https://github.com/hirochachacha/go-smb2/) for communication with SMB protocol.
+
+Paths are specified as `remote:sharename` (or `remote:` for the `lsd`
+command.) You may put subdirectories in too, e.g. `remote:sharename/path/to/dir`.
+
+## Notes
+
+The first path segment must be the name of the share, which you entered when you started to share on Windows. On smbd, it's the section title in the `smb.conf` file (usually in `/etc/samba/`).
+You can find shares by querying the root if you're unsure (e.g. `rclone lsd remote:`).
+
+You can't access shared printers from rclone.
+
+You can't use anonymous access for logging in. You have to use the `guest` user with an empty password instead.
+
+The rclone client tries to avoid 8.3 names when uploading files by encoding trailing spaces and periods.
+
+Alternatively, [the local backend](https://rclone.org/local/#paths-on-windows) on Windows can access SMB servers using UNC paths, by `\\server\share`. This doesn't apply to non-Windows OSes, such as Linux and macOS.
+
+## Configuration
+
+Here is an example of making a SMB configuration.
+
+First run
+
+ rclone config
+
+This will guide you through an interactive setup process.
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+XX / SMB / CIFS
+ \ (smb)
+Storage> smb
+
+Option host.
+Samba hostname to connect to.
+E.g. "example.com".
+Enter a value.
+host> localhost
+
+Option user.
+Samba username.
+Enter a string value. Press Enter for the default (lesmi).
+user> guest
+
+Option port.
+Samba port number.
+Enter a signed integer. Press Enter for the default (445).
+port>
+
+Option pass.
+Samba password.
+Choose an alternative below. Press Enter for the default (n).
+y) Yes, type in my own password
+g) Generate random password
+n) No, leave this optional password blank (default)
+y/g/n> g
+Password strength in bits.
+64 is just about memorable
+128 is secure
+1024 is the maximum
+Bits> 64
+Your password is: XXXX
+Use this password? Please note that an obscured version of this
+password (and not the password itself) will be stored under your
+configuration file, so keep this generated password in a safe place.
+y) Yes (default)
+n) No
+y/n> y
+
+Option domain.
+Domain name for NTLM authentication.
+Enter a string value. Press Enter for the default (WORKGROUP).
+domain>
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: smb
+- host: localhost
+- user: guest
+- pass: *** ENCRYPTED ***
+Keep this "remote" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
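+Alternatively, an equivalent remote can be created non-interactively with
+`rclone config create`. This is a sketch; the remote name and host are
+hypothetical, and the password can be added separately (see `--smb-pass`
+below):
+
+    rclone config create remote smb host=localhost user=guest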
+
+### Standard options
+
+Here are the Standard options specific to smb (SMB / CIFS).
+
+#### --smb-host
+
+SMB server hostname to connect to.
+
+E.g. "example.com".
+
+Properties:
+
+- Config: host
+- Env Var: RCLONE_SMB_HOST
+- Type: string
+- Required: true
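+
+Options like these can also be supplied on the fly with a connection
+string, without creating a named remote. A sketch with a hypothetical
+host and user:
+
+    rclone lsd :smb,host=example.com,user=guest: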
+
+#### --smb-user
+
+SMB username.
+
+Properties:
+
+- Config: user
+- Env Var: RCLONE_SMB_USER
+- Type: string
+- Default: "$USER"
+
+#### --smb-port
+
+SMB port number.
+
+Properties:
+
+- Config: port
+- Env Var: RCLONE_SMB_PORT
+- Type: int
+- Default: 445
+
+#### --smb-pass
+
+SMB password.
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+Properties:
+
+- Config: pass
+- Env Var: RCLONE_SMB_PASS
+- Type: string
+- Required: false
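+
+If you prefer not to store the password in the config file, you can
+supply it (obscured) via the environment variable instead. A sketch,
+with a hypothetical password and remote name:
+
+    export RCLONE_SMB_PASS="$(rclone obscure 'MyPassword')"
+    rclone lsd remote: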
+
+#### --smb-domain
+
+Domain name for NTLM authentication.
+
+Properties:
+
+- Config: domain
+- Env Var: RCLONE_SMB_DOMAIN
+- Type: string
+- Default: "WORKGROUP"
+
+### Advanced options
+
+Here are the Advanced options specific to smb (SMB / CIFS).
+
+#### --smb-idle-timeout
+
+Max time before closing idle connections.
+
+If no connections have been returned to the connection pool in the time
+given, rclone will empty the connection pool.
+
+Set to 0 to keep connections indefinitely.
+
+
+Properties:
+
+- Config: idle_timeout
+- Env Var: RCLONE_SMB_IDLE_TIMEOUT
+- Type: Duration
+- Default: 1m0s
+
+#### --smb-hide-special-share
+
+Hide special shares (e.g. print$) which users aren't supposed to access.
+
+Properties:
+
+- Config: hide_special_share
+- Env Var: RCLONE_SMB_HIDE_SPECIAL_SHARE
+- Type: bool
+- Default: true
+
+#### --smb-case-insensitive
+
+Whether the server is configured to be case-insensitive.
+
+Always true on Windows shares.
+
+Properties:
+
+- Config: case_insensitive
+- Env Var: RCLONE_SMB_CASE_INSENSITIVE
+- Type: bool
+- Default: true
+
+#### --smb-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_SMB_ENCODING
+- Type: MultiEncoder
+- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot
+
+
+
# Storj
[Storj](https://storj.io) is an encrypted, secure, and
@@ -39413,6 +40954,134 @@ Options:
# Changelog
+## v1.60.0 - 2022-10-21
+
+[See commits](https://github.com/rclone/rclone/compare/v1.59.0...v1.60.0)
+
+* New backends
+ * [Oracle object storage](https://rclone.org/oracleobjectstorage/) (Manoj Ghosh)
+ * [SMB](https://rclone.org/smb/) / CIFS (Windows file sharing) (Lesmiscore)
+ * New S3 providers
+ * [IONOS Cloud Storage](https://rclone.org/s3/#ionos) (Dmitry Deniskin)
+ * [Qiniu KODO](https://rclone.org/s3/#qiniu) (Bachue Zhou)
+* New Features
+ * build
+ * Update to go1.19 and make go1.17 the minimum required version (Nick Craig-Wood)
+ * Install.sh: fix arm-v7 download (Ole Frost)
+ * fs: Warn the user when using an existing remote name without a colon (Nick Craig-Wood)
+ * httplib: Add `--xxx-min-tls-version` option to select minimum TLS version for HTTP servers (Robert Newson)
+ * librclone: Add PHP bindings and test program (Jordi Gonzalez Muñoz)
+ * operations
+ * Add `--server-side-across-configs` global flag for any backend (Nick Craig-Wood)
+ * Optimise `--copy-dest` and `--compare-dest` (Nick Craig-Wood)
+ * rc: add `job/stopgroup` to stop group (Evan Spensley)
+ * serve dlna
+ * Add `--announce-interval` to control SSDP Announce Interval (YanceyChiew)
+        * Add `--interface` to specify SSDP interface names (Simon Bos)
+ * Add support for more external subtitles (YanceyChiew)
+ * Add verification of addresses (YanceyChiew)
+ * sync: Optimise `--copy-dest` and `--compare-dest` (Nick Craig-Wood)
+ * doc updates (albertony, Alexander Knorr, anonion, João Henrique Franco, Josh Soref, Lorenzo Milesi, Marco Molteni, Mark Trolley, Ole Frost, partev, Ryan Morey, Tom Mombourquette, YFdyh000)
+* Bug Fixes
+ * filter
+ * Fix incorrect filtering with `UseFilter` context flag and wrapping backends (Nick Craig-Wood)
+ * Make sure we check `--files-from` when looking for a single file (Nick Craig-Wood)
+ * rc
+ * Fix `mount/listmounts` not returning the full Fs entered in `mount/mount` (Tom Mombourquette)
+ * Handle external unmount when mounting (Isaac Aymerich)
+ * Validate Daemon option is not set when mounting a volume via RC (Isaac Aymerich)
+ * sync: Update docs and error messages to reflect fixes to overlap checks (Nick Naumann)
+* VFS
+ * Reduce memory use by embedding `sync.Cond` (Nick Craig-Wood)
+ * Reduce memory usage by re-ordering commonly used structures (Nick Craig-Wood)
+ * Fix excess CPU used by VFS cache cleaner looping (Nick Craig-Wood)
+* Local
+ * Obey file filters in listing to fix errors on excluded files (Nick Craig-Wood)
+ * Fix "Failed to read metadata: function not implemented" on old Linux kernels (Nick Craig-Wood)
+* Compress
+ * Fix crash due to nil metadata (Nick Craig-Wood)
+ * Fix error handling to not use or return nil objects (Nick Craig-Wood)
+* Drive
+ * Make `--drive-stop-on-upload-limit` obey quota exceeded error (Steve Kowalik)
+* FTP
+ * Add `--ftp-force-list-hidden` option to show hidden items (Øyvind Heddeland Instefjord)
+ * Fix hang when using ExplicitTLS to certain servers. (Nick Craig-Wood)
+* Google Cloud Storage
+ * Add `--gcs-endpoint` flag and config parameter (Nick Craig-Wood)
+* Hubic
+ * Remove backend as service has now shut down (Nick Craig-Wood)
+* Onedrive
+ * Rename Onedrive(cn) 21Vianet to Vnet Group (Yen Hu)
+ * Disable change notify in China region since it is not supported (Nick Craig-Wood)
+* S3
+ * Implement `--s3-versions` flag to show old versions of objects if enabled (Nick Craig-Wood)
+ * Implement `--s3-version-at` flag to show versions of objects at a particular time (Nick Craig-Wood)
+ * Implement `backend versioning` command to get/set bucket versioning (Nick Craig-Wood)
+ * Implement `Purge` to purge versions and `backend cleanup-hidden` (Nick Craig-Wood)
+ * Add `--s3-decompress` flag to decompress gzip-encoded files (Nick Craig-Wood)
+ * Add `--s3-sse-customer-key-base64` to supply keys with binary data (Richard Bateman)
+    * Try to keep the maximum precision in ModTime with `--use-server-modtime` (Nick Craig-Wood)
+ * Drop binary metadata with an ERROR message as it can't be stored (Nick Craig-Wood)
+ * Add `--s3-no-system-metadata` to suppress read and write of system metadata (Nick Craig-Wood)
+* SFTP
+ * Fix directory creation races (Lesmiscore)
+* Swift
+ * Add `--swift-no-large-objects` to reduce HEAD requests (Nick Craig-Wood)
+* Union
+ * Propagate SlowHash feature to fix hasher interaction (Lesmiscore)
+
+## v1.59.2 - 2022-09-15
+
+[See commits](https://github.com/rclone/rclone/compare/v1.59.1...v1.59.2)
+
+* Bug Fixes
+ * config: Move locking to fix fatal error: concurrent map read and map write (Nick Craig-Wood)
+* Local
+ * Disable xattr support if the filesystems indicates it is not supported (Nick Craig-Wood)
+* Azure Blob
+ * Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+* B2
+ * Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+* S3
+ * Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+
+## v1.59.1 - 2022-08-08
+
+[See commits](https://github.com/rclone/rclone/compare/v1.59.0...v1.59.1)
+
+* Bug Fixes
+ * accounting: Fix panic in core/stats-reset with unknown group (Nick Craig-Wood)
+ * build: Fix android build after GitHub actions change (Nick Craig-Wood)
+ * dlna: Fix SOAP action header parsing (Joram Schrijver)
+ * docs: Fix links to mount command from install docs (albertony)
+ * dropbox: Fix ChangeNotify was unable to decrypt errors (Nick Craig-Wood)
+ * fs: Fix parsing of times and durations of the form "YYYY-MM-DD HH:MM:SS" (Nick Craig-Wood)
+ * serve sftp: Fix checksum detection (Nick Craig-Wood)
+ * sync: Add accidentally missed filter-sensitivity to --backup-dir option (Nick Naumann)
+* Combine
+ * Fix docs showing `remote=` instead of `upstreams=` (Nick Craig-Wood)
+ * Throw error if duplicate directory name is specified (Nick Craig-Wood)
+ * Fix errors with backends shutting down while in use (Nick Craig-Wood)
+* Dropbox
+ * Fix hang on quit with --dropbox-batch-mode off (Nick Craig-Wood)
+ * Fix infinite loop on uploading a corrupted file (Nick Craig-Wood)
+* Internetarchive
+    * Ignore checksums for files using a different method (Lesmiscore)
+ * Handle hash symbol in the middle of filename (Lesmiscore)
+* Jottacloud
+ * Fix working with whitelabel Elgiganten Cloud
+ * Do not store username in config when using standard auth (albertony)
+* Mega
+ * Fix nil pointer exception when bad node received (Nick Craig-Wood)
+* S3
+ * Fix --s3-no-head panic: reflect: Elem of invalid type s3.PutObjectInput (Nick Craig-Wood)
+* SFTP
+ * Fix issue with WS_FTP by working around failing RealPath (albertony)
+* Union
+ * Fix duplicated files when using directories with leading / (Nick Craig-Wood)
+ * Fix multiple files being uploaded when roots don't exist (Nick Craig-Wood)
+ * Fix panic due to misalignment of struct field in 32 bit architectures (r-ricci)
+
## v1.59.0 - 2022-07-09
[See commits](https://github.com/rclone/rclone/compare/v1.58.0...v1.59.0)
@@ -39645,7 +41314,7 @@ Options:
* build
* Fix ARM architecture version in .deb packages after nfpm change (Nick Craig-Wood)
* Hard fork `github.com/jlaffaye/ftp` to fix `go get github.com/rclone/rclone` (Nick Craig-Wood)
- * oauthutil: Fix crash when webrowser requests `/robots.txt` (Nick Craig-Wood)
+ * oauthutil: Fix crash when webbrowser requests `/robots.txt` (Nick Craig-Wood)
* operations: Fix goroutine leak in case of copy retry (Ankur Gupta)
* rc:
* Fix `operations/publiclink` default for `expires` parameter (Nick Craig-Wood)
@@ -39731,7 +41400,7 @@ Options:
* Add rclone to list of supported `md5sum`/`sha1sum` commands to look for (albertony)
* Refactor so we only have one way of running remote commands (Nick Craig-Wood)
* Fix timeout on hashing large files by sending keepalives (Nick Craig-Wood)
- * Fix unecessary seeking when uploading and downloading files (Nick Craig-Wood)
+ * Fix unnecessary seeking when uploading and downloading files (Nick Craig-Wood)
* Update docs on how to create `known_hosts` file (Nick Craig-Wood)
* Storj
* Rename tardigrade backend to storj backend (Nick Craig-Wood)
@@ -40332,8 +42001,8 @@ Options:
* Add sort by average size in directory (Adam Plánský)
* Add toggle option for average s3ize in directory - key 'a' (Adam Plánský)
* Add empty folder flag into ncdu browser (Adam Plánský)
- * Add `!` (errror) and `.` (unreadable) file flags to go with `e` (empty) (Nick Craig-Wood)
- * obscure: Make `rclone osbcure -` ignore newline at end of line (Nick Craig-Wood)
+ * Add `!` (error) and `.` (unreadable) file flags to go with `e` (empty) (Nick Craig-Wood)
+ * obscure: Make `rclone obscure -` ignore newline at end of line (Nick Craig-Wood)
* operations
* Add logs when need to upload files to set mod times (Nick Craig-Wood)
* Move and copy log name of the destination object in verbose (Adam Plánský)
@@ -40358,7 +42027,7 @@ Options:
* Make the error count match up in the log message (Nick Craig-Wood)
* move: Fix data loss when source and destination are the same object (Nick Craig-Wood)
* operations
- * Fix `--cutof-mode` hard not cutting off immediately (Nick Craig-Wood)
+ * Fix `--cutoff-mode` hard not cutting off immediately (Nick Craig-Wood)
* Fix `--immutable` error message (Nick Craig-Wood)
* sync
* Fix `--cutoff-mode` soft & cautious so it doesn't end the transfer early (Nick Craig-Wood)
@@ -40406,7 +42075,7 @@ Options:
* Fixed crash on an empty file name (lluuaapp)
* Box
* Fix NewObject for files that differ in case (Nick Craig-Wood)
- * Fix finding directories in a case insentive way (Nick Craig-Wood)
+ * Fix finding directories in a case insensitive way (Nick Craig-Wood)
* Chunker
* Skip long local hashing, hash in-transit (fixes) (Ivan Andreev)
* Set Features ReadMimeType to false as Object.MimeType not supported (Nick Craig-Wood)
@@ -40487,7 +42156,7 @@ Options:
* Implement `--sftp-use-fstat` for unusual SFTP servers (Nick Craig-Wood)
* Sugarsync
* Fix NewObject for files that differ in case (Nick Craig-Wood)
- * Fix finding directories in a case insentive way (Nick Craig-Wood)
+ * Fix finding directories in a case insensitive way (Nick Craig-Wood)
* Swift
* Fix deletion of parts of Static Large Object (SLO) (Nguyễn Hữu Luân)
* Ensure partially uploaded large files are uploaded unless `--swift-leave-parts-on-error` (Nguyễn Hữu Luân)
@@ -40561,7 +42230,7 @@ Options:
[See commits](https://github.com/rclone/rclone/compare/v1.53.1...v1.53.2)
* Bug Fixes
- * acounting
+ * accounting
* Fix incorrect speed and transferTime in core/stats (Nick Craig-Wood)
* Stabilize display order of transfers on Windows (Nick Craig-Wood)
* operations
@@ -41531,7 +43200,7 @@ all the docs and Edward Barker for helping re-write the front page.
* rcat: Fix slowdown on systems with multiple hashes (Nick Craig-Wood)
* rcd: Fix permissions problems on cache directory with web gui download (Nick Craig-Wood)
* Mount
- * Default `--daemon-timout` to 15 minutes on macOS and FreeBSD (Nick Craig-Wood)
+ * Default `--daemon-timeout` to 15 minutes on macOS and FreeBSD (Nick Craig-Wood)
* Update docs to show mounting from root OK for bucket-based (Nick Craig-Wood)
* Remove nonseekable flag from write files (Nick Craig-Wood)
* VFS
@@ -41839,7 +43508,7 @@ all the docs and Edward Barker for helping re-write the front page.
* Update google cloud storage endpoints (weetmuts)
* HTTP
* Add an example with username and password which is supported but wasn't documented (Nick Craig-Wood)
- * Fix backend with `--files-from` and non-existent files (Nick Craig-Wood)
+ * Fix backend with `--files-from` and nonexistent files (Nick Craig-Wood)
* Hubic
* Make error message more informative if authentication fails (Nick Craig-Wood)
* Jottacloud
@@ -42323,7 +43992,7 @@ Point release to fix hubic and azureblob backends.
* FTP
* Work around strange response from box FTP server
* More workarounds for FTP servers to fix mkParentDir error
- * Fix no error on listing non-existent directory
+ * Fix no error on listing nonexistent directory
* Google Cloud Storage
* Add service_account_credentials (Matt Holt)
* Detect bucket presence by listing it - minimises permissions needed
@@ -42396,7 +44065,7 @@ Point release to fix hubic and azureblob backends.
* Add .deb and .rpm packages as part of the build
* Make a beta release for all branches on the main repo (but not pull requests)
* Bug Fixes
- * config: fixes errors on non existing config by loading config file only on first access
+ * config: fixes errors on nonexistent config by loading config file only on first access
* config: retry saving the config after failure (Mateusz)
* sync: when using `--backup-dir` don't delete files if we can't set their modtime
* this fixes odd behaviour with Dropbox and `--backup-dir`
@@ -42931,7 +44600,7 @@ Point release to fix hubic and azureblob backends.
* Update B2 docs with Data usage, and Crypt section - thanks Tomasz Mazur
* S3
* Command line and config file support for
- * Setting/overriding ACL - thanks Radek Senfeld
+ * Setting/overriding ACL - thanks Radek Šenfeld
* Setting storage class - thanks Asko Tamm
* Drive
* Make exponential backoff work exactly as per Google specification
@@ -44364,6 +46033,30 @@ put them back in again.` >}}
* Lorenzo Maiorfi
* Claudio Maradonna
* Ovidiu Victor Tatar
+ * Evan Spensley
+ * Yen Hu <61753151+0x59656e@users.noreply.github.com>
+ * Steve Kowalik
+ * Jordi Gonzalez Muñoz
+ * Joram Schrijver
+ * Mark Trolley
+ * João Henrique Franco
+ * anonion
+ * Ryan Morey <4590343+rmorey@users.noreply.github.com>
+ * Simon Bos
+  * YFdyh000
+  * Josh Soref <2119212+jsoref@users.noreply.github.com>
+ * Øyvind Heddeland Instefjord
+ * Dmitry Deniskin <110819396+ddeniskin@users.noreply.github.com>
+ * Alexander Knorr <106825+opexxx@users.noreply.github.com>
+ * Richard Bateman
+ * Dimitri Papadopoulos Orfanos <3234522+DimitriPapadopoulos@users.noreply.github.com>
+ * Lorenzo Milesi
+ * Isaac Aymerich
+ * YanceyChiew <35898533+YanceyChiew@users.noreply.github.com>
+ * Manoj Ghosh
+ * Bachue Zhou
+ * Tom Mombourquette
+ * Robert Newson
# Contact the rclone project #
diff --git a/MANUAL.txt b/MANUAL.txt
index d8828a065..6cdd6ee0a 100644
--- a/MANUAL.txt
+++ b/MANUAL.txt
@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
-Jul 09, 2022
+Oct 21, 2022
Rclone syncs your files to cloud storage
@@ -101,7 +101,6 @@ S3, that work out of the box.)
- China Mobile Ecloud Elastic Object Storage (EOS)
- Arvan Cloud Object Storage (AOS)
- Citrix ShareFile
-- C14
- Cloudflare R2
- DigitalOcean Spaces
- Digi Storage
@@ -116,11 +115,11 @@ S3, that work out of the box.)
- Hetzner Storage Box
- HiDrive
- HTTP
-- Hubic
- Internet Archive
- Jottacloud
- IBM COS S3
- IDrive e2
+- IONOS Cloud
- Koofr
- Mail.ru Cloud
- Memset Memstore
@@ -133,12 +132,14 @@ S3, that work out of the box.)
- OVH
- OpenDrive
- OpenStack Swift
-- Oracle Cloud Storage
+- Oracle Cloud Storage Swift
+- Oracle Object Storage
- ownCloud
- pCloud
- premiumize.me
- put.io
- QingStor
+- Qiniu Cloud Object Storage (Kodo)
- Rackspace Cloud Files
- rsync.net
- Scaleway
@@ -147,6 +148,7 @@ S3, that work out of the box.)
- SeaweedFS
- SFTP
- Sia
+- SMB / CIFS
- StackPath
- Storj
- SugarSync
@@ -190,7 +192,7 @@ Quickstart
- Run rclone config to setup. See rclone config docs for more details.
- Optionally configure automatic execution.
-See below for some expanded Linux / macOS instructions.
+See below for some expanded Linux / macOS / Windows instructions.
See the usage docs for how to use rclone, or run rclone -h.
@@ -210,7 +212,9 @@ For beta installation, run:
Note that this script checks the version of rclone installed first and
won't re-download if not needed.
-Linux installation from precompiled binary
+Linux installation
+
+Precompiled binary
Fetch and unpack
@@ -234,7 +238,9 @@ Run rclone config to setup. See rclone config docs for more details.
rclone config
-macOS installation with brew
+macOS installation
+
+Installation with brew
brew install rclone
@@ -242,7 +248,12 @@ NOTE: This version of rclone will not support mount any more (see
#5373). If mounting is wanted on macOS, either install a precompiled
binary or enable the relevant option when installing from source.
-macOS installation from precompiled binary, using curl
+Note that this is a third party installer not controlled by the rclone
+developers so it may be out of date. Its current version is as below.
+
+[Homebrew package]
+
+Precompiled binary, using curl
To avoid problems with macOS gatekeeper enforcing the binary to be
signed and notarized it is enough to download with curl.
@@ -271,7 +282,7 @@ Run rclone config to setup. See rclone config docs for more details.
rclone config
-macOS installation from precompiled binary, using a web browser
+Precompiled binary, using a web browser
When downloading a binary with a web browser, the browser will set the
macOS gatekeeper quarantine attribute. Starting from Catalina, when
@@ -284,11 +295,69 @@ The simplest fix is to run
xattr -d com.apple.quarantine rclone
-Install with docker
+Windows installation
-The rclone maintains a docker image for rclone. These images are
-autobuilt by docker hub from the rclone source based on a minimal Alpine
-linux image.
+Precompiled binary
+
+Fetch the correct binary for your processor type by clicking on these
+links. If not sure, use the first link.
+
+- Intel/AMD - 64 Bit
+- Intel/AMD - 32 Bit
+- ARM - 64 Bit
+
+Open this file in the Explorer and extract rclone.exe. Rclone is a
+portable executable so you can place it wherever is convenient.
+
+Open a CMD window (or powershell) and run the binary. Note that rclone
+does not launch a GUI by default, it runs in the CMD Window.
+
+- Run rclone.exe config to setup. See rclone config docs for more
+ details.
+- Optionally configure automatic execution.
+
+If you are planning to use the rclone mount feature then you will need
+to install the third party utility WinFsp also.
+
+Chocolatey package manager
+
+Make sure you have Choco installed
+
+ choco search rclone
+ choco install rclone
+
+This will install rclone on your Windows machine. If you are planning to
+use rclone mount then
+
+ choco install winfsp
+
+will install that too.
+
+Note that this is a third party installer not controlled by the rclone
+developers so it may be out of date. Its current version is as below.
+
+[Chocolatey package]
+
+Package manager installation
+
+Many Linux, Windows, macOS and other OS distributions package and
+distribute rclone.
+
+The distributed versions of rclone are often quite out of date and for
+this reason we recommend one of the other installation methods if
+possible.
+
+You can get an idea of how up to date or not your OS distribution's
+package is here.
+
+[Packaging status]
+
+Docker installation
+
+The rclone developers maintain a docker image for rclone.
+
+These images are built as part of the release process based on a minimal
+Alpine Linux.
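+
+For example, to fetch the image and check the bundled rclone version:
+
+    docker pull rclone/rclone:latest
+    docker run --rm rclone/rclone:latest version
+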
The :latest tag will always point to the latest stable release. You can
use the :beta tag to get the latest build from master. You can also use
@@ -364,9 +433,9 @@ Here are some commands tested on an Ubuntu 18.04.3 host:
ls ~/data/mount
kill %1
-Install from source
+Source installation
-Make sure you have git and Go installed. Go version 1.16 or newer is
+Make sure you have git and Go installed. Go version 1.17 or newer is
required, latest release is recommended. You can get it from your
package manager, or download it from golang.org/dl. Then you can run the
following:
@@ -381,7 +450,7 @@ executable in the same folder. As an initial check you can now run
./rclone version (.\rclone version on Windows).
Note that on macOS and Windows the mount command will not be available
-unless you specify additional build tag cmount.
+unless you specify an additional build tag cmount.
go build -tags cmount
@@ -409,9 +478,10 @@ This is how the official rclone releases are built.
go build -trimpath -ldflags -s -tags cmount
Instead of executing the go build command directly, you can run it via
-the Makefile, which also sets version information and copies the
-resulting rclone executable into your GOPATH bin folder
-($(go env GOPATH)/bin, which corresponds to ~/go/bin/rclone by default).
+the Makefile. It changes the version number suffix from "-DEV" to
+"-beta" and appends commit details. It also copies the resulting rclone
+executable into your GOPATH bin folder ($(go env GOPATH)/bin, which
+corresponds to ~/go/bin/rclone by default).
make
@@ -419,7 +489,13 @@ To include mount command on macOS and Windows with Makefile build:
make GOTAGS=cmount
-As an alternative you can download the source, build and install rclone
+There are other make targets that can be used for more advanced builds,
+such as cross-compiling for all supported os/architectures, embedding
+icon and version info resources into windows executable, and packaging
+results into release artifacts. See Makefile and cross-compile.go for
+details.
+
+Another alternative is to download the source, build and install rclone
in one operation, as a regular Go package. The source will be stored
in the Go module cache, and the resulting executable will be in your
GOPATH bin folder ($(go env GOPATH)/bin, which corresponds to
@@ -435,7 +511,7 @@ don't work with the current version):
go get github.com/rclone/rclone
-Installation with Ansible
+Ansible installation
This can be done with Stefan Weichinger's ansible role.
@@ -518,7 +594,7 @@ NOTE: Remember that when rclone runs as the SYSTEM user, the user
profile that it sees will not be yours. This means that if you normally
run rclone with configuration file in the default location, to be able
to use the same configuration when running as the system user you must
-explicitely tell rclone where to find it with the --config option, or
+explicitly tell rclone where to find it with the --config option, or
else it will look in the system users profile path
(C:\Windows\System32\config\systemprofile). To test your command
manually from a Command Prompt, you can run it with the PsExec utility
@@ -583,7 +659,7 @@ Third-party service integration
To Windows service running any rclone command, the excellent third-party
utility NSSM, the "Non-Sucking Service Manager", can be used. It
-includes some advanced features such as adjusting process periority,
+includes some advanced features such as adjusting process priority,
defining process environment variables, redirect to file anything
written to stdout, and customized response to different exit codes, with
a GUI to configure everything from (although it can also be used from
@@ -662,7 +738,6 @@ See the following for detailed instructions for
- HDFS
- HiDrive
- HTTP
-- Hubic
- Internet Archive
- Jottacloud
- Koofr
@@ -673,6 +748,7 @@ See the following for detailed instructions for
- Microsoft OneDrive
- OpenStack Swift / Rackspace Cloudfiles / Memset Memstore
- OpenDrive
+- Oracle Object Storage
- Pcloud
- premiumize.me
- put.io
@@ -680,6 +756,7 @@ See the following for detailed instructions for
- Seafile
- SFTP
- Sia
+- SMB
- Storj
- SugarSync
- Union
@@ -857,6 +934,11 @@ extended explanation in the copy command if unsure.
If dest:path doesn't exist, it is created and the source:path contents
go there.
+It is not possible to sync overlapping remotes. However, you may exclude
+the destination from the sync with a filter rule or by putting an
+exclude-if-present file inside the destination directory and sync to a
+destination that is inside the source directory.
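+
+For example, a sketch of syncing a directory into a subdirectory of
+itself by excluding that subdirectory with a filter rule (paths are
+hypothetical):
+
+    rclone sync remote:current remote:current/.backup --exclude /.backup/**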
+
Note: Use the -P/--progress flag to view real-time transfer statistics
Note: Use the rclone dedupe command to deal with "Duplicate
@@ -1141,8 +1223,8 @@ recursion.
The other list commands lsd,lsf,lsjson do not recurse by default - use
-R to make them recurse.
-Listing a non-existent directory will produce an error except for
-remotes which can't have empty directories (e.g. s3, swift, or gcs - the
+Listing a nonexistent directory will produce an error except for remotes
+which can't have empty directories (e.g. s3, swift, or gcs - the
bucket-based remotes).
rclone ls remote:path [flags]
@@ -1203,8 +1285,8 @@ recursion.
The other list commands lsd,lsf,lsjson do not recurse by default - use
-R to make them recurse.
-Listing a non-existent directory will produce an error except for
-remotes which can't have empty directories (e.g. s3, swift, or gcs - the
+Listing a nonexistent directory will produce an error except for remotes
+which can't have empty directories (e.g. s3, swift, or gcs - the
bucket-based remotes).
rclone lsd remote:path [flags]
@@ -1257,8 +1339,8 @@ recursion.
The other list commands lsd,lsf,lsjson do not recurse by default - use
-R to make them recurse.
-Listing a non-existent directory will produce an error except for
-remotes which can't have empty directories (e.g. s3, swift, or gcs - the
+Listing a nonexistent directory will produce an error except for remotes
+which can't have empty directories (e.g. s3, swift, or gcs - the
bucket-based remotes).
rclone lsl remote:path [flags]
@@ -1293,8 +1375,8 @@ rclone hashsum MD5 remote:path.
This command can also hash data received on standard input (stdin), by
not passing a remote:path, or by passing a hyphen as remote:path when
-there is data to read (if not, the hypen will be treated literaly, as a
-relative path).
+there is data to read (if not, the hyphen will be treated literally, as
+a relative path).
rclone md5sum remote:path [flags]
@@ -1332,8 +1414,8 @@ rclone hashsum SHA1 remote:path.
This command can also hash data received on standard input (stdin), by
not passing a remote:path, or by passing a hyphen as remote:path when
-there is data to read (if not, the hypen will be treated literaly, as a
-relative path).
+there is data to read (if not, the hyphen will be treated literally, as
+a relative path).
This command can also hash data received on STDIN, if not passing a
remote:path.
@@ -1739,11 +1821,11 @@ SEE ALSO
rclone bisync
-Perform bidirectonal synchronization between two paths.
+Perform bidirectional synchronization between two paths.
Synopsis
-Perform bidirectonal synchronization between two paths.
+Perform bidirectional synchronization between two paths.
Bisync provides a bidirectional cloud sync solution in rclone. It
retains the Path1 and Path2 filesystem listings from the prior run. On
@@ -1928,7 +2010,7 @@ Linux:
macOS:
- rclone completion bash > /usr/local/etc/bash_completion.d/rclone
+ rclone completion bash > $(brew --prefix)/etc/bash_completion.d/rclone
You will need to start a new shell for this setup to take effect.
@@ -2020,6 +2102,10 @@ need to enable it. You can execute the following once:
echo "autoload -U compinit; compinit" >> ~/.zshrc
+To load completions in your current shell session:
+
+ source <(rclone completion zsh); compdef _rclone rclone
+
To load completions for every new session, execute once:
Linux:
@@ -2028,7 +2114,7 @@ Linux:
macOS:
- rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone
+ rclone completion zsh > $(brew --prefix)/share/zsh/site-functions/_rclone
You will need to start a new shell for this setup to take effect.
@@ -2084,7 +2170,7 @@ already passing obscured passwords then use --no-obscure. You can also
set obscured passwords using the rclone config password command.
The flag --non-interactive is for use by applications that wish to
-configure rclone themeselves, rather than using rclone's text based
+configure rclone themselves, rather than using rclone's text based
configuration questions. If this flag is set, and rclone needs to ask
the user a question, a JSON blob will be returned with the question in
it.
@@ -2421,7 +2507,7 @@ already passing obscured passwords then use --no-obscure. You can also
set obscured passwords using the rclone config password command.
The flag --non-interactive is for use by applications that wish to
-configure rclone themeselves, rather than using rclone's text based
+configure rclone themselves, rather than using rclone's text based
configuration questions. If this flag is set, and rclone needs to ask
the user a question, a JSON blob will be returned with the question in
it.
@@ -2914,8 +3000,8 @@ md5sum and sha1sum.
This command can also hash data received on standard input (stdin), by
not passing a remote:path, or by passing a hyphen as remote:path when
-there is data to read (if not, the hypen will be treated literaly, as a
-relative path).
+there is data to read (if not, the hyphen will be treated literally, as
+a relative path).
Run without a hash to see the list of all supported hashes, e.g.
@@ -3137,8 +3223,8 @@ recursion.
The other list commands lsd,lsf,lsjson do not recurse by default - use
-R to make them recurse.
-Listing a non-existent directory will produce an error except for
-remotes which can't have empty directories (e.g. s3, swift, or gcs - the
+Listing a nonexistent directory will produce an error except for remotes
+which can't have empty directories (e.g. s3, swift, or gcs - the
bucket-based remotes).
rclone lsf remote:path [flags]
@@ -3213,7 +3299,7 @@ If --files-only is not specified directories in addition to the files
will be returned.
If --metadata is set then an additional Metadata key will be returned.
-This will have metdata in rclone standard format as a JSON object.
+This will have metadata in rclone standard format as a JSON object.
if --stat is set then a single JSON blob will be returned about the item
pointed to. This will return an error if the item isn't found. However
@@ -3260,8 +3346,8 @@ recursion.
The other list commands lsd,lsf,lsjson do not recurse by default - use
-R to make them recurse.
-Listing a non-existent directory will produce an error except for
-remotes which can't have empty directories (e.g. s3, swift, or gcs - the
+Listing a nonexistent directory will produce an error except for remotes
+which can't have empty directories (e.g. s3, swift, or gcs - the
bucket-based remotes).
rclone lsjson remote:path [flags]
@@ -3383,10 +3469,10 @@ unexpected program errors, freezes or other issues, consider mounting as
a network drive instead.
When mounting as a fixed disk drive you can either mount to an unused
-drive letter, or to a path representing a non-existent subdirectory of
-an existing parent directory or drive. Using the special value * will
-tell rclone to automatically assign the next available drive letter,
-starting with Z: and moving backward. Examples:
+drive letter, or to a path representing a nonexistent subdirectory of an
+existing parent directory or drive. Using the special value * will tell
+rclone to automatically assign the next available drive letter, starting
+with Z: and moving backward. Examples:
rclone mount remote:path/to/files *
rclone mount remote:path/to/files X:
@@ -3416,7 +3502,7 @@ remote UNC path by net use etc, just like a normal network drive
mapping.
If you specify a full network share UNC path with --volname, this will
-implicitely set the --network-mode option, so the following two examples
+implicitly set the --network-mode option, so the following two examples
have same result:
rclone mount remote:path/to/files X: --network-mode
@@ -3426,7 +3512,7 @@ You may also specify the network share UNC path as the mountpoint
itself. Then rclone will automatically assign a drive letter, same as
with * and use that as mountpoint, and instead use the UNC path
specified as the volume name, as if it were specified with the --volname
-option. This will also implicitely set the --network-mode option. This
+option. This will also implicitly set the --network-mode option. This
means the following two examples have same result:
rclone mount remote:path/to/files \\cloud\remote
@@ -3462,7 +3548,7 @@ on each entry will be set according to options --dir-perms and
The default permissions corresponds to
--file-perms 0666 --dir-perms 0777, i.e. read and write permissions to
everyone. This means you will not be able to start any programs from the
-the mount. To be able to do that you must add execute permissions, e.g.
+mount. To be able to do that you must add execute permissions, e.g.
--file-perms 0777 --dir-perms 0777 to add it to everyone. If the program
needs to write files, chances are you will have to enable VFS File
Caching as well (see also limitations).
@@ -3532,10 +3618,9 @@ applications won't work with their files on an rclone mount without
--vfs-cache-mode writes or --vfs-cache-mode full. See the VFS File
Caching section for more info.
-The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2,
-Hubic) do not support the concept of empty directories, so empty
-directories will have a tendency to disappear once they fall out of the
-directory cache.
+The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2) do
+not support the concept of empty directories, so empty directories will
+have a tendency to disappear once they fall out of the directory cache.
When rclone mount is invoked on Unix with --daemon flag, the main rclone
program will wait for the background mount to become ready or until the
@@ -4135,7 +4220,7 @@ toggle the help on and off. The supported keys are:
q/ESC/^c to quit
Listed files/directories may be prefixed by a one-character flag, some
-of them combined with a description in brackes at end of line. These
+of them combined with a description in brackets at end of line. These
flags have the following meaning:
e means this is an empty directory, i.e. contains no files (but
@@ -4852,11 +4937,13 @@ only with caching.
Options
--addr string The ip:port or :port to bind the DLNA http server to (default ":7879")
+ --announce-interval duration The interval between SSDP announcements (default 12m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for dlna
+ --interface stringArray The interface to use for SSDP (repeat as necessary)
--log-trace Enable trace logging of SOAP traffic
--name string Name of DLNA server
--no-checksum Don't compare checksums on up/download
@@ -5827,6 +5914,9 @@ of that with the CA certificate. --key should be the PEM encoded private
key and --client-ca should be the PEM encoded client certificate
authority certificate.
+--min-tls-version is minimum TLS version that is acceptable. Valid
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
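+
+For example, a sketch that requires at least TLS 1.2, e.g. with rclone
+serve http (certificate file names are hypothetical):
+
+    rclone serve http remote: --cert server.pem --key server.key --min-tls-version tls1.2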
+
Template
--template allows a user to specify a custom markup template for HTTP
@@ -6237,6 +6327,7 @@ Options
--htpasswd string A htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
@@ -6465,6 +6556,9 @@ that with the CA certificate. --key should be the PEM encoded private
key and --client-ca should be the PEM encoded client certificate
authority certificate.
+--min-tls-version is minimum TLS version that is acceptable. Valid
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
+
rclone serve restic remote:path [flags]
Options
@@ -6479,6 +6573,7 @@ Options
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--pass string Password for authentication
--private-repos Users can only access their private repo
--realm string Realm for authentication (default "rclone")
@@ -6500,12 +6595,20 @@ Serve the remote over SFTP.
Synopsis
-Run a SFTP server to serve a remote over SFTP. This can be used with an
+Run an SFTP server to serve a remote over SFTP. This can be used with an
SFTP client or you can make a remote of type sftp to use with it.
You can use the filter flags (e.g. --include, --exclude) to control what
is served.
+The server will respond to a small number of shell commands, mainly
+md5sum, sha1sum and df, which enable it to provide support for checksums
+and the about feature when accessed from an sftp remote.
+
+Note that this server uses standard 32 KiB packet payload size, which
+means you must not configure the client to expect anything else, e.g.
+with the chunk_size option on an sftp remote.
+
The server will log errors. Use -v to see access logs.
--bwlimit will be respected for file transfers. Use --stats to control
@@ -6516,11 +6619,6 @@ You must provide some means of authentication, either with
--authorized-keys - the default is the same as ssh), an --auth-proxy, or
set the --no-auth flag for no authentication when logging in.
-Note that this also implements a small number of shell commands so that
-it can provide md5sum/sha1sum/df information for the rclone sftp
-backend. This means that is can support SHA1SUMs, MD5SUMs and the about
-command when paired with the rclone sftp backend.
-
If you don't supply a host --key then rclone will generate rsa, ecdsa
and ed25519 variants, and cache them for later use in rclone's cache
directory (see rclone help flags cache-dir) in the "serve-sftp"
@@ -6959,7 +7057,7 @@ Options
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
- --stdio Run an sftp server on run stdin/stdout
+ --stdio Run an sftp server on stdin/stdout
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
@@ -7113,6 +7211,9 @@ that with the CA certificate. --key should be the PEM encoded private
key and --client-ca should be the PEM encoded client certificate
authority certificate.
+--min-tls-version is minimum TLS version that is acceptable. Valid
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
+
VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -7522,6 +7623,7 @@ Options
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
@@ -8227,7 +8329,7 @@ backends can also store arbitrary user metadata.
Where possible the key names are standardized, so, for example, it is
possible to copy object metadata from s3 to azureblob for example and
-metadata will be translated apropriately.
+metadata will be translated appropriately.
Some backends have limits on the size of the metadata and rclone will
give errors on upload if they are exceeded.
@@ -8310,10 +8412,34 @@ also possible to specify --boolean=false or --boolean=true. Note that
--boolean false is not valid - this is parsed as --boolean and the false
is parsed as an extra command line argument for rclone.
-Options which use TIME use the go time parser. A duration string is a
-possibly signed sequence of decimal numbers, each with optional fraction
-and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units
-are "ns", "us" (or "µs"), "ms", "s", "m", "h".
+Time or duration options
+
+TIME or DURATION options can be specified as a duration string or a time
+string.
+
+A duration string is a possibly signed sequence of decimal numbers, each
+with optional fraction and a unit suffix, such as "300ms", "-1.5h" or
+"2h45m". Default units are seconds or the following abbreviations are
+valid:
+
+- ms - Milliseconds
+- s - Seconds
+- m - Minutes
+- h - Hours
+- d - Days
+- w - Weeks
+- M - Months
+- y - Years
+
+These can also be specified as an absolute time in the following
+formats:
+
+- RFC3339 - e.g. 2006-01-02T15:04:05Z or 2006-01-02T15:04:05+07:00
+- ISO8601 Date and time, local timezone - 2006-01-02T15:04:05
+- ISO8601 Date and time, local timezone - 2006-01-02 15:04:05
+- ISO8601 Date - 2006-01-02 (YYYY-MM-DD)
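+
+For example, to list files modified more than two weeks ago, and then
+files modified since the start of 2006-01-02 (the remote name is
+hypothetical):
+
+    rclone ls remote: --min-age 2w
+    rclone ls remote: --max-age 2006-01-02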
+
+Size options
Options which use SIZE use KiB (multiples of 1024 bytes) by default.
However, a suffix of B for Byte, K for KiB, M for MiB, G for GiB, T for
@@ -8332,7 +8458,8 @@ added) in DIR, then it will be overwritten.
The remote in use must support server-side move or copy and you must use
the same remote as the destination of the sync. The backup directory
-must not overlap the destination directory.
+must not overlap the destination directory without it being excluded by
+a filter rule.
For example
@@ -8365,7 +8492,7 @@ is bytes per second not bits per second. To use a single limit, specify
the desired bandwidth in KiB/s, or use a suffix B|K|M|G|T|P. The default
is 0 which means to not limit bandwidth.
-The upload and download bandwidth can be specified seperately, as
+The upload and download bandwidth can be specified separately, as
--bwlimit UP:DOWN, so
--bwlimit 10M:100k
@@ -9380,6 +9507,17 @@ This sets the interval between each retry specified by --retries
The default is 0. Use 0 to disable.
+--server-side-across-configs
+
+Allow server-side operations (e.g. copy or move) to work across
+different configurations.
+
+This can be useful if you wish to do a server-side copy or move between
+two remotes which use the same backend but are configured differently.
+
+Note that this isn't enabled by default because it isn't easy for rclone
+to tell if it will work between any two configurations.
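+
+For example, a sketch of a server-side copy between two differently
+configured remotes of the same backend (the remote names are
+hypothetical):
+
+    rclone copy --server-side-across-configs s3east:bucket/dir s3west:bucket/dir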
+
--size-only
Normally rclone will look at modification time and size of files to see
@@ -9567,13 +9705,21 @@ By default, rclone doesn't keep track of renamed files, so if you rename
a file locally then sync it to a remote, rclone will delete the old file
on the remote and upload a new copy.
-If you use this flag, and the remote supports server-side copy or
-server-side move, and the source and destination have a compatible hash,
-then this will track renames during sync operations and perform renaming
-server-side.
+An rclone sync with --track-renames runs like a normal sync, but keeps
+track of objects which exist in the destination but not in the source
+(which would normally be deleted), and which objects exist in the source
+but not the destination (which would normally be transferred). These
+objects are then candidates for renaming.
-Files will be matched by size and hash - if both match then a rename
-will be considered.
+After the sync, rclone matches up the source only and destination only
+objects using the --track-renames-strategy specified and either renames
+the destination object or transfers the source and deletes the
+destination object. --track-renames is stateless like all of rclone's
+syncs.
+
+To use this flag the destination must support server-side copy or
+server-side move, and to use a hash based --track-renames-strategy (the
+default) the source and the destination must have a compatible hash.
If the destination does not support server-side copy or move, rclone
will fall back to the default behaviour and log an error level message
@@ -9590,7 +9736,7 @@ will select --delete-after instead of --delete-during.
--track-renames-strategy (hash,modtime,leaf,size)
-This option changes the matching criteria for --track-renames.
+This option changes the file matching criteria for --track-renames.
The matching is controlled by a comma separated selection of these
tokens:
@@ -9601,14 +9747,14 @@ tokens:
- leaf - the name of the file not including its directory name
- size - the size of the file (this is always enabled)
-So using --track-renames-strategy modtime,leaf would match files based
-on modification time, the leaf of the file name and the size only.
+The default option is hash.
+
+Using --track-renames-strategy modtime,leaf would match files based on
+modification time, the leaf of the file name and the size only.
Using --track-renames-strategy modtime or leaf can enable
--track-renames support for encrypted destinations.
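+
+For example, a sketch using a modtime and leaf based strategy against a
+hypothetical encrypted remote:
+
+    rclone sync /local/media crypt:media --track-renames --track-renames-strategy modtime,leaf
+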
-If nothing is specified, the default option is matching by hashes.
-
Note that the hash strategy is not supported with encrypted
destinations.
@@ -9645,7 +9791,7 @@ the least amount of memory.
However, some remotes have a way of listing all files beneath a
directory in one (or a small number) of transactions. These tend to be
-the bucket-based remotes (e.g. S3, B2, GCS, Swift, Hubic).
+the bucket-based remotes (e.g. S3, B2, GCS, Swift).
If you use the --fast-list flag then rclone will use this method for
listing directories. This will have the following consequences for the
@@ -9712,7 +9858,7 @@ In all other cases the file will not be updated.
Consider using the --modify-window flag to compensate for time skews
between the source and the backend, for backends that do not support mod
times, and instead use uploaded times. However, if the backend does not
-support checksums, note that sync'ing or copying within the time skew
+support checksums, note that syncing or copying within the time skew
window may still result in additional transfers for safety.
--use-mmap
@@ -10488,7 +10634,7 @@ Filter pattern examples
Rooted /*.jpg /file.jpg /file.png
/file2.jpg /dir/file.jpg
Alternates *.{jpg,png} /file.jpg /file.gif
- /dir/file.gif /dir/file.gif
+ /dir/file.png /dir/file.gif
Path Wildcard dir/** /dir/anyfile file.png
/subdir/dir/subsubdir/anyfile /subdir/file.png
Any Char *.t?t /file.txt /file.qxt
@@ -10983,6 +11129,8 @@ Default units are KiB but abbreviations K, M, G, T or P are valid.
E.g. rclone ls remote: --min-size 50k lists files on remote: of 50 KiB
size or larger.
+See the size option docs for more info.
+
--max-size - Don't transfer any file larger than this
Controls the maximum size file within the scope of an rclone command.
@@ -10991,33 +11139,19 @@ Default units are KiB but abbreviations K, M, G, T or P are valid.
E.g. rclone ls remote: --max-size 1G lists files on remote: of 1 GiB
size or smaller.
+See the size option docs for more info.
+
--max-age - Don't transfer any file older than this
Controls the maximum age of files within the scope of an rclone command.
-Default units are seconds or the following abbreviations are valid:
-
-- ms - Milliseconds
-- s - Seconds
-- m - Minutes
-- h - Hours
-- d - Days
-- w - Weeks
-- M - Months
-- y - Years
-
---max-age can also be specified as an absolute time in the following
-formats:
-
-- RFC3339 - e.g. 2006-01-02T15:04:05Z or 2006-01-02T15:04:05+07:00
-- ISO8601 Date and time, local timezone - 2006-01-02T15:04:05
-- ISO8601 Date and time, local timezone - 2006-01-02 15:04:05
-- ISO8601 Date - 2006-01-02 (YYYY-MM-DD)
--max-age applies only to files and not to directories.
E.g. rclone ls remote: --max-age 2d lists files on remote: of 2 days old
or less.
+See the time option docs for valid formats.
+
--min-age - Don't transfer any file younger than this
Controls the minimum age of files within the scope of an rclone command.
@@ -11028,6 +11162,8 @@ Controls the minimum age of files within the scope of an rclone command.
E.g. rclone ls remote: --min-age 2d lists files on remote: of 2 days old
or more.
+See the time option docs for valid formats.
+
Other flags
--delete-excluded - Delete files on dest excluded from sync
@@ -11228,6 +11364,11 @@ SSL PEM Private key
Maximum size of request header (default 4096)
+--rc-min-tls-version=VALUE
+
+The minimum TLS version that is acceptable. Valid values are "tls1.0",
+"tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
+
--rc-user=VALUE
User name for authentication.
@@ -11571,7 +11712,7 @@ The parameters can be a string as per the rest of rclone, eg
s3:bucket/path or :sftp:/my/dir. They can also be specified as JSON
blobs.
-If specifyng a JSON blob it should be a object mapping strings to
+If specifying a JSON blob it should be an object mapping strings to
strings. These values will be used to configure the remote. There are 3
special values which may be set:
@@ -12135,6 +12276,12 @@ Parameters:
- jobid - id of the job (integer).
+job/stopgroup: Stop all running jobs in a group
+
+Parameters:
+
+- group - name of the group (string).
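+
+For example (the group name is hypothetical):
+
+    rclone rc job/stopgroup group=job/123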
+
mount/listmounts: Show current mount points
This shows currently mounted points, which can be used for performing an
@@ -12214,10 +12361,10 @@ Example:
Authentication is required for this call.
-mount/unmountall: Show current mount points
+mount/unmountall: Unmount all active mounts
-This shows currently mounted points, which can be used for performing an
-unmount.
+rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's
+cloud storage systems as a file system with FUSE.
This takes no parameters and returns error if unmount does not succeed.
@@ -12742,7 +12889,7 @@ check that parameter passing is working properly.
Authentication is required for this call.
-sync/bisync: Perform bidirectonal synchronization between two paths.
+sync/bisync: Perform bidirectional synchronization between two paths.
This takes the following parameters
@@ -13142,7 +13289,6 @@ Here is an overview of the major features of each cloud storage system.
HDFS - R/W No No - -
HiDrive HiDrive ¹² R/W No No - -
HTTP - R No No R -
- Hubic MD5 R/W No No R/W -
Internet Archive MD5, SHA1, CRC32 R/W ¹¹ No No - RWU
Jottacloud MD5 R/W Yes No R -
Koofr MD5 - Yes No - -
@@ -13153,6 +13299,7 @@ Here is an overview of the major features of each cloud storage system.
Microsoft OneDrive SHA1 ⁵ R/W Yes No R -
OpenDrive MD5 R/W Yes Partial ⁸ - -
OpenStack Swift MD5 R/W No No R/W -
+ Oracle Object Storage MD5 R/W No No R/W -
pCloud MD5, SHA1 ⁷ R No No W -
premiumize.me - - Yes No R -
put.io CRC-32 R/W No Yes R -
@@ -13160,6 +13307,7 @@ Here is an overview of the major features of each cloud storage system.
Seafile - - No No - -
SFTP MD5, SHA1 ² R/W Depends No - -
Sia - - No No - -
+ SMB - - Yes No - -
SugarSync - - No No - -
Storj - R No No - -
Uptobox - - No Yes - -
@@ -13216,7 +13364,7 @@ systems they must support a common hash type.
ModTime
-Allmost all cloud storage systems store some sort of timestamp on
+Almost all cloud storage systems store some sort of timestamp on
objects, but several of them not something that is appropriate to use
for syncing. E.g. some backends will only write a timestamp that
represent the time of the upload. To be relevant for syncing it should
@@ -13623,7 +13771,6 @@ upon backend-specific capabilities.
HDFS Yes No Yes Yes No No Yes No Yes Yes
HiDrive Yes Yes Yes Yes No No Yes No No Yes
HTTP No No No No No No No No No Yes
- Hubic Yes † Yes No No No Yes Yes No Yes No
Internet Archive No Yes No No Yes Yes No Yes Yes No
Jottacloud Yes Yes Yes Yes Yes Yes No Yes Yes Yes
Koofr Yes Yes Yes Yes No No Yes Yes Yes Yes
@@ -13634,6 +13781,7 @@ upon backend-specific capabilities.
Microsoft OneDrive Yes Yes Yes Yes Yes No No Yes Yes Yes
OpenDrive Yes Yes Yes Yes No No No No No Yes
OpenStack Swift Yes † Yes No No No Yes Yes No Yes No
+ Oracle Object Storage Yes Yes No No Yes Yes No No No No
pCloud Yes Yes Yes Yes Yes No No Yes Yes Yes
premiumize.me Yes No Yes Yes No No No Yes Yes Yes
put.io Yes No Yes Yes Yes No Yes No Yes Yes
@@ -13641,6 +13789,7 @@ upon backend-specific capabilities.
Seafile Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes
SFTP No No Yes Yes No No Yes No Yes Yes
Sia No No No No No No Yes No No Yes
+ SMB No No Yes Yes No No Yes No No Yes
SugarSync Yes Yes Yes Yes No No Yes Yes No Yes
Storj Yes † No Yes No No Yes Yes No No No
Uptobox No Yes Yes Yes No No No No No No
@@ -13654,9 +13803,9 @@ Purge
This deletes a directory quicker than just deleting all the files in the
directory.
-† Note Swift, Hubic, and Storj implement this in order to delete
-directory markers but they don't actually have a quicker way of deleting
-files other than deleting them individually.
+† Note Swift and Storj implement this in order to delete directory
+markers but they don't actually have a quicker way of deleting files
+other than deleting them individually.
‡ StreamUpload is not supported with Nextcloud
@@ -13846,6 +13995,7 @@ These flags are available for every command.
--rc-job-expire-interval duration Interval to check for expired async jobs (default 10s)
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--rc-no-auth Don't require auth for certain methods
--rc-pass string Password for authentication
--rc-realm string Realm for authentication (default "rclone")
@@ -13862,6 +14012,7 @@ These flags are available for every command.
--refresh-times Refresh the modtime of remote files
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable)
+ --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
@@ -13887,7 +14038,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.60.0")
-v, --verbose count Print lots more stuff (repeat for more)
Backend Flags
@@ -14038,7 +14189,7 @@ and may be set in the config file.
--drive-use-trash Send files to the trash instead of deleting permanently (default true)
--drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download (default off)
--dropbox-auth-url string Auth server URL
- --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish comitting (default 10m0s)
+ --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--dropbox-batch-mode string Upload file batching sync|async|off (default "sync")
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -14072,6 +14223,7 @@ and may be set in the config file.
--ftp-disable-utf8 Disable using UTF-8 even if server advertises support
--ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
+ --ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
@@ -14090,6 +14242,7 @@ and may be set in the config file.
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gcs-endpoint string Endpoint for the service
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
--gcs-object-acl string Access Control List for new objects
@@ -14135,14 +14288,6 @@ and may be set in the config file.
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
- --hubic-auth-url string Auth server URL
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
- --hubic-client-id string OAuth Client Id
- --hubic-client-secret string OAuth Client Secret
- --hubic-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
- --hubic-no-chunk Don't chunk files during streaming upload
- --hubic-token string OAuth Access Token as a JSON blob
- --hubic-token-url string Token server url
--internetarchive-access-key-id string IAS3 Access Key
--internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true)
--internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
@@ -14211,6 +14356,22 @@ and may be set in the config file.
--onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
+ --oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
+ --oos-compartment string Object storage compartment OCID
+ --oos-config-file string Path to OCI config file (default "~/.oci/config")
+ --oos-config-profile string Profile name inside the oci config file (default "Default")
+ --oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
+ --oos-copy-timeout Duration Timeout for copy (default 1m0s)
+ --oos-disable-checksum Don't store MD5 checksum with object metadata
+ --oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --oos-endpoint string Endpoint for Object storage API
+ --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
+ --oos-namespace string Object storage namespace
+ --oos-no-check-bucket If set, don't attempt to check the bucket exists or create it
+ --oos-provider string Choose your Auth Provider (default "env_auth")
+ --oos-region string Object storage Region
+ --oos-upload-concurrency int Concurrency for multipart uploads (default 10)
+ --oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
@@ -14242,6 +14403,7 @@ and may be set in the config file.
--s3-bucket-acl string Canned ACL used when creating buckets
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
+ --s3-decompress If set this will decompress gzip encoded objects
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
@@ -14260,6 +14422,7 @@ and may be set in the config file.
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity
--s3-no-head-object If set, do not do HEAD before GET when getting objects
+ --s3-no-system-metadata Suppress setting and reading of system metadata
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
@@ -14269,7 +14432,8 @@ and may be set in the config file.
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
- --s3-sse-customer-key string If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data
+ --s3-sse-customer-key string To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
+ --s3-sse-customer-key-base64 string If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
@@ -14279,6 +14443,8 @@ and may be set in the config file.
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
+ --s3-version-at Time Show file versions as they were at the specified time (default off)
+ --s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
--seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
@@ -14325,6 +14491,15 @@ and may be set in the config file.
--sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent")
--skip-links Don't warn about skipped symlinks
+ --smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
+ --smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
+ --smb-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true)
+ --smb-host string SMB server hostname to connect to
+ --smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
+ --smb-pass string SMB password (obscured)
+ --smb-port int SMB port number (default 445)
+ --smb-user string SMB username (default "$USER")
--storj-access-grant string Access grant
--storj-api-key string API key
--storj-passphrase string Encryption passphrase
@@ -14355,6 +14530,7 @@ and may be set in the config file.
--swift-key string API key or password (OS_PASSWORD)
--swift-leave-parts-on-error If true avoid calling abort upload on a failure
--swift-no-chunk Don't chunk files during streaming upload
+ --swift-no-large-objects Disable support for static and dynamic large objects
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
@@ -15251,7 +15427,7 @@ Most of these events come up due to a error status from an internal
call. On such a critical error the {...}.path1.lst and {...}.path2.lst
listing files are renamed to extension .lst-err, which blocks any future
bisync runs (since the normal .lst files are not found). Bisync keeps
-them under bisync subdirectory of the rclone cache direcory, typically
+them under the bisync subdirectory of the rclone cache directory, typically
at ${HOME}/.cache/rclone/bisync/ on Linux.
Some errors are considered temporary and re-running the bisync is not
@@ -15341,7 +15517,7 @@ have spelling case differences (Smile.jpg vs. smile.jpg).
Windows support
Bisync has been tested on Windows 8.1, Windows 10 Pro 64-bit and on
-Windows Github runners.
+Windows GitHub runners.
Drive letters are allowed, including drive letters mapped to network
drives (rclone bisync J:\localsync GDrive:). If a drive letter is
@@ -15838,7 +16014,7 @@ Notes about testing
check file mismatches in the test tree.
- Some Dropbox tests can fail, notably printing the following message:
src and dst identical but can't set mod time without deleting and re-uploading
- This is expected and happens due a way Dropbox handles modificaion
+    This is expected and happens due to the way Dropbox handles modification
times. You should use the -refresh-times test flag to make up for
this.
- If Dropbox tests hit request limit for you and print error message
@@ -15849,7 +16025,7 @@ Updating golden results
Sometimes even a slight change in the bisync source can cause little
changes spread around many log files. Updating them manually would be a
-nighmare.
+nightmare.
The -golden flag will store the test.log and *.lst listings from each
test case into respective golden directories. Golden results will
@@ -16209,6 +16385,11 @@ Invoking rclone mkdir backup:../desktop is exactly the same as invoking
rclone mkdir mydrive:private/backup/../desktop. The empty path is not
allowed as a remote. To alias the current directory use . instead.
+The target remote can also be a connection string. This can be used to
+modify the config of a remote for different uses, e.g. the alias
+myDriveTrash with the target remote myDrive,trashed_only: can be used to
+only show the trashed files in myDrive.
+
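+For example, such an alias could be defined in the configuration file
+like this (a sketch, assuming an existing remote named myDrive):
+
+    # myDrive must already be defined as a remote
+    [myDriveTrash]
+    type = alias
+    remote = myDrive,trashed_only:
+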
Configuration
Here is an example of how to make an alias called remote for local
@@ -16633,7 +16814,9 @@ The S3 backend can be used with a number of different providers:
- Huawei OBS
- IBM COS S3
- IDrive e2
+- IONOS Cloud
- Minio
+- Qiniu Cloud Object Storage (Kodo)
- RackCorp Object Storage
- Scaleway
- Seagate Lyve Cloud
@@ -16944,9 +17127,9 @@ Avoiding GET requests to read directory listings
Rclone's default directory traversal is to process each directory
individually. This takes one API call per directory. Using the
---fast-list flag will read all info about the the objects into memory
-first using a smaller number of API calls (one per 1000 objects). See
-the rclone docs for more details.
+--fast-list flag will read all info about the objects into memory first
+using a smaller number of API calls (one per 1000 objects). See the
+rclone docs for more details.
rclone sync --fast-list --checksum /path/to/source s3:bucket
@@ -16995,6 +17178,64 @@ will mean that these objects do not have an MD5 checksum.
Note that reading this from the object takes an additional HEAD request
as the metadata isn't returned in object listings.
+Versions
+
+When bucket versioning is enabled (this can be done with rclone with
+the rclone backend versioning command), then when rclone uploads a new
+version of a file it creates a new version of it. Likewise when you
+delete a file, the old version will be marked hidden and still be
+available.
+
+Old versions of files, where available, are visible using the
+--s3-versions flag.
+
+It is also possible to view a bucket as it was at a certain point in
+time, using the --s3-version-at flag. This will show the file versions
+as they were at that time, showing files that have been deleted
+afterwards, and hiding files that were created since.
+
+If you wish to remove all the old versions then you can use the
+rclone backend cleanup-hidden remote:bucket command which will delete
+all the old hidden versions of files, leaving the current ones intact.
+You can also supply a path and only old versions under that path will be
+deleted, e.g. rclone backend cleanup-hidden remote:bucket/path/to/stuff.
+
+When you purge a bucket, the current and the old versions will be
+deleted then the bucket will be deleted.
+
+However delete will cause the current versions of the files to become
+hidden old versions.
+
+Here is a session showing the listing and retrieval of an old version
+followed by a cleanup of the old versions.
+
+Show current version and all the versions with --s3-versions flag.
+
+ $ rclone -q ls s3:cleanup-test
+ 9 one.txt
+
+ $ rclone -q --s3-versions ls s3:cleanup-test
+ 9 one.txt
+ 8 one-v2016-07-04-141032-000.txt
+ 16 one-v2016-07-04-141003-000.txt
+ 15 one-v2016-07-02-155621-000.txt
+
+Retrieve an old version
+
+ $ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
+
+ $ ls -l /tmp/one-v2016-07-04-141003-000.txt
+ -rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
+
+Clean up all the old versions and show that they've gone.
+
+ $ rclone -q backend cleanup-hidden s3:cleanup-test
+
+ $ rclone -q ls s3:cleanup-test
+ 9 one.txt
+
+ $ rclone -q --s3-versions ls s3:cleanup-test
+ 9 one.txt
+
Cleanup
If you run rclone cleanup s3:bucket then it will remove all pending
@@ -17190,8 +17431,8 @@ Standard options
Here are the Standard options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, Ceph, China Mobile,
Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS,
-IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS,
-StackPath, Storj, Tencent COS and Wasabi).
+IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway,
+SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi).
--s3-provider
@@ -17226,6 +17467,8 @@ Properties:
- IBM COS S3
- "IDrive"
- IDrive e2
+ - "IONOS"
+ - IONOS Cloud
- "LyveCloud"
- Seagate Lyve Cloud
- "Minio"
@@ -17246,6 +17489,8 @@ Properties:
- Tencent Cloud Object Storage (COS)
- "Wasabi"
- Wasabi Object Storage
+ - "Qiniu"
+ - Qiniu Object Storage (Kodo)
- "Other"
- Any other S3 compatible provider
@@ -17518,6 +17763,60 @@ Properties:
Region to connect to.
+Properties:
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+ - "cn-east-1"
+ - The default endpoint - a good choice if you are unsure.
+ - East China Region 1.
+ - Needs location constraint cn-east-1.
+ - "cn-east-2"
+ - East China Region 2.
+ - Needs location constraint cn-east-2.
+ - "cn-north-1"
+ - North China Region 1.
+ - Needs location constraint cn-north-1.
+ - "cn-south-1"
+ - South China Region 1.
+ - Needs location constraint cn-south-1.
+ - "us-north-1"
+ - North America Region.
+ - Needs location constraint us-north-1.
+ - "ap-southeast-1"
+ - Southeast Asia Region 1.
+ - Needs location constraint ap-southeast-1.
+ - "ap-northeast-1"
+ - Northeast Asia Region 1.
+ - Needs location constraint ap-northeast-1.
+
+--s3-region
+
+Region where your bucket will be created and your data stored.
+
+Properties:
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Provider: IONOS
+- Type: string
+- Required: false
+- Examples:
+ - "de"
+ - Frankfurt, Germany
+ - "eu-central-2"
+ - Berlin, Germany
+ - "eu-south-2"
+ - Logrono, Spain
+
+--s3-region
+
+Region to connect to.
+
Leave blank if you are using an S3 clone and you don't have a region.
Properties:
@@ -17525,7 +17824,7 @@ Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
- Provider:
- !AWS,Alibaba,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
+ !AWS,Alibaba,ChinaMobile,Cloudflare,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
- Type: string
- Required: false
- Examples:
@@ -17783,6 +18082,27 @@ Properties:
--s3-endpoint
+Endpoint for IONOS S3 Object Storage.
+
+Specify the endpoint from the same region.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Provider: IONOS
+- Type: string
+- Required: false
+- Examples:
+ - "s3-eu-central-1.ionoscloud.com"
+ - Frankfurt, Germany
+ - "s3-eu-central-2.ionoscloud.com"
+ - Berlin, Germany
+ - "s3-eu-south-2.ionoscloud.com"
+ - Logrono, Spain
+
+--s3-endpoint
+
Endpoint for OSS API.
Properties:
@@ -18048,6 +18368,33 @@ Properties:
--s3-endpoint
+Endpoint for Qiniu Object Storage.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+ - "s3-cn-east-1.qiniucs.com"
+ - East China Endpoint 1
+ - "s3-cn-east-2.qiniucs.com"
+ - East China Endpoint 2
+ - "s3-cn-north-1.qiniucs.com"
+ - North China Endpoint 1
+ - "s3-cn-south-1.qiniucs.com"
+ - South China Endpoint 1
+ - "s3-us-north-1.qiniucs.com"
+ - North America Endpoint 1
+ - "s3-ap-southeast-1.qiniucs.com"
+ - Southeast Asia Endpoint 1
+ - "s3-ap-northeast-1.qiniucs.com"
+ - Northeast Asia Endpoint 1
+
+--s3-endpoint
+
Endpoint for S3 API.
Required when using an S3 clone.
@@ -18057,7 +18404,7 @@ Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider:
- !AWS,IBMCOS,IDrive,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp
+ !AWS,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp,Qiniu
- Type: string
- Required: false
- Examples:
@@ -18384,6 +18731,35 @@ Properties:
Location constraint - must be set to match the Region.
+Used when creating buckets only.
+
+Properties:
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+ - "cn-east-1"
+ - East China Region 1
+ - "cn-east-2"
+ - East China Region 2
+ - "cn-north-1"
+ - North China Region 1
+ - "cn-south-1"
+ - South China Region 1
+ - "us-north-1"
+ - North America Region 1
+ - "ap-southeast-1"
+ - Southeast Asia Region 1
+ - "ap-northeast-1"
+ - Northeast Asia Region 1
+
+--s3-location-constraint
+
+Location constraint - must be set to match the Region.
+
Leave blank if not sure. Used when creating buckets only.
Properties:
@@ -18391,7 +18767,7 @@ Properties:
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
- Provider:
- !AWS,IBMCOS,IDrive,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,StackPath,Storj,TencentCOS
+ !AWS,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS
- Type: string
- Required: false
@@ -18632,13 +19008,34 @@ Properties:
- Prices are lower, but it needs to be restored first to be
accessed.
+--s3-storage-class
+
+The storage class to use when storing new objects in Qiniu.
+
+Properties:
+
+- Config: storage_class
+- Env Var: RCLONE_S3_STORAGE_CLASS
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+ - "STANDARD"
+ - Standard storage class
+ - "LINE"
+ - Infrequent access storage mode
+ - "GLACIER"
+ - Archive storage mode
+ - "DEEP_ARCHIVE"
+ - Deep archive storage mode
+
Advanced options
Here are the Advanced options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, Ceph, China Mobile,
Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS,
-IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS,
-StackPath, Storj, Tencent COS and Wasabi).
+IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway,
+SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi).
--s3-bucket-acl
@@ -18703,9 +19100,11 @@ Properties:
--s3-sse-customer-key
-If using SSE-C you must provide the secret encryption key used to
+To use SSE-C you may provide the secret encryption key used to
encrypt/decrypt your data.
+Alternatively you can provide --s3-sse-customer-key-base64.
+
Properties:
- Config: sse_customer_key
@@ -18717,6 +19116,24 @@ Properties:
- ""
- None
+--s3-sse-customer-key-base64
+
+If using SSE-C you must provide the secret encryption key encoded in
+base64 format to encrypt/decrypt your data.
+
+Alternatively you can provide --s3-sse-customer-key.
+
+Properties:
+
+- Config: sse_customer_key_base64
+- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_BASE64
+- Provider: AWS,Ceph,ChinaMobile,Minio
+- Type: string
+- Required: false
+- Examples:
+ - ""
+ - None
+
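+For example (a sketch; the key value below is a placeholder for your
+own base64-encoded encryption key):
+
+    # EXAMPLEKEY== is a placeholder, not a real key
+    rclone copy /path/to/file s3:bucket --s3-sse-customer-algorithm AES256 --s3-sse-customer-key-base64 "EXAMPLEKEY=="
+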
--s3-sse-customer-key-md5
If using SSE-C you may provide the secret encryption key MD5 checksum
@@ -19199,6 +19616,66 @@ Properties:
- Type: bool
- Default: false
+--s3-versions
+
+Include old versions in directory listings.
+
+Properties:
+
+- Config: versions
+- Env Var: RCLONE_S3_VERSIONS
+- Type: bool
+- Default: false
+
+--s3-version-at
+
+Show file versions as they were at the specified time.
+
+The parameter should be a date, "2006-01-02", a datetime, "2006-01-02
+15:04:05", or a duration for that long ago, e.g. "100d" or "1h".
+
+Note that when using this no file write operations are permitted, so you
+can't upload files or delete them.
+
+See the time option docs for valid formats.
+
+Properties:
+
+- Config: version_at
+- Env Var: RCLONE_S3_VERSION_AT
+- Type: Time
+- Default: off
+
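+For example, assuming a versioning-enabled bucket s3:bucket, this
+lists its contents as they were on the given date:
+
+    # the date may also be a datetime or a duration such as "100d"
+    rclone ls --s3-version-at "2022-01-01" s3:bucket
+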
+--s3-decompress
+
+If set this will decompress gzip encoded objects.
+
+It is possible to upload objects to S3 with "Content-Encoding: gzip"
+set. Normally rclone will download these files as compressed objects.
+
+If this flag is set then rclone will decompress these files with
+"Content-Encoding: gzip" as they are received. This means that rclone
+can't check the size and hash but the file contents will be
+decompressed.
+
+Properties:
+
+- Config: decompress
+- Env Var: RCLONE_S3_DECOMPRESS
+- Type: bool
+- Default: false
+
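+For example, assuming s3:bucket/file.gz was uploaded with
+"Content-Encoding: gzip" set, this downloads it decompressed:
+
+    # file.gz is a hypothetical gzip-encoded object
+    rclone copy --s3-decompress s3:bucket/file.gz /tmp
+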
+--s3-no-system-metadata
+
+Suppress setting and reading of system metadata.
+
+Properties:
+
+- Config: no_system_metadata
+- Env Var: RCLONE_S3_NO_SYSTEM_METADATA
+- Type: bool
+- Default: false
+
Metadata
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case
@@ -19353,6 +19830,37 @@ Options:
- "max-age": Max age of upload to delete
+cleanup-hidden
+
+Remove old versions of files.
+
+ rclone backend cleanup-hidden remote: [options] [+]
+
+This command removes any old hidden versions of files on a versions
+enabled bucket.
+
+Note that you can use -i/--dry-run with this command to see what it
+would do.
+
+ rclone backend cleanup-hidden s3:bucket/path/to/dir
+
+versioning
+
+Set/get versioning support for a bucket.
+
+ rclone backend versioning remote: [options] [+]
+
+This command sets versioning support if a parameter is passed and then
+returns the current versioning status for the bucket supplied.
+
+ rclone backend versioning s3:bucket # read status only
+ rclone backend versioning s3:bucket Enabled
+ rclone backend versioning s3:bucket Suspended
+
+It may return "Enabled", "Suspended" or "Unversioned". Note that once
+versioning has been enabled the status can't be set back to
+"Unversioned".
+
Anonymous access to public buckets
If you want to use rclone to access a public bucket, configure with a
@@ -20033,6 +20541,166 @@ This will guide you through an interactive setup process.
d) Delete this remote
y/e/d> y
+IONOS Cloud
+
+IONOS S3 Object Storage is a service offered by IONOS for storing and
+accessing unstructured data. To connect to the service, you will need an
+access key and a secret key. These can be found in the Data Center
+Designer, by selecting Manager resources > Object Storage Key Manager.
+
+Here is an example of a configuration. First, run rclone config. This
+will walk you through an interactive setup process. Type n to add the
+new remote, and then enter a name:
+
+ Enter name for new remote.
+ name> ionos-fra
+
+Type s3 to choose the connection type:
+
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ [snip]
+ XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi
+ \ (s3)
+ [snip]
+ Storage> s3
+
+Type IONOS:
+
+ Option provider.
+ Choose your S3 provider.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ [snip]
+ XX / IONOS Cloud
+ \ (IONOS)
+ [snip]
+ provider> IONOS
+
+Press Enter to choose the default option, "Enter AWS credentials in
+the next step":
+
+ Option env_auth.
+ Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ Only applies if access_key_id and secret_access_key is blank.
+ Choose a number from below, or type in your own boolean value (true or false).
+ Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+ env_auth>
+
+Enter your Access Key and Secret Key. These can be retrieved in the
+Data Center Designer: click on the menu "Manager resources" / "Object
+Storage Key Manager".
+
+ Option access_key_id.
+ AWS Access Key ID.
+ Leave blank for anonymous access or runtime credentials.
+ Enter a value. Press Enter to leave empty.
+ access_key_id> YOUR_ACCESS_KEY
+
+ Option secret_access_key.
+ AWS Secret Access Key (password).
+ Leave blank for anonymous access or runtime credentials.
+ Enter a value. Press Enter to leave empty.
+ secret_access_key> YOUR_SECRET_KEY
+
+Choose the region where your bucket is located:
+
+ Option region.
+ Region where your bucket will be created and your data stored.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ 1 / Frankfurt, Germany
+ \ (de)
+ 2 / Berlin, Germany
+ \ (eu-central-2)
+ 3 / Logrono, Spain
+ \ (eu-south-2)
+ region> 2
+
+Choose the endpoint from the same region:
+
+ Option endpoint.
+ Endpoint for IONOS S3 Object Storage.
+ Specify the endpoint from the same region.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ 1 / Frankfurt, Germany
+ \ (s3-eu-central-1.ionoscloud.com)
+ 2 / Berlin, Germany
+ \ (s3-eu-central-2.ionoscloud.com)
+ 3 / Logrono, Spain
+ \ (s3-eu-south-2.ionoscloud.com)
+ endpoint> 1
+
+Press Enter to choose the default option or choose the desired ACL
+setting:
+
+ Option acl.
+ Canned ACL used when creating buckets and storing or copying objects.
+ This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
+ For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+ Note that this ACL is applied when server-side copying objects as S3
+ doesn't copy the ACL from the source but rather writes a fresh one.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+ / Owner gets FULL_CONTROL.
+ [snip]
+ acl>
+
+Press Enter to skip the advanced config:
+
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n>
+
+Press Enter to save the configuration, and then q to quit the
+configuration process:
+
+ Configuration complete.
+ Options:
+ - type: s3
+ - provider: IONOS
+ - access_key_id: YOUR_ACCESS_KEY
+ - secret_access_key: YOUR_SECRET_KEY
+ - endpoint: s3-eu-central-1.ionoscloud.com
+ Keep this "ionos-fra" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+Done! Now you can try some commands (for macOS, use ./rclone instead of
+rclone).
+
+1) Create a bucket (the name must be unique within the whole IONOS S3)
+
+ rclone mkdir ionos-fra:my-bucket
+
+2) List available buckets
+
+ rclone lsd ionos-fra:
+
+3) Copy a file from local to remote
+
+    rclone copy /Users/file.txt ionos-fra:my-bucket
+
+4) List contents of a bucket
+
+    rclone ls ionos-fra:my-bucket
+
+5) Copy a file from remote to local
+
+    rclone copy ionos-fra:my-bucket/file.txt /tmp
+
Minio
Minio is an object storage server built for cloud application developers
@@ -20094,6 +20762,198 @@ So once set up, for example, to copy files into a bucket
rclone copy /path/to/files minio:bucket
+Qiniu Cloud Object Storage (Kodo)
+
+Qiniu Cloud Object Storage (Kodo) is an object storage service built
+on Qiniu's independently developed core technology, proven by extensive
+customer use and holding a leading market position. Kodo can be widely
+applied to mass data management.
+
+To configure access to Qiniu Kodo, follow the steps below:
+
+1. Run rclone config and select n for a new remote.
+
+ rclone config
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+
+2. Give the name of the configuration. For example, name it 'qiniu'.
+
+ name> qiniu
+
+3. Select s3 storage.
+
+ Choose a number from below, or type in your own value
+ 1 / 1Fichier
+ \ (fichier)
+ 2 / Akamai NetStorage
+ \ (netstorage)
+ 3 / Alias for an existing remote
+ \ (alias)
+ 4 / Amazon Drive
+ \ (amazon cloud drive)
+ 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi
+ \ (s3)
+ [snip]
+ Storage> s3
+
+4. Select Qiniu provider.
+
+ Choose a number from below, or type in your own value
+ 1 / Amazon Web Services (AWS) S3
+ \ "AWS"
+ [snip]
+ 22 / Qiniu Object Storage (Kodo)
+ \ (Qiniu)
+ [snip]
+ provider> Qiniu
+
+5. Enter your SecretId and SecretKey of Qiniu Kodo.
+
+ Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ Only applies if access_key_id and secret_access_key is blank.
+ Enter a boolean value (true or false). Press Enter for the default ("false").
+ Choose a number from below, or type in your own value
+ 1 / Enter AWS credentials in the next step
+ \ "false"
+ 2 / Get AWS credentials from the environment (env vars or IAM)
+ \ "true"
+ env_auth> 1
+ AWS Access Key ID.
+ Leave blank for anonymous access or runtime credentials.
+ Enter a string value. Press Enter for the default ("").
+ access_key_id> AKIDxxxxxxxxxx
+ AWS Secret Access Key (password)
+ Leave blank for anonymous access or runtime credentials.
+ Enter a string value. Press Enter for the default ("").
+ secret_access_key> xxxxxxxxxxx
+
+6. Select an endpoint for Qiniu Kodo. Each region has its own standard
+   endpoint.
+
+ / The default endpoint - a good choice if you are unsure.
+ 1 | East China Region 1.
+ | Needs location constraint cn-east-1.
+ \ (cn-east-1)
+ / East China Region 2.
+ 2 | Needs location constraint cn-east-2.
+ \ (cn-east-2)
+ / North China Region 1.
+ 3 | Needs location constraint cn-north-1.
+ \ (cn-north-1)
+ / South China Region 1.
+ 4 | Needs location constraint cn-south-1.
+ \ (cn-south-1)
+ / North America Region.
+ 5 | Needs location constraint us-north-1.
+ \ (us-north-1)
+ / Southeast Asia Region 1.
+ 6 | Needs location constraint ap-southeast-1.
+ \ (ap-southeast-1)
+ / Northeast Asia Region 1.
+ 7 | Needs location constraint ap-northeast-1.
+ \ (ap-northeast-1)
+ [snip]
+ endpoint> 1
+
+ Option endpoint.
+ Endpoint for Qiniu Object Storage.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ 1 / East China Endpoint 1
+ \ (s3-cn-east-1.qiniucs.com)
+ 2 / East China Endpoint 2
+ \ (s3-cn-east-2.qiniucs.com)
+ 3 / North China Endpoint 1
+ \ (s3-cn-north-1.qiniucs.com)
+ 4 / South China Endpoint 1
+ \ (s3-cn-south-1.qiniucs.com)
+ 5 / North America Endpoint 1
+ \ (s3-us-north-1.qiniucs.com)
+ 6 / Southeast Asia Endpoint 1
+ \ (s3-ap-southeast-1.qiniucs.com)
+ 7 / Northeast Asia Endpoint 1
+ \ (s3-ap-northeast-1.qiniucs.com)
+ endpoint> 1
+
+ Option location_constraint.
+ Location constraint - must be set to match the Region.
+ Used when creating buckets only.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ 1 / East China Region 1
+ \ (cn-east-1)
+ 2 / East China Region 2
+ \ (cn-east-2)
+ 3 / North China Region 1
+ \ (cn-north-1)
+ 4 / South China Region 1
+ \ (cn-south-1)
+ 5 / North America Region 1
+ \ (us-north-1)
+ 6 / Southeast Asia Region 1
+ \ (ap-southeast-1)
+ 7 / Northeast Asia Region 1
+ \ (ap-northeast-1)
+ location_constraint> 1
+
+7. Choose acl and storage class.
+
+ Note that this ACL is applied when server-side copying objects as S3
+ doesn't copy the ACL from the source but rather writes a fresh one.
+ Enter a string value. Press Enter for the default ("").
+ Choose a number from below, or type in your own value
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+ / Owner gets FULL_CONTROL.
+ 2 | The AllUsers group gets READ access.
+ \ (public-read)
+ [snip]
+ acl> 2
+    The storage class to use when storing new objects in Qiniu.
+ Enter a string value. Press Enter for the default ("").
+ Choose a number from below, or type in your own value
+ 1 / Standard storage class
+ \ (STANDARD)
+ 2 / Infrequent access storage mode
+ \ (LINE)
+ 3 / Archive storage mode
+ \ (GLACIER)
+ 4 / Deep archive storage mode
+ \ (DEEP_ARCHIVE)
+ [snip]
+ storage_class> 1
+ Edit advanced config? (y/n)
+ y) Yes
+ n) No (default)
+ y/n> n
+ Remote config
+ --------------------
+ [qiniu]
+ - type: s3
+ - provider: Qiniu
+ - access_key_id: xxx
+ - secret_access_key: xxx
+ - region: cn-east-1
+ - endpoint: s3-cn-east-1.qiniucs.com
+ - location_constraint: cn-east-1
+ - acl: public-read
+ - storage_class: STANDARD
+ --------------------
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+ Current remotes:
+
+ Name Type
+ ==== ====
+ qiniu s3
+
RackCorp
RackCorp Object Storage is an S3 compatible object storage platform from
@@ -23749,11 +24609,10 @@ If you intend to use the wrapped remote both directly for keeping
unencrypted content, as well as through a crypt remote for encrypted
content, it is recommended to point the crypt remote to a separate
directory within the wrapped remote. If you use a bucket-based storage
-system (e.g. Swift, S3, Google Compute Storage, B2, Hubic) it is
-generally advisable to wrap the crypt remote around a specific bucket
-(s3:bucket). If wrapping around the entire root of the storage (s3:),
-and use the optional file name encryption, rclone will encrypt the
-bucket name.
+system (e.g. Swift, S3, Google Compute Storage, B2) it is generally
+advisable to wrap the crypt remote around a specific bucket (s3:bucket).
+If wrapping around the entire root of the storage (s3:), and using the
+optional file name encryption, rclone will encrypt the bucket name.
Changing password
@@ -23767,7 +24626,7 @@ crypt remote means you will no longer able to decrypt any of the
previously encrypted content. The only possibility is to re-upload
everything via a crypt remote configured with your new password.
-Depending on the size of your data, your bandwith, storage quota etc,
+Depending on the size of your data, your bandwidth, storage quota etc,
there are different approaches you can take: - If you have everything in
a different location, for example on your local system, you could remove
all of the prior encrypted files, change the password for your
@@ -23780,7 +24639,7 @@ remote to the new, effectively decrypting everything on the fly using
the old password and re-encrypting using the new password. When done,
delete the original crypt remote directory and finally the rclone crypt
configuration with the old password. All data will be streamed from the
-storage system and back, so you will get half the bandwith and be
+storage system and back, so you will get half the bandwidth and be
charged twice if you have upload and download quota on the storage
system.
@@ -24078,7 +24937,7 @@ How to encode the encrypted filename to text string.
This option could help with shortening the encrypted filename. The
suitable option would depend on the way your remote counts the filename
-length and if it's case sensitve.
+length and if it's case sensitive.
Properties:
@@ -24409,7 +25268,7 @@ Generally -1 (default, equivalent to 5) is recommended. Levels 1 to 9
increase compression at the cost of speed. Going past 6 generally offers
very little return.
-Level -2 uses Huffmann encoding only. Only use if you know what you are
+Level -2 uses Huffman encoding only. Only use if you know what you are
doing. Level 0 turns off compression.
Properties:
@@ -24544,7 +25403,7 @@ This would produce something like this:
[AllDrives]
type = combine
- remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
+ upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
If you then add that config to your config file (find it with
rclone config file) then you can access all the shared drives in one
@@ -24979,7 +25838,7 @@ Properties:
--dropbox-batch-commit-timeout
-Max time to wait for a batch to finish comitting
+Max time to wait for a batch to finish committing
Properties:
@@ -25071,8 +25930,8 @@ Storage accessible through a global file system.
Configuration
The initial setup for the Enterprise File Fabric backend involves
-getting a token from the the Enterprise File Fabric which you need to do
-in your browser. rclone config walks you through it.
+getting a token from the Enterprise File Fabric which you need to do in
+your browser. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
@@ -25344,8 +26203,7 @@ To create an FTP configuration named remote, run
Rclone config guides you through an interactive setup process. A minimal
rclone FTP remote definition only requires host, username and password.
-For an anonymous FTP server, use anonymous as username and your email
-address as password.
+For an anonymous FTP server, see below.
No remotes found, make a new one?
n) New remote
@@ -25420,9 +26278,30 @@ files in the directory.
rclone sync -i /home/local/directory remote:directory
-Example without a config file
+Anonymous FTP
- rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=`rclone obscure dummy`
+When connecting to an FTP server that allows anonymous login, you can use
+the special "anonymous" username. Traditionally, this user account
+accepts any string as a password, although it is common to use either
+the password "anonymous" or "guest". Some servers require the use of a
+valid e-mail address as password.
+
+Using on-the-fly or connection string remotes makes it easy to access
+such servers, without requiring any configuration in advance. The
+following are examples of that:
+
+ rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy)
+ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy):
+
+The above examples work in Linux shells and in PowerShell, but not
+Windows Command Prompt. They execute the rclone obscure command to
+create a password string in the format required by the pass option. The
+following examples are exactly the same, except that they use an
+already obscured string representation of the same password "dummy",
+and therefore work even in Windows Command Prompt:
+
+ rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM
+ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM:
Implicit TLS
@@ -25435,7 +26314,7 @@ Restricted filename characters
In addition to the default restricted characters set the following
characters are also replaced:
-File names cannot end with the following characters. Repacement is
+File names cannot end with the following characters. Replacement is
limited to the last character in a file name:
Character Value Replacement
@@ -25544,6 +26423,18 @@ Here are the Advanced options specific to ftp (FTP).
Maximum number of FTP simultaneous connections, 0 for unlimited.
+Note that setting this is very likely to cause deadlocks so it should be
+used with care.
+
+If you are doing a sync or copy then make sure concurrency is one more
+than the sum of --transfers and --checkers.
+
+If you use --check-first then it just needs to be one more than the
+maximum of --checkers and --transfers.
+
+So for concurrency 3 you'd use --checkers 2 --transfers 2 --check-first
+or --checkers 1 --transfers 1.
+
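+For example, a sync at concurrency 3 could look like this (the paths
+are placeholders):
+
+    # 3 = max(--checkers, --transfers) + 1 when --check-first is used
+    rclone sync --ftp-concurrency 3 --checkers 2 --transfers 2 --check-first /path/to/src remote:dst
+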
Properties:
- Config: concurrency
@@ -25606,6 +26497,18 @@ Properties:
- Type: bool
- Default: false
+--ftp-force-list-hidden
+
+Use LIST -a to force listing of hidden files and folders. This will
+disable the use of MLSD.
+
+Properties:
+
+- Config: force_list_hidden
+- Env Var: RCLONE_FTP_FORCE_LIST_HIDDEN
+- Type: bool
+- Default: false
+
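+For example, assuming a configured FTP remote named remote:
+
+    # uses LIST -a, so hidden files and folders are included
+    rclone lsf --ftp-force-list-hidden remote:path
+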
--ftp-idle-timeout
Max time before closing idle connections.
@@ -26356,8 +27259,7 @@ Properties:
If set this will decompress gzip encoded objects.
It is possible to upload objects to GCS with "Content-Encoding: gzip"
-set. Normally rclone will download these files files as compressed
-objects.
+set. Normally rclone will download these files as compressed objects.
If this flag is set then rclone will decompress these files with
"Content-Encoding: gzip" as they are received. This means that rclone
@@ -26371,6 +27273,19 @@ Properties:
- Type: bool
- Default: false
+--gcs-endpoint
+
+Endpoint for the service.
+
+Leave blank normally.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_GCS_ENDPOINT
+- Type: string
+- Required: false
+
--gcs-encoding
The encoding for the backend.
@@ -27752,10 +28667,10 @@ found and a combined drive.
[AllDrives]
type = combine
- remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
+ upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
Adding this to the rclone config file will cause those team drives to be
-accessible with the aliases shown. Any illegal charactes will be
+accessible with the aliases shown. Any illegal characters will be
substituted with "_" and duplicate names will have numbers suffixed. It
will also add a remote called AllDrives which shows all the shared
drives combined into one directory tree.
@@ -29102,7 +30017,7 @@ Modified time and hashes
HiDrive allows modification times to be set on objects accurate to 1
second.
-HiDrive supports its own hash type which is used to verify the integrety
+HiDrive supports its own hash type which is used to verify the integrity
of file contents after successful transfers.
Restricted filename characters
@@ -29649,234 +30564,6 @@ mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
-Hubic
-
-Paths are specified as remote:path
-
-Paths are specified as remote:container (or remote: for the lsd
-command.) You may put subdirectories in too, e.g.
-remote:container/path/to/dir.
-
-Configuration
-
-The initial setup for Hubic involves getting a token from Hubic which
-you need to do in your browser. rclone config walks you through it.
-
-Here is an example of how to make a remote called remote. First run:
-
- rclone config
-
-This will guide you through an interactive setup process:
-
- n) New remote
- s) Set configuration password
- n/s> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- [snip]
- XX / Hubic
- \ "hubic"
- [snip]
- Storage> hubic
- Hubic Client Id - leave blank normally.
- client_id>
- Hubic Client Secret - leave blank normally.
- client_secret>
- Remote config
- Use auto config?
- * Say Y if not sure
- * Say N if you are working on a remote or headless machine
- y) Yes
- n) No
- y/n> y
- If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
- Log in and authorize rclone for access
- Waiting for code...
- Got code
- --------------------
- [remote]
- client_id =
- client_secret =
- token = {"access_token":"XXXXXX"}
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
-
-See the remote setup docs for how to set it up on a machine with no
-Internet browser available.
-
-Note that rclone runs a webserver on your local machine to collect the
-token as returned from Hubic. This only runs from the moment it opens
-your browser to the moment you get back the verification code. This is
-on http://127.0.0.1:53682/ and this it may require you to unblock it
-temporarily if you are running a host firewall.
-
-Once configured you can then use rclone like this,
-
-List containers in the top level of your Hubic
-
- rclone lsd remote:
-
-List all the files in your Hubic
-
- rclone ls remote:
-
-To copy a local directory to an Hubic directory called backup
-
- rclone copy /home/source remote:backup
-
-If you want the directory to be visible in the official Hubic browser,
-you need to copy your files to the default directory
-
- rclone copy /home/source remote:default/backup
-
---fast-list
-
-This remote supports --fast-list which allows you to use fewer
-transactions in exchange for more memory. See the rclone docs for more
-details.
-
-Modified time
-
-The modified time is stored as metadata on the object as
-X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.
-
-This is a de facto standard (used in the official python-swiftclient
-amongst others) for storing the modification time for an object.
-
-Note that Hubic wraps the Swift backend, so most of the properties of
-are the same.
-
-Standard options
-
-Here are the Standard options specific to hubic (Hubic).
-
---hubic-client-id
-
-OAuth Client Id.
-
-Leave blank normally.
-
-Properties:
-
-- Config: client_id
-- Env Var: RCLONE_HUBIC_CLIENT_ID
-- Type: string
-- Required: false
-
---hubic-client-secret
-
-OAuth Client Secret.
-
-Leave blank normally.
-
-Properties:
-
-- Config: client_secret
-- Env Var: RCLONE_HUBIC_CLIENT_SECRET
-- Type: string
-- Required: false
-
-Advanced options
-
-Here are the Advanced options specific to hubic (Hubic).
-
---hubic-token
-
-OAuth Access Token as a JSON blob.
-
-Properties:
-
-- Config: token
-- Env Var: RCLONE_HUBIC_TOKEN
-- Type: string
-- Required: false
-
---hubic-auth-url
-
-Auth server URL.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: auth_url
-- Env Var: RCLONE_HUBIC_AUTH_URL
-- Type: string
-- Required: false
-
---hubic-token-url
-
-Token server url.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: token_url
-- Env Var: RCLONE_HUBIC_TOKEN_URL
-- Type: string
-- Required: false
-
---hubic-chunk-size
-
-Above this size files will be chunked into a _segments container.
-
-Above this size files will be chunked into a _segments container. The
-default for this is 5 GiB which is its maximum value.
-
-Properties:
-
-- Config: chunk_size
-- Env Var: RCLONE_HUBIC_CHUNK_SIZE
-- Type: SizeSuffix
-- Default: 5Gi
-
---hubic-no-chunk
-
-Don't chunk files during streaming upload.
-
-When doing streaming uploads (e.g. using rcat or mount) setting this
-flag will cause the swift backend to not upload chunked files.
-
-This will limit the maximum upload size to 5 GiB. However non chunked
-files are easier to deal with and have an MD5SUM.
-
-Rclone will still chunk files bigger than chunk_size when doing normal
-copy operations.
-
-Properties:
-
-- Config: no_chunk
-- Env Var: RCLONE_HUBIC_NO_CHUNK
-- Type: bool
-- Default: false
-
---hubic-encoding
-
-The encoding for the backend.
-
-See the encoding section in the overview for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_HUBIC_ENCODING
-- Type: MultiEncoder
-- Default: Slash,InvalidUtf8
-
-Limitations
-
-This uses the normal OpenStack Swift mechanism to refresh the Swift API
-credentials and ignores the expires field returned by the Hubic API.
-
-The Swift API doesn't return a correct MD5SUM for segmented files
-(Dynamic or Static Large Objects) so rclone won't check or use the
-MD5SUM for these.
-
Internet Archive
The Internet Archive backend utilizes Items on archive.org
@@ -29886,11 +30573,10 @@ Refer to IAS3 API documentation for the API this backend uses.
Paths are specified as remote:bucket (or remote: for the lsd command.)
You may put subdirectories in too, e.g. remote:item/path/to/dir.
-Once you have made a remote (see the provider specific section above)
-you can use it like this:
-
Unlike S3, listing all the items you have uploaded isn't supported.
+Once you have made a remote, you can use it like this:
+
Make a new item
rclone mkdir remote:item
@@ -29929,7 +30615,7 @@ file. The metadata will appear as file metadata on Internet Archive.
However, some fields are reserved by both Internet Archive and rclone.
The following are reserved by Internet Archive: - name - source - size -
-md5 - crc32 - sha1 - format - old_version - viruscheck
+md5 - crc32 - sha1 - format - old_version - viruscheck - summation
Trying to set values to these keys is ignored with a warning. Only
setting mtime is an exception. Doing so makes it the identical behavior
@@ -30140,65 +30826,52 @@ including them.
Here are the possible system metadata items for the internetarchive
backend.
- ----------------------------------------------------------------------------------------------------------------------
- Name Help Type Example Read Only
- --------------------- ------------------ ----------- -------------------------------------------- --------------------
- crc32 CRC32 calculated string 01234567 N
- by Internet
- Archive
+ --------------------------------------------------------------------------------------------------------------------------------------
+ Name Help Type Example Read Only
+ --------------------- ---------------------------------- ----------- -------------------------------------------- --------------------
+ crc32 CRC32 calculated by Internet string 01234567 Y
+ Archive
- format Name of format string Comma-Separated Values N
- identified by
- Internet Archive
+ format Name of format identified by string Comma-Separated Values Y
+ Internet Archive
- md5 MD5 hash string 01234567012345670123456701234567 N
- calculated by
- Internet Archive
+ md5 MD5 hash calculated by Internet string 01234567012345670123456701234567 Y
+ Archive
- mtime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z N
- modification,
- managed by Rclone
+ mtime Time of last modification, managed RFC 3339 2006-01-02T15:04:05.999999999Z Y
+ by Rclone
- name Full file path, filename backend/internetarchive/internetarchive.go N
- without the bucket
- part
+ name Full file path, without the bucket filename backend/internetarchive/internetarchive.go Y
+ part
- old_version Whether the file boolean true N
- was replaced and
- moved by
- keep-old-version
- flag
+ old_version Whether the file was replaced and boolean true Y
+ moved by keep-old-version flag
- rclone-ia-mtime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z N
- modification,
- managed by
- Internet Archive
+ rclone-ia-mtime Time of last modification, managed RFC 3339 2006-01-02T15:04:05.999999999Z N
+ by Internet Archive
- rclone-mtime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z N
- modification,
- managed by Rclone
+ rclone-mtime Time of last modification, managed RFC 3339 2006-01-02T15:04:05.999999999Z N
+ by Rclone
- rclone-update-track Random value used string aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa N
- by Rclone for
- tracking changes
- inside Internet
- Archive
+ rclone-update-track Random value used by Rclone for string aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa N
+ tracking changes inside Internet
+ Archive
- sha1 SHA1 hash string 0123456701234567012345670123456701234567 N
- calculated by
- Internet Archive
+ sha1 SHA1 hash calculated by Internet string 0123456701234567012345670123456701234567 Y
+ Archive
- size File size in bytes decimal 123456 N
- number
+ size File size in bytes decimal 123456 Y
+ number
- source The source of the string original N
- file
+ source The source of the file string original Y
- viruscheck The last time unixtime 1654191352 N
- viruscheck process
- was run for the
- file (?)
- ----------------------------------------------------------------------------------------------------------------------
+ summation Check string md5 Y
+ https://forum.rclone.org/t/31922
+ for how it is used
+
+ viruscheck The last time viruscheck process unixtime 1654191352 Y
+ was run for the file (?)
+ --------------------------------------------------------------------------------------------------------------------------------------
See the metadata docs for more info.
@@ -30211,7 +30884,7 @@ companies, such as: * Telia * Telia Cloud (cloud.telia.se) * Telia Sky
(sky.telia.no) * Tele2 * Tele2 Cloud (mittcloud.tele2.se) * Elkjøp (with
subsidiaries): * Elkjøp Cloud (cloud.elkjop.no) * Elgiganten Sweden
(cloud.elgiganten.se) * Elgiganten Denmark (cloud.elgiganten.dk) *
-Giganti Cloud (cloud.gigantti.fi) * ELKO Clouud (cloud.elko.is)
+Giganti Cloud (cloud.gigantti.fi) * ELKO Cloud (cloud.elko.is)
Most of the white-label versions are supported by this backend, although
may require different authentication setup - described below.
@@ -30228,11 +30901,38 @@ setting up the remote.
Standard authentication
-To configure Jottacloud you will need to generate a personal security
-token in the Jottacloud web interface. You will the option to do in your
-account security settings (for whitelabel version you need to find this
-page in its web interface). Note that the web interface may refer to
-this token as a JottaCli token.
+The standard authentication method used by the official service
+(jottacloud.com), as well as some of the whitelabel services, requires
+you to generate a single-use personal login token from the account
+security settings in the service's web interface. Log in to your
+account, go to "Settings" and then "Security", or use the direct link
+presented to you by rclone when configuring the remote:
+https://www.jottacloud.com/web/secure. Scroll down to the section
+"Personal login token", and click the "Generate" button. Note that if
+you are using a whitelabel service you probably can't use the direct
+link, you need to find the same page in their dedicated web interface,
+and also it may be in a different location than described above.
+
+To access your account from multiple instances of rclone, you need to
+configure each of them with a separate personal login token. E.g. you
+create a Jottacloud remote with rclone in one location, and copy the
+configuration file to a second location where you also want to run
+rclone and access the same remote. Then you need to replace the token
+for one of them, using the config reconnect command, which requires you
+to generate a new personal login token and supply it as input. If you
+do not do this, the token may easily end up being invalidated,
+resulting in both instances failing with an error message along the
+lines of:
+
+ oauth2: cannot fetch token: 400 Bad Request
+ Response: {"error":"invalid_grant","error_description":"Stale token"}
+
+When this happens, you need to replace the token as described above to
+be able to use your remote again.
+
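+For example, assuming the remote is named remote, the token can be
+replaced with:
+
+    # prompts for a newly generated personal login token
+    rclone config reconnect remote:
+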
+All personal login tokens you have taken into use will be listed in the
+web interface under "My logged in devices", and from the right side of
+that list you can click the "X" button to revoke individual tokens.
Legacy authentication
@@ -31403,7 +32103,7 @@ Failure to log-in
Object not found
If you are connecting to your Mega remote for the first time, to test
-access and syncronisation, you may receive an error such as
+access and synchronization, you may receive an error such as
Failed to create file system for "my-mega-remote:":
couldn't login: Object (typically, node or user) not found
@@ -31777,7 +32477,7 @@ With NetStorage, directories can exist in one of two forms:
have created in a storage group.
2. Implicit Directory. This refers to a directory within a path that
has not been physically created. For example, during upload of a
- file, non-existent subdirectories can be specified in the target
+ file, nonexistent subdirectories can be specified in the target
path. NetStorage creates these as "implicit." While the directories
aren't physically created, they exist implicitly and the noted path
is connected with the uploaded file.
@@ -32589,7 +33289,7 @@ custom client_id is specified in the config. The default Client ID and
Key are shared by all rclone users when performing requests.
You may choose to create and use your own Client ID, in case the default
-one does not work well for you. For example, you might see throtting.
+one does not work well for you. For example, you might see throttling.
Creating Client ID for OneDrive Personal
@@ -32642,7 +33342,7 @@ organization only, as shown below.
2. Follow the steps above to create an App. However, we need a
different account type here:
Accounts in this organizational directory only (*** - Single tenant).
- Note that you can also change the account type aftering creating the
+ Note that you can also change the account type after creating the
App.
3. Find the tenant ID of your organization.
4. In the rclone config, set auth_url to
@@ -32754,7 +33454,7 @@ Properties:
- "de"
- Microsoft Cloud Germany
- "cn"
- - Azure and Office 365 operated by 21Vianet in China
+ - Azure and Office 365 operated by Vnet Group in China
Advanced options
@@ -33072,7 +33772,7 @@ OneDrive can be found here.
Versions
Every change in a file OneDrive causes the service to create a new
-version of the the file. This counts against a users quota. For example
+version of the file. This counts against a user's quota. For example
changing the modification time of a file creates a second version, so
the file apparently uses twice the space.
@@ -33397,6 +34097,533 @@ policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
+Oracle Object Storage
+
+Oracle Object Storage Overview
+
+Oracle Object Storage FAQ
+
+Paths are specified as remote:bucket (or remote: for the lsd command.)
+You may put subdirectories in too, e.g. remote:bucket/path/to/dir.
+
+Configuration
+
+Here is an example of making an Oracle Object Storage configuration,
+creating a remote called remote. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+ n) New remote
+ d) Delete remote
+ r) Rename remote
+ c) Copy remote
+ s) Set configuration password
+ q) Quit config
+ e/n/d/r/c/s/q> n
+
+ Enter name for new remote.
+ name> remote
+
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ [snip]
+ XX / Oracle Cloud Infrastructure Object Storage
+ \ (oracleobjectstorage)
+ Storage> oracleobjectstorage
+
+ Option provider.
+ Choose your Auth Provider
+ Choose a number from below, or type in your own string value.
+ Press Enter for the default (env_auth).
+ 1 / automatically pickup the credentials from runtime(env), first one to provide auth wins
+ \ (env_auth)
+ / use an OCI user and an API key for authentication.
+ 2 | you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
+ | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
+ \ (user_principal_auth)
+ / use instance principals to authorize an instance to make API calls.
+ 3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
+ | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
+ \ (instance_principal_auth)
+ 4 / use resource principals to make API calls
+ \ (resource_principal_auth)
+ 5 / no credentials needed, this is typically for reading public buckets
+ \ (no_auth)
+ provider> 2
+
+ Option namespace.
+ Object storage namespace
+ Enter a value.
+ namespace> idbamagbg734
+
+ Option compartment.
+ Object storage compartment OCID
+ Enter a value.
+ compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
+
+ Option region.
+ Object storage Region
+ Enter a value.
+ region> us-ashburn-1
+
+ Option endpoint.
+ Endpoint for Object storage API.
+ Leave blank to use the default endpoint for the region.
+ Enter a value. Press Enter to leave empty.
+ endpoint>
+
+ Option config_file.
+ Path to OCI config file
+ Choose a number from below, or type in your own string value.
+ Press Enter for the default (~/.oci/config).
+ 1 / oci configuration file location
+ \ (~/.oci/config)
+ config_file> /etc/oci/dev.conf
+
+ Option config_profile.
+ Profile name inside OCI config file
+ Choose a number from below, or type in your own string value.
+ Press Enter for the default (Default).
+ 1 / Use the default profile
+ \ (Default)
+ config_profile> Test
+
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+
+ Configuration complete.
+ Options:
+ - type: oracleobjectstorage
+ - namespace: idbamagbg734
+ - compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
+ - region: us-ashburn-1
+ - provider: user_principal_auth
+ - config_file: /etc/oci/dev.conf
+ - config_profile: Test
+ Keep this "remote" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+See all buckets
+
+ rclone lsd remote:
+
+Create a new bucket
+
+ rclone mkdir remote:bucket
+
+List the contents of a bucket
+
+ rclone ls remote:bucket
+ rclone ls remote:bucket --max-depth 1
+
+Modified time
+
+The modified time is stored as metadata on the object, as opc-meta-mtime,
+as a floating point number of seconds since the epoch, accurate to 1 ns.
+
+If the modification time needs to be updated, rclone will attempt to
+perform a server side copy to update the modification time, provided the
+object can be copied in a single part. If the object is larger than
+5 GiB, it will be uploaded rather than copied.
+
+Note that reading this from the object takes an additional HEAD request
+as the metadata isn't returned in object listings.
+
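+For example, a recursive long listing displays modification times, and
+will therefore issue one HEAD request per object:
+
+    rclone lsl remote:bucket
+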
+Multipart uploads
+
+rclone supports multipart uploads with OOS which means that it can
+upload files bigger than 5 GiB.
+
+Note that files uploaded both with multipart upload and through crypt
+remotes do not have MD5 sums.
+
+rclone switches from single part uploads to multipart uploads at the
+point specified by --oos-upload-cutoff. This can be a maximum of 5 GiB
+and a minimum of 0 (i.e. always upload files as multipart).
+
+The chunk sizes used in the multipart upload are specified by
+--oos-chunk-size and the number of chunks uploaded concurrently is
+specified by --oos-upload-concurrency.
+
+Multipart uploads will use --transfers * --oos-upload-concurrency *
+--oos-chunk-size extra memory; with the defaults (--transfers 4,
+--oos-upload-concurrency 10, --oos-chunk-size 5Mi) that is 200 MiB.
+Single part uploads do not use extra memory.
+
+Single part transfers can be faster or slower than multipart transfers
+depending on your latency to OOS - the more latency, the more likely
+single part transfers will be faster.
+
+Increasing --oos-upload-concurrency will increase throughput (8 would be
+a sensible value) and increasing --oos-chunk-size also increases
+throughput (16M would be sensible). Increasing either of these will use
+more memory. The default values are high enough to gain most of the
+possible performance without using too much memory.
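+
+As a sketch, a transfer tuned along these lines (with /data as a
+placeholder source path) could look like:
+
+    rclone copy --oos-upload-concurrency 8 --oos-chunk-size 16M /data remote:bucket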
+
+Standard options
+
+Here are the Standard options specific to oracleobjectstorage (Oracle
+Cloud Infrastructure Object Storage).
+
+--oos-provider
+
+Choose your Auth Provider
+
+Properties:
+
+- Config: provider
+- Env Var: RCLONE_OOS_PROVIDER
+- Type: string
+- Default: "env_auth"
+- Examples:
+ - "env_auth"
+        - automatically pick up the credentials from the runtime (env),
+          first one to provide auth wins
+ - "user_principal_auth"
+ - use an OCI user and an API key for authentication.
+        - you'll need to put your tenancy OCID, user OCID, region, the
+          path to an API key, and its fingerprint in a config file.
+ - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
+ - "instance_principal_auth"
+ - use instance principals to authorize an instance to make API
+ calls.
+ - each instance has its own identity, and authenticates using
+ the certificates that are read from instance metadata.
+ - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
+ - "resource_principal_auth"
+ - use resource principals to make API calls
+ - "no_auth"
+ - no credentials needed, this is typically for reading public
+ buckets
+
+--oos-namespace
+
+Object storage namespace
+
+Properties:
+
+- Config: namespace
+- Env Var: RCLONE_OOS_NAMESPACE
+- Type: string
+- Required: true
+
+--oos-compartment
+
+Object storage compartment OCID
+
+Properties:
+
+- Config: compartment
+- Env Var: RCLONE_OOS_COMPARTMENT
+- Provider: !no_auth
+- Type: string
+- Required: true
+
+--oos-region
+
+Object storage Region
+
+Properties:
+
+- Config: region
+- Env Var: RCLONE_OOS_REGION
+- Type: string
+- Required: true
+
+--oos-endpoint
+
+Endpoint for Object storage API.
+
+Leave blank to use the default endpoint for the region.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_OOS_ENDPOINT
+- Type: string
+- Required: false
+
+--oos-config-file
+
+Path to OCI config file
+
+Properties:
+
+- Config: config_file
+- Env Var: RCLONE_OOS_CONFIG_FILE
+- Provider: user_principal_auth
+- Type: string
+- Default: "~/.oci/config"
+- Examples:
+ - "~/.oci/config"
+ - oci configuration file location
+
+--oos-config-profile
+
+Profile name inside the oci config file
+
+Properties:
+
+- Config: config_profile
+- Env Var: RCLONE_OOS_CONFIG_PROFILE
+- Provider: user_principal_auth
+- Type: string
+- Default: "Default"
+- Examples:
+ - "Default"
+ - Use the default profile
+
+Advanced options
+
+Here are the Advanced options specific to oracleobjectstorage (Oracle
+Cloud Infrastructure Object Storage).
+
+--oos-upload-cutoff
+
+Cutoff for switching to chunked upload.
+
+Any files larger than this will be uploaded in chunks of chunk_size. The
+minimum is 0 and the maximum is 5 GiB.
+
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_OOS_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 200Mi
+
+--oos-chunk-size
+
+Chunk size to use for uploading.
+
+When uploading files larger than upload_cutoff or files with unknown
+size (e.g. from "rclone rcat", uploaded with "rclone mount", or from
+Google Photos or Google Docs) they will be uploaded as multipart uploads
+using this chunk size.
+
+Note that "upload_concurrency" chunks of this size are buffered in
+memory per transfer.
+
+If you are transferring large files over high-speed links and you have
+enough memory, then increasing this will speed up the transfers.
+
+Rclone will automatically increase the chunk size when uploading a large
+file of known size to stay below the 10,000 chunks limit.
+
+Files of unknown size are uploaded with the configured chunk_size. Since
+the default chunk size is 5 MiB and there can be at most 10,000 chunks,
+this means that by default the maximum size of a file you can stream
+upload is 48 GiB. If you wish to stream upload larger files then you
+will need to increase chunk_size.
+
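+As an illustrative sketch, streaming a file expected to exceed 48 GiB
+with a 64 MiB chunk size, which raises the stream upload limit to
+roughly 625 GiB:
+
+    rclone rcat --oos-chunk-size 64M remote:bucket/big.bin < big.bin
+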
+Increasing the chunk size decreases the accuracy of the progress
+statistics displayed with "-P" flag.
+
+Properties:
+
+- Config: chunk_size
+- Env Var: RCLONE_OOS_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5Mi
+
+--oos-upload-concurrency
+
+Concurrency for multipart uploads.
+
+This is the number of chunks of the same file that are uploaded
+concurrently.
+
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 10
+
+--oos-copy-cutoff
+
+Cutoff for switching to multipart copy.
+
+Any files larger than this that need to be server-side copied will be
+copied in chunks of this size.
+
+The minimum is 0 and the maximum is 5 GiB.
+
+Properties:
+
+- Config: copy_cutoff
+- Env Var: RCLONE_OOS_COPY_CUTOFF
+- Type: SizeSuffix
+- Default: 4.656Gi
+
+--oos-copy-timeout
+
+Timeout for copy.
+
+Copy is an asynchronous operation; specify a timeout to wait for the
+copy to succeed.
+
+Properties:
+
+- Config: copy_timeout
+- Env Var: RCLONE_OOS_COPY_TIMEOUT
+- Type: Duration
+- Default: 1m0s
+
+--oos-disable-checksum
+
+Don't store MD5 checksum with object metadata.
+
+Normally rclone will calculate the MD5 checksum of the input before
+uploading it so it can add it to metadata on the object. This is great
+for data integrity checking but can cause long delays for large files to
+start uploading.
+
+Properties:
+
+- Config: disable_checksum
+- Env Var: RCLONE_OOS_DISABLE_CHECKSUM
+- Type: bool
+- Default: false
+
+--oos-encoding
+
+The encoding for the backend.
+
+See the encoding section in the overview for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_OOS_ENCODING
+- Type: MultiEncoder
+- Default: Slash,InvalidUtf8,Dot
+
+--oos-leave-parts-on-error
+
+If true avoid calling abort upload on a failure, leaving all
+successfully uploaded parts in object storage for manual recovery.
+
+It should be set to true for resuming uploads across different sessions.
+
+WARNING: Storing parts of an incomplete multipart upload counts towards
+space usage on object storage and will add additional costs if not
+cleaned up.
+
+Properties:
+
+- Config: leave_parts_on_error
+- Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR
+- Type: bool
+- Default: false
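+
+As a sketch, an upload that keeps successfully uploaded parts on
+failure, followed by a later cleanup of anything left behind:
+
+    rclone copy --oos-leave-parts-on-error /data remote:bucket
+    rclone backend cleanup -o max-age=24h remote:bucket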
+
+--oos-no-check-bucket
+
+If set, don't attempt to check the bucket exists or create it.
+
+This can be useful when trying to minimise the number of transactions
+rclone does if you know the bucket exists already.
+
+It can also be needed if the user you are using does not have bucket
+creation permissions.
+
+Properties:
+
+- Config: no_check_bucket
+- Env Var: RCLONE_OOS_NO_CHECK_BUCKET
+- Type: bool
+- Default: false
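+
+For example (a sketch), copying to a bucket that is known to exist,
+with a user that lacks bucket creation permissions:
+
+    rclone copy --oos-no-check-bucket /data remote:existing-bucket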
+
+Backend commands
+
+Here are the commands specific to the oracleobjectstorage backend.
+
+Run them with
+
+ rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the backend command for more info on how to pass options and
+arguments.
+
+These can be run on a running backend using the rc command
+backend/command.
+
+rename
+
+change the name of an object
+
+ rclone backend rename remote: [options] [+]
+
+This command can be used to rename an object.
+
+Usage Examples:
+
+ rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
+
+list-multipart-uploads
+
+List the unfinished multipart uploads
+
+ rclone backend list-multipart-uploads remote: [options] [+]
+
+This command lists the unfinished multipart uploads in JSON format.
+
+ rclone backend list-multipart-uploads oos:bucket/path/to/object
+
+It returns a dictionary of buckets with values as lists of unfinished
+multipart uploads.
+
+You can call it with no bucket in which case it lists all buckets, with
+a bucket, or with a bucket and path.
+
+ {
+ "test-bucket": [
+ {
+ "namespace": "test-namespace",
+ "bucket": "test-bucket",
+ "object": "600m.bin",
+ "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
+ "timeCreated": "2022-07-29T06:21:16.595Z",
+ "storageTier": "Standard"
+ }
+        ]
+    }
+
+cleanup
+
+Remove unfinished multipart uploads.
+
+ rclone backend cleanup remote: [options] [+]
+
+This command removes unfinished multipart uploads of age greater than
+max-age which defaults to 24 hours.
+
+Note that you can use -i/--dry-run with this command to see what it
+would do.
+
+ rclone backend cleanup oos:bucket/path/to/object
+ rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
+
+Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
+
+Options:
+
+- "max-age": Max age of upload to delete
+
QingStor
Paths are specified as remote:bucket (or remote: for the lsd command.)
@@ -34430,6 +35657,36 @@ Properties:
- Type: bool
- Default: false
+--swift-no-large-objects
+
+Disable support for static and dynamic large objects
+
+Swift cannot transparently store files bigger than 5 GiB. There are two
+schemes for doing that, static or dynamic large objects, and the API
+does not allow rclone to determine whether a file is a static or dynamic
+large object without doing a HEAD on the object. Since these need to be
+treated differently, rclone has to issue HEAD requests for objects, for
+example when reading checksums.
+
+When no_large_objects is set, rclone will assume that there are no
+static or dynamic large objects stored. This means it can stop doing the
+extra HEAD calls which in turn increases performance greatly especially
+when doing a swift to swift transfer with --checksum set.
+
+Setting this option implies no_chunk and also that no files will be
+uploaded in chunks, so files bigger than 5 GiB will just fail on upload.
+
+If you set this option and there are static or dynamic large objects,
+then this will give incorrect hashes for them. Downloads will succeed,
+but other operations such as Remove and Copy will fail.
+
+Properties:
+
+- Config: no_large_objects
+- Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS
+- Type: bool
+- Default: false
+
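+For example (a sketch, with swift1 and swift2 as placeholder remote
+names), a swift to swift transfer avoiding the extra HEAD requests:
+
+    rclone copy --checksum --swift-no-large-objects swift1:bucket swift2:bucket
+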
--swift-encoding
The encoding for the backend.
@@ -35431,9 +36688,9 @@ installations.
Paths are specified as remote:path. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
refers to the user's home directory. For example, rclone lsd remote:
-would list the home directory of the user cofigured in the rclone remote
-config (i.e /home/sftpuser). However, rclone lsd remote:/ would list the
-root directory for remote machine (i.e. /)
+would list the home directory of the user configured in the rclone
+remote config (i.e. /home/sftpuser). However, rclone lsd remote:/ would
+list the root directory of the remote machine (i.e. /).
Note that some SFTP servers will need the leading / - Synology is a good
example of this. rsync.net and Hetzner, on the other hand, require
@@ -35669,7 +36926,7 @@ and later can also run a SSH server, which is a port of OpenSSH (see
official installation guide). On a Windows server the shell handling is
different: Although it can also be set up to use a Unix type shell, e.g.
Cygwin bash, the default is to use Windows Command Prompt (cmd.exe), and
-PowerShell is a recommended alternative. All of these have bahave
+PowerShell is a recommended alternative. All of these behave
differently, which rclone must handle.
Rclone tries to auto-detect what type of shell is used on the server,
@@ -35700,15 +36957,15 @@ default rclone will try to run a shell command the first time a new sftp
remote is accessed. If you configure a sftp remote without a config
file, e.g. an on the fly remote, rclone will have nowhere to store the
result, and it will re-run the command on every access. To avoid this
-you should explicitely set the shell_type option to the correct value,
-or to none if you want to prevent rclone from executing any remote shell
+you should explicitly set the shell_type option to the correct value, or
+to none if you want to prevent rclone from executing any remote shell
commands.
It is also important to note that, since the shell type decides how
quoting and escaping of file paths used as command-line arguments are
performed, configuring the wrong shell type may leave you exposed to
command injection exploits. Make sure to confirm the auto-detected shell
-type, or explicitely set the shell type you know is correct, or disable
+type, or explicitly set the shell type you know is correct, or disable
shell access until you know.
Checksum
@@ -36197,19 +37454,23 @@ Properties:
Upload and download chunk size.
-This controls the maximum packet size used in the SFTP protocol. The RFC
-limits this to 32768 bytes (32k), however a lot of servers support
-larger sizes and setting it larger will increase transfer speed
-dramatically on high latency links.
+This controls the maximum size of payload in SFTP protocol packets. The
+RFC limits this to 32768 bytes (32k), which is the default. However, a
+lot of servers support larger sizes, typically limited to a maximum
+total packet size of 256k, and setting it larger will increase transfer
+speed dramatically on high latency links. This includes OpenSSH, and,
+for example, using the value of 255k works well, leaving plenty of room
+for overhead while still being within a total packet size of 256k.
-Only use a setting higher than 32k if you always connect to the same
-server or after sufficiently broad testing.
-
-For example using the value of 252k with OpenSSH works well with its
-maximum packet size of 256k.
-
-If you get the error "failed to send packet header: EOF" when copying a
-large file, try lowering this number.
+Make sure to test thoroughly before using a value higher than 32k, and
+only use it if you always connect to the same server or after
+sufficiently broad testing. If you get errors such as "failed to send
+packet payload: EOF", lots of "connection lost", or "corrupted on
+transfer", when copying a larger file, try lowering the value. The
+server run by rclone serve sftp sends packets with standard 32k maximum
+payload so you must not set a different chunk_size when downloading
+files, but it accepts packets up to the 256k total size, so for uploads
+the chunk_size can be set as for the OpenSSH example above.
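+
+For example (a sketch, to be tested against your own server), uploading
+to an OpenSSH server with a larger chunk size:
+
+    rclone copy --sftp-chunk-size 255k /data remote:path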
Properties:
@@ -36291,6 +37552,234 @@ Hetzner Storage Boxes are supported through the SFTP backend on port 23.
See Hetzner's documentation for details
+SMB
+
+SMB is a communication protocol for sharing files over a network.
+
+This backend relies on the go-smb2 library to speak the SMB protocol.
+
+Paths are specified as remote:sharename (or remote: for the lsd
+command.) You may put subdirectories in too, e.g.
+remote:item/path/to/dir.
+
+Notes
+
+The first path segment must be the name of the share, which you entered
+when you started to share on Windows. On smbd, it's the section title
+in the smb.conf file (usually in /etc/samba/). You can find shares by
+querying the root if you're unsure (e.g. rclone lsd remote:).
+
+You can't access shared printers from rclone.
+
+You can't use Anonymous access for logging in. You have to use the guest
+user with an empty password instead. The rclone client tries to avoid
+8.3 names when uploading files by encoding trailing spaces and periods.
+Alternatively, the local backend on Windows can access SMB servers using
+UNC paths, e.g. \\server\share. This doesn't apply to non-Windows OSes,
+such as Linux and macOS.
+
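+For example (a minimal illustration, with server and share as
+placeholders), on Windows the local backend can list an SMB share
+directly via its UNC path:
+
+    rclone ls \\server\share\path
+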
+Configuration
+
+Here is an example of making a SMB configuration.
+
+First run
+
+ rclone config
+
+This will guide you through an interactive setup process.
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> remote
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ XX / SMB / CIFS
+ \ (smb)
+ Storage> smb
+
+ Option host.
+ Samba hostname to connect to.
+ E.g. "example.com".
+ Enter a value.
+ host> localhost
+
+ Option user.
+ Samba username.
+ Enter a string value. Press Enter for the default (lesmi).
+ user> guest
+
+ Option port.
+ Samba port number.
+ Enter a signed integer. Press Enter for the default (445).
+ port>
+
+ Option pass.
+ Samba password.
+ Choose an alternative below. Press Enter for the default (n).
+ y) Yes, type in my own password
+ g) Generate random password
+ n) No, leave this optional password blank (default)
+ y/g/n> g
+ Password strength in bits.
+ 64 is just about memorable
+ 128 is secure
+ 1024 is the maximum
+ Bits> 64
+ Your password is: XXXX
+ Use this password? Please note that an obscured version of this
+ password (and not the password itself) will be stored under your
+ configuration file, so keep this generated password in a safe place.
+ y) Yes (default)
+ n) No
+ y/n> y
+
+ Option domain.
+ Domain name for NTLM authentication.
+ Enter a string value. Press Enter for the default (WORKGROUP).
+ domain>
+
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+
+ Configuration complete.
+ Options:
+    - type: smb
+ - host: localhost
+ - user: guest
+ - pass: *** ENCRYPTED ***
+ Keep this "remote" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+    y/e/d> y
+
+Standard options
+
+Here are the Standard options specific to smb (SMB / CIFS).
+
+--smb-host
+
+SMB server hostname to connect to.
+
+E.g. "example.com".
+
+Properties:
+
+- Config: host
+- Env Var: RCLONE_SMB_HOST
+- Type: string
+- Required: true
+
+--smb-user
+
+SMB username.
+
+Properties:
+
+- Config: user
+- Env Var: RCLONE_SMB_USER
+- Type: string
+- Default: "$USER"
+
+--smb-port
+
+SMB port number.
+
+Properties:
+
+- Config: port
+- Env Var: RCLONE_SMB_PORT
+- Type: int
+- Default: 445
+
+--smb-pass
+
+SMB password.
+
+NB Input to this must be obscured - see rclone obscure.
+
+Properties:
+
+- Config: pass
+- Env Var: RCLONE_SMB_PASS
+- Type: string
+- Required: false
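+
+For example (a sketch, with MyPassword as a placeholder), the obscured
+password can be supplied via the environment variable listed above:
+
+    RCLONE_SMB_PASS="$(rclone obscure 'MyPassword')" rclone lsd remote: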
+
+--smb-domain
+
+Domain name for NTLM authentication.
+
+Properties:
+
+- Config: domain
+- Env Var: RCLONE_SMB_DOMAIN
+- Type: string
+- Default: "WORKGROUP"
+
+Advanced options
+
+Here are the Advanced options specific to smb (SMB / CIFS).
+
+--smb-idle-timeout
+
+Max time before closing idle connections.
+
+If no connections have been returned to the connection pool in the time
+given, rclone will empty the connection pool.
+
+Set to 0 to keep connections indefinitely.
+
+Properties:
+
+- Config: idle_timeout
+- Env Var: RCLONE_SMB_IDLE_TIMEOUT
+- Type: Duration
+- Default: 1m0s
+
+--smb-hide-special-share
+
+Hide special shares (e.g. print$) which users aren't supposed to access.
+
+Properties:
+
+- Config: hide_special_share
+- Env Var: RCLONE_SMB_HIDE_SPECIAL_SHARE
+- Type: bool
+- Default: true
+
+--smb-case-insensitive
+
+Whether the server is configured to be case-insensitive.
+
+Always true on Windows shares.
+
+Properties:
+
+- Config: case_insensitive
+- Env Var: RCLONE_SMB_CASE_INSENSITIVE
+- Type: bool
+- Default: true
+
+--smb-encoding
+
+The encoding for the backend.
+
+See the encoding section in the overview for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_SMB_ENCODING
+- Type: MultiEncoder
+- Default:
+ Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot
+
Storj
Storj is an encrypted, secure, and cost-effective object storage service
@@ -38939,6 +40428,189 @@ Options:
Changelog
+v1.60.0 - 2022-10-21
+
+See commits
+
+- New backends
+ - Oracle object storage (Manoj Ghosh)
+ - SMB / CIFS (Windows file sharing) (Lesmiscore)
+ - New S3 providers
+ - IONOS Cloud Storage (Dmitry Deniskin)
+ - Qiniu KODO (Bachue Zhou)
+- New Features
+ - build
+ - Update to go1.19 and make go1.17 the minimum required
+ version (Nick Craig-Wood)
+ - Install.sh: fix arm-v7 download (Ole Frost)
+ - fs: Warn the user when using an existing remote name without a
+ colon (Nick Craig-Wood)
+ - httplib: Add --xxx-min-tls-version option to select minimum TLS
+ version for HTTP servers (Robert Newson)
+ - librclone: Add PHP bindings and test program (Jordi Gonzalez
+ Muñoz)
+ - operations
+ - Add --server-side-across-configs global flag for any backend
+ (Nick Craig-Wood)
+ - Optimise --copy-dest and --compare-dest (Nick Craig-Wood)
+ - rc: add job/stopgroup to stop group (Evan Spensley)
+ - serve dlna
+ - Add --announce-interval to control SSDP Announce Interval
+ (YanceyChiew)
+        - Add --interface to specify SSDP interface names (Simon Bos)
+ - Add support for more external subtitles (YanceyChiew)
+ - Add verification of addresses (YanceyChiew)
+ - sync: Optimise --copy-dest and --compare-dest (Nick Craig-Wood)
+ - doc updates (albertony, Alexander Knorr, anonion, João Henrique
+ Franco, Josh Soref, Lorenzo Milesi, Marco Molteni, Mark Trolley,
+ Ole Frost, partev, Ryan Morey, Tom Mombourquette, YFdyh000)
+- Bug Fixes
+ - filter
+ - Fix incorrect filtering with UseFilter context flag and
+ wrapping backends (Nick Craig-Wood)
+ - Make sure we check --files-from when looking for a single
+ file (Nick Craig-Wood)
+ - rc
+ - Fix mount/listmounts not returning the full Fs entered in
+ mount/mount (Tom Mombourquette)
+ - Handle external unmount when mounting (Isaac Aymerich)
+ - Validate Daemon option is not set when mounting a volume via
+ RC (Isaac Aymerich)
+ - sync: Update docs and error messages to reflect fixes to overlap
+ checks (Nick Naumann)
+- VFS
+ - Reduce memory use by embedding sync.Cond (Nick Craig-Wood)
+ - Reduce memory usage by re-ordering commonly used structures
+ (Nick Craig-Wood)
+ - Fix excess CPU used by VFS cache cleaner looping (Nick
+ Craig-Wood)
+- Local
+ - Obey file filters in listing to fix errors on excluded files
+ (Nick Craig-Wood)
+ - Fix "Failed to read metadata: function not implemented" on old
+ Linux kernels (Nick Craig-Wood)
+- Compress
+ - Fix crash due to nil metadata (Nick Craig-Wood)
+ - Fix error handling to not use or return nil objects (Nick
+ Craig-Wood)
+- Drive
+ - Make --drive-stop-on-upload-limit obey quota exceeded error
+ (Steve Kowalik)
+- FTP
+ - Add --ftp-force-list-hidden option to show hidden items (Øyvind
+ Heddeland Instefjord)
+ - Fix hang when using ExplicitTLS to certain servers. (Nick
+ Craig-Wood)
+- Google Cloud Storage
+ - Add --gcs-endpoint flag and config parameter (Nick Craig-Wood)
+- Hubic
+ - Remove backend as service has now shut down (Nick Craig-Wood)
+- Onedrive
+ - Rename Onedrive(cn) 21Vianet to Vnet Group (Yen Hu)
+ - Disable change notify in China region since it is not supported
+ (Nick Craig-Wood)
+- S3
+ - Implement --s3-versions flag to show old versions of objects if
+ enabled (Nick Craig-Wood)
+ - Implement --s3-version-at flag to show versions of objects at a
+ particular time (Nick Craig-Wood)
+ - Implement backend versioning command to get/set bucket
+ versioning (Nick Craig-Wood)
+ - Implement Purge to purge versions and backend cleanup-hidden
+ (Nick Craig-Wood)
+ - Add --s3-decompress flag to decompress gzip-encoded files (Nick
+ Craig-Wood)
+ - Add --s3-sse-customer-key-base64 to supply keys with binary data
+ (Richard Bateman)
+ - Try to keep the maximum precision in ModTime with
+ --user-server-modtime (Nick Craig-Wood)
+ - Drop binary metadata with an ERROR message as it can't be stored
+ (Nick Craig-Wood)
+ - Add --s3-no-system-metadata to suppress read and write of system
+ metadata (Nick Craig-Wood)
+- SFTP
+ - Fix directory creation races (Lesmiscore)
+- Swift
+ - Add --swift-no-large-objects to reduce HEAD requests (Nick
+ Craig-Wood)
+- Union
+ - Propagate SlowHash feature to fix hasher interaction
+ (Lesmiscore)
+
+v1.59.2 - 2022-09-15
+
+See commits
+
+- Bug Fixes
+ - config: Move locking to fix fatal error: concurrent map read and
+ map write (Nick Craig-Wood)
+- Local
+ - Disable xattr support if the filesystems indicates it is not
+ supported (Nick Craig-Wood)
+- Azure Blob
+ - Fix chunksize calculations producing too many parts (Nick
+ Craig-Wood)
+- B2
+ - Fix chunksize calculations producing too many parts (Nick
+ Craig-Wood)
+- S3
+ - Fix chunksize calculations producing too many parts (Nick
+ Craig-Wood)
+
+v1.59.1 - 2022-08-08
+
+See commits
+
+- Bug Fixes
+ - accounting: Fix panic in core/stats-reset with unknown group
+ (Nick Craig-Wood)
+ - build: Fix android build after GitHub actions change (Nick
+ Craig-Wood)
+ - dlna: Fix SOAP action header parsing (Joram Schrijver)
+ - docs: Fix links to mount command from install docs (albertony)
+ - dropbox: Fix ChangeNotify was unable to decrypt errors (Nick
+ Craig-Wood)
+ - fs: Fix parsing of times and durations of the form "YYYY-MM-DD
+ HH:MM:SS" (Nick Craig-Wood)
+ - serve sftp: Fix checksum detection (Nick Craig-Wood)
+ - sync: Add accidentally missed filter-sensitivity to --backup-dir
+ option (Nick Naumann)
+- Combine
+ - Fix docs showing remote= instead of upstreams= (Nick Craig-Wood)
+ - Throw error if duplicate directory name is specified (Nick
+ Craig-Wood)
+ - Fix errors with backends shutting down while in use (Nick
+ Craig-Wood)
+- Dropbox
+ - Fix hang on quit with --dropbox-batch-mode off (Nick Craig-Wood)
+ - Fix infinite loop on uploading a corrupted file (Nick
+ Craig-Wood)
+- Internetarchive
+ - Ignore checksums for files using the different method
+ (Lesmiscore)
+ - Handle hash symbol in the middle of filename (Lesmiscore)
+- Jottacloud
+ - Fix working with whitelabel Elgiganten Cloud
+ - Do not store username in config when using standard auth
+ (albertony)
+- Mega
+ - Fix nil pointer exception when bad node received (Nick
+ Craig-Wood)
+- S3
+ - Fix --s3-no-head panic: reflect: Elem of invalid type
+ s3.PutObjectInput (Nick Craig-Wood)
+- SFTP
+ - Fix issue with WS_FTP by working around failing RealPath
+ (albertony)
+- Union
+ - Fix duplicated files when using directories with leading / (Nick
+ Craig-Wood)
+ - Fix multiple files being uploaded when roots don't exist (Nick
+ Craig-Wood)
+ - Fix panic due to misalignment of struct field in 32 bit
+ architectures (r-ricci)
+
v1.59.0 - 2022-07-09
See commits
@@ -39252,7 +40924,7 @@ See commits
change (Nick Craig-Wood)
- Hard fork github.com/jlaffaye/ftp to fix
go get github.com/rclone/rclone (Nick Craig-Wood)
- - oauthutil: Fix crash when webrowser requests /robots.txt (Nick
+ - oauthutil: Fix crash when webbrowser requests /robots.txt (Nick
Craig-Wood)
- operations: Fix goroutine leak in case of copy retry (Ankur
Gupta)
@@ -39370,7 +41042,7 @@ See commits
(Nick Craig-Wood)
- Fix timeout on hashing large files by sending keepalives (Nick
Craig-Wood)
- - Fix unecessary seeking when uploading and downloading files
+ - Fix unnecessary seeking when uploading and downloading files
(Nick Craig-Wood)
- Update docs on how to create known_hosts file (Nick Craig-Wood)
- Storj
@@ -40170,9 +41842,9 @@ See commits
- Add toggle option for average s3ize in directory - key 'a'
(Adam Plánský)
- Add empty folder flag into ncdu browser (Adam Plánský)
- - Add ! (errror) and . (unreadable) file flags to go with e
+ - Add ! (error) and . (unreadable) file flags to go with e
(empty) (Nick Craig-Wood)
- - obscure: Make rclone osbcure - ignore newline at end of line
+ - obscure: Make rclone obscure - ignore newline at end of line
(Nick Craig-Wood)
- operations
- Add logs when need to upload files to set mod times (Nick
@@ -40209,7 +41881,7 @@ See commits
- move: Fix data loss when source and destination are the same
object (Nick Craig-Wood)
- operations
- - Fix --cutof-mode hard not cutting off immediately (Nick
+ - Fix --cutoff-mode hard not cutting off immediately (Nick
Craig-Wood)
- Fix --immutable error message (Nick Craig-Wood)
- sync
@@ -40278,7 +41950,7 @@ See commits
- Fixed crash on an empty file name (lluuaapp)
- Box
- Fix NewObject for files that differ in case (Nick Craig-Wood)
- - Fix finding directories in a case insentive way (Nick
+ - Fix finding directories in a case insensitive way (Nick
Craig-Wood)
- Chunker
- Skip long local hashing, hash in-transit (fixes) (Ivan Andreev)
@@ -40394,7 +42066,7 @@ See commits
Craig-Wood)
- Sugarsync
- Fix NewObject for files that differ in case (Nick Craig-Wood)
- - Fix finding directories in a case insentive way (Nick
+ - Fix finding directories in a case insensitive way (Nick
Craig-Wood)
- Swift
- Fix deletion of parts of Static Large Object (SLO) (Nguyễn Hữu
@@ -40490,7 +42162,7 @@ v1.53.2 - 2020-10-26
See commits
- Bug Fixes
- - acounting
+ - accounting
- Fix incorrect speed and transferTime in core/stats (Nick
Craig-Wood)
- Stabilize display order of transfers on Windows (Nick
@@ -41854,8 +43526,8 @@ v1.49.0 - 2019-08-26
- rcd: Fix permissions problems on cache directory with web gui
download (Nick Craig-Wood)
- Mount
- - Default --daemon-timout to 15 minutes on macOS and FreeBSD (Nick
- Craig-Wood)
+ - Default --daemon-timeout to 15 minutes on macOS and FreeBSD
+ (Nick Craig-Wood)
- Update docs to show mounting from root OK for bucket-based (Nick
Craig-Wood)
- Remove nonseekable flag from write files (Nick Craig-Wood)
@@ -42286,7 +43958,7 @@ v1.46 - 2019-02-09
- HTTP
- Add an example with username and password which is supported but
wasn't documented (Nick Craig-Wood)
- - Fix backend with --files-from and non-existent files (Nick
+ - Fix backend with --files-from and nonexistent files (Nick
Craig-Wood)
- Hubic
- Make error message more informative if authentication fails
@@ -42882,7 +44554,7 @@ v1.41 - 2018-04-28
- FTP
- Work around strange response from box FTP server
- More workarounds for FTP servers to fix mkParentDir error
- - Fix no error on listing non-existent directory
+ - Fix no error on listing nonexistent directory
- Google Cloud Storage
- Add service_account_credentials (Matt Holt)
- Detect bucket presence by listing it - minimises permissions
@@ -42973,7 +44645,7 @@ v1.40 - 2018-03-19
- Make a beta release for all branches on the main repo (but not
pull requests)
- Bug Fixes
- - config: fixes errors on non existing config by loading config
+ - config: fixes errors on nonexistent config by loading config
file only on first access
- config: retry saving the config after failure (Mateusz)
- sync: when using --backup-dir don't delete files if we can't set
@@ -43601,7 +45273,7 @@ v1.34 - 2016-11-06
Tomasz Mazur
- S3
- Command line and config file support for
- - Setting/overriding ACL - thanks Radek Senfeld
+ - Setting/overriding ACL - thanks Radek Šenfeld
- Setting storage class - thanks Asko Tamm
- Drive
- Make exponential backoff work exactly as per Google
@@ -45089,6 +46761,32 @@ email addresses removed from here need to be addeed to bin/.ignore-emails to mak
- Lorenzo Maiorfi maiorfi@gmail.com
- Claudio Maradonna penguyman@stronzi.org
- Ovidiu Victor Tatar ovi.tatar@googlemail.com
+- Evan Spensley epspensley@gmail.com
+- Yen Hu 61753151+0x59656e@users.noreply.github.com
+- Steve Kowalik steven@wedontsleep.org
+- Jordi Gonzalez Muñoz jordigonzm@gmail.com
+- Joram Schrijver i@joram.io
+- Mark Trolley marktrolley@gmail.com
+- João Henrique Franco joaohenrique.franco@gmail.com
+- anonion aman207@users.noreply.github.com
+- Ryan Morey 4590343+rmorey@users.noreply.github.com
+- Simon Bos simonbos9@gmail.com
+- YFdyh000 yfdyh000@gmail.com
+- Josh Soref 2119212+jsoref@users.noreply.github.com
+- Øyvind Heddeland Instefjord instefjord@outlook.com
+- Dmitry Deniskin 110819396+ddeniskin@users.noreply.github.com
+- Alexander Knorr 106825+opexxx@users.noreply.github.com
+- Richard Bateman richard@batemansr.us
+- Dimitri Papadopoulos Orfanos
+ 3234522+DimitriPapadopoulos@users.noreply.github.com
+- Lorenzo Milesi lorenzo.milesi@yetopen.com
+- Isaac Aymerich isaac.aymerich@gmail.com
+- YanceyChiew 35898533+YanceyChiew@users.noreply.github.com
+- Manoj Ghosh msays2000@gmail.com
+- Bachue Zhou bachue.shu@gmail.com
+- Manoj Ghosh manoj.ghosh@oracle.com
+- Tom Mombourquette tom@devnode.com
+- Robert Newson rnewson@apache.org
Contact the rclone project
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index c11146b4f..f68c8f5a0 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -5,6 +5,82 @@ description: "Rclone Changelog"
# Changelog
+## v1.60.0 - 2022-10-21
+
+[See commits](https://github.com/rclone/rclone/compare/v1.59.0...v1.60.0)
+
+* New backends
+ * [Oracle object storage](/oracleobjectstorage/) (Manoj Ghosh)
+ * [SMB](/smb/) / CIFS (Windows file sharing) (Lesmiscore)
+ * New S3 providers
+ * [IONOS Cloud Storage](/s3/#ionos) (Dmitry Deniskin)
+ * [Qiniu KODO](/s3/#qiniu) (Bachue Zhou)
+* New Features
+ * build
+ * Update to go1.19 and make go1.17 the minimum required version (Nick Craig-Wood)
+ * Install.sh: fix arm-v7 download (Ole Frost)
+ * fs: Warn the user when using an existing remote name without a colon (Nick Craig-Wood)
+ * httplib: Add `--xxx-min-tls-version` option to select minimum TLS version for HTTP servers (Robert Newson)
+ * librclone: Add PHP bindings and test program (Jordi Gonzalez Muñoz)
+ * operations
+ * Add `--server-side-across-configs` global flag for any backend (Nick Craig-Wood)
+ * Optimise `--copy-dest` and `--compare-dest` (Nick Craig-Wood)
+ * rc: add `job/stopgroup` to stop group (Evan Spensley)
+ * serve dlna
+ * Add `--announce-interval` to control SSDP Announce Interval (YanceyChiew)
+    * Add `--interface` to specify SSDP interface names (Simon Bos)
+ * Add support for more external subtitles (YanceyChiew)
+ * Add verification of addresses (YanceyChiew)
+ * sync: Optimise `--copy-dest` and `--compare-dest` (Nick Craig-Wood)
+ * doc updates (albertony, Alexander Knorr, anonion, João Henrique Franco, Josh Soref, Lorenzo Milesi, Marco Molteni, Mark Trolley, Ole Frost, partev, Ryan Morey, Tom Mombourquette, YFdyh000)
+* Bug Fixes
+ * filter
+ * Fix incorrect filtering with `UseFilter` context flag and wrapping backends (Nick Craig-Wood)
+ * Make sure we check `--files-from` when looking for a single file (Nick Craig-Wood)
+ * rc
+ * Fix `mount/listmounts` not returning the full Fs entered in `mount/mount` (Tom Mombourquette)
+ * Handle external unmount when mounting (Isaac Aymerich)
+ * Validate Daemon option is not set when mounting a volume via RC (Isaac Aymerich)
+ * sync: Update docs and error messages to reflect fixes to overlap checks (Nick Naumann)
+* VFS
+ * Reduce memory use by embedding `sync.Cond` (Nick Craig-Wood)
+ * Reduce memory usage by re-ordering commonly used structures (Nick Craig-Wood)
+ * Fix excess CPU used by VFS cache cleaner looping (Nick Craig-Wood)
+* Local
+ * Obey file filters in listing to fix errors on excluded files (Nick Craig-Wood)
+ * Fix "Failed to read metadata: function not implemented" on old Linux kernels (Nick Craig-Wood)
+* Compress
+ * Fix crash due to nil metadata (Nick Craig-Wood)
+ * Fix error handling to not use or return nil objects (Nick Craig-Wood)
+* Drive
+ * Make `--drive-stop-on-upload-limit` obey quota exceeded error (Steve Kowalik)
+* FTP
+ * Add `--ftp-force-list-hidden` option to show hidden items (Øyvind Heddeland Instefjord)
+ * Fix hang when using ExplicitTLS to certain servers. (Nick Craig-Wood)
+* Google Cloud Storage
+ * Add `--gcs-endpoint` flag and config parameter (Nick Craig-Wood)
+* Hubic
+ * Remove backend as service has now shut down (Nick Craig-Wood)
+* Onedrive
+ * Rename Onedrive(cn) 21Vianet to Vnet Group (Yen Hu)
+ * Disable change notify in China region since it is not supported (Nick Craig-Wood)
+* S3
+ * Implement `--s3-versions` flag to show old versions of objects if enabled (Nick Craig-Wood)
+ * Implement `--s3-version-at` flag to show versions of objects at a particular time (Nick Craig-Wood)
+ * Implement `backend versioning` command to get/set bucket versioning (Nick Craig-Wood)
+ * Implement `Purge` to purge versions and `backend cleanup-hidden` (Nick Craig-Wood)
+ * Add `--s3-decompress` flag to decompress gzip-encoded files (Nick Craig-Wood)
+ * Add `--s3-sse-customer-key-base64` to supply keys with binary data (Richard Bateman)
+ * Try to keep the maximum precision in ModTime with `--user-server-modtime` (Nick Craig-Wood)
+ * Drop binary metadata with an ERROR message as it can't be stored (Nick Craig-Wood)
+ * Add `--s3-no-system-metadata` to suppress read and write of system metadata (Nick Craig-Wood)
+* SFTP
+ * Fix directory creation races (Lesmiscore)
+* Swift
+ * Add `--swift-no-large-objects` to reduce HEAD requests (Nick Craig-Wood)
+* Union
+ * Propagate SlowHash feature to fix hasher interaction (Lesmiscore)
+
## v1.59.2 - 2022-09-15
[See commits](https://github.com/rclone/rclone/compare/v1.59.1...v1.59.2)
diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md
index e98dc7443..f256d6f4f 100644
--- a/docs/content/commands/rclone.md
+++ b/docs/content/commands/rclone.md
@@ -37,7 +37,7 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone about](/commands/rclone_about/) - Get quota information from the remote.
* [rclone authorize](/commands/rclone_authorize/) - Remote authorization.
* [rclone backend](/commands/rclone_backend/) - Run a backend-specific command.
-* [rclone bisync](/commands/rclone_bisync/) - Perform bidirectonal synchronization between two paths.
+* [rclone bisync](/commands/rclone_bisync/) - Perform bidirectional synchronization between two paths.
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
* [rclone checksum](/commands/rclone_checksum/) - Checks the files in the source against a SUM file.
diff --git a/docs/content/commands/rclone_bisync.md b/docs/content/commands/rclone_bisync.md
index 76fcca809..4a7127329 100644
--- a/docs/content/commands/rclone_bisync.md
+++ b/docs/content/commands/rclone_bisync.md
@@ -1,17 +1,17 @@
---
title: "rclone bisync"
-description: "Perform bidirectonal synchronization between two paths."
+description: "Perform bidirectional synchronization between two paths."
slug: rclone_bisync
url: /commands/rclone_bisync/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/bisync/ and as part of making a release run "make commanddocs"
---
# rclone bisync
-Perform bidirectonal synchronization between two paths.
+Perform bidirectional synchronization between two paths.
## Synopsis
-Perform bidirectonal synchronization between two paths.
+Perform bidirectional synchronization between two paths.
[Bisync](https://rclone.org/bisync/) provides a
bidirectional cloud sync solution in rclone.
diff --git a/docs/content/commands/rclone_completion_bash.md b/docs/content/commands/rclone_completion_bash.md
index b3c24a6e4..b5772be3e 100644
--- a/docs/content/commands/rclone_completion_bash.md
+++ b/docs/content/commands/rclone_completion_bash.md
@@ -28,7 +28,7 @@ To load completions for every new session, execute once:
### macOS:
- rclone completion bash > /usr/local/etc/bash_completion.d/rclone
+ rclone completion bash > $(brew --prefix)/etc/bash_completion.d/rclone
You will need to start a new shell for this setup to take effect.
diff --git a/docs/content/commands/rclone_completion_zsh.md b/docs/content/commands/rclone_completion_zsh.md
index b48faa25a..1490817f7 100644
--- a/docs/content/commands/rclone_completion_zsh.md
+++ b/docs/content/commands/rclone_completion_zsh.md
@@ -18,6 +18,10 @@ to enable it. You can execute the following once:
echo "autoload -U compinit; compinit" >> ~/.zshrc
+To load completions in your current shell session:
+
+ source <(rclone completion zsh); compdef _rclone rclone
+
To load completions for every new session, execute once:
### Linux:
@@ -26,7 +30,7 @@ To load completions for every new session, execute once:
### macOS:
- rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone
+ rclone completion zsh > $(brew --prefix)/share/zsh/site-functions/_rclone
You will need to start a new shell for this setup to take effect.
diff --git a/docs/content/commands/rclone_config_create.md b/docs/content/commands/rclone_config_create.md
index e1b5572c1..8d12ea86b 100644
--- a/docs/content/commands/rclone_config_create.md
+++ b/docs/content/commands/rclone_config_create.md
@@ -45,7 +45,7 @@ are 100% certain you are already passing obscured passwords then use
`rclone config password` command.
The flag `--non-interactive` is for use by applications that wish to
-configure rclone themeselves, rather than using rclone's text based
+configure rclone themselves, rather than using rclone's text based
configuration questions. If this flag is set, and rclone needs to ask
the user a question, a JSON blob will be returned with the question in
it.
diff --git a/docs/content/commands/rclone_config_update.md b/docs/content/commands/rclone_config_update.md
index c4fc4e26e..468a68307 100644
--- a/docs/content/commands/rclone_config_update.md
+++ b/docs/content/commands/rclone_config_update.md
@@ -45,7 +45,7 @@ are 100% certain you are already passing obscured passwords then use
`rclone config password` command.
The flag `--non-interactive` is for use by applications that wish to
-configure rclone themeselves, rather than using rclone's text based
+configure rclone themselves, rather than using rclone's text based
configuration questions. If this flag is set, and rclone needs to ask
the user a question, a JSON blob will be returned with the question in
it.
diff --git a/docs/content/commands/rclone_hashsum.md b/docs/content/commands/rclone_hashsum.md
index 0c34c0cd6..f9b96e6a4 100644
--- a/docs/content/commands/rclone_hashsum.md
+++ b/docs/content/commands/rclone_hashsum.md
@@ -26,7 +26,7 @@ For the MD5 and SHA1 algorithms there are also dedicated commands,
This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
-when there is data to read (if not, the hypen will be treated literaly,
+when there is data to read (if not, the hyphen will be treated literally,
as a relative path).
Run without a hash to see the list of all supported hashes, e.g.
diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md
index 6a47d0c39..a1bfc8cfe 100644
--- a/docs/content/commands/rclone_ls.md
+++ b/docs/content/commands/rclone_ls.md
@@ -42,7 +42,7 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
-Listing a non-existent directory will produce an error except for
+Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md
index fbd9b2c92..86a1f47c7 100644
--- a/docs/content/commands/rclone_lsd.md
+++ b/docs/content/commands/rclone_lsd.md
@@ -52,7 +52,7 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
-Listing a non-existent directory will produce an error except for
+Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
diff --git a/docs/content/commands/rclone_lsf.md b/docs/content/commands/rclone_lsf.md
index 2cdd3ce5c..0a34277fa 100644
--- a/docs/content/commands/rclone_lsf.md
+++ b/docs/content/commands/rclone_lsf.md
@@ -126,7 +126,7 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
-Listing a non-existent directory will produce an error except for
+Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
diff --git a/docs/content/commands/rclone_lsjson.md b/docs/content/commands/rclone_lsjson.md
index abf8a39ca..e3cac5d21 100644
--- a/docs/content/commands/rclone_lsjson.md
+++ b/docs/content/commands/rclone_lsjson.md
@@ -56,7 +56,7 @@ If `--files-only` is not specified directories in addition to the files
will be returned.
If `--metadata` is set then an additional Metadata key will be returned.
-This will have metdata in rclone standard format as a JSON object.
+This will have metadata in rclone standard format as a JSON object.
if `--stat` is set then a single JSON blob will be returned about the
item pointed to. This will return an error if the item isn't found.
@@ -102,7 +102,7 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
-Listing a non-existent directory will produce an error except for
+Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md
index 8fbaaba67..f493916a9 100644
--- a/docs/content/commands/rclone_lsl.md
+++ b/docs/content/commands/rclone_lsl.md
@@ -42,7 +42,7 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
-Listing a non-existent directory will produce an error except for
+Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md
index de2b68fb0..9cd53cad0 100644
--- a/docs/content/commands/rclone_md5sum.md
+++ b/docs/content/commands/rclone_md5sum.md
@@ -26,7 +26,7 @@ to running `rclone hashsum MD5 remote:path`.
This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
-when there is data to read (if not, the hypen will be treated literaly,
+when there is data to read (if not, the hyphen will be treated literally,
as a relative path).
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index fa132d24c..b322b23f8 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -98,7 +98,7 @@ and experience unexpected program errors, freezes or other issues, consider moun
as a network drive instead.
When mounting as a fixed disk drive you can either mount to an unused drive letter,
-or to a path representing a **non-existent** subdirectory of an **existing** parent
+or to a path representing a **nonexistent** subdirectory of an **existing** parent
directory or drive. Using the special value `*` will tell rclone to
automatically assign the next available drive letter, starting with Z: and moving backward.
Examples:
@@ -129,7 +129,7 @@ the mapped drive, shown in Windows Explorer etc, while the complete
`\\server\share` will be reported as the remote UNC path by
`net use` etc, just like a normal network drive mapping.
-If you specify a full network share UNC path with `--volname`, this will implicitely
+If you specify a full network share UNC path with `--volname`, this will implicitly
set the `--network-mode` option, so the following two examples have same result:
rclone mount remote:path/to/files X: --network-mode
@@ -138,7 +138,7 @@ set the `--network-mode` option, so the following two examples have same result:
You may also specify the network share UNC path as the mountpoint itself. Then rclone
will automatically assign a drive letter, same as with `*` and use that as
mountpoint, and instead use the UNC path specified as the volume name, as if it were
-specified with the `--volname` option. This will also implicitely set
+specified with the `--volname` option. This will also implicitly set
the `--network-mode` option. This means the following two examples have same result:
rclone mount remote:path/to/files \\cloud\remote
@@ -174,7 +174,7 @@ The permissions on each entry will be set according to [options](#options)
The default permissions corresponds to `--file-perms 0666 --dir-perms 0777`,
i.e. read and write permissions to everyone. This means you will not be able
-to start any programs from the the mount. To be able to do that you must add
+to start any programs from the mount. To be able to do that you must add
execute permissions, e.g. `--file-perms 0777 --dir-perms 0777` to add it
to everyone. If the program needs to write files, chances are you will have
to enable [VFS File Caching](#vfs-file-caching) as well (see also [limitations](#limitations)).
@@ -245,8 +245,8 @@ applications won't work with their files on an rclone mount without
`--vfs-cache-mode writes` or `--vfs-cache-mode full`.
See the [VFS File Caching](#vfs-file-caching) section for more info.
-The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2,
-Hubic) do not support the concept of empty directories, so empty
+The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2)
+do not support the concept of empty directories, so empty
directories will have a tendency to disappear once they fall out of
the directory cache.
@@ -341,6 +341,8 @@ mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/p
or create systemd mount units:
```
# /etc/systemd/system/mnt-data.mount
+[Unit]
+After=network-online.target
[Mount]
Type=rclone
What=sftp1:subdir
@@ -352,6 +354,7 @@ optionally accompanied by systemd automount unit
```
# /etc/systemd/system/mnt-data.automount
[Unit]
+After=network-online.target
Before=remote-fs.target
[Automount]
Where=/mnt/data
diff --git a/docs/content/commands/rclone_ncdu.md b/docs/content/commands/rclone_ncdu.md
index e1f51e604..154eb219b 100644
--- a/docs/content/commands/rclone_ncdu.md
+++ b/docs/content/commands/rclone_ncdu.md
@@ -45,7 +45,7 @@ press '?' to toggle the help on and off. The supported keys are:
q/ESC/^c to quit
Listed files/directories may be prefixed by a one-character flag,
-some of them combined with a description in brackes at end of line.
+some of them combined with a description in brackets at the end of the line.
These flags have the following meaning:
e means this is an empty directory, i.e. contains no files (but
diff --git a/docs/content/commands/rclone_serve_dlna.md b/docs/content/commands/rclone_serve_dlna.md
index 9ab1e83b9..0cb85400e 100644
--- a/docs/content/commands/rclone_serve_dlna.md
+++ b/docs/content/commands/rclone_serve_dlna.md
@@ -32,11 +32,6 @@ IPs.
Use `--name` to choose the friendly server name, which is by
default "rclone (hostname)".
-Use `--announce-interval` to specify the interval at which SSDP server
-announce devices and services. Larger active announcement intervals help
-keep the multicast domain clean, this value does not affect unicast
-responses to `M-SEARCH` requests from other devices.
-
Use `--log-trace` in conjunction with `-vv` to enable additional debug
logging of all UPNP traffic.
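+
+As a sketch combining these flags (the interface name and remote are
+illustrative):
+
+```
+rclone serve dlna remote:media --interface eth0 --announce-interval 30m -vv --log-trace
+```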
@@ -367,11 +362,13 @@ rclone serve dlna remote:path [flags]
```
--addr string The ip:port or :port to bind the DLNA http server to (default ":7879")
+ --announce-interval duration The interval between SSDP announcements (default 12m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for dlna
+ --interface stringArray The interface to use for SSDP (repeat as necessary)
--log-trace Enable trace logging of SOAP traffic
--name string Name of DLNA server
--no-checksum Don't compare checksums on up/download
diff --git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md
index 329bc1420..5ad079b41 100644
--- a/docs/content/commands/rclone_serve_http.md
+++ b/docs/content/commands/rclone_serve_http.md
@@ -60,6 +60,10 @@ of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.
+--min-tls-version is the minimum TLS version that is acceptable. Valid
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
+"tls1.0").
+
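+For example, to refuse TLS versions older than 1.2 (a sketch; the
+certificate paths are illustrative):
+
+```
+rclone serve http remote:path --cert server.crt --key server.key --min-tls-version tls1.2
+```
+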
### Template
`--template` allows a user to specify a custom markup template for HTTP
@@ -446,6 +450,7 @@ rclone serve http remote:path [flags]
--htpasswd string A htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
diff --git a/docs/content/commands/rclone_serve_restic.md b/docs/content/commands/rclone_serve_restic.md
index 881e697f0..fff9d2f20 100644
--- a/docs/content/commands/rclone_serve_restic.md
+++ b/docs/content/commands/rclone_serve_restic.md
@@ -174,6 +174,10 @@ of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.
+--min-tls-version is the minimum TLS version that is acceptable. Valid
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
+"tls1.0").
+
```
rclone serve restic remote:path [flags]
@@ -192,6 +196,7 @@ rclone serve restic remote:path [flags]
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--pass string Password for authentication
--private-repos Users can only access their private repo
--realm string Realm for authentication (default "rclone")
diff --git a/docs/content/commands/rclone_serve_sftp.md b/docs/content/commands/rclone_serve_sftp.md
index 226e29769..9bed264fb 100644
--- a/docs/content/commands/rclone_serve_sftp.md
+++ b/docs/content/commands/rclone_serve_sftp.md
@@ -11,11 +11,19 @@ Serve the remote over SFTP.
## Synopsis
-Run a SFTP server to serve a remote over SFTP. This can be used
-with an SFTP client or you can make a remote of type sftp to use with it.
+Run an SFTP server to serve a remote over SFTP. This can be used
+with an SFTP client or you can make a remote of type [sftp](/sftp) to use with it.
-You can use the filter flags (e.g. `--include`, `--exclude`) to control what
-is served.
+You can use the [filter](/filtering) flags (e.g. `--include`, `--exclude`)
+to control what is served.
+
+The server will respond to a small number of shell commands, mainly
+md5sum, sha1sum and df, which enable it to provide support for checksums
+and the about feature when accessed from an sftp remote.
+
+Note that this server uses a standard 32 KiB packet payload size, which
+means you must not configure the client to expect anything else, e.g.
+with the [chunk_size](/sftp/#sftp-chunk-size) option on an sftp remote.
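+
+As a sketch (the address, user and password are illustrative), serving a
+remote while keeping the default 32 KiB payload size:
+
+```
+rclone serve sftp remote:path --addr :2022 --user sftpuser --pass secret
+```
+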
The server will log errors. Use `-v` to see access logs.
@@ -28,11 +36,6 @@ You must provide some means of authentication, either with
`--auth-proxy`, or set the `--no-auth` flag for no
authentication when logging in.
-Note that this also implements a small number of shell commands so
-that it can provide md5sum/sha1sum/df information for the rclone sftp
-backend. This means that is can support SHA1SUMs, MD5SUMs and the
-about command when paired with the rclone sftp backend.
-
If you don't supply a host `--key` then rclone will generate rsa, ecdsa
and ed25519 variants, and cache them for later use in rclone's cache
directory (see `rclone help flags cache-dir`) in the "serve-sftp"
@@ -484,7 +487,7 @@ rclone serve sftp remote:path [flags]
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
- --stdio Run an sftp server on run stdin/stdout
+ --stdio Run an sftp server on stdin/stdout
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md
index 5209719f4..21c2491ea 100644
--- a/docs/content/commands/rclone_serve_webdav.md
+++ b/docs/content/commands/rclone_serve_webdav.md
@@ -109,6 +109,10 @@ of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
certificate authority certificate.
+--min-tls-version is the minimum TLS version that is acceptable. Valid
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
+"tls1.0").
+
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -531,6 +535,7 @@ rclone serve webdav remote:path [flags]
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md
index a61b15e45..49a09ec29 100644
--- a/docs/content/commands/rclone_sha1sum.md
+++ b/docs/content/commands/rclone_sha1sum.md
@@ -26,7 +26,7 @@ to running `rclone hashsum SHA1 remote:path`.
This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
-when there is data to read (if not, the hypen will be treated literaly,
+when there is data to read (if not, the hyphen will be treated literally,
as a relative path).
This command can also hash data received on STDIN, if not passing
diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md
index 0f7da84a2..173714f21 100644
--- a/docs/content/commands/rclone_sync.md
+++ b/docs/content/commands/rclone_sync.md
@@ -37,6 +37,11 @@ extended explanation in the [copy](/commands/rclone_copy/) command if unsure.
If dest:path doesn't exist, it is created and the source:path contents
go there.
+It is not possible to sync overlapping remotes. However, you may sync
+to a destination that is inside the source directory if you exclude
+the destination from the sync, either with a filter rule or by putting
+an exclude-if-present file inside the destination directory.
+
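+As a sketch (paths are illustrative), syncing to a destination inside
+the source by excluding it with a filter rule:
+
+```
+rclone sync remote:data remote:data/archive --exclude "/archive/**"
+```
+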
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics
**Note**: Use the `rclone dedupe` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.
diff --git a/docs/content/flags.md b/docs/content/flags.md
index acf93f6c7..1cdce0e1d 100644
--- a/docs/content/flags.md
+++ b/docs/content/flags.md
@@ -119,7 +119,7 @@ These flags are available for every command.
--rc-job-expire-interval duration Interval to check for expired async jobs (default 10s)
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
- --rc-min-tls-version string Minimum TLS version that is acceptable
+ --rc-min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--rc-no-auth Don't require auth for certain methods
--rc-pass string Password for authentication
--rc-realm string Realm for authentication (default "rclone")
@@ -136,6 +136,7 @@ These flags are available for every command.
--refresh-times Refresh the modtime of remote files
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable)
+ --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
@@ -161,7 +162,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.60.0")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -348,6 +349,7 @@ and may be set in the config file.
--ftp-disable-utf8 Disable using UTF-8 even if server advertises support
--ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
+ --ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
@@ -358,7 +360,6 @@ and may be set in the config file.
--ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
--ftp-user string FTP username (default "$USER")
--ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk)
- --ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD.
--gcs-anonymous Access public buckets and objects without credentials
--gcs-auth-url string Auth server URL
--gcs-bucket-acl string Access Control List for new buckets
@@ -367,6 +368,7 @@ and may be set in the config file.
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gcs-endpoint string Endpoint for the service
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
--gcs-object-acl string Access Control List for new objects
@@ -412,14 +414,6 @@ and may be set in the config file.
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
- --hubic-auth-url string Auth server URL
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
- --hubic-client-id string OAuth Client Id
- --hubic-client-secret string OAuth Client Secret
- --hubic-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
- --hubic-no-chunk Don't chunk files during streaming upload
- --hubic-token string OAuth Access Token as a JSON blob
- --hubic-token-url string Token server url
--internetarchive-access-key-id string IAS3 Access Key
--internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true)
--internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
@@ -535,6 +529,7 @@ and may be set in the config file.
--s3-bucket-acl string Canned ACL used when creating buckets
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
+ --s3-decompress If set this will decompress gzip encoded objects
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
@@ -553,6 +548,7 @@ and may be set in the config file.
--s3-no-check-bucket If set, don't attempt to check the bucket exists or create it
--s3-no-head If set, don't HEAD uploaded objects to check integrity
--s3-no-head-object If set, do not do HEAD before GET when getting objects
+ --s3-no-system-metadata Suppress setting and reading of system metadata
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
@@ -562,7 +558,8 @@ and may be set in the config file.
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
- --s3-sse-customer-key string If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data
+ --s3-sse-customer-key string To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
+ --s3-sse-customer-key-base64 string If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
@@ -572,6 +569,8 @@ and may be set in the config file.
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
+ --s3-version-at Time Show file versions as they were at the specified time (default off)
+ --s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
--seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
@@ -618,6 +617,15 @@ and may be set in the config file.
--sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent")
--skip-links Don't warn about skipped symlinks
+ --smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
+ --smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
+ --smb-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true)
+ --smb-host string SMB server hostname to connect to
+ --smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
+ --smb-pass string SMB password (obscured)
+ --smb-port int SMB port number (default 445)
+ --smb-user string SMB username (default "$USER")
--storj-access-grant string Access grant
--storj-api-key string API key
--storj-passphrase string Encryption passphrase
@@ -648,6 +656,7 @@ and may be set in the config file.
--swift-key string API key or password (OS_PASSWORD)
--swift-leave-parts-on-error If true avoid calling abort upload on a failure
--swift-no-chunk Don't chunk files during streaming upload
+ --swift-no-large-objects Disable support for static and dynamic large objects
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
diff --git a/docs/content/ftp.md b/docs/content/ftp.md
index 7d4da7d7b..0ccd0f915 100644
--- a/docs/content/ftp.md
+++ b/docs/content/ftp.md
@@ -248,6 +248,20 @@ Here are the Advanced options specific to ftp (FTP).
Maximum number of FTP simultaneous connections, 0 for unlimited.
+Note that setting this is very likely to cause deadlocks so it should
+be used with care.
+
+If you are doing a sync or copy then make sure concurrency is one more
+than the sum of `--transfers` and `--checkers`.
+
+If you use `--check-first` then it just needs to be one more than the
+maximum of `--checkers` and `--transfers`.
+
+So for `concurrency 3` you'd use `--checkers 2 --transfers 2
+--check-first` or `--checkers 1 --transfers 1`.
+
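+For example (a sketch), a sync that respects a server limit of 3
+simultaneous connections:
+
+```
+rclone sync ftp:src remote:dst --ftp-concurrency 3 --checkers 2 --transfers 2 --check-first
+```
+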
Properties:
- Config: concurrency
diff --git a/docs/content/googlecloudstorage.md b/docs/content/googlecloudstorage.md
index e23090585..b98f89035 100644
--- a/docs/content/googlecloudstorage.md
+++ b/docs/content/googlecloudstorage.md
@@ -621,6 +621,19 @@ Properties:
- Type: bool
- Default: false
+#### --gcs-endpoint
+
+Endpoint for the service.
+
+Leave blank normally.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_GCS_ENDPOINT
+- Type: string
+- Required: false
+
#### --gcs-encoding
The encoding for the backend.
diff --git a/docs/content/rc.md b/docs/content/rc.md
index 51d291720..4b966635e 100644
--- a/docs/content/rc.md
+++ b/docs/content/rc.md
@@ -982,6 +982,12 @@ Parameters:
- jobid - id of the job (integer).
+### job/stopgroup: Stop all running jobs in a group {#job-stopgroup}
+
+Parameters:
+
+- group - name of the group (string).
+
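+For example (a sketch; the group name is illustrative):
+
+```
+rclone rc job/stopgroup group=job/123
+```
+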
### mount/listmounts: Show current mount points {#mount-listmounts}
This shows currently mounted points, which can be used for performing an unmount.
@@ -1057,9 +1063,11 @@ Example:
**Authentication is required for this call.**
-### mount/unmountall: Show current mount points {#mount-unmountall}
+### mount/unmountall: Unmount all active mounts {#mount-unmountall}
-This shows currently mounted points, which can be used for performing an unmount.
+rclone allows Linux, FreeBSD, macOS and Windows to
+mount any of rclone's cloud storage systems as a file system with
+FUSE.
This takes no parameters and returns an error if unmount does not succeed.
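+
+For example, to unmount everything over the rc:
+
+```
+rclone rc mount/unmountall
+```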
diff --git a/docs/content/s3.md b/docs/content/s3.md
index a07e2dc27..4ffa23922 100644
--- a/docs/content/s3.md
+++ b/docs/content/s3.md
@@ -641,7 +641,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
### Standard options
-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi).
#### --s3-provider
@@ -676,6 +676,8 @@ Properties:
- IBM COS S3
- "IDrive"
- IDrive e2
+ - "IONOS"
+ - IONOS Cloud
- "LyveCloud"
- Seagate Lyve Cloud
- "Minio"
@@ -696,6 +698,8 @@ Properties:
- Tencent Cloud Object Storage (COS)
- "Wasabi"
- Wasabi Object Storage
+ - "Qiniu"
+ - Qiniu Object Storage (Kodo)
- "Other"
- Any other S3 compatible provider
@@ -966,13 +970,68 @@ Properties:
Region to connect to.
+Properties:
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+ - "cn-east-1"
+ - The default endpoint - a good choice if you are unsure.
+ - East China Region 1.
+ - Needs location constraint cn-east-1.
+ - "cn-east-2"
+ - East China Region 2.
+ - Needs location constraint cn-east-2.
+ - "cn-north-1"
+ - North China Region 1.
+ - Needs location constraint cn-north-1.
+ - "cn-south-1"
+ - South China Region 1.
+ - Needs location constraint cn-south-1.
+ - "us-north-1"
+ - North America Region.
+ - Needs location constraint us-north-1.
+ - "ap-southeast-1"
+ - Southeast Asia Region 1.
+ - Needs location constraint ap-southeast-1.
+ - "ap-northeast-1"
+ - Northeast Asia Region 1.
+ - Needs location constraint ap-northeast-1.
+
+#### --s3-region
+
+Region where your bucket will be created and your data stored.
+
+
+Properties:
+
+- Config: region
+- Env Var: RCLONE_S3_REGION
+- Provider: IONOS
+- Type: string
+- Required: false
+- Examples:
+ - "de"
+ - Frankfurt, Germany
+ - "eu-central-2"
+ - Berlin, Germany
+ - "eu-south-2"
+ - Logrono, Spain
+
+#### --s3-region
+
+Region to connect to.
+
Leave blank if you are using an S3 clone and you don't have a region.
Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
-- Provider: !AWS,Alibaba,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
+- Provider: !AWS,Alibaba,ChinaMobile,Cloudflare,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
- Type: string
- Required: false
- Examples:
@@ -1230,6 +1289,27 @@ Properties:
#### --s3-endpoint
+Endpoint for IONOS S3 Object Storage.
+
+Specify the endpoint from the same region.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Provider: IONOS
+- Type: string
+- Required: false
+- Examples:
+ - "s3-eu-central-1.ionoscloud.com"
+ - Frankfurt, Germany
+ - "s3-eu-central-2.ionoscloud.com"
+ - Berlin, Germany
+ - "s3-eu-south-2.ionoscloud.com"
+ - Logrono, Spain
+
+#### --s3-endpoint
+
Endpoint for OSS API.
Properties:
@@ -1495,6 +1575,33 @@ Properties:
#### --s3-endpoint
+Endpoint for Qiniu Object Storage.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_S3_ENDPOINT
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+ - "s3-cn-east-1.qiniucs.com"
+ - East China Endpoint 1
+ - "s3-cn-east-2.qiniucs.com"
+ - East China Endpoint 2
+ - "s3-cn-north-1.qiniucs.com"
+ - North China Endpoint 1
+ - "s3-cn-south-1.qiniucs.com"
+ - South China Endpoint 1
+ - "s3-us-north-1.qiniucs.com"
+ - North America Endpoint 1
+ - "s3-ap-southeast-1.qiniucs.com"
+ - Southeast Asia Endpoint 1
+ - "s3-ap-northeast-1.qiniucs.com"
+ - Northeast Asia Endpoint 1
+
+#### --s3-endpoint
+
Endpoint for S3 API.
Required when using an S3 clone.
@@ -1503,7 +1610,7 @@ Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
-- Provider: !AWS,IBMCOS,IDrive,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp
+- Provider: !AWS,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp,Qiniu
- Type: string
- Required: false
- Examples:
@@ -1830,13 +1937,42 @@ Properties:
Location constraint - must be set to match the Region.
+Used when creating buckets only.
+
+Properties:
+
+- Config: location_constraint
+- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+ - "cn-east-1"
+ - East China Region 1
+ - "cn-east-2"
+ - East China Region 2
+ - "cn-north-1"
+ - North China Region 1
+ - "cn-south-1"
+ - South China Region 1
+ - "us-north-1"
+ - North America Region 1
+ - "ap-southeast-1"
+ - Southeast Asia Region 1
+ - "ap-northeast-1"
+ - Northeast Asia Region 1
+
+#### --s3-location-constraint
+
+Location constraint - must be set to match the Region.
+
Leave blank if not sure. Used when creating buckets only.
Properties:
- Config: location_constraint
- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: !AWS,IBMCOS,IDrive,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,StackPath,Storj,TencentCOS
+- Provider: !AWS,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS
- Type: string
- Required: false
@@ -2066,9 +2202,30 @@ Properties:
- Archived storage.
- Prices are lower, but it needs to be restored first to be accessed.
+#### --s3-storage-class
+
+The storage class to use when storing new objects in Qiniu.
+
+Properties:
+
+- Config: storage_class
+- Env Var: RCLONE_S3_STORAGE_CLASS
+- Provider: Qiniu
+- Type: string
+- Required: false
+- Examples:
+ - "STANDARD"
+ - Standard storage class
+ - "LINE"
+ - Infrequent access storage mode
+ - "GLACIER"
+ - Archive storage mode
+ - "DEEP_ARCHIVE"
+ - Deep archive storage mode
+
### Advanced options
-Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi).
#### --s3-bucket-acl
@@ -2131,7 +2288,9 @@ Properties:
#### --s3-sse-customer-key
-If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.
+To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data.
+
+Alternatively you can provide --sse-customer-key-base64.
Properties:
@@ -2144,6 +2303,23 @@ Properties:
- ""
- None
+#### --s3-sse-customer-key-base64
+
+If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data.
+
+Alternatively you can provide --sse-customer-key.
+
+Properties:
+
+- Config: sse_customer_key_base64
+- Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_BASE64
+- Provider: AWS,Ceph,ChinaMobile,Minio
+- Type: string
+- Required: false
+- Examples:
+ - ""
+ - None
+
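+A sketch (the key is generated locally; paths are illustrative):
+
+```
+SSE_KEY=$(openssl rand -base64 32)
+rclone copy /local/path s3:bucket/path \
+    --s3-sse-customer-algorithm AES256 \
+    --s3-sse-customer-key-base64 "$SSE_KEY"
+```
+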
#### --s3-sse-customer-key-md5
If using SSE-C you may provide the secret encryption key MD5 checksum (optional).
@@ -2663,6 +2839,36 @@ Properties:
- Type: Time
- Default: off
+#### --s3-decompress
+
+If set this will decompress gzip encoded objects.
+
+It is possible to upload objects to S3 with "Content-Encoding: gzip"
+set. Normally rclone will download these files as compressed objects.
+
+If this flag is set then rclone will decompress these files with
+"Content-Encoding: gzip" as they are received. This means that rclone
+can't check the size and hash but the file contents will be decompressed.
+
+
+Properties:
+
+- Config: decompress
+- Env Var: RCLONE_S3_DECOMPRESS
+- Type: bool
+- Default: false
+
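+For example (a sketch), downloading gzip-encoded objects decompressed,
+accepting that size and hash cannot be checked:
+
+```
+rclone copy s3:bucket/gzipped /local/path --s3-decompress
+```
+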
+#### --s3-no-system-metadata
+
+Suppress setting and reading of system metadata
+
+Properties:
+
+- Config: no_system_metadata
+- Env Var: RCLONE_S3_NO_SYSTEM_METADATA
+- Type: bool
+- Default: false
+
### Metadata
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.
diff --git a/docs/content/sftp.md b/docs/content/sftp.md
index 48ebfb3a3..dfaf179b8 100644
--- a/docs/content/sftp.md
+++ b/docs/content/sftp.md
@@ -789,19 +789,24 @@ Properties:
Upload and download chunk size.
-This controls the maximum packet size used in the SFTP protocol. The
-RFC limits this to 32768 bytes (32k), however a lot of servers
-support larger sizes and setting it larger will increase transfer
-speed dramatically on high latency links.
+This controls the maximum size of the payload in SFTP protocol packets.
+The RFC limits this to 32768 bytes (32k), which is the default. However,
+a lot of servers support larger sizes, typically limited to a maximum
+total packet size of 256k, and setting it larger will increase transfer
+speed dramatically on high latency links. OpenSSH is one such server;
+with it, for example, a value of 255k works well, leaving plenty of room
+for overhead while still being within a total packet size of 256k.
-Only use a setting higher than 32k if you always connect to the same
-server or after sufficiently broad testing.
-
-For example using the value of 252k with OpenSSH works well with its
-maximum packet size of 256k.
-
-If you get the error "failed to send packet header: EOF" when copying
-a large file, try lowering this number.
+Make sure to test thoroughly before using a value higher than 32k,
+and only use it if you always connect to the same server or after
+sufficiently broad testing. If you get errors such as
+"failed to send packet payload: EOF", lots of "connection lost",
+or "corrupted on transfer", when copying a larger file, try lowering
+the value. The server run by [rclone serve sftp](/commands/rclone_serve_sftp)
+sends packets with a standard 32k maximum payload, so you must not
+set a different chunk_size when downloading files, but it accepts
+packets up to the 256k total size, so for uploads the chunk_size
+can be set as for the OpenSSH example above.
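+
+For example (a sketch; the remote name is illustrative), raising the
+chunk size for uploads to an OpenSSH server:
+
+```
+rclone copy /local/path sftp-remote:path --sftp-chunk-size 255k
+```
+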
Properties:
diff --git a/docs/content/swift.md b/docs/content/swift.md
index f75e4f6dd..c58706efc 100644
--- a/docs/content/swift.md
+++ b/docs/content/swift.md
@@ -534,6 +534,38 @@ Properties:
- Type: bool
- Default: false
+#### --swift-no-large-objects
+
+Disable support for static and dynamic large objects
+
+Swift cannot transparently store files bigger than 5 GiB. There are
+two schemes for doing that, static or dynamic large objects, and the
+API does not allow rclone to determine whether a file is a static or
+dynamic large object without doing a HEAD on the object. Since these
+need to be treated differently, this means rclone has to issue HEAD
+requests for objects, for example when reading checksums.
+
+When `no_large_objects` is set, rclone will assume that there are no
+static or dynamic large objects stored. This means it can stop doing
+the extra HEAD calls which in turn increases performance greatly
+especially when doing a swift to swift transfer with `--checksum` set.
+
+Setting this option implies `no_chunk` and also that no files will be
+uploaded in chunks, so files bigger than 5 GiB will just fail on
+upload.
+
+If you set this option and there *are* static or dynamic large objects,
+then this will give incorrect hashes for them. Downloads will succeed,
+but other operations such as Remove and Copy will fail.
+
+
+Properties:
+
+- Config: no_large_objects
+- Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS
+- Type: bool
+- Default: false
+
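+For example (a sketch), a faster swift to swift transfer when you are
+certain no large objects are stored:
+
+```
+rclone sync swift:src swift:dst --checksum --swift-no-large-objects
+```
+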
#### --swift-encoding
The encoding for the backend.
diff --git a/rclone.1 b/rclone.1
index 6b3c7fd37..fd0219cdd 100644
--- a/rclone.1
+++ b/rclone.1
@@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 2.9.2.1
.\"
-.TH "rclone" "1" "Jul 09, 2022" "User Manual" ""
+.TH "rclone" "1" "Oct 21, 2022" "User Manual" ""
.hy
.SH Rclone syncs your files to cloud storage
.PP
@@ -163,8 +163,6 @@ Arvan Cloud Object Storage (AOS)
.IP \[bu] 2
Citrix ShareFile
.IP \[bu] 2
-C14
-.IP \[bu] 2
Cloudflare R2
.IP \[bu] 2
DigitalOcean Spaces
@@ -193,8 +191,6 @@ HiDrive
.IP \[bu] 2
HTTP
.IP \[bu] 2
-Hubic
-.IP \[bu] 2
Internet Archive
.IP \[bu] 2
Jottacloud
@@ -203,6 +199,8 @@ IBM COS S3
.IP \[bu] 2
IDrive e2
.IP \[bu] 2
+IONOS Cloud
+.IP \[bu] 2
Koofr
.IP \[bu] 2
Mail.ru Cloud
@@ -227,7 +225,9 @@ OpenDrive
.IP \[bu] 2
OpenStack Swift
.IP \[bu] 2
-Oracle Cloud Storage
+Oracle Cloud Storage Swift
+.IP \[bu] 2
+Oracle Object Storage
.IP \[bu] 2
ownCloud
.IP \[bu] 2
@@ -239,6 +239,8 @@ put.io
.IP \[bu] 2
QingStor
.IP \[bu] 2
+Qiniu Cloud Object Storage (Kodo)
+.IP \[bu] 2
Rackspace Cloud Files
.IP \[bu] 2
rsync.net
@@ -255,6 +257,8 @@ SFTP
.IP \[bu] 2
Sia
.IP \[bu] 2
+SMB / CIFS
+.IP \[bu] 2
StackPath
.IP \[bu] 2
Storj
@@ -318,7 +322,7 @@ See rclone config docs (https://rclone.org/docs/) for more details.
.IP \[bu] 2
Optionally configure automatic execution.
.PP
-See below for some expanded Linux / macOS instructions.
+See below for some expanded Linux / macOS / Windows instructions.
.PP
See the usage (https://rclone.org/docs/) docs for how to use rclone, or
run \f[C]rclone -h\f[R].
@@ -346,7 +350,8 @@ sudo -v ; curl https://rclone.org/install.sh | sudo bash -s beta
.PP
Note that this script checks the version of rclone installed first and
won\[aq]t re-download if not needed.
-.SS Linux installation from precompiled binary
+.SS Linux installation
+.SS Precompiled binary
.PP
Fetch and unpack
.IP
@@ -386,7 +391,8 @@ See rclone config docs (https://rclone.org/docs/) for more details.
rclone config
\f[R]
.fi
-.SS macOS installation with brew
+.SS macOS installation
+.SS Installation with brew
.IP
.nf
\f[C]
@@ -398,7 +404,14 @@ NOTE: This version of rclone will not support \f[C]mount\f[R] any more
(see #5373 (https://github.com/rclone/rclone/issues/5373)).
If mounting is wanted on macOS, either install a precompiled binary or
enable the relevant option when installing from source.
-.SS macOS installation from precompiled binary, using curl
+.PP
+Note that this is a third party installer not controlled by the rclone
+developers so it may be out of date.
+Its current version is as below.
+.PP
+[IMAGE: Homebrew
+package (https://repology.org/badge/version-for-repo/homebrew/rclone.svg)] (https://repology.org/project/rclone/versions)
+.SS Precompiled binary, using curl
.PP
To avoid problems with macOS gatekeeper enforcing the binary to be
signed and notarized it is enough to download with \f[C]curl\f[R].
@@ -448,7 +461,7 @@ See rclone config docs (https://rclone.org/docs/) for more details.
rclone config
\f[R]
.fi
-.SS macOS installation from precompiled binary, using a web browser
+.SS Precompiled binary, using a web browser
.PP
When downloading a binary with a web browser, the browser will set the
macOS gatekeeper quarantine attribute.
@@ -469,12 +482,89 @@ The simplest fix is to run
xattr -d com.apple.quarantine rclone
\f[R]
.fi
-.SS Install with docker
+.SS Windows installation
+.SS Precompiled binary
.PP
-The rclone maintains a docker image for
+Fetch the correct binary for your processor type by clicking on these
+links.
+If not sure, use the first link.
+.IP \[bu] 2
+Intel/AMD - 64
+Bit (https://downloads.rclone.org/rclone-current-windows-amd64.zip)
+.IP \[bu] 2
+Intel/AMD - 32
+Bit (https://downloads.rclone.org/rclone-current-windows-386.zip)
+.IP \[bu] 2
+ARM - 64
+Bit (https://downloads.rclone.org/rclone-current-windows-arm64.zip)
+.PP
+Open this file in the Explorer and extract \f[C]rclone.exe\f[R].
+Rclone is a portable executable so you can place it wherever is
+convenient.
+.PP
+Open a CMD window (or PowerShell) and run the binary.
+Note that rclone does not launch a GUI by default, it runs in the CMD
+Window.
+.IP \[bu] 2
+Run \f[C]rclone.exe config\f[R] to setup.
+See rclone config docs (https://rclone.org/docs/) for more details.
+.IP \[bu] 2
+Optionally configure automatic execution.
+.PP
+If you are planning to use the rclone
+mount (https://rclone.org/commands/rclone_mount/) feature then you will
+also need to install the third party utility WinFsp (https://winfsp.dev/).
+.SS Chocolatey package manager
+.PP
+Make sure you have Choco (https://chocolatey.org/) installed
+.IP
+.nf
+\f[C]
+choco search rclone
+choco install rclone
+\f[R]
+.fi
+.PP
+This will install rclone on your Windows machine.
+If you are planning to use rclone
+mount (https://rclone.org/commands/rclone_mount/) then
+.IP
+.nf
+\f[C]
+choco install winfsp
+\f[R]
+.fi
+.PP
+will install that too.
+.PP
+Note that this is a third party installer not controlled by the rclone
+developers so it may be out of date.
+Its current version is as below.
+.PP
+[IMAGE: Chocolatey
+package (https://repology.org/badge/version-for-repo/chocolatey/rclone.svg)] (https://repology.org/project/rclone/versions)
+.SS Package manager installation
+.PP
+Many Linux, Windows, macOS and other OS distributions package and
+distribute rclone.
+.PP
+The distributed versions of rclone are often quite out of date and for
+this reason we recommend one of the other installation methods if
+possible.
+.PP
+You can get an idea of how up to date or not your OS distribution\[aq]s
+package is here.
+.PP
+[IMAGE: Packaging
+status (https://repology.org/badge/vertical-allrepos/rclone.svg?columns=3)] (https://repology.org/project/rclone/versions)
+.SS Docker installation
+.PP
+The rclone developers maintain a docker image for
rclone (https://hub.docker.com/r/rclone/rclone).
-These images are autobuilt by docker hub from the rclone source based on
-a minimal Alpine linux image.
+.PP
+These images are built as part of the release process based on a minimal
+Alpine Linux.
.PP
The \f[C]:latest\f[R] tag will always point to the latest stable
release.
@@ -568,10 +658,10 @@ ls \[ti]/data/mount
kill %1
\f[R]
.fi
-.SS Install from source
+.SS Source installation
.PP
Make sure you have git and Go (https://golang.org/) installed.
-Go version 1.16 or newer is required, latest release is recommended.
+Go version 1.17 or newer is required; the latest release is recommended.
You can get it from your package manager, or download it from
golang.org/dl (https://golang.org/dl/).
Then you can run the following:
@@ -592,7 +682,7 @@ As an initial check you can now run \f[C]./rclone version\f[R]
.PP
Note that on macOS and Windows the
mount (https://rclone.org/commands/rclone_mount/) command will not be
-available unless you specify additional build tag \f[C]cmount\f[R].
+available unless you specify an additional build tag \f[C]cmount\f[R].
.IP
.nf
\f[C]
@@ -615,8 +705,8 @@ sure you install it in the classic mingw64 subsystem, the ucrt64 version
is not compatible).
.PP
Additionally, on Windows, you must install the third party utility
-WinFsp (http://www.secfs.net/winfsp/), with the \[dq]Developer\[dq]
-feature selected.
+WinFsp (https://winfsp.dev/), with the \[dq]Developer\[dq] feature
+selected.
If building with cgo, you must also set environment variable CPATH
pointing to the fuse include directory within the WinFsp installation
(normally
@@ -635,9 +725,11 @@ go build -trimpath -ldflags -s -tags cmount
.fi
.PP
Instead of executing the \f[C]go build\f[R] command directly, you can
-run it via the Makefile, which also sets version information and copies
-the resulting rclone executable into your GOPATH bin folder
-(\f[C]$(go env GOPATH)/bin\f[R], which corresponds to
+run it via the Makefile.
+It changes the version number suffix from \[dq]-DEV\[dq] to
+\[dq]-beta\[dq] and appends commit details.
+It also copies the resulting rclone executable into your GOPATH bin
+folder (\f[C]$(go env GOPATH)/bin\f[R], which corresponds to
\f[C]\[ti]/go/bin/rclone\f[R] by default).
.IP
.nf
@@ -654,7 +746,15 @@ make GOTAGS=cmount
\f[R]
.fi
.PP
-As an alternative you can download the source, build and install rclone
+There are other make targets that can be used for more advanced builds,
+such as cross-compiling for all supported os/architectures, embedding
+icon and version info resources into windows executable, and packaging
+results into release artifacts.
+See Makefile (https://github.com/rclone/rclone/blob/master/Makefile) and
+cross-compile.go (https://github.com/rclone/rclone/blob/master/bin/cross-compile.go)
+for details.
+.PP
+Another alternative is to download the source, build and install rclone
in one operation, as a regular Go package.
The source will be stored in the Go module cache, and the resulting
executable will be in your GOPATH bin folder
@@ -678,7 +778,7 @@ and sometimes these don\[aq]t work with the current version):
go get github.com/rclone/rclone
\f[R]
.fi
-.SS Installation with Ansible
+.SS Ansible installation
.PP
This can be done with Stefan Weichinger\[aq]s ansible
role (https://github.com/stefangweichinger/ansible-rclone).
@@ -732,9 +832,9 @@ system\[aq]s scheduler.
If you need to expose \f[I]service\f[R]-like features, such as remote
control (https://rclone.org/rc/), GUI (https://rclone.org/gui/),
serve (https://rclone.org/commands/rclone_serve/) or
-mount (https://rclone.org/commands/rclone_move/), you will often want an
-rclone command always running in the background, and configuring it to
-run in a service infrastructure may be a better option.
+mount (https://rclone.org/commands/rclone_mount/), you will often want
+an rclone command always running in the background, and configuring it
+to run in a service infrastructure may be a better option.
Below are some alternatives on how to achieve this on different
operating systems.
.PP
@@ -770,7 +870,7 @@ c:\[rs]rclone\[rs]rclone.exe sync c:\[rs]files remote:/files --no-console --log-
.fi
.SS User account
.PP
-As mentioned in the mount (https://rclone.org/commands/rclone_move/)
+As mentioned in the mount (https://rclone.org/commands/rclone_mount/)
documentation, mounted drives created as Administrator are not visible
to other accounts, not even the account that was elevated as
Administrator.
@@ -782,8 +882,8 @@ NOTE: Remember that when rclone runs as the \f[C]SYSTEM\f[R] user, the
user profile that it sees will not be yours.
This means that if you normally run rclone with configuration file in
the default location, to be able to use the same configuration when
-running as the system user you must explicitely tell rclone where to
-find it with the
+running as the system user you must explicitly tell rclone where to find
+it with the
\f[C]--config\f[R] (https://rclone.org/docs/#config-config-file) option,
or else it will look in the system users profile path
(\f[C]C:\[rs]Windows\[rs]System32\[rs]config\[rs]systemprofile\f[R]).
@@ -862,7 +962,7 @@ here (https://github.com/rclone/rclone/issues/3340).
To run any rclone command as a Windows service, the excellent
third-party utility NSSM (http://nssm.cc), the \[dq]Non-Sucking Service
Manager\[dq], can be used.
-It includes some advanced features such as adjusting process periority,
+It includes some advanced features such as adjusting process priority,
defining process environment variables, redirecting anything written
to stdout to a file, and customizing the response to different exit codes, with
a GUI to configure everything from (although it can also be used from
@@ -971,8 +1071,6 @@ HiDrive (https://rclone.org/hidrive/)
.IP \[bu] 2
HTTP (https://rclone.org/http/)
.IP \[bu] 2
-Hubic (https://rclone.org/hubic/)
-.IP \[bu] 2
Internet Archive (https://rclone.org/internetarchive/)
.IP \[bu] 2
Jottacloud (https://rclone.org/jottacloud/)
@@ -994,6 +1092,8 @@ Memstore (https://rclone.org/swift/)
.IP \[bu] 2
OpenDrive (https://rclone.org/opendrive/)
.IP \[bu] 2
+Oracle Object Storage (https://rclone.org/oracleobjectstorage/)
+.IP \[bu] 2
Pcloud (https://rclone.org/pcloud/)
.IP \[bu] 2
premiumize.me (https://rclone.org/premiumizeme/)
@@ -1008,6 +1108,8 @@ SFTP (https://rclone.org/sftp/)
.IP \[bu] 2
Sia (https://rclone.org/sia/)
.IP \[bu] 2
+SMB (https://rclone.org/smb/)
+.IP \[bu] 2
Storj (https://rclone.org/storj/)
.IP \[bu] 2
SugarSync (https://rclone.org/sugarsync/)
@@ -1271,6 +1373,11 @@ copy (https://rclone.org/commands/rclone_copy/) command if unsure.
If dest:path doesn\[aq]t exist, it is created and the source:path
contents go there.
.PP
+It is not possible to sync overlapping remotes.
+However, you may sync to a destination that is inside the source
+directory if you exclude the destination from the sync, either with a
+filter rule or by putting an exclude-if-present file inside it.
+.PP
\f[B]Note\f[R]: Use the \f[C]-P\f[R]/\f[C]--progress\f[R] flag to view
real-time transfer statistics
.PP
@@ -1651,8 +1758,8 @@ Note that \f[C]ls\f[R] and \f[C]lsl\f[R] recurse by default - use
The other list commands \f[C]lsd\f[R],\f[C]lsf\f[R],\f[C]lsjson\f[R] do
not recurse by default - use \f[C]-R\f[R] to make them recurse.
.PP
-Listing a non-existent directory will produce an error except for
-remotes which can\[aq]t have empty directories (e.g.
+Listing a nonexistent directory will produce an error except for remotes
+which can\[aq]t have empty directories (e.g.
s3, swift, or gcs - the bucket-based remotes).
.IP
.nf
@@ -1735,8 +1842,8 @@ Note that \f[C]ls\f[R] and \f[C]lsl\f[R] recurse by default - use
The other list commands \f[C]lsd\f[R],\f[C]lsf\f[R],\f[C]lsjson\f[R] do
not recurse by default - use \f[C]-R\f[R] to make them recurse.
.PP
-Listing a non-existent directory will produce an error except for
-remotes which can\[aq]t have empty directories (e.g.
+Listing a nonexistent directory will produce an error except for remotes
+which can\[aq]t have empty directories (e.g.
s3, swift, or gcs - the bucket-based remotes).
.IP
.nf
@@ -1805,8 +1912,8 @@ Note that \f[C]ls\f[R] and \f[C]lsl\f[R] recurse by default - use
The other list commands \f[C]lsd\f[R],\f[C]lsf\f[R],\f[C]lsjson\f[R] do
not recurse by default - use \f[C]-R\f[R] to make them recurse.
.PP
-Listing a non-existent directory will produce an error except for
-remotes which can\[aq]t have empty directories (e.g.
+Listing a nonexistent directory will produce an error except for remotes
+which can\[aq]t have empty directories (e.g.
s3, swift, or gcs - the bucket-based remotes).
.IP
.nf
@@ -1848,8 +1955,8 @@ Running \f[C]rclone md5sum remote:path\f[R] is equivalent to running
.PP
This command can also hash data received on standard input (stdin), by
not passing a remote:path, or by passing a hyphen as remote:path when
-there is data to read (if not, the hypen will be treated literaly, as a
-relative path).
+there is data to read (if not, the hyphen will be treated literally, as
+a relative path).
.IP
.nf
\f[C]
@@ -1894,8 +2001,8 @@ Running \f[C]rclone sha1sum remote:path\f[R] is equivalent to running
.PP
This command can also hash data received on standard input (stdin), by
not passing a remote:path, or by passing a hyphen as remote:path when
-there is data to read (if not, the hypen will be treated literaly, as a
-relative path).
+there is data to read (if not, the hyphen will be treated literally, as
+a relative path).
.PP
This command can also hash data received on STDIN, if not passing a
remote:path.
@@ -2442,10 +2549,10 @@ rclone (https://rclone.org/commands/rclone/) - Show help for rclone
commands, flags and backends.
.SH rclone bisync
.PP
-Perform bidirectonal synchronization between two paths.
+Perform bidirectional synchronization between two paths.
.SS Synopsis
.PP
-Perform bidirectonal synchronization between two paths.
+Perform bidirectional synchronization between two paths.
.PP
Bisync (https://rclone.org/bisync/) provides a bidirectional cloud sync
solution in rclone.
@@ -2695,7 +2802,7 @@ rclone completion bash > /etc/bash_completion.d/rclone
.IP
.nf
\f[C]
-rclone completion bash > /usr/local/etc/bash_completion.d/rclone
+rclone completion bash > $(brew --prefix)/etc/bash_completion.d/rclone
\f[R]
.fi
.PP
@@ -2821,6 +2928,14 @@ echo \[dq]autoload -U compinit; compinit\[dq] >> \[ti]/.zshrc
\f[R]
.fi
.PP
+To load completions in your current shell session:
+.IP
+.nf
+\f[C]
+source <(rclone completion zsh); compdef _rclone rclone
+\f[R]
+.fi
+.PP
To load completions for every new session, execute once:
.SS Linux:
.IP
@@ -2833,7 +2948,7 @@ rclone completion zsh > \[dq]${fpath[1]}/_rclone\[dq]
.IP
.nf
\f[C]
-rclone completion zsh > /usr/local/share/zsh/site-functions/_rclone
+rclone completion zsh > $(brew --prefix)/share/zsh/site-functions/_rclone
\f[R]
.fi
.PP
@@ -2907,8 +3022,8 @@ You can also set obscured passwords using the
\f[C]rclone config password\f[R] command.
.PP
The flag \f[C]--non-interactive\f[R] is for use by applications that
-wish to configure rclone themeselves, rather than using rclone\[aq]s
-text based configuration questions.
+wish to configure rclone themselves, rather than using rclone\[aq]s text
+based configuration questions.
If this flag is set, and rclone needs to ask the user a question, a JSON
blob will be returned with the question in it.
.PP
@@ -3362,8 +3477,8 @@ You can also set obscured passwords using the
\f[C]rclone config password\f[R] command.
.PP
The flag \f[C]--non-interactive\f[R] is for use by applications that
-wish to configure rclone themeselves, rather than using rclone\[aq]s
-text based configuration questions.
+wish to configure rclone themselves, rather than using rclone\[aq]s text
+based configuration questions.
If this flag is set, and rclone needs to ask the user a question, a JSON
blob will be returned with the question in it.
.PP
@@ -4022,8 +4137,8 @@ sha1sum (https://rclone.org/commands/rclone_sha1sum/).
.PP
This command can also hash data received on standard input (stdin), by
not passing a remote:path, or by passing a hyphen as remote:path when
-there is data to read (if not, the hypen will be treated literaly, as a
-relative path).
+there is data to read (if not, the hyphen will be treated literally, as
+a relative path).
.PP
Run without a hash to see the list of all supported hashes, e.g.
.IP
@@ -4320,8 +4435,8 @@ Note that \f[C]ls\f[R] and \f[C]lsl\f[R] recurse by default - use
The other list commands \f[C]lsd\f[R],\f[C]lsf\f[R],\f[C]lsjson\f[R] do
not recurse by default - use \f[C]-R\f[R] to make them recurse.
.PP
-Listing a non-existent directory will produce an error except for
-remotes which can\[aq]t have empty directories (e.g.
+Listing a nonexistent directory will produce an error except for remotes
+which can\[aq]t have empty directories (e.g.
s3, swift, or gcs - the bucket-based remotes).
.IP
.nf
@@ -4412,7 +4527,7 @@ the files will be returned.
.PP
If \f[C]--metadata\f[R] is set then an additional Metadata key will be
returned.
-This will have metdata in rclone standard format as a JSON object.
+This will have metadata in rclone standard format as a JSON object.
.PP
If \f[C]--stat\f[R] is set then a single JSON blob will be returned
about the item pointed to.
@@ -4471,8 +4586,8 @@ Note that \f[C]ls\f[R] and \f[C]lsl\f[R] recurse by default - use
The other list commands \f[C]lsd\f[R],\f[C]lsf\f[R],\f[C]lsjson\f[R] do
not recurse by default - use \f[C]-R\f[R] to make them recurse.
.PP
-Listing a non-existent directory will produce an error except for
-remotes which can\[aq]t have empty directories (e.g.
+Listing a nonexistent directory will produce an error except for remotes
+which can\[aq]t have empty directories (e.g.
s3, swift, or gcs - the bucket-based remotes).
.IP
.nf
@@ -4620,7 +4735,7 @@ experience unexpected program errors, freezes or other issues, consider
mounting as a network drive instead.
.PP
When mounting as a fixed disk drive you can either mount to an unused
-drive letter, or to a path representing a \f[B]non-existent\f[R]
+drive letter, or to a path representing a \f[B]nonexistent\f[R]
subdirectory of an \f[B]existing\f[R] parent directory or drive.
Using the special value \f[C]*\f[R] will tell rclone to automatically
assign the next available drive letter, starting with Z: and moving
@@ -4670,7 +4785,7 @@ shown in Windows Explorer etc, while the complete
path by \f[C]net use\f[R] etc, just like a normal network drive mapping.
.PP
If you specify a full network share UNC path with \f[C]--volname\f[R],
-this will implicitely set the \f[C]--network-mode\f[R] option, so the
+this will implicitly set the \f[C]--network-mode\f[R] option, so the
following two examples have same result:
.IP
.nf
@@ -4686,7 +4801,7 @@ Then rclone will automatically assign a drive letter, same as with
\f[C]*\f[R] and use that as mountpoint, and instead use the UNC path
specified as the volume name, as if it were specified with the
\f[C]--volname\f[R] option.
-This will also implicitely set the \f[C]--network-mode\f[R] option.
+This will also implicitly set the \f[C]--network-mode\f[R] option.
This means the following two examples have same result:
.IP
.nf
@@ -4731,8 +4846,7 @@ notation (https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation
The default permissions corresponds to
\f[C]--file-perms 0666 --dir-perms 0777\f[R], i.e.
read and write permissions to everyone.
-This means you will not be able to start any programs from the the
-mount.
+This means you will not be able to start any programs from the mount.
To be able to do that you must add execute permissions, e.g.
\f[C]--file-perms 0777 --dir-perms 0777\f[R] to add it to everyone.
If the program needs to write files, chances are you will have to enable
@@ -4818,8 +4932,8 @@ rclone mount without \f[C]--vfs-cache-mode writes\f[R] or
See the VFS File Caching section for more info.
.PP
The bucket-based remotes (e.g.
-Swift, S3, Google Compute Storage, B2, Hubic) do not support the concept
-of empty directories, so empty directories will have a tendency to
+Swift, S3, Google Cloud Storage, B2) do not support the concept of
+empty directories, so empty directories will have a tendency to
disappear once they fall out of the directory cache.
.PP
When \f[C]rclone mount\f[R] is invoked on Unix with \f[C]--daemon\f[R]
@@ -5561,7 +5675,7 @@ The supported keys are:
.fi
.PP
Listed files/directories may be prefixed by a one-character flag, some
-of them combined with a description in brackes at end of line.
+of them combined with a description in brackets at the end of the line.
These flags have the following meaning:
.IP
.nf
@@ -6496,11 +6610,13 @@ rclone serve dlna remote:path [flags]
.nf
\f[C]
--addr string The ip:port or :port to bind the DLNA http server to (default \[dq]:7879\[dq])
+ --announce-interval duration The interval between SSDP announcements (default 12m0s)
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for dlna
+ --interface stringArray The interface to use for SSDP (repeat as necessary)
--log-trace Enable trace logging of SOAP traffic
--name string Name of DLNA server
--no-checksum Don\[aq]t compare checksums on up/download
@@ -7668,6 +7784,10 @@ concatenation of that with the CA certificate.
\f[C]--key\f[R] should be the PEM encoded private key and
\f[C]--client-ca\f[R] should be the PEM encoded client certificate
authority certificate.
+.PP
+--min-tls-version is the minimum TLS version that is acceptable.
+Valid values are \[dq]tls1.0\[dq], \[dq]tls1.1\[dq], \[dq]tls1.2\[dq]
+and \[dq]tls1.3\[dq] (default \[dq]tls1.0\[dq]).
.SS Template
.PP
\f[C]--template\f[R] allows a user to specify a custom markup template
@@ -8209,6 +8329,7 @@ rclone serve http remote:path [flags]
--htpasswd string A htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq])
--no-checksum Don\[aq]t compare checksums on up/download
--no-modtime Don\[aq]t read/write the modification time (can speed things up)
--no-seek Don\[aq]t allow seeking in files
@@ -8510,6 +8631,10 @@ concatenation of that with the CA certificate.
\f[C]--key\f[R] should be the PEM encoded private key and
\f[C]--client-ca\f[R] should be the PEM encoded client certificate
authority certificate.
+.PP
+--min-tls-version is the minimum TLS version that is acceptable.
+Valid values are \[dq]tls1.0\[dq], \[dq]tls1.1\[dq], \[dq]tls1.2\[dq]
+and \[dq]tls1.3\[dq] (default \[dq]tls1.0\[dq]).
.IP
.nf
\f[C]
@@ -8530,6 +8655,7 @@ rclone serve restic remote:path [flags]
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq])
--pass string Password for authentication
--private-repos Users can only access their private repo
--realm string Realm for authentication (default \[dq]rclone\[dq])
@@ -8552,13 +8678,22 @@ remote over a protocol.
Serve the remote over SFTP.
.SS Synopsis
.PP
-Run a SFTP server to serve a remote over SFTP.
+Run an SFTP server to serve a remote over SFTP.
This can be used with an SFTP client or you can make a remote of type
sftp to use with it.
.PP
You can use the filter flags (e.g.
\f[C]--include\f[R], \f[C]--exclude\f[R]) to control what is served.
.PP
+The server will respond to a small number of shell commands, mainly
+md5sum, sha1sum and df, which enable it to provide support for checksums
+and the about feature when accessed from an sftp remote.
+.PP
+Note that this server uses the standard 32 KiB packet payload size, which
+means you must not configure the client to expect anything else, e.g.
+with the chunk_size (https://rclone.org/sftp/#sftp-chunk-size) option on
+an sftp remote.
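+.PP
+For example, to serve a remote over SFTP with password authentication
+(illustrative credentials):
+.IP
+.nf
+\f[C]
+rclone serve sftp remote:path --user sftpuser --pass secret
+\f[R]
+.fi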
+.PP
The server will log errors.
Use \f[C]-v\f[R] to see access logs.
.PP
@@ -8571,12 +8706,6 @@ location with \f[C]--authorized-keys\f[R] - the default is the same as
ssh), an \f[C]--auth-proxy\f[R], or set the \f[C]--no-auth\f[R] flag for
no authentication when logging in.
.PP
-Note that this also implements a small number of shell commands so that
-it can provide md5sum/sha1sum/df information for the rclone sftp
-backend.
-This means that is can support SHA1SUMs, MD5SUMs and the about command
-when paired with the rclone sftp backend.
-.PP
If you don\[aq]t supply a host \f[C]--key\f[R] then rclone will generate
rsa, ecdsa and ed25519 variants, and cache them for later use in
rclone\[aq]s cache directory (see \f[C]rclone help flags cache-dir\f[R])
@@ -9121,7 +9250,7 @@ rclone serve sftp remote:path [flags]
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
- --stdio Run an sftp server on run stdin/stdout
+ --stdio Run an sftp server on stdin/stdout
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
@@ -9334,6 +9463,10 @@ concatenation of that with the CA certificate.
\f[C]--key\f[R] should be the PEM encoded private key and
\f[C]--client-ca\f[R] should be the PEM encoded client certificate
authority certificate.
+.PP
+--min-tls-version is the minimum TLS version that is acceptable.
+Valid values are \[dq]tls1.0\[dq], \[dq]tls1.1\[dq], \[dq]tls1.2\[dq]
+and \[dq]tls1.3\[dq] (default \[dq]tls1.0\[dq]).
.SS VFS - Virtual File System
.PP
This command uses the VFS layer.
@@ -9844,6 +9977,7 @@ rclone serve webdav remote:path [flags]
--htpasswd string htpasswd file - if not provided no authentication is done
--key string SSL PEM Private key
--max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq])
--no-checksum Don\[aq]t compare checksums on up/download
--no-modtime Don\[aq]t read/write the modification time (can speed things up)
--no-seek Don\[aq]t allow seeking in files
@@ -10804,7 +10938,7 @@ Some backends can also store arbitrary user metadata.
.PP
Where possible the key names are standardized, so, for example, it is
possible to copy object metadata from s3 to azureblob for example and
-metadata will be translated apropriately.
+metadata will be translated appropriately.
.PP
Some backends have limits on the size of the metadata and rclone will
give errors on upload if they are exceeded.
@@ -10946,13 +11080,44 @@ It is also possible to specify \f[C]--boolean=false\f[R] or
Note that \f[C]--boolean false\f[R] is not valid - this is parsed as
\f[C]--boolean\f[R] and the \f[C]false\f[R] is parsed as an extra
command line argument for rclone.
+.SS Time or duration options
+.PP
+TIME or DURATION options can be specified as a duration string or a time
+string.
.PP
-Options which use TIME use the go time parser.
A duration string is a possibly signed sequence of decimal numbers, each
with optional fraction and a unit suffix, such as \[dq]300ms\[dq],
\[dq]-1.5h\[dq] or \[dq]2h45m\[dq].
-Valid time units are \[dq]ns\[dq], \[dq]us\[dq] (or \[dq]\[mc]s\[dq]),
-\[dq]ms\[dq], \[dq]s\[dq], \[dq]m\[dq], \[dq]h\[dq].
+The default unit is seconds, but the following abbreviations are also valid:
+.IP \[bu] 2
+\f[C]ms\f[R] - Milliseconds
+.IP \[bu] 2
+\f[C]s\f[R] - Seconds
+.IP \[bu] 2
+\f[C]m\f[R] - Minutes
+.IP \[bu] 2
+\f[C]h\f[R] - Hours
+.IP \[bu] 2
+\f[C]d\f[R] - Days
+.IP \[bu] 2
+\f[C]w\f[R] - Weeks
+.IP \[bu] 2
+\f[C]M\f[R] - Months
+.IP \[bu] 2
+\f[C]y\f[R] - Years
+.PP
+These can also be specified as an absolute time in the following
+formats:
+.IP \[bu] 2
+RFC3339 - e.g.
+\f[C]2006-01-02T15:04:05Z\f[R] or \f[C]2006-01-02T15:04:05+07:00\f[R]
+.IP \[bu] 2
+ISO8601 Date and time, local timezone - \f[C]2006-01-02T15:04:05\f[R]
+.IP \[bu] 2
+ISO8601 Date and time, local timezone - \f[C]2006-01-02 15:04:05\f[R]
+.IP \[bu] 2
+ISO8601 Date - \f[C]2006-01-02\f[R] (YYYY-MM-DD)
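+.PP
+For example, both the duration and absolute forms can be used with a
+TIME option such as \f[C]--max-age\f[R] (illustrative):
+.IP
+.nf
+\f[C]
+rclone ls remote: --max-age 2w
+rclone ls remote: --max-age 2006-01-02
+\f[R]
+.fi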
+.SS Size options
.PP
Options which use SIZE use KiB (multiples of 1024 bytes) by default.
However, a suffix of \f[C]B\f[R] for Byte, \f[C]K\f[R] for KiB,
@@ -10973,7 +11138,8 @@ in DIR, then it will be overwritten.
.PP
The remote in use must support server-side move or copy and you must use
the same remote as the destination of the sync.
-The backup directory must not overlap the destination directory.
+The backup directory must not overlap the destination directory unless
+it is excluded by a filter rule.
.PP
For example
.IP
@@ -11018,7 +11184,7 @@ To use a single limit, specify the desired bandwidth in KiB/s, or use a
suffix B|K|M|G|T|P.
The default is \f[C]0\f[R] which means to not limit bandwidth.
.PP
-The upload and download bandwidth can be specified seperately, as
+The upload and download bandwidth can be specified separately, as
\f[C]--bwlimit UP:DOWN\f[R], so
.IP
.nf
@@ -12207,6 +12373,16 @@ This sets the interval between each retry specified by
.PP
The default is \f[C]0\f[R].
Use \f[C]0\f[R] to disable.
+.SS --server-side-across-configs
+.PP
+Allow server-side operations (e.g.
+copy or move) to work across different configurations.
+.PP
+This can be useful if you wish to do a server-side copy or move between
+two remotes which use the same backend but are configured differently.
+.PP
+Note that this isn\[aq]t enabled by default because it isn\[aq]t easy
+for rclone to tell if it will work between any two configurations.
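+.PP
+For example, with two differently configured S3 remotes (illustrative
+names):
+.IP
+.nf
+\f[C]
+rclone copy --server-side-across-configs s3east:bucket/dir s3west:bucket/dir
+\f[R]
+.fi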
.SS --size-only
.PP
Normally rclone will look at modification time and size of files to see
@@ -12409,13 +12585,22 @@ By default, rclone doesn\[aq]t keep track of renamed files, so if you
rename a file locally then sync it to a remote, rclone will delete the
old file on the remote and upload a new copy.
.PP
-If you use this flag, and the remote supports server-side copy or
-server-side move, and the source and destination have a compatible hash,
-then this will track renames during \f[C]sync\f[R] operations and
-perform renaming server-side.
+An rclone sync with \f[C]--track-renames\f[R] runs like a normal sync,
+but keeps track of objects which exist in the destination but not in the
+source (which would normally be deleted), and which objects exist in the
+source but not the destination (which would normally be transferred).
+These objects are then candidates for renaming.
.PP
-Files will be matched by size and hash - if both match then a rename
-will be considered.
+After the sync, rclone matches up the source-only and destination-only
+objects using the specified \f[C]--track-renames-strategy\f[R] and
+either renames the destination object or transfers the source and
+deletes the destination object.
+\f[C]--track-renames\f[R] is stateless like all of rclone\[aq]s syncs.
+.PP
+To use this flag the destination must support server-side copy or
+server-side move, and to use a hash based
+\f[C]--track-renames-strategy\f[R] (the default) the source and the
+destination must have a compatible hash.
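+.PP
+For example (a sketch, assuming the destination supports server-side
+move or copy):
+.IP
+.nf
+\f[C]
+rclone sync --track-renames /path/to/local remote:backup
+\f[R]
+.fi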
.PP
If the destination does not support server-side copy or move, rclone
will fall back to the default behaviour and log an error level message
@@ -12434,7 +12619,8 @@ Note also that \f[C]--track-renames\f[R] is incompatible with
instead of \f[C]--delete-during\f[R].
.SS --track-renames-strategy (hash,modtime,leaf,size)
.PP
-This option changes the matching criteria for \f[C]--track-renames\f[R].
+This option changes the file matching criteria for
+\f[C]--track-renames\f[R].
.PP
The matching is controlled by a comma separated selection of these
tokens:
@@ -12449,16 +12635,14 @@ backends
.IP \[bu] 2
\f[C]size\f[R] - the size of the file (this is always enabled)
.PP
-So using \f[C]--track-renames-strategy modtime,leaf\f[R] would match
-files based on modification time, the leaf of the file name and the size
-only.
+The default option is \f[C]hash\f[R].
+.PP
+Using \f[C]--track-renames-strategy modtime,leaf\f[R] would match files
+based on modification time, the leaf of the file name and the size only.
.PP
Using \f[C]--track-renames-strategy modtime\f[R] or \f[C]leaf\f[R] can
enable \f[C]--track-renames\f[R] support for encrypted destinations.
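.PP
For example, with a crypt remote as the destination (illustrative
remote name):
.IP
.nf
\f[C]
rclone sync --track-renames --track-renames-strategy modtime,leaf /path/to/local secret:backup
\f[R]
.fi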
.PP
-If nothing is specified, the default option is matching by
-\f[C]hash\f[R]es.
-.PP
Note that the \f[C]hash\f[R] strategy is not supported with encrypted
destinations.
.SS --delete-(before,during,after)
@@ -12499,7 +12683,7 @@ of memory.
However, some remotes have a way of listing all files beneath a
directory in one (or a small number of) transactions.
These tend to be the bucket-based remotes (e.g.
-S3, B2, GCS, Swift, Hubic).
+S3, B2, GCS, Swift).
.PP
If you use the \f[C]--fast-list\f[R] flag then rclone will use this
method for listing directories.
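.PP
For example (illustrative):
.IP
.nf
\f[C]
rclone sync --fast-list s3:bucket /path/to/local
\f[R]
.fi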
@@ -12572,9 +12756,9 @@ In all other cases the file will not be updated.
Consider using the \f[C]--modify-window\f[R] flag to compensate for time
skews between the source and the backend, for backends that do not
support mod times, and instead use uploaded times.
-However, if the backend does not support checksums, note that
-sync\[aq]ing or copying within the time skew window may still result in
-additional transfers for safety.
+However, if the backend does not support checksums, note that syncing or
+copying within the time skew window may still result in additional
+transfers for safety.
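+.PP
+For example, to tolerate up to 2 seconds of time skew (illustrative):
+.IP
+.nf
+\f[C]
+rclone copy --modify-window 2s /path/to/local remote:path
+\f[R]
+.fi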
.SS --use-mmap
.PP
If this flag is set then rclone will use anonymous memory allocated by
@@ -13593,7 +13777,7 @@ T}
T{
T}@T{
T}@T{
-\f[C]/dir/file.gif\f[R]
+\f[C]/dir/file.png\f[R]
T}@T{
\f[C]/dir/file.gif\f[R]
T}
@@ -14343,6 +14527,9 @@ Default units are \f[C]KiB\f[R] but abbreviations \f[C]K\f[R],
E.g.
\f[C]rclone ls remote: --min-size 50k\f[R] lists files on
\f[C]remote:\f[R] of 50 KiB size or larger.
+.PP
+See the size option docs (https://rclone.org/docs/#size-option) for more
+info.
.SS \f[C]--max-size\f[R] - Don\[aq]t transfer any file larger than this
.PP
Controls the maximum size file within the scope of an rclone command.
@@ -14352,44 +14539,21 @@ Default units are \f[C]KiB\f[R] but abbreviations \f[C]K\f[R],
E.g.
\f[C]rclone ls remote: --max-size 1G\f[R] lists files on
\f[C]remote:\f[R] of 1 GiB size or smaller.
+.PP
+See the size option docs (https://rclone.org/docs/#size-option) for more
+info.
.SS \f[C]--max-age\f[R] - Don\[aq]t transfer any file older than this
.PP
Controls the maximum age of files within the scope of an rclone command.
-Default units are seconds or the following abbreviations are valid:
-.IP \[bu] 2
-\f[C]ms\f[R] - Milliseconds
-.IP \[bu] 2
-\f[C]s\f[R] - Seconds
-.IP \[bu] 2
-\f[C]m\f[R] - Minutes
-.IP \[bu] 2
-\f[C]h\f[R] - Hours
-.IP \[bu] 2
-\f[C]d\f[R] - Days
-.IP \[bu] 2
-\f[C]w\f[R] - Weeks
-.IP \[bu] 2
-\f[C]M\f[R] - Months
-.IP \[bu] 2
-\f[C]y\f[R] - Years
-.PP
-\f[C]--max-age\f[R] can also be specified as an absolute time in the
-following formats:
-.IP \[bu] 2
-RFC3339 - e.g.
-\f[C]2006-01-02T15:04:05Z\f[R] or \f[C]2006-01-02T15:04:05+07:00\f[R]
-.IP \[bu] 2
-ISO8601 Date and time, local timezone - \f[C]2006-01-02T15:04:05\f[R]
-.IP \[bu] 2
-ISO8601 Date and time, local timezone - \f[C]2006-01-02 15:04:05\f[R]
-.IP \[bu] 2
-ISO8601 Date - \f[C]2006-01-02\f[R] (YYYY-MM-DD)
.PP
\f[C]--max-age\f[R] applies only to files and not to directories.
.PP
E.g.
\f[C]rclone ls remote: --max-age 2d\f[R] lists files on
\f[C]remote:\f[R] of 2 days old or less.
+.PP
+See the time option docs (https://rclone.org/docs/#time-option) for
+valid formats.
.SS \f[C]--min-age\f[R] - Don\[aq]t transfer any file younger than this
.PP
Controls the minimum age of files within the scope of an rclone command.
@@ -14400,6 +14564,9 @@ Controls the minimum age of files within the scope of an rclone command.
E.g.
\f[C]rclone ls remote: --min-age 2d\f[R] lists files on
\f[C]remote:\f[R] of 2 days old or more.
+.PP
+See the time option docs (https://rclone.org/docs/#time-option) for
+valid formats.
.SS Other flags
.SS \f[C]--delete-excluded\f[R] - Delete files on dest excluded from sync
.PP
@@ -14628,6 +14795,11 @@ SSL PEM Private key
.SS --rc-max-header-bytes=VALUE
.PP
Maximum size of request header (default 4096)
+.SS --rc-min-tls-version=VALUE
+.PP
+The minimum TLS version that is acceptable.
+Valid values are \[dq]tls1.0\[dq], \[dq]tls1.1\[dq], \[dq]tls1.2\[dq]
+and \[dq]tls1.3\[dq] (default \[dq]tls1.0\[dq]).
.SS --rc-user=VALUE
.PP
User name for authentication.
@@ -15030,7 +15202,7 @@ The parameters can be a string as per the rest of rclone, eg
\f[C]s3:bucket/path\f[R] or \f[C]:sftp:/my/dir\f[R].
They can also be specified as JSON blobs.
.PP
-If specifyng a JSON blob it should be a object mapping strings to
+If specifying a JSON blob it should be an object mapping strings to
strings.
These values will be used to configure the remote.
There are 3 special values which may be set:
@@ -15731,6 +15903,11 @@ progress - output of the progress related to the underlying job
Parameters:
.IP \[bu] 2
jobid - id of the job (integer).
+.SS job/stopgroup: Stop all running jobs in a group
+.PP
+Parameters:
+.IP \[bu] 2
+group - name of the group (string).
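+.PP
+For example (illustrative group name):
+.IP
+.nf
+\f[C]
+rclone rc job/stopgroup group=myGroup
+\f[R]
+.fi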
.SS mount/listmounts: Show current mount points
.PP
This shows currently mounted points, which can be used for performing an
@@ -15831,10 +16008,10 @@ rclone rc mount/unmount mountPoint=/home//mountPoint
.fi
.PP
\f[B]Authentication is required for this call.\f[R]
-.SS mount/unmountall: Show current mount points
+.SS mount/unmountall: Unmount all active mounts
.PP
-This shows currently mounted points, which can be used for performing an
-unmount.
+rclone allows Linux, FreeBSD, macOS and Windows to mount any of
+rclone\[aq]s cloud storage systems as a file system with FUSE.
.PP
This takes no parameters and returns an error if unmount does not succeed.
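.PP
For example:
.IP
.nf
\f[C]
rclone rc mount/unmountall
\f[R]
.fi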
.PP
@@ -16455,7 +16632,7 @@ It can be used to check that rclone is still alive and to check that
parameter passing is working properly.
.PP
\f[B]Authentication is required for this call.\f[R]
-.SS sync/bisync: Perform bidirectonal synchronization between two paths.
+.SS sync/bisync: Perform bidirectional synchronization between two paths.
.PP
This takes the following parameters
.IP \[bu] 2
@@ -17223,21 +17400,6 @@ T}@T{
-
T}
T{
-Hubic
-T}@T{
-MD5
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
-R/W
-T}@T{
--
-T}
-T{
Internet Archive
T}@T{
MD5, SHA1, CRC32
@@ -17388,6 +17550,21 @@ T}@T{
-
T}
T{
+Oracle Object Storage
+T}@T{
+MD5
+T}@T{
+R/W
+T}@T{
+No
+T}@T{
+No
+T}@T{
+R/W
+T}@T{
+-
+T}
+T{
pCloud
T}@T{
MD5, SHA1 \[u2077]
@@ -17493,6 +17670,21 @@ T}@T{
-
T}
T{
+SMB
+T}@T{
+-
+T}@T{
+-
+T}@T{
+Yes
+T}@T{
+No
+T}@T{
+-
+T}@T{
+-
+T}
+T{
SugarSync
T}@T{
-
@@ -17652,7 +17844,7 @@ To use the verify checksums when transferring between cloud storage
systems they must support a common hash type.
.SS ModTime
.PP
-Allmost all cloud storage systems store some sort of timestamp on
+Almost all cloud storage systems store some sort of timestamp on
objects, but several of them do not store one that is appropriate to use
for syncing.
E.g.
@@ -18889,29 +19081,6 @@ T}@T{
Yes
T}
T{
-Hubic
-T}@T{
-Yes \[dg]
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}
-T{
Internet Archive
T}@T{
No
@@ -19142,6 +19311,29 @@ T}@T{
No
T}
T{
+Oracle Object Storage
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+No
+T}@T{
+No
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}
+T{
pCloud
T}@T{
Yes
@@ -19303,6 +19495,29 @@ T}@T{
Yes
T}
T{
+SMB
+T}@T{
+No
+T}@T{
+No
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+No
+T}@T{
+No
+T}@T{
+Yes
+T}@T{
+No
+T}@T{
+No
+T}@T{
+Yes
+T}
+T{
SugarSync
T}@T{
Yes
@@ -19469,9 +19684,9 @@ T}
This deletes a directory quicker than just deleting all the files in the
directory.
.PP
-\[dg] Note Swift, Hubic, and Storj implement this in order to delete
-directory markers but they don\[aq]t actually have a quicker way of
-deleting files other than deleting them individually.
+\[dg] Note Swift and Storj implement this in order to delete directory
+markers but they don\[aq]t actually have a quicker way of deleting files
+other than deleting them individually.
.PP
\[dd] StreamUpload is not supported with Nextcloud
.SS Copy
@@ -19666,6 +19881,7 @@ These flags are available for every command.
--rc-job-expire-interval duration Interval to check for expired async jobs (default 10s)
--rc-key string SSL PEM Private key
--rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq])
--rc-no-auth Don\[aq]t require auth for certain methods
--rc-pass string Password for authentication
--rc-realm string Realm for authentication (default \[dq]rclone\[dq])
@@ -19682,6 +19898,7 @@ These flags are available for every command.
--refresh-times Refresh the modtime of remote files
--retries int Retry operations this many times if they fail (default 3)
--retries-sleep duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable)
+ --server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
@@ -19707,7 +19924,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.59.0\[dq])
+ --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.60.0\[dq])
-v, --verbose count Print lots more stuff (repeat for more)
\f[R]
.fi
@@ -19861,7 +20078,7 @@ They control the backends and may be set in the config file.
--drive-use-trash Send files to the trash instead of deleting permanently (default true)
--drive-v2-download-min-size SizeSuffix If Object\[aq]s are greater, use drive v2 API to download (default off)
--dropbox-auth-url string Auth server URL
- --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish comitting (default 10m0s)
+ --dropbox-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
--dropbox-batch-mode string Upload file batching sync|async|off (default \[dq]sync\[dq])
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
@@ -19895,6 +20112,7 @@ They control the backends and may be set in the config file.
--ftp-disable-utf8 Disable using UTF-8 even if server advertises support
--ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
+ --ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
@@ -19913,6 +20131,7 @@ They control the backends and may be set in the config file.
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gcs-endpoint string Endpoint for the service
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don\[aq]t attempt to check the bucket exists or create it
--gcs-object-acl string Access Control List for new objects
@@ -19958,14 +20177,6 @@ They control the backends and may be set in the config file.
--http-no-head Don\[aq]t use HEAD requests
--http-no-slash Set this if the site doesn\[aq]t end directories with /
--http-url string URL of HTTP host to connect to
- --hubic-auth-url string Auth server URL
- --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
- --hubic-client-id string OAuth Client Id
- --hubic-client-secret string OAuth Client Secret
- --hubic-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
- --hubic-no-chunk Don\[aq]t chunk files during streaming upload
- --hubic-token string OAuth Access Token as a JSON blob
- --hubic-token-url string Token server url
--internetarchive-access-key-id string IAS3 Access Key
--internetarchive-disable-checksum Don\[aq]t ask the server to test against MD5 checksum calculated by rclone (default true)
--internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
@@ -20034,6 +20245,22 @@ They control the backends and may be set in the config file.
--onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
+ --oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
+ --oos-compartment string Object storage compartment OCID
+ --oos-config-file string Path to OCI config file (default \[dq]\[ti]/.oci/config\[dq])
+ --oos-config-profile string Profile name inside the oci config file (default \[dq]Default\[dq])
+ --oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
+ --oos-copy-timeout Duration Timeout for copy (default 1m0s)
+ --oos-disable-checksum Don\[aq]t store MD5 checksum with object metadata
+ --oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --oos-endpoint string Endpoint for Object storage API
+ --oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery
+ --oos-namespace string Object storage namespace
+ --oos-no-check-bucket If set, don\[aq]t attempt to check the bucket exists or create it
+ --oos-provider string Choose your Auth Provider (default \[dq]env_auth\[dq])
+ --oos-region string Object storage Region
+ --oos-upload-concurrency int Concurrency for multipart uploads (default 10)
+ --oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
--opendrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
@@ -20065,6 +20292,7 @@ They control the backends and may be set in the config file.
--s3-bucket-acl string Canned ACL used when creating buckets
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
+ --s3-decompress If set this will decompress gzip encoded objects
--s3-disable-checksum Don\[aq]t store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
@@ -20083,6 +20311,7 @@ They control the backends and may be set in the config file.
--s3-no-check-bucket If set, don\[aq]t attempt to check the bucket exists or create it
--s3-no-head If set, don\[aq]t HEAD uploaded objects to check integrity
--s3-no-head-object If set, do not do HEAD before GET when getting objects
+ --s3-no-system-metadata Suppress setting and reading of system metadata
--s3-profile string Profile to use in the shared credentials file
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
@@ -20092,7 +20321,8 @@ They control the backends and may be set in the config file.
--s3-session-token string An AWS session token
--s3-shared-credentials-file string Path to the shared credentials file
--s3-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in S3
- --s3-sse-customer-key string If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data
+ --s3-sse-customer-key string To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data
+ --s3-sse-customer-key-base64 string If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data
--s3-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
@@ -20102,6 +20332,8 @@ They control the backends and may be set in the config file.
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
+ --s3-version-at Time Show file versions as they were at the specified time (default off)
+ --s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication (\[aq]true\[aq] if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn\[aq]t exist
--seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
@@ -20148,6 +20380,15 @@ They control the backends and may be set in the config file.
--sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default \[dq]Sia-Agent\[dq])
--skip-links Don\[aq]t warn about skipped symlinks
+ --smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
+ --smb-domain string Domain name for NTLM authentication (default \[dq]WORKGROUP\[dq])
+ --smb-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --smb-hide-special-share Hide special shares (e.g. print$) which users aren\[aq]t supposed to access (default true)
+ --smb-host string SMB server hostname to connect to
+ --smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
+ --smb-pass string SMB password (obscured)
+ --smb-port int SMB port number (default 445)
+ --smb-user string SMB username (default \[dq]$USER\[dq])
--storj-access-grant string Access grant
--storj-api-key string API key
--storj-passphrase string Encryption passphrase
@@ -20178,6 +20419,7 @@ They control the backends and may be set in the config file.
--swift-key string API key or password (OS_PASSWORD)
--swift-leave-parts-on-error If true avoid calling abort upload on a failure
--swift-no-chunk Don\[aq]t chunk files during streaming upload
+ --swift-no-large-objects Disable support for static and dynamic large objects
--swift-region string Region name - optional (OS_REGION_NAME)
--swift-storage-policy string The storage policy to use when creating a new container
--swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
@@ -21422,7 +21664,7 @@ On such a critical error the \f[C]{...}.path1.lst\f[R] and
\f[C].lst-err\f[R], which blocks any future bisync runs (since the
normal \f[C].lst\f[R] files are not found).
Bisync keeps them under \f[C]bisync\f[R] subdirectory of the rclone
-cache direcory, typically at \f[C]${HOME}/.cache/rclone/bisync/\f[R] on
+cache directory, typically at \f[C]${HOME}/.cache/rclone/bisync/\f[R] on
Linux.
.PP
Some errors are considered temporary and re-running the bisync is not
@@ -21521,7 +21763,7 @@ don\[aq]t have spelling case differences (\f[C]Smile.jpg\f[R] vs.
.SS Windows support
.PP
Bisync has been tested on Windows 8.1, Windows 10 Pro 64-bit and on
-Windows Github runners.
+Windows GitHub runners.
.PP
Drive letters are allowed, including drive letters mapped to network
drives (\f[C]rclone bisync J:\[rs]localsync GDrive:\f[R]).
@@ -22144,7 +22386,7 @@ file mismatches in the test tree.
.IP \[bu] 2
Some Dropbox tests can fail, notably printing the following message:
\f[C]src and dst identical but can\[aq]t set mod time without deleting and re-uploading\f[R]
-This is expected and happens due a way Dropbox handles modificaion
+This is expected and happens due to the way Dropbox handles modification
times.
You should use the \f[C]-refresh-times\f[R] test flag to make up for
this.
@@ -22157,7 +22399,7 @@ instructions (https://rclone.org/dropbox/#get-your-own-dropbox-app-id).
.PP
Sometimes even a slight change in the bisync source can cause little
changes spread around many log files.
-Updating them manually would be a nighmare.
+Updating them manually would be a nightmare.
.PP
The \f[C]-golden\f[R] flag will store the \f[C]test.log\f[R] and
\f[C]*.lst\f[R] listings from each test case into respective golden
@@ -22701,6 +22943,14 @@ Invoking \f[C]rclone mkdir backup:../desktop\f[R] is exactly the same as
invoking \f[C]rclone mkdir mydrive:private/backup/../desktop\f[R].
The empty path is not allowed as a remote.
To alias the current directory use \f[C].\f[R] instead.
+.PP
+The target remote can also be a connection
+string (https://rclone.org/docs/#connection-strings).
+This can be used to modify the config of a remote for different uses,
+e.g.
+the alias \f[C]myDriveTrash\f[R] with the target remote
+\f[C]myDrive,trashed_only:\f[R] can be used to only show the trashed
+files in \f[C]myDrive\f[R].
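+.PP
+A minimal sketch of such an alias in the config file, assuming an
+existing \f[C]myDrive\f[R] remote:
+.IP
+.nf
+\f[C]
+[myDriveTrash]
+type = alias
+remote = myDrive,trashed_only:
+\f[R]
+.fi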
.SS Configuration
.PP
Here is an example of how to make an alias called \f[C]remote\f[R] for
@@ -23227,8 +23477,12 @@ IBM COS S3
.IP \[bu] 2
IDrive e2
.IP \[bu] 2
+IONOS Cloud
+.IP \[bu] 2
Minio
.IP \[bu] 2
+Qiniu Cloud Object Storage (Kodo)
+.IP \[bu] 2
RackCorp Object Storage
.IP \[bu] 2
Scaleway
@@ -23596,7 +23850,7 @@ the modification times of the objects being the time of upload.
Rclone\[aq]s default directory traversal is to process each directory
individually.
This takes one API call per directory.
-Using the \f[C]--fast-list\f[R] flag will read all info about the the
+Using the \f[C]--fast-list\f[R] flag will read all info about the
objects into memory first using a smaller number of API calls (one per
1000 objects).
See the rclone docs (https://rclone.org/docs/#fast-list) for more
@@ -23658,6 +23912,81 @@ This will mean that these objects do not have an MD5 checksum.
Note that reading this from the object takes an additional
\f[C]HEAD\f[R] request as the metadata isn\[aq]t returned in object
listings.
+.SS Versions
+.PP
+When bucket versioning is enabled (this can be done with rclone using
+the \f[C]rclone backend versioning\f[R] command), uploading a new
+version of a file creates a new version of
+it (https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html).
+Likewise, when you delete a file, the old version will be marked hidden
+and still be available.
+.PP
+Old versions of files, where available, are visible using the
+\f[C]--s3-versions\f[R] flag.
+.PP
+It is also possible to view a bucket as it was at a certain point in
+time, using the \f[C]--s3-version-at\f[R] flag.
+This will show the file versions as they were at that time, showing
+files that have been deleted afterwards, and hiding files that were
+created since.
+.PP
+If you wish to remove all the old versions then you can use the
+\f[C]rclone backend cleanup-hidden remote:bucket\f[R] command which will
+delete all the old hidden versions of files, leaving the current ones
+intact.
+You can also supply a path and only old versions under that path will be
+deleted, e.g.
+\f[C]rclone backend cleanup-hidden remote:bucket/path/to/stuff\f[R].
+.PP
+When you \f[C]purge\f[R] a bucket, the current and the old versions will
+be deleted, and then the bucket itself will be deleted.
+.PP
+However, \f[C]delete\f[R] will cause the current versions of the files to
+become hidden old versions.
+.PP
+Here is a session showing the listing and retrieval of an old version
+followed by a \f[C]cleanup\f[R] of the old versions.
+.PP
+Show current version and all the versions with \f[C]--s3-versions\f[R]
+flag.
+.IP
+.nf
+\f[C]
+$ rclone -q ls s3:cleanup-test
+ 9 one.txt
+
+$ rclone -q --s3-versions ls s3:cleanup-test
+ 9 one.txt
+ 8 one-v2016-07-04-141032-000.txt
+ 16 one-v2016-07-04-141003-000.txt
+ 15 one-v2016-07-02-155621-000.txt
+\f[R]
+.fi
+.PP
+Retrieve an old version
+.IP
+.nf
+\f[C]
+$ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
+
+$ ls -l /tmp/one-v2016-07-04-141003-000.txt
+-rw-rw-r-- 1 ncw ncw 16 Jul 2 17:46 /tmp/one-v2016-07-04-141003-000.txt
+\f[R]
+.fi
+.PP
+Clean up all the old versions and show that they\[aq]ve gone.
+.IP
+.nf
+\f[C]
+$ rclone -q backend cleanup-hidden s3:cleanup-test
+
+$ rclone -q ls s3:cleanup-test
+ 9 one.txt
+
+$ rclone -q --s3-versions ls s3:cleanup-test
+ 9 one.txt
+\f[R]
+.fi
.SS Cleanup
.PP
If you run \f[C]rclone cleanup s3:bucket\f[R] then it will remove all
@@ -23939,8 +24268,8 @@ all the files to be uploaded as multipart.
Here are the Standard options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, Ceph, China Mobile,
Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS,
-IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS,
-StackPath, Storj, Tencent COS and Wasabi).
+IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway,
+SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi).
.SS --s3-provider
.PP
Choose your S3 provider.
@@ -24024,6 +24353,12 @@ IBM COS S3
IDrive e2
.RE
.IP \[bu] 2
+\[dq]IONOS\[dq]
+.RS 2
+.IP \[bu] 2
+IONOS Cloud
+.RE
+.IP \[bu] 2
\[dq]LyveCloud\[dq]
.RS 2
.IP \[bu] 2
@@ -24084,6 +24419,12 @@ Tencent Cloud Object Storage (COS)
Wasabi Object Storage
.RE
.IP \[bu] 2
+\[dq]Qiniu\[dq]
+.RS 2
+.IP \[bu] 2
+Qiniu Object Storage (Kodo)
+.RE
+.IP \[bu] 2
\[dq]Other\[dq]
.RS 2
.IP \[bu] 2
@@ -24685,6 +25026,120 @@ centers for low latency.
.PP
Region to connect to.
.PP
+Properties:
+.IP \[bu] 2
+Config: region
+.IP \[bu] 2
+Env Var: RCLONE_S3_REGION
+.IP \[bu] 2
+Provider: Qiniu
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]cn-east-1\[dq]
+.RS 2
+.IP \[bu] 2
+The default endpoint - a good choice if you are unsure.
+.IP \[bu] 2
+East China Region 1.
+.IP \[bu] 2
+Needs location constraint cn-east-1.
+.RE
+.IP \[bu] 2
+\[dq]cn-east-2\[dq]
+.RS 2
+.IP \[bu] 2
+East China Region 2.
+.IP \[bu] 2
+Needs location constraint cn-east-2.
+.RE
+.IP \[bu] 2
+\[dq]cn-north-1\[dq]
+.RS 2
+.IP \[bu] 2
+North China Region 1.
+.IP \[bu] 2
+Needs location constraint cn-north-1.
+.RE
+.IP \[bu] 2
+\[dq]cn-south-1\[dq]
+.RS 2
+.IP \[bu] 2
+South China Region 1.
+.IP \[bu] 2
+Needs location constraint cn-south-1.
+.RE
+.IP \[bu] 2
+\[dq]us-north-1\[dq]
+.RS 2
+.IP \[bu] 2
+North America Region.
+.IP \[bu] 2
+Needs location constraint us-north-1.
+.RE
+.IP \[bu] 2
+\[dq]ap-southeast-1\[dq]
+.RS 2
+.IP \[bu] 2
+Southeast Asia Region 1.
+.IP \[bu] 2
+Needs location constraint ap-southeast-1.
+.RE
+.IP \[bu] 2
+\[dq]ap-northeast-1\[dq]
+.RS 2
+.IP \[bu] 2
+Northeast Asia Region 1.
+.IP \[bu] 2
+Needs location constraint ap-northeast-1.
+.RE
+.RE
+.SS --s3-region
+.PP
+Region where your bucket will be created and your data stored.
+.PP
+Properties:
+.IP \[bu] 2
+Config: region
+.IP \[bu] 2
+Env Var: RCLONE_S3_REGION
+.IP \[bu] 2
+Provider: IONOS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]de\[dq]
+.RS 2
+.IP \[bu] 2
+Frankfurt, Germany
+.RE
+.IP \[bu] 2
+\[dq]eu-central-2\[dq]
+.RS 2
+.IP \[bu] 2
+Berlin, Germany
+.RE
+.IP \[bu] 2
+\[dq]eu-south-2\[dq]
+.RS 2
+.IP \[bu] 2
+Logrono, Spain
+.RE
+.RE
+.SS --s3-region
+.PP
+Region to connect to.
+.PP
Leave blank if you are using an S3 clone and you don\[aq]t have a
region.
.PP
@@ -24695,7 +25150,7 @@ Config: region
Env Var: RCLONE_S3_REGION
.IP \[bu] 2
Provider:
-!AWS,Alibaba,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
+!AWS,Alibaba,ChinaMobile,Cloudflare,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive
.IP \[bu] 2
Type: string
.IP \[bu] 2
@@ -25367,6 +25822,45 @@ Singapore Single Site Private Endpoint
.RE
.SS --s3-endpoint
.PP
+Endpoint for IONOS S3 Object Storage.
+.PP
+Specify the endpoint from the same region.
+.PP
+Properties:
+.IP \[bu] 2
+Config: endpoint
+.IP \[bu] 2
+Env Var: RCLONE_S3_ENDPOINT
+.IP \[bu] 2
+Provider: IONOS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]s3-eu-central-1.ionoscloud.com\[dq]
+.RS 2
+.IP \[bu] 2
+Frankfurt, Germany
+.RE
+.IP \[bu] 2
+\[dq]s3-eu-central-2.ionoscloud.com\[dq]
+.RS 2
+.IP \[bu] 2
+Berlin, Germany
+.RE
+.IP \[bu] 2
+\[dq]s3-eu-south-2.ionoscloud.com\[dq]
+.RS 2
+.IP \[bu] 2
+Logrono, Spain
+.RE
+.RE
+.SS --s3-endpoint
+.PP
Endpoint for OSS API.
.PP
Properties:
@@ -26022,6 +26516,67 @@ Auckland (New Zealand) Endpoint
.RE
.SS --s3-endpoint
.PP
+Endpoint for Qiniu Object Storage.
+.PP
+Properties:
+.IP \[bu] 2
+Config: endpoint
+.IP \[bu] 2
+Env Var: RCLONE_S3_ENDPOINT
+.IP \[bu] 2
+Provider: Qiniu
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]s3-cn-east-1.qiniucs.com\[dq]
+.RS 2
+.IP \[bu] 2
+East China Endpoint 1
+.RE
+.IP \[bu] 2
+\[dq]s3-cn-east-2.qiniucs.com\[dq]
+.RS 2
+.IP \[bu] 2
+East China Endpoint 2
+.RE
+.IP \[bu] 2
+\[dq]s3-cn-north-1.qiniucs.com\[dq]
+.RS 2
+.IP \[bu] 2
+North China Endpoint 1
+.RE
+.IP \[bu] 2
+\[dq]s3-cn-south-1.qiniucs.com\[dq]
+.RS 2
+.IP \[bu] 2
+South China Endpoint 1
+.RE
+.IP \[bu] 2
+\[dq]s3-us-north-1.qiniucs.com\[dq]
+.RS 2
+.IP \[bu] 2
+North America Endpoint 1
+.RE
+.IP \[bu] 2
+\[dq]s3-ap-southeast-1.qiniucs.com\[dq]
+.RS 2
+.IP \[bu] 2
+Southeast Asia Endpoint 1
+.RE
+.IP \[bu] 2
+\[dq]s3-ap-northeast-1.qiniucs.com\[dq]
+.RS 2
+.IP \[bu] 2
+Northeast Asia Endpoint 1
+.RE
+.RE
+.SS --s3-endpoint
+.PP
Endpoint for S3 API.
.PP
Required when using an S3 clone.
@@ -26033,7 +26588,7 @@ Config: endpoint
Env Var: RCLONE_S3_ENDPOINT
.IP \[bu] 2
Provider:
-!AWS,IBMCOS,IDrive,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp
+!AWS,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp,Qiniu
.IP \[bu] 2
Type: string
.IP \[bu] 2
@@ -26882,6 +27437,69 @@ Auckland (New Zealand) Region
.PP
Location constraint - must be set to match the Region.
.PP
+Used when creating buckets only.
+.PP
+Properties:
+.IP \[bu] 2
+Config: location_constraint
+.IP \[bu] 2
+Env Var: RCLONE_S3_LOCATION_CONSTRAINT
+.IP \[bu] 2
+Provider: Qiniu
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]cn-east-1\[dq]
+.RS 2
+.IP \[bu] 2
+East China Region 1
+.RE
+.IP \[bu] 2
+\[dq]cn-east-2\[dq]
+.RS 2
+.IP \[bu] 2
+East China Region 2
+.RE
+.IP \[bu] 2
+\[dq]cn-north-1\[dq]
+.RS 2
+.IP \[bu] 2
+North China Region 1
+.RE
+.IP \[bu] 2
+\[dq]cn-south-1\[dq]
+.RS 2
+.IP \[bu] 2
+South China Region 1
+.RE
+.IP \[bu] 2
+\[dq]us-north-1\[dq]
+.RS 2
+.IP \[bu] 2
+North America Region 1
+.RE
+.IP \[bu] 2
+\[dq]ap-southeast-1\[dq]
+.RS 2
+.IP \[bu] 2
+Southeast Asia Region 1
+.RE
+.IP \[bu] 2
+\[dq]ap-northeast-1\[dq]
+.RS 2
+.IP \[bu] 2
+Northeast Asia Region 1
+.RE
+.RE
+.SS --s3-location-constraint
+.PP
+Location constraint - must be set to match the Region.
+.PP
Leave blank if not sure.
Used when creating buckets only.
.PP
@@ -26892,7 +27510,7 @@ Config: location_constraint
Env Var: RCLONE_S3_LOCATION_CONSTRAINT
.IP \[bu] 2
Provider:
-!AWS,IBMCOS,IDrive,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,ArvanCloud,RackCorp,Scaleway,StackPath,Storj,TencentCOS
+!AWS,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,ArvanCloud,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS
.IP \[bu] 2
Type: string
.IP \[bu] 2
@@ -27369,13 +27987,56 @@ Archived storage.
Prices are lower, but it needs to be restored first to be accessed.
.RE
.RE
+.SS --s3-storage-class
+.PP
+The storage class to use when storing new objects in Qiniu.
+.PP
+Properties:
+.IP \[bu] 2
+Config: storage_class
+.IP \[bu] 2
+Env Var: RCLONE_S3_STORAGE_CLASS
+.IP \[bu] 2
+Provider: Qiniu
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]STANDARD\[dq]
+.RS 2
+.IP \[bu] 2
+Standard storage class
+.RE
+.IP \[bu] 2
+\[dq]LINE\[dq]
+.RS 2
+.IP \[bu] 2
+Infrequent access storage mode
+.RE
+.IP \[bu] 2
+\[dq]GLACIER\[dq]
+.RS 2
+.IP \[bu] 2
+Archive storage mode
+.RE
+.IP \[bu] 2
+\[dq]DEEP_ARCHIVE\[dq]
+.RS 2
+.IP \[bu] 2
+Deep archive storage mode
+.RE
+.RE
.SS Advanced options
.PP
Here are the Advanced options specific to s3 (Amazon S3 Compliant
Storage Providers including AWS, Alibaba, Ceph, China Mobile,
Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS,
-IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS,
-StackPath, Storj, Tencent COS and Wasabi).
+IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway,
+SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi).
.SS --s3-bucket-acl
.PP
Canned ACL used when creating buckets.
@@ -27482,9 +28143,11 @@ AES256
.RE
.SS --s3-sse-customer-key
.PP
-If using SSE-C you must provide the secret encryption key used to
+To use SSE-C you may provide the secret encryption key used to
encrypt/decrypt your data.
.PP
+Alternatively you can provide --sse-customer-key-base64.
+.PP
Properties:
.IP \[bu] 2
Config: sse_customer_key
@@ -27506,6 +28169,34 @@ Examples:
None
.RE
.RE
+.SS --s3-sse-customer-key-base64
+.PP
+If using SSE-C you must provide the secret encryption key encoded in
+base64 format to encrypt/decrypt your data.
+.PP
+Alternatively you can provide --sse-customer-key.
+.PP
+Properties:
+.IP \[bu] 2
+Config: sse_customer_key_base64
+.IP \[bu] 2
+Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_BASE64
+.IP \[bu] 2
+Provider: AWS,Ceph,ChinaMobile,Minio
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]\[dq]
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.RE
.SS --s3-sse-customer-key-md5
.PP
If using SSE-C you may provide the secret encryption key MD5 checksum
@@ -28074,6 +28765,77 @@ Env Var: RCLONE_S3_USE_PRESIGNED_REQUEST
Type: bool
.IP \[bu] 2
Default: false
+.SS --s3-versions
+.PP
+Include old versions in directory listings.
+.PP
+Properties:
+.IP \[bu] 2
+Config: versions
+.IP \[bu] 2
+Env Var: RCLONE_S3_VERSIONS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --s3-version-at
+.PP
+Show file versions as they were at the specified time.
+.PP
+The parameter should be a date, \[dq]2006-01-02\[dq], a datetime
+\[dq]2006-01-02 15:04:05\[dq], or a duration for that long ago, e.g.
+\[dq]100d\[dq] or \[dq]1h\[dq].
+.PP
+Note that when using this no file write operations are permitted, so you
+can\[aq]t upload files or delete them.
+.PP
+See the time option docs (https://rclone.org/docs/#time-option) for
+valid formats.
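+.PP
+For example (illustrative date):
+.IP
+.nf
+\f[C]
+rclone ls --s3-version-at 2022-01-01 s3:bucket
+\f[R]
+.fi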
+.PP
+Properties:
+.IP \[bu] 2
+Config: version_at
+.IP \[bu] 2
+Env Var: RCLONE_S3_VERSION_AT
+.IP \[bu] 2
+Type: Time
+.IP \[bu] 2
+Default: off
+.SS --s3-decompress
+.PP
+If set this will decompress gzip encoded objects.
+.PP
+It is possible to upload objects to S3 with \[dq]Content-Encoding:
+gzip\[dq] set.
+Normally rclone will download these files as compressed objects.
+.PP
+If this flag is set then rclone will decompress these files with
+\[dq]Content-Encoding: gzip\[dq] as they are received.
+This means that rclone can\[aq]t check the size and hash but the file
+contents will be decompressed.
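+.PP
+For example (illustrative object name):
+.IP
+.nf
+\f[C]
+rclone copy --s3-decompress s3:bucket/logs.gz /tmp
+\f[R]
+.fi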
+.PP
+Properties:
+.IP \[bu] 2
+Config: decompress
+.IP \[bu] 2
+Env Var: RCLONE_S3_DECOMPRESS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --s3-no-system-metadata
+.PP
+Suppress setting and reading of system metadata.
+.PP
+Properties:
+.IP \[bu] 2
+Config: no_system_metadata
+.IP \[bu] 2
+Env Var: RCLONE_S3_NO_SYSTEM_METADATA
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS Metadata
.PP
User metadata is stored as x-amz-meta- keys.
@@ -28348,6 +29110,52 @@ Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
Options:
.IP \[bu] 2
\[dq]max-age\[dq]: Max age of upload to delete
+.SS cleanup-hidden
+.PP
+Remove old versions of files.
+.IP
+.nf
+\f[C]
+rclone backend cleanup-hidden remote: [options] [<arguments>+]
+\f[R]
+.fi
+.PP
+This command removes any old hidden versions of files on a versions
+enabled bucket.
+.PP
+Note that you can use -i/--dry-run with this command to see what it
+would do.
+.IP
+.nf
+\f[C]
+rclone backend cleanup-hidden s3:bucket/path/to/dir
+\f[R]
+.fi
+.SS versioning
+.PP
+Set/get versioning support for a bucket.
+.IP
+.nf
+\f[C]
+rclone backend versioning remote: [options] [<arguments>+]
+\f[R]
+.fi
+.PP
+This command sets versioning support if a parameter is passed and then
+returns the current versioning status for the bucket supplied.
+.IP
+.nf
+\f[C]
+rclone backend versioning s3:bucket # read status only
+rclone backend versioning s3:bucket Enabled
+rclone backend versioning s3:bucket Suspended
+\f[R]
+.fi
+.PP
+It may return \[dq]Enabled\[dq], \[dq]Suspended\[dq] or
+\[dq]Unversioned\[dq].
+Note that once versioning has been enabled the status can\[aq]t be set
+back to \[dq]Unversioned\[dq].
.SS Anonymous access to public buckets
.PP
If you want to use rclone to access a public bucket, configure with a
@@ -29140,6 +29948,230 @@ d) Delete this remote
y/e/d> y
\f[R]
.fi
+.SS IONOS Cloud
+.PP
+IONOS S3 Object Storage (https://cloud.ionos.com/storage/object-storage)
+is a service offered by IONOS for storing and accessing unstructured
+data.
+To connect to the service, you will need an access key and a secret key.
+These can be found in the Data Center Designer (https://dcd.ionos.com/),
+by selecting \f[B]Manager resources\f[R] > \f[B]Object Storage Key
+Manager\f[R].
+.PP
+Here is an example of a configuration.
+First, run \f[C]rclone config\f[R].
+This will walk you through an interactive setup process.
+Type \f[C]n\f[R] to add the new remote, and then enter a name:
+.IP
+.nf
+\f[C]
+Enter name for new remote.
+name> ionos-fra
+\f[R]
+.fi
+.PP
+Type \f[C]s3\f[R] to choose the connection type:
+.IP
+.nf
+\f[C]
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi
+ \[rs] (s3)
+[snip]
+Storage> s3
+\f[R]
+.fi
+.PP
+Type \f[C]IONOS\f[R]:
+.IP
+.nf
+\f[C]
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / IONOS Cloud
+ \[rs] (IONOS)
+[snip]
+provider> IONOS
+\f[R]
+.fi
+.PP
+Press Enter to choose the default option
+\f[C]Enter AWS credentials in the next step\f[R]:
+.IP
+.nf
+\f[C]
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \[rs] (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \[rs] (true)
+env_auth>
+\f[R]
+.fi
+.PP
+Enter your Access Key and Secret key.
+These can be retrieved in the Data Center
+Designer (https://dcd.ionos.com/), click on the menu \[dq]Manager
+resources\[dq] / \[dq]Object Storage Key Manager\[dq].
+.IP
+.nf
+\f[C]
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> YOUR_ACCESS_KEY
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> YOUR_SECRET_KEY
+\f[R]
+.fi
+.PP
+Choose the region where your bucket is located:
+.IP
+.nf
+\f[C]
+Option region.
+Region where your bucket will be created and your data stored.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Frankfurt, Germany
+ \[rs] (de)
+ 2 / Berlin, Germany
+ \[rs] (eu-central-2)
+ 3 / Logrono, Spain
+ \[rs] (eu-south-2)
+region> 2
+\f[R]
+.fi
+.PP
+Choose the endpoint from the same region:
+.IP
+.nf
+\f[C]
+Option endpoint.
+Endpoint for IONOS S3 Object Storage.
+Specify the endpoint from the same region.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Frankfurt, Germany
+ \[rs] (s3-eu-central-1.ionoscloud.com)
+ 2 / Berlin, Germany
+ \[rs] (s3-eu-central-2.ionoscloud.com)
+ 3 / Logrono, Spain
+ \[rs] (s3-eu-south-2.ionoscloud.com)
+endpoint> 1
+\f[R]
+.fi
+.PP
+Press Enter to choose the default option or choose the desired ACL
+setting:
+.IP
+.nf
+\f[C]
+Option acl.
+Canned ACL used when creating buckets and storing or copying objects.
+This ACL is used for creating objects and if bucket_acl isn\[aq]t set, for creating buckets too.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Note that this ACL is applied when server-side copying objects as S3
+doesn\[aq]t copy the ACL from the source but rather writes a fresh one.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \[rs] (private)
+ / Owner gets FULL_CONTROL.
+[snip]
+acl>
+\f[R]
+.fi
+.PP
+Press Enter to skip the advanced config:
+.IP
+.nf
+\f[C]
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n>
+\f[R]
+.fi
+.PP
+Press Enter to save the configuration, and then \f[C]q\f[R] to quit the
+configuration process:
+.IP
+.nf
+\f[C]
+Configuration complete.
+Options:
+- type: s3
+- provider: IONOS
+- access_key_id: YOUR_ACCESS_KEY
+- secret_access_key: YOUR_SECRET_KEY
+- endpoint: s3-eu-central-1.ionoscloud.com
+Keep this \[dq]ionos-fra\[dq] remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+Done! Now you can try some commands (for macOS, use \f[C]./rclone\f[R]
+instead of \f[C]rclone\f[R]).
+.IP "1)" 3
+Create a bucket (the name must be unique within the whole IONOS S3)
+.IP
+.nf
+\f[C]
+rclone mkdir ionos-fra:my-bucket
+\f[R]
+.fi
+.IP "2)" 3
+List available buckets
+.IP
+.nf
+\f[C]
+rclone lsd ionos-fra:
+\f[R]
+.fi
+.IP "3)" 3
+Copy a file from local to remote
+.IP
+.nf
+\f[C]
+rclone copy /Users/file.txt ionos-fra:my-bucket
+\f[R]
+.fi
+.IP "4)" 3
+List contents of a bucket
+.IP
+.nf
+\f[C]
+rclone ls ionos-fra:my-bucket
+\f[R]
+.fi
+.IP "5)" 3
+Copy a file from remote to local
+.IP
+.nf
+\f[C]
+rclone copy ionos-fra:my-bucket/file.txt /tmp
+\f[R]
+.fi
.SS Minio
.PP
Minio (https://minio.io/) is an object storage server built for cloud
@@ -29217,6 +30249,228 @@ So once set up, for example, to copy files into a bucket
rclone copy /path/to/files minio:bucket
\f[R]
.fi
+.SS Qiniu Cloud Object Storage (Kodo)
+.PP
+Qiniu Cloud Object Storage
+(Kodo) (https://www.qiniu.com/en/products/kodo) is an object storage
+service built on Qiniu\[aq]s independently developed core technology.
+Kodo can be widely applied to mass data management.
+.PP
+To configure access to Qiniu Kodo, follow the steps below:
+.IP "1." 3
+Run \f[C]rclone config\f[R] and select \f[C]n\f[R] for a new remote.
+.IP
+.nf
+\f[C]
+rclone config
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+\f[R]
+.fi
+.IP "2." 3
+Give the name of the configuration.
+For example, name it \[aq]qiniu\[aq].
+.IP
+.nf
+\f[C]
+name> qiniu
+\f[R]
+.fi
+.IP "3." 3
+Select \f[C]s3\f[R] storage.
+.IP
+.nf
+\f[C]
+Choose a number from below, or type in your own value
+ 1 / 1Fichier
+ \[rs] (fichier)
+ 2 / Akamai NetStorage
+ \[rs] (netstorage)
+ 3 / Alias for an existing remote
+ \[rs] (alias)
+ 4 / Amazon Drive
+ \[rs] (amazon cloud drive)
+ 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, Digital Ocean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi
+ \[rs] (s3)
+[snip]
+Storage> s3
+\f[R]
+.fi
+.IP "4." 3
+Select \f[C]Qiniu\f[R] provider.
+.IP
+.nf
+\f[C]
+Choose a number from below, or type in your own value
+1 / Amazon Web Services (AWS) S3
+ \[rs] \[dq]AWS\[dq]
+[snip]
+22 / Qiniu Object Storage (Kodo)
+ \[rs] (Qiniu)
+[snip]
+provider> Qiniu
+\f[R]
+.fi
+.IP "5." 3
+Enter your SecretId and SecretKey of Qiniu Kodo.
+.IP
+.nf
+\f[C]
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Enter a boolean value (true or false). Press Enter for the default (\[dq]false\[dq]).
+Choose a number from below, or type in your own value
+ 1 / Enter AWS credentials in the next step
+ \[rs] \[dq]false\[dq]
+ 2 / Get AWS credentials from the environment (env vars or IAM)
+ \[rs] \[dq]true\[dq]
+env_auth> 1
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a string value. Press Enter for the default (\[dq]\[dq]).
+access_key_id> AKIDxxxxxxxxxx
+AWS Secret Access Key (password)
+Leave blank for anonymous access or runtime credentials.
+Enter a string value. Press Enter for the default (\[dq]\[dq]).
+secret_access_key> xxxxxxxxxxx
+\f[R]
+.fi
+.IP "6." 3
+Select the endpoint for Qiniu Kodo.
+This is the standard endpoint for each region.
+.IP
+.nf
+\f[C]
+ / The default endpoint - a good choice if you are unsure.
+ 1 | East China Region 1.
+ | Needs location constraint cn-east-1.
+ \[rs] (cn-east-1)
+ / East China Region 2.
+ 2 | Needs location constraint cn-east-2.
+ \[rs] (cn-east-2)
+ / North China Region 1.
+ 3 | Needs location constraint cn-north-1.
+ \[rs] (cn-north-1)
+ / South China Region 1.
+ 4 | Needs location constraint cn-south-1.
+ \[rs] (cn-south-1)
+ / North America Region.
+ 5 | Needs location constraint us-north-1.
+ \[rs] (us-north-1)
+ / Southeast Asia Region 1.
+ 6 | Needs location constraint ap-southeast-1.
+ \[rs] (ap-southeast-1)
+ / Northeast Asia Region 1.
+ 7 | Needs location constraint ap-northeast-1.
+ \[rs] (ap-northeast-1)
+[snip]
+endpoint> 1
+
+Option endpoint.
+Endpoint for Qiniu Object Storage.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / East China Endpoint 1
+ \[rs] (s3-cn-east-1.qiniucs.com)
+ 2 / East China Endpoint 2
+ \[rs] (s3-cn-east-2.qiniucs.com)
+ 3 / North China Endpoint 1
+ \[rs] (s3-cn-north-1.qiniucs.com)
+ 4 / South China Endpoint 1
+ \[rs] (s3-cn-south-1.qiniucs.com)
+ 5 / North America Endpoint 1
+ \[rs] (s3-us-north-1.qiniucs.com)
+ 6 / Southeast Asia Endpoint 1
+ \[rs] (s3-ap-southeast-1.qiniucs.com)
+ 7 / Northeast Asia Endpoint 1
+ \[rs] (s3-ap-northeast-1.qiniucs.com)
+endpoint> 1
+
+Option location_constraint.
+Location constraint - must be set to match the Region.
+Used when creating buckets only.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / East China Region 1
+ \[rs] (cn-east-1)
+ 2 / East China Region 2
+ \[rs] (cn-east-2)
+ 3 / North China Region 1
+ \[rs] (cn-north-1)
+ 4 / South China Region 1
+ \[rs] (cn-south-1)
+ 5 / North America Region 1
+ \[rs] (us-north-1)
+ 6 / Southeast Asia Region 1
+ \[rs] (ap-southeast-1)
+ 7 / Northeast Asia Region 1
+ \[rs] (ap-northeast-1)
+location_constraint> 1
+\f[R]
+.fi
+.IP "7." 3
+Choose the ACL and storage class.
+.IP
+.nf
+\f[C]
+Note that this ACL is applied when server-side copying objects as S3
+doesn\[aq]t copy the ACL from the source but rather writes a fresh one.
+Enter a string value. Press Enter for the default (\[dq]\[dq]).
+Choose a number from below, or type in your own value
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \[rs] (private)
+ / Owner gets FULL_CONTROL.
+ 2 | The AllUsers group gets READ access.
+ \[rs] (public-read)
+[snip]
+acl> 2
+The storage class to use when storing new objects in Qiniu.
+Enter a string value. Press Enter for the default (\[dq]\[dq]).
+Choose a number from below, or type in your own value
+ 1 / Standard storage class
+ \[rs] (STANDARD)
+ 2 / Infrequent access storage mode
+ \[rs] (LINE)
+ 3 / Archive storage mode
+ \[rs] (GLACIER)
+ 4 / Deep archive storage mode
+ \[rs] (DEEP_ARCHIVE)
+[snip]
+storage_class> 1
+Edit advanced config? (y/n)
+y) Yes
+n) No (default)
+y/n> n
+Remote config
+--------------------
+[qiniu]
+- type: s3
+- provider: Qiniu
+- access_key_id: xxx
+- secret_access_key: xxx
+- region: cn-east-1
+- endpoint: s3-cn-east-1.qiniucs.com
+- location_constraint: cn-east-1
+- acl: public-read
+- storage_class: STANDARD
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name Type
+==== ====
+qiniu s3
+\f[R]
+.fi
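+.PP
+Once configured you can then use rclone like this (the bucket name
+below is illustrative):
+.IP
+.nf
+\f[C]
+rclone lsd qiniu:
+rclone mkdir qiniu:bucket
+rclone copy /path/to/files qiniu:bucket
+\f[R]
+.fi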
.SS RackCorp
.PP
RackCorp Object Storage (https://www.rackcorp.com/storage/s3storage) is
@@ -33714,8 +34968,8 @@ unencrypted content, as well as through a crypt remote for encrypted
content, it is recommended to point the crypt remote to a separate
directory within the wrapped remote.
If you use a bucket-based storage system (e.g.
-Swift, S3, Google Compute Storage, B2, Hubic) it is generally advisable
-to wrap the crypt remote around a specific bucket (\f[C]s3:bucket\f[R]).
+Swift, S3, Google Compute Storage, B2) it is generally advisable to wrap
+the crypt remote around a specific bucket (\f[C]s3:bucket\f[R]).
If wrapping around the entire root of the storage (\f[C]s3:\f[R]), and
use the optional file name encryption, rclone will encrypt the bucket
name.
@@ -33733,7 +34987,7 @@ content.
The only possibility is to re-upload everything via a crypt remote
configured with your new password.
.PP
-Depending on the size of your data, your bandwith, storage quota etc,
+Depending on the size of your data, your bandwidth, storage quota etc,
there are different approaches you can take: - If you have everything in
a different location, for example on your local system, you could remove
all of the prior encrypted files, change the password for your
@@ -33748,7 +35002,7 @@ and re-encrypting using the new password.
When done, delete the original crypt remote directory and finally the
rclone crypt configuration with the old password.
All data will be streamed from the storage system and back, so you will
-get half the bandwith and be charged twice if you have upload and
+get half the bandwidth and be charged twice if you have upload and
download quota on the storage system.
.PP
\f[B]Note\f[R]: A security problem related to the random password
@@ -34138,7 +35392,7 @@ How to encode the encrypted filename to text string.
.PP
This option could help with shortening the encrypted filename.
The suitable option would depend on the way your remote count the
-filename length and if it\[aq]s case sensitve.
+filename length and if it\[aq]s case sensitive.
.PP
Properties:
.IP \[bu] 2
@@ -34531,7 +35785,7 @@ Generally -1 (default, equivalent to 5) is recommended.
Levels 1 to 9 increase compression at the cost of speed.
Going past 6 generally offers very little return.
.PP
-Level -2 uses Huffmann encoding only.
+Level -2 uses Huffman encoding only.
Only use if you know what you are doing.
Level 0 turns off compression.
.PP
@@ -34701,7 +35955,7 @@ remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
[AllDrives]
type = combine
-remote = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
+upstreams = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
\f[R]
.fi
.PP
@@ -35256,7 +36510,7 @@ Type: Duration
Default: 0s
.SS --dropbox-batch-commit-timeout
.PP
-Max time to wait for a batch to finish comitting
+Max time to wait for a batch to finish committing
.PP
Properties:
.IP \[bu] 2
@@ -35369,8 +36623,8 @@ accessible through a global file system.
.SS Configuration
.PP
The initial setup for the Enterprise File Fabric backend involves
-getting a token from the the Enterprise File Fabric which you need to do
-in your browser.
+getting a token from the Enterprise File Fabric which you need to do in
+your browser.
\f[C]rclone config\f[R] walks you through it.
.PP
Here is an example of how to make a remote called \f[C]remote\f[R].
@@ -35702,8 +36956,7 @@ rclone config
Rclone config guides you through an interactive setup process.
A minimal rclone FTP remote definition only requires host, username and
password.
-For an anonymous FTP server, use \f[C]anonymous\f[R] as username and
-your email address as password.
+For an anonymous FTP server, see below.
.IP
.nf
\f[C]
@@ -35797,11 +37050,41 @@ any excess files in the directory.
rclone sync -i /home/local/directory remote:directory
\f[R]
.fi
-.SS Example without a config file
+.SS Anonymous FTP
+.PP
+When connecting to an FTP server that allows anonymous login, you can use
+the special \[dq]anonymous\[dq] username.
+Traditionally, this user account accepts any string as a password,
+although it is common to use either the password \[dq]anonymous\[dq] or
+\[dq]guest\[dq].
+Some servers require the use of a valid e-mail address as password.
+.PP
+Using on-the-fly or connection
+string (https://rclone.org/docs/#connection-strings) remotes makes it
+easy to access such servers, without requiring any configuration in
+advance.
+The following are examples of that:
.IP
.nf
\f[C]
-rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=\[ga]rclone obscure dummy\[ga]
+rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy)
+rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy):
+\f[R]
+.fi
+.PP
+The above examples work in Linux shells and in PowerShell, but not
+Windows Command Prompt.
+They execute the rclone
+obscure (https://rclone.org/commands/rclone_obscure/) command to create
+a password string in the format required by the pass option.
+The following examples are exactly the same, except they use an
+already obscured string representation of the same password
+\[dq]dummy\[dq], and therefore work even in Windows Command Prompt:
+.IP
+.nf
+\f[C]
+rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM
+rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM:
\f[R]
.fi
.SS Implicit TLS
@@ -35818,7 +37101,7 @@ set (https://rclone.org/overview/#restricted-characters) the following
characters are also replaced:
.PP
File names cannot end with the following characters.
-Repacement is limited to the last character in a file name:
+Replacement is limited to the last character in a file name:
.PP
.TS
tab(@);
@@ -35971,6 +37254,19 @@ Here are the Advanced options specific to ftp (FTP).
.PP
Maximum number of FTP simultaneous connections, 0 for unlimited.
.PP
+Note that setting this is very likely to cause deadlocks so it should be
+used with care.
+.PP
+If you are doing a sync or copy then make sure concurrency is one more
+than the sum of \f[C]--transfers\f[R] and \f[C]--checkers\f[R].
+.PP
+If you use \f[C]--check-first\f[R] then it just needs to be one more
+than the maximum of \f[C]--checkers\f[R] and \f[C]--transfers\f[R].
+.PP
+So for \f[C]concurrency 3\f[R] you\[aq]d use
+\f[C]--checkers 2 --transfers 2 --check-first\f[R] or
+\f[C]--checkers 1 --transfers 1\f[R].
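+.PP
+For example, a sketch of a sync with \f[C]concurrency 3\f[R] (the
+remote name and paths are illustrative):
+.IP
+.nf
+\f[C]
+rclone sync --ftp-concurrency 3 --checkers 2 --transfers 2 --check-first /path/to/files remote:dir
+\f[R]
+.fi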
+.PP
Properties:
.IP \[bu] 2
Config: concurrency
@@ -36045,6 +37341,20 @@ Env Var: RCLONE_FTP_WRITING_MDTM
Type: bool
.IP \[bu] 2
Default: false
+.SS --ftp-force-list-hidden
+.PP
+Use LIST -a to force listing of hidden files and folders.
+This will disable the use of MLSD.
+.PP
+Properties:
+.IP \[bu] 2
+Config: force_list_hidden
+.IP \[bu] 2
+Env Var: RCLONE_FTP_FORCE_LIST_HIDDEN
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
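+.PP
+For example, to list a directory including hidden files and folders
+(the remote name and path are illustrative):
+.IP
+.nf
+\f[C]
+rclone lsf --ftp-force-list-hidden remote:path/to/dir
+\f[R]
+.fi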
.SS --ftp-idle-timeout
.PP
Max time before closing idle connections.
@@ -37185,7 +38495,7 @@ If set this will decompress gzip encoded objects.
.PP
It is possible to upload objects to GCS with \[dq]Content-Encoding:
gzip\[dq] set.
-Normally rclone will download these files files as compressed objects.
+Normally rclone will download these files as compressed objects.
.PP
If this flag is set then rclone will decompress these files with
\[dq]Content-Encoding: gzip\[dq] as they are received.
@@ -37201,6 +38511,21 @@ Env Var: RCLONE_GCS_DECOMPRESS
Type: bool
.IP \[bu] 2
Default: false
+.SS --gcs-endpoint
+.PP
+Endpoint for the service.
+.PP
+Leave blank normally.
+.PP
+Properties:
+.IP \[bu] 2
+Config: endpoint
+.IP \[bu] 2
+Env Var: RCLONE_GCS_ENDPOINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
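+.PP
+For example, to direct rclone at an alternative endpoint, such as a
+private gateway or an emulator (the URL below is illustrative):
+.IP
+.nf
+\f[C]
+rclone lsd remote: --gcs-endpoint https://storage.example.com
+\f[R]
+.fi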
.SS --gcs-encoding
.PP
The encoding for the backend.
@@ -39081,14 +40406,14 @@ remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
[AllDrives]
type = combine
-remote = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
+upstreams = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
\f[R]
.fi
.PP
Adding this to the rclone config file will cause those team drives to be
accessible with the aliases shown.
-Any illegal charactes will be substituted with \[dq]_\[dq] and duplicate
-names will have numbers suffixed.
+Any illegal characters will be substituted with \[dq]_\[dq] and
+duplicate names will have numbers suffixed.
It will also add a remote called AllDrives which shows all the shared
drives combined into one directory tree.
.SS untrash
@@ -40729,7 +42054,7 @@ HiDrive allows modification times to be set on objects accurate to 1
second.
.PP
HiDrive supports its own hash type (https://static.hidrive.com/dev/0001)
-which is used to verify the integrety of file contents after successful
+which is used to verify the integrity of file contents after successful
transfers.
.SS Restricted filename characters
.PP
@@ -41400,275 +42725,6 @@ of an rclone union remote.
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) and rclone
about (https://rclone.org/commands/rclone_about/)
-.SH Hubic
-.PP
-Paths are specified as \f[C]remote:path\f[R]
-.PP
-Paths are specified as \f[C]remote:container\f[R] (or \f[C]remote:\f[R]
-for the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
-\f[C]remote:container/path/to/dir\f[R].
-.SS Configuration
-.PP
-The initial setup for Hubic involves getting a token from Hubic which
-you need to do in your browser.
-\f[C]rclone config\f[R] walks you through it.
-.PP
-Here is an example of how to make a remote called \f[C]remote\f[R].
-First run:
-.IP
-.nf
-\f[C]
- rclone config
-\f[R]
-.fi
-.PP
-This will guide you through an interactive setup process:
-.IP
-.nf
-\f[C]
-n) New remote
-s) Set configuration password
-n/s> n
-name> remote
-Type of storage to configure.
-Choose a number from below, or type in your own value
-[snip]
-XX / Hubic
- \[rs] \[dq]hubic\[dq]
-[snip]
-Storage> hubic
-Hubic Client Id - leave blank normally.
-client_id>
-Hubic Client Secret - leave blank normally.
-client_secret>
-Remote config
-Use auto config?
- * Say Y if not sure
- * Say N if you are working on a remote or headless machine
-y) Yes
-n) No
-y/n> y
-If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth
-Log in and authorize rclone for access
-Waiting for code...
-Got code
---------------------
-[remote]
-client_id =
-client_secret =
-token = {\[dq]access_token\[dq]:\[dq]XXXXXX\[dq]}
---------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-\f[R]
-.fi
-.PP
-See the remote setup docs (https://rclone.org/remote_setup/) for how to
-set it up on a machine with no Internet browser available.
-.PP
-Note that rclone runs a webserver on your local machine to collect the
-token as returned from Hubic.
-This only runs from the moment it opens your browser to the moment you
-get back the verification code.
-This is on \f[C]http://127.0.0.1:53682/\f[R] and this it may require you
-to unblock it temporarily if you are running a host firewall.
-.PP
-Once configured you can then use \f[C]rclone\f[R] like this,
-.PP
-List containers in the top level of your Hubic
-.IP
-.nf
-\f[C]
-rclone lsd remote:
-\f[R]
-.fi
-.PP
-List all the files in your Hubic
-.IP
-.nf
-\f[C]
-rclone ls remote:
-\f[R]
-.fi
-.PP
-To copy a local directory to an Hubic directory called backup
-.IP
-.nf
-\f[C]
-rclone copy /home/source remote:backup
-\f[R]
-.fi
-.PP
-If you want the directory to be visible in the official \f[I]Hubic
-browser\f[R], you need to copy your files to the \f[C]default\f[R]
-directory
-.IP
-.nf
-\f[C]
-rclone copy /home/source remote:default/backup
-\f[R]
-.fi
-.SS --fast-list
-.PP
-This remote supports \f[C]--fast-list\f[R] which allows you to use fewer
-transactions in exchange for more memory.
-See the rclone docs (https://rclone.org/docs/#fast-list) for more
-details.
-.SS Modified time
-.PP
-The modified time is stored as metadata on the object as
-\f[C]X-Object-Meta-Mtime\f[R] as floating point since the epoch accurate
-to 1 ns.
-.PP
-This is a de facto standard (used in the official python-swiftclient
-amongst others) for storing the modification time for an object.
-.PP
-Note that Hubic wraps the Swift backend, so most of the properties of
-are the same.
-.SS Standard options
-.PP
-Here are the Standard options specific to hubic (Hubic).
-.SS --hubic-client-id
-.PP
-OAuth Client Id.
-.PP
-Leave blank normally.
-.PP
-Properties:
-.IP \[bu] 2
-Config: client_id
-.IP \[bu] 2
-Env Var: RCLONE_HUBIC_CLIENT_ID
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --hubic-client-secret
-.PP
-OAuth Client Secret.
-.PP
-Leave blank normally.
-.PP
-Properties:
-.IP \[bu] 2
-Config: client_secret
-.IP \[bu] 2
-Env Var: RCLONE_HUBIC_CLIENT_SECRET
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS Advanced options
-.PP
-Here are the Advanced options specific to hubic (Hubic).
-.SS --hubic-token
-.PP
-OAuth Access Token as a JSON blob.
-.PP
-Properties:
-.IP \[bu] 2
-Config: token
-.IP \[bu] 2
-Env Var: RCLONE_HUBIC_TOKEN
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --hubic-auth-url
-.PP
-Auth server URL.
-.PP
-Leave blank to use the provider defaults.
-.PP
-Properties:
-.IP \[bu] 2
-Config: auth_url
-.IP \[bu] 2
-Env Var: RCLONE_HUBIC_AUTH_URL
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --hubic-token-url
-.PP
-Token server url.
-.PP
-Leave blank to use the provider defaults.
-.PP
-Properties:
-.IP \[bu] 2
-Config: token_url
-.IP \[bu] 2
-Env Var: RCLONE_HUBIC_TOKEN_URL
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS --hubic-chunk-size
-.PP
-Above this size files will be chunked into a _segments container.
-.PP
-Above this size files will be chunked into a _segments container.
-The default for this is 5 GiB which is its maximum value.
-.PP
-Properties:
-.IP \[bu] 2
-Config: chunk_size
-.IP \[bu] 2
-Env Var: RCLONE_HUBIC_CHUNK_SIZE
-.IP \[bu] 2
-Type: SizeSuffix
-.IP \[bu] 2
-Default: 5Gi
-.SS --hubic-no-chunk
-.PP
-Don\[aq]t chunk files during streaming upload.
-.PP
-When doing streaming uploads (e.g.
-using rcat or mount) setting this flag will cause the swift backend to
-not upload chunked files.
-.PP
-This will limit the maximum upload size to 5 GiB.
-However non chunked files are easier to deal with and have an MD5SUM.
-.PP
-Rclone will still chunk files bigger than chunk_size when doing normal
-copy operations.
-.PP
-Properties:
-.IP \[bu] 2
-Config: no_chunk
-.IP \[bu] 2
-Env Var: RCLONE_HUBIC_NO_CHUNK
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --hubic-encoding
-.PP
-The encoding for the backend.
-.PP
-See the encoding section in the
-overview (https://rclone.org/overview/#encoding) for more info.
-.PP
-Properties:
-.IP \[bu] 2
-Config: encoding
-.IP \[bu] 2
-Env Var: RCLONE_HUBIC_ENCODING
-.IP \[bu] 2
-Type: MultiEncoder
-.IP \[bu] 2
-Default: Slash,InvalidUtf8
-.SS Limitations
-.PP
-This uses the normal OpenStack Swift mechanism to refresh the Swift API
-credentials and ignores the expires field returned by the Hubic API.
-.PP
-The Swift API doesn\[aq]t return a correct MD5SUM for segmented files
-(Dynamic or Static Large Objects) so rclone won\[aq]t check or use the
-MD5SUM for these.
.SH Internet Archive
.PP
The Internet Archive backend utilizes Items on
@@ -41682,11 +42738,10 @@ Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for
the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
\f[C]remote:item/path/to/dir\f[R].
.PP
-Once you have made a remote (see the provider specific section above)
-you can use it like this:
-.PP
Unlike S3, listing up all items uploaded by you isn\[aq]t supported.
.PP
+Once you have made a remote, you can use it like this:
+.PP
Make a new item
.IP
.nf
@@ -41741,7 +42796,7 @@ However, some fields are reserved by both Internet Archive and rclone.
The following are reserved by Internet Archive: - \f[C]name\f[R] -
\f[C]source\f[R] - \f[C]size\f[R] - \f[C]md5\f[R] - \f[C]crc32\f[R] -
\f[C]sha1\f[R] - \f[C]format\f[R] - \f[C]old_version\f[R] -
-\f[C]viruscheck\f[R]
+\f[C]viruscheck\f[R] - \f[C]summation\f[R]
.PP
Trying to set values to these keys is ignored with a warning.
Only setting \f[C]mtime\f[R] is an exception.
@@ -41999,7 +43054,7 @@ string
T}@T{
01234567
T}@T{
-N
+\f[B]Y\f[R]
T}
T{
format
@@ -42010,7 +43065,7 @@ string
T}@T{
Comma-Separated Values
T}@T{
-N
+\f[B]Y\f[R]
T}
T{
md5
@@ -42021,7 +43076,7 @@ string
T}@T{
01234567012345670123456701234567
T}@T{
-N
+\f[B]Y\f[R]
T}
T{
mtime
@@ -42032,7 +43087,7 @@ RFC 3339
T}@T{
2006-01-02T15:04:05.999999999Z
T}@T{
-N
+\f[B]Y\f[R]
T}
T{
name
@@ -42043,7 +43098,7 @@ filename
T}@T{
backend/internetarchive/internetarchive.go
T}@T{
-N
+\f[B]Y\f[R]
T}
T{
old_version
@@ -42054,7 +43109,7 @@ boolean
T}@T{
true
T}@T{
-N
+\f[B]Y\f[R]
T}
T{
rclone-ia-mtime
@@ -42098,7 +43153,7 @@ string
T}@T{
0123456701234567012345670123456701234567
T}@T{
-N
+\f[B]Y\f[R]
T}
T{
size
@@ -42109,7 +43164,7 @@ decimal number
T}@T{
123456
T}@T{
-N
+\f[B]Y\f[R]
T}
T{
source
@@ -42120,7 +43175,18 @@ string
T}@T{
original
T}@T{
-N
+\f[B]Y\f[R]
+T}
+T{
+summation
+T}@T{
+Check https://forum.rclone.org/t/31922 for how it is used
+T}@T{
+string
+T}@T{
+md5
+T}@T{
+\f[B]Y\f[R]
T}
T{
viruscheck
@@ -42131,7 +43197,7 @@ unixtime
T}@T{
1654191352
T}@T{
-N
+\f[B]Y\f[R]
T}
.TE
.PP
@@ -42147,7 +43213,7 @@ Cloud (cloud.telia.se) * Telia Sky (sky.telia.no) * Tele2 * Tele2 Cloud
(mittcloud.tele2.se) * Elkj\[/o]p (with subsidiaries): * Elkj\[/o]p
Cloud (cloud.elkjop.no) * Elgiganten Sweden (cloud.elgiganten.se) *
Elgiganten Denmark (cloud.elgiganten.dk) * Giganti Cloud
-(cloud.gigantti.fi) * ELKO Clouud (cloud.elko.is)
+(cloud.gigantti.fi) * ELKO Cloud (cloud.elko.is)
.PP
Most of the white-label versions are supported by this backend, although
may require different authentication setup - described below.
@@ -42163,12 +43229,48 @@ than the official service, and you have to choose the correct one when
setting up the remote.
.SS Standard authentication
.PP
-To configure Jottacloud you will need to generate a personal security
-token in the Jottacloud web interface.
-You will the option to do in your account security
-settings (https://www.jottacloud.com/web/secure) (for whitelabel version
-you need to find this page in its web interface).
-Note that the web interface may refer to this token as a JottaCli token.
+The standard authentication method used by the official service
+(jottacloud.com), as well as some of the whitelabel services, requires
+you to generate a single-use personal login token from the account
+security settings in the service\[aq]s web interface.
+Log in to your account, go to \[dq]Settings\[dq] and then
+\[dq]Security\[dq], or use the direct link presented to you by rclone
+when configuring the remote: https://www.jottacloud.com/web/secure.
+Scroll down to the section \[dq]Personal login token\[dq], and click the
+\[dq]Generate\[dq] button.
+Note that if you are using a whitelabel service you probably can\[aq]t
+use the direct link, you need to find the same page in their dedicated
+web interface, and also it may be in a different location than described
+above.
+.PP
+To access your account from multiple instances of rclone, you need to
+configure each of them with a separate personal login token.
+E.g.
+you create a Jottacloud remote with rclone in one location, and copy the
+configuration file to a second location where you also want to run
+rclone and access the same remote.
+Then you need to replace the token for one of them, using the config
+reconnect (https://rclone.org/commands/rclone_config_reconnect/)
+command, which requires you to generate a new personal login token and
+supply it as input.
+If you do not do this, the token may easily end up being invalidated,
+resulting in both instances failing with an error message along the
+lines of:
+.IP
+.nf
+\f[C]
+oauth2: cannot fetch token: 400 Bad Request
+Response: {\[dq]error\[dq]:\[dq]invalid_grant\[dq],\[dq]error_description\[dq]:\[dq]Stale token\[dq]}
+\f[R]
+.fi
+.PP
+When this happens, you need to replace the token as described above to
+be able to use your remote again.
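+.PP
+For example, to replace the token for an existing remote named
+\[dq]jotta\[dq] (an illustrative name), run the following and supply a
+newly generated personal login token when prompted:
+.IP
+.nf
+\f[C]
+rclone config reconnect jotta:
+\f[R]
+.fi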
+.PP
+All personal login tokens you have taken into use will be listed in the
+web interface under \[dq]My logged in devices\[dq], and from the right
+side of that list you can click the \[dq]X\[dq] button to revoke
+individual tokens.
.SS Legacy authentication
.PP
If you are using one of the whitelabel versions (e.g.
@@ -43750,7 +44852,7 @@ Use \f[C]rclone dedupe\f[R] to fix duplicated files.
.SS Object not found
.PP
If you are connecting to your Mega remote for the first time, to test
-access and syncronisation, you may receive an error such as
+access and synchronization, you may receive an error such as
.IP
.nf
\f[C]
@@ -44214,7 +45316,7 @@ group.
\f[B]Implicit Directory\f[R].
This refers to a directory within a path that has not been physically
created.
-For example, during upload of a file, non-existent subdirectories can be
+For example, during upload of a file, nonexistent subdirectories can be
specified in the target path.
NetStorage creates these as \[dq]implicit.\[dq] While the directories
aren\[aq]t physically created, they exist implicitly and the noted path
@@ -45245,7 +46347,7 @@ performing requests.
.PP
You may choose to create and use your own Client ID, in case the default
one does not work well for you.
-For example, you might see throtting.
+For example, you might see throttling.
.SS Creating Client ID for OneDrive Personal
.PP
To create your own Client ID, please follow these steps:
@@ -45312,8 +46414,7 @@ Make sure to create the App with your business account.
Follow the steps above to create an App.
However, we need a different account type here:
\f[C]Accounts in this organizational directory only (*** - Single tenant)\f[R].
-Note that you can also change the account type aftering creating the
-App.
+Note that you can also change the account type after creating the App.
.IP "3." 3
Find the tenant
ID (https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant)
@@ -45555,7 +46656,7 @@ Microsoft Cloud Germany
\[dq]cn\[dq]
.RS 2
.IP \[bu] 2
-Azure and Office 365 operated by 21Vianet in China
+Azure and Office 365 operated by Vnet Group in China
.RE
.RE
.SS Advanced options
@@ -45958,7 +47059,7 @@ here (https://support.office.com/en-us/article/invalid-file-names-and-file-types
.SS Versions
.PP
Every change in a file OneDrive causes the service to create a new
-version of the the file.
+version of the file.
This counts against a users quota.
For example changing the modification time of a file creates a second
version, so the file apparently uses twice the space.
@@ -46471,6 +47572,662 @@ of an rclone union remote.
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) and rclone
about (https://rclone.org/commands/rclone_about/)
+.SH Oracle Object Storage
+.PP
+Oracle Object Storage
+Overview (https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm)
+.PP
+Oracle Object Storage
+FAQ (https://www.oracle.com/cloud/storage/object-storage/faq/)
+.PP
+Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for
+the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
+\f[C]remote:bucket/path/to/dir\f[R].
+.SS Configuration
+.PP
+Here is an example of making an oracle object storage configuration.
+\f[C]rclone config\f[R] walks you through it.
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[R].
+First run:
+.IP
+.nf
+\f[C]
+ rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> n
+
+Enter name for new remote.
+name> remote
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Oracle Cloud Infrastructure Object Storage
+ \[rs] (oracleobjectstorage)
+Storage> oracleobjectstorage
+
+Option provider.
+Choose your Auth Provider
+Choose a number from below, or type in your own string value.
+Press Enter for the default (env_auth).
+ 1 / automatically pickup the credentials from runtime(env), first one to provide auth wins
+ \[rs] (env_auth)
+ / use an OCI user and an API key for authentication.
+ 2 | you\[cq]ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
+ | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
+ \[rs] (user_principal_auth)
+ / use instance principals to authorize an instance to make API calls.
+ 3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
+ | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
+ \[rs] (instance_principal_auth)
+ 4 / use resource principals to make API calls
+ \[rs] (resource_principal_auth)
+ 5 / no credentials needed, this is typically for reading public buckets
+ \[rs] (no_auth)
+provider> 2
+
+Option namespace.
+Object storage namespace
+Enter a value.
+namespace> idbamagbg734
+
+Option compartment.
+Object storage compartment OCID
+Enter a value.
+compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
+
+Option region.
+Object storage Region
+Enter a value.
+region> us-ashburn-1
+
+Option endpoint.
+Endpoint for Object storage API.
+Leave blank to use the default endpoint for the region.
+Enter a value. Press Enter to leave empty.
+endpoint>
+
+Option config_file.
+Path to OCI config file
+Choose a number from below, or type in your own string value.
+Press Enter for the default (\[ti]/.oci/config).
+ 1 / oci configuration file location
+ \[rs] (\[ti]/.oci/config)
+config_file> /etc/oci/dev.conf
+
+Option config_profile.
+Profile name inside OCI config file
+Choose a number from below, or type in your own string value.
+Press Enter for the default (Default).
+ 1 / Use the default profile
+ \[rs] (Default)
+config_profile> Test
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: oracleobjectstorage
+- namespace: idbamagbg734
+- compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
+- region: us-ashburn-1
+- provider: user_principal_auth
+- config_file: /etc/oci/dev.conf
+- config_profile: Test
+Keep this \[dq]remote\[dq] remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+See all buckets
+.IP
+.nf
+\f[C]
+rclone lsd remote:
+\f[R]
+.fi
+.PP
+Create a new bucket
+.IP
+.nf
+\f[C]
+rclone mkdir remote:bucket
+\f[R]
+.fi
+.PP
+List the contents of a bucket
+.IP
+.nf
+\f[C]
+rclone ls remote:bucket
+rclone ls remote:bucket --max-depth 1
+\f[R]
+.fi
+.SS Modified time
+.PP
+The modified time is stored as metadata on the object as
+\f[C]opc-meta-mtime\f[R] as floating point since the epoch, accurate to
+1 ns.
+.PP
+If the modification time needs to be updated rclone will attempt to
+perform a server-side copy to update the modification time if the
+object can be copied in a single part.
+If the object is larger than 5 GiB, it will be uploaded rather than
+copied.
+.PP
+Note that reading this from the object takes an additional
+\f[C]HEAD\f[R] request as the metadata isn\[aq]t returned in object
+listings.
+.SS Multipart uploads
+.PP
+rclone supports multipart uploads with OOS which means that it can
+upload files bigger than 5 GiB.
+.PP
+Note that files uploaded \f[I]both\f[R] with multipart upload
+\f[I]and\f[R] through crypt remotes do not have MD5 sums.
+.PP
+rclone switches from single part uploads to multipart uploads at the
+point specified by \f[C]--oos-upload-cutoff\f[R].
+This can be a maximum of 5 GiB and a minimum of 0 (ie always upload
+multipart files).
+.PP
+The chunk sizes used in the multipart upload are specified by
+\f[C]--oos-chunk-size\f[R] and the number of chunks uploaded
+concurrently is specified by \f[C]--oos-upload-concurrency\f[R].
+.PP
+Multipart uploads will use \f[C]--transfers\f[R] *
+\f[C]--oos-upload-concurrency\f[R] * \f[C]--oos-chunk-size\f[R] extra
+memory.
+Single part uploads do not use extra memory.
+.PP
+Single part transfers can be faster or slower than multipart transfers
+depending on your latency to OOS - the more latency, the more likely
+single part transfers will be faster.
+.PP
+Increasing \f[C]--oos-upload-concurrency\f[R] will increase throughput
+(8 would be a sensible value) and increasing \f[C]--oos-chunk-size\f[R]
+also increases throughput (16M would be sensible).
+Increasing either of these will use more memory.
+The default values are high enough to gain most of the possible
+performance without using too much memory.
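+.PP
+For example, a sketch of a copy tuned for large files over a fast
+link, using values along the lines suggested above (the remote and
+bucket names are illustrative):
+.IP
+.nf
+\f[C]
+rclone copy --oos-upload-concurrency 8 --oos-chunk-size 16M /path/to/files remote:bucket
+\f[R]
+.fi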
+.SS Standard options
+.PP
+Here are the Standard options specific to oracleobjectstorage (Oracle
+Cloud Infrastructure Object Storage).
+.SS --oos-provider
+.PP
+Choose your Auth Provider
+.PP
+Properties:
+.IP \[bu] 2
+Config: provider
+.IP \[bu] 2
+Env Var: RCLONE_OOS_PROVIDER
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]env_auth\[dq]
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]env_auth\[dq]
+.RS 2
+.IP \[bu] 2
+automatically pickup the credentials from runtime(env), first one to
+provide auth wins
+.RE
+.IP \[bu] 2
+\[dq]user_principal_auth\[dq]
+.RS 2
+.IP \[bu] 2
+use an OCI user and an API key for authentication.
+.IP \[bu] 2
+you\[cq]ll need to put the tenancy OCID, user OCID, region, and the
+path to and fingerprint of an API key in a config file.
+.IP \[bu] 2
+https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
+.RE
+.IP \[bu] 2
+\[dq]instance_principal_auth\[dq]
+.RS 2
+.IP \[bu] 2
+use instance principals to authorize an instance to make API calls.
+.IP \[bu] 2
+each instance has its own identity, and authenticates using the
+certificates that are read from instance metadata.
+.IP \[bu] 2
+https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
+.RE
+.IP \[bu] 2
+\[dq]resource_principal_auth\[dq]
+.RS 2
+.IP \[bu] 2
+use resource principals to make API calls
+.RE
+.IP \[bu] 2
+\[dq]no_auth\[dq]
+.RS 2
+.IP \[bu] 2
+no credentials needed, this is typically for reading public buckets
+.RE
+.RE
+.SS --oos-namespace
+.PP
+Object storage namespace
+.PP
+Properties:
+.IP \[bu] 2
+Config: namespace
+.IP \[bu] 2
+Env Var: RCLONE_OOS_NAMESPACE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --oos-compartment
+.PP
+Object storage compartment OCID
+.PP
+Properties:
+.IP \[bu] 2
+Config: compartment
+.IP \[bu] 2
+Env Var: RCLONE_OOS_COMPARTMENT
+.IP \[bu] 2
+Provider: !no_auth
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --oos-region
+.PP
+Object storage Region
+.PP
+Properties:
+.IP \[bu] 2
+Config: region
+.IP \[bu] 2
+Env Var: RCLONE_OOS_REGION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --oos-endpoint
+.PP
+Endpoint for Object storage API.
+.PP
+Leave blank to use the default endpoint for the region.
+.PP
+Properties:
+.IP \[bu] 2
+Config: endpoint
+.IP \[bu] 2
+Env Var: RCLONE_OOS_ENDPOINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --oos-config-file
+.PP
+Path to OCI config file
+.PP
+Properties:
+.IP \[bu] 2
+Config: config_file
+.IP \[bu] 2
+Env Var: RCLONE_OOS_CONFIG_FILE
+.IP \[bu] 2
+Provider: user_principal_auth
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]\[ti]/.oci/config\[dq]
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]\[ti]/.oci/config\[dq]
+.RS 2
+.IP \[bu] 2
+oci configuration file location
+.RE
+.RE
+.SS --oos-config-profile
+.PP
+Profile name inside the oci config file
+.PP
+Properties:
+.IP \[bu] 2
+Config: config_profile
+.IP \[bu] 2
+Env Var: RCLONE_OOS_CONFIG_PROFILE
+.IP \[bu] 2
+Provider: user_principal_auth
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]Default\[dq]
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]Default\[dq]
+.RS 2
+.IP \[bu] 2
+Use the default profile
+.RE
+.RE
+.SS Advanced options
+.PP
+Here are the Advanced options specific to oracleobjectstorage (Oracle
+Cloud Infrastructure Object Storage).
+.SS --oos-upload-cutoff
+.PP
+Cutoff for switching to chunked upload.
+.PP
+Any files larger than this will be uploaded in chunks of chunk_size.
+The minimum is 0 and the maximum is 5 GiB.
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_cutoff
+.IP \[bu] 2
+Env Var: RCLONE_OOS_UPLOAD_CUTOFF
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 200Mi
+.SS --oos-chunk-size
+.PP
+Chunk size to use for uploading.
+.PP
+When uploading files larger than upload_cutoff or files with unknown
+size (e.g.
+from \[dq]rclone rcat\[dq] or uploaded with \[dq]rclone mount\[dq] or
+google photos or google docs) they will be uploaded as multipart uploads
+using this chunk size.
+.PP
+Note that \[dq]upload_concurrency\[dq] chunks of this size are buffered
+in memory per transfer.
+.PP
+If you are transferring large files over high-speed links and you have
+enough memory, then increasing this will speed up the transfers.
+.PP
+Rclone will automatically increase the chunk size when uploading a large
+file of known size to stay below the 10,000 chunks limit.
+.PP
+Files of unknown size are uploaded with the configured chunk_size.
+Since the default chunk size is 5 MiB and there can be at most 10,000
+chunks, this means that by default the maximum size of a file you can
+stream upload is 48 GiB.
+If you wish to stream upload larger files then you will need to increase
+chunk_size.
+.PP
+Increasing the chunk size decreases the accuracy of the progress
+statistics displayed with \[dq]-P\[dq] flag.
+.PP
+Properties:
+.IP \[bu] 2
+Config: chunk_size
+.IP \[bu] 2
+Env Var: RCLONE_OOS_CHUNK_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 5Mi
+.SS --oos-upload-concurrency
+.PP
+Concurrency for multipart uploads.
+.PP
+This is the number of chunks of the same file that are uploaded
+concurrently.
+.PP
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_concurrency
+.IP \[bu] 2
+Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 10
+.SS --oos-copy-cutoff
+.PP
+Cutoff for switching to multipart copy.
+.PP
+Any files larger than this that need to be server-side copied will be
+copied in chunks of this size.
+.PP
+The minimum is 0 and the maximum is 5 GiB.
+.PP
+Properties:
+.IP \[bu] 2
+Config: copy_cutoff
+.IP \[bu] 2
+Env Var: RCLONE_OOS_COPY_CUTOFF
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 4.656Gi
+.SS --oos-copy-timeout
+.PP
+Timeout for copy.
+.PP
+Copy is an asynchronous operation; specify a timeout to wait for the
+copy to succeed.
+.PP
+Properties:
+.IP \[bu] 2
+Config: copy_timeout
+.IP \[bu] 2
+Env Var: RCLONE_OOS_COPY_TIMEOUT
+.IP \[bu] 2
+Type: Duration
+.IP \[bu] 2
+Default: 1m0s
+.SS --oos-disable-checksum
+.PP
+Don\[aq]t store MD5 checksum with object metadata.
+.PP
+Normally rclone will calculate the MD5 checksum of the input before
+uploading it so it can add it to metadata on the object.
+This is great for data integrity checking but can cause long delays for
+large files to start uploading.
+.PP
+Properties:
+.IP \[bu] 2
+Config: disable_checksum
+.IP \[bu] 2
+Env Var: RCLONE_OOS_DISABLE_CHECKSUM
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --oos-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_OOS_ENCODING
+.IP \[bu] 2
+Type: MultiEncoder
+.IP \[bu] 2
+Default: Slash,InvalidUtf8,Dot
+.SS --oos-leave-parts-on-error
+.PP
+If true avoid calling abort upload on a failure, leaving all
+successfully uploaded parts on S3 for manual recovery.
+.PP
+It should be set to true for resuming uploads across different sessions.
+.PP
+WARNING: Storing parts of an incomplete multipart upload counts towards
+space usage on object storage and will add additional costs if not
+cleaned up.
+.PP
+Properties:
+.IP \[bu] 2
+Config: leave_parts_on_error
+.IP \[bu] 2
+Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --oos-no-check-bucket
+.PP
+If set, don\[aq]t attempt to check the bucket exists or create it.
+.PP
+This can be useful when trying to minimise the number of transactions
+rclone does if you know the bucket exists already.
+.PP
+It can also be needed if the user you are using does not have bucket
+creation permissions.
+.PP
+Properties:
+.IP \[bu] 2
+Config: no_check_bucket
+.IP \[bu] 2
+Env Var: RCLONE_OOS_NO_CHECK_BUCKET
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
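+.PP
+For example, to copy to a bucket you know already exists, perhaps
+without having bucket creation permissions (the bucket name is
+illustrative):
+.IP
+.nf
+\f[C]
+rclone copy --oos-no-check-bucket /path/to/files remote:bucket
+\f[R]
+.fi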
+.SS Backend commands
+.PP
+Here are the commands specific to the oracleobjectstorage backend.
+.PP
+Run them with
+.IP
+.nf
+\f[C]
+rclone backend COMMAND remote:
+\f[R]
+.fi
+.PP
+The help below will explain what arguments each command takes.
+.PP
+See the backend (https://rclone.org/commands/rclone_backend/) command
+for more info on how to pass options and arguments.
+.PP
+These can be run on a running backend using the rc command
+backend/command (https://rclone.org/rc/#backend-command).
+.SS rename
+.PP
+change the name of an object
+.IP
+.nf
+\f[C]
+rclone backend rename remote: [options] [+]
+\f[R]
+.fi
+.PP
+This command can be used to rename an object.
+.PP
+Usage Examples:
+.IP
+.nf
+\f[C]
+rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
+\f[R]
+.fi
+.SS list-multipart-uploads
+.PP
+List the unfinished multipart uploads
+.IP
+.nf
+\f[C]
+rclone backend list-multipart-uploads remote: [options] [+]
+\f[R]
+.fi
+.PP
+This command lists the unfinished multipart uploads in JSON format.
+.IP
+.nf
+\f[C]
+rclone backend list-multipart-uploads oos:bucket/path/to/object
+\f[R]
+.fi
+.PP
+It returns a dictionary of buckets with values as lists of unfinished
+multipart uploads.
+.PP
+You can call it with no bucket, in which case it lists all buckets,
+with a bucket, or with a bucket and path.
+.IP
+.nf
+\f[C]
+{
+ \[dq]test-bucket\[dq]: [
+ {
+ \[dq]namespace\[dq]: \[dq]test-namespace\[dq],
+ \[dq]bucket\[dq]: \[dq]test-bucket\[dq],
+ \[dq]object\[dq]: \[dq]600m.bin\[dq],
+ \[dq]uploadId\[dq]: \[dq]51dd8114-52a4-b2f2-c42f-5291f05eb3c8\[dq],
+ \[dq]timeCreated\[dq]: \[dq]2022-07-29T06:21:16.595Z\[dq],
+ \[dq]storageTier\[dq]: \[dq]Standard\[dq]
+ }
+ ]
+}
+\f[R]
+.fi
+.SS cleanup
+.PP
+Remove unfinished multipart uploads.
+.IP
+.nf
+\f[C]
+rclone backend cleanup remote: [options] [+]
+\f[R]
+.fi
+.PP
+This command removes unfinished multipart uploads of age greater than
+max-age which defaults to 24 hours.
+.PP
+Note that you can use -i/--dry-run with this command to see what it
+would do.
+.IP
+.nf
+\f[C]
+rclone backend cleanup oos:bucket/path/to/object
+rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
+\f[R]
+.fi
+.PP
+Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
+.PP
+Options:
+.IP \[bu] 2
+\[dq]max-age\[dq]: Max age of upload to delete
.SH QingStor
.PP
Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for
@@ -47128,7 +48885,8 @@ Memset Memstore (https://www.memset.com/cloud/storage/)
OVH Object
Storage (https://www.ovh.co.uk/public-cloud/storage/object-storage/)
.IP \[bu] 2
-Oracle Cloud Storage (https://cloud.oracle.com/object-storage/buckets)
+Oracle Cloud
+Storage (https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html)
.IP \[bu] 2
IBM Bluemix Cloud ObjectStorage
Swift (https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html)
@@ -47818,6 +49576,41 @@ Env Var: RCLONE_SWIFT_NO_CHUNK
Type: bool
.IP \[bu] 2
Default: false
+.SS --swift-no-large-objects
+.PP
+Disable support for static and dynamic large objects
+.PP
+Swift cannot transparently store files bigger than 5 GiB.
+There are two schemes for doing that, static or dynamic large objects,
+and the API does not allow rclone to determine whether a file is a
+static or dynamic large object without doing a HEAD on the object.
+Since these need to be treated differently, this means rclone has to
+issue HEAD requests for objects for example when reading checksums.
+.PP
+When \f[C]no_large_objects\f[R] is set, rclone will assume that there
+are no static or dynamic large objects stored.
+This means it can stop doing the extra HEAD calls which in turn
+increases performance greatly especially when doing a swift to swift
+transfer with \f[C]--checksum\f[R] set.
+.PP
+Setting this option implies \f[C]no_chunk\f[R] and also that no files
+will be uploaded in chunks, so files bigger than 5 GiB will just fail on
+upload.
+.PP
+If you set this option and there \f[I]are\f[R] static or dynamic large
+objects, then this will give incorrect hashes for them.
+Downloads will succeed, but other operations such as Remove and Copy
+will fail.
+.PP
+Properties:
+.IP \[bu] 2
+Config: no_large_objects
+.IP \[bu] 2
+Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
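+.PP
+For example, a sketch of a swift to swift transfer taking advantage of
+this option (the remote and bucket names are illustrative):
+.IP
+.nf
+\f[C]
+rclone sync --checksum --swift-no-large-objects swift-a:bucket swift-b:bucket
+\f[R]
+.fi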
.SS --swift-encoding
.PP
The encoding for the backend.
@@ -49093,7 +50886,7 @@ If the path does not begin with a \f[C]/\f[R] it is relative to the home
directory of the user.
An empty path \f[C]remote:\f[R] refers to the user\[aq]s home directory.
For example, \f[C]rclone lsd remote:\f[R] would list the home directory
-of the user cofigured in the rclone remote config
+of the user configured in the rclone remote config
(\f[C]i.e /home/sftpuser\f[R]).
However, \f[C]rclone lsd remote:/\f[R] would list the root directory for
remote machine (i.e.
@@ -49424,7 +51217,7 @@ On a Windows server the shell handling is different: Although it can
also be set up to use a Unix type shell, e.g.
Cygwin bash, the default is to use Windows Command Prompt (cmd.exe), and
PowerShell is a recommended alternative.
-All of these have bahave differently, which rclone must handle.
+All of these behave differently, which rclone must handle.
.PP
Rclone tries to auto-detect what type of shell is used on the server,
first time you access the SFTP remote.
@@ -49463,7 +51256,7 @@ If you configure a sftp remote without a config file, e.g.
an on the fly (https://rclone.org/docs/#backend-path-to-dir%5D) remote,
rclone will have nowhere to store the result, and it will re-run the
command on every access.
-To avoid this you should explicitely set the \f[C]shell_type\f[R] option
+To avoid this you should explicitly set the \f[C]shell_type\f[R] option
to the correct value, or to \f[C]none\f[R] if you want to prevent rclone
from executing any remote shell commands.
.PP
@@ -49471,9 +51264,8 @@ It is also important to note that, since the shell type decides how
quoting and escaping of file paths used as command-line arguments are
performed, configuring the wrong shell type may leave you exposed to
command injection exploits.
-Make sure to confirm the auto-detected shell type, or explicitely set
-the shell type you know is correct, or disable shell access until you
-know.
+Make sure to confirm the auto-detected shell type, or explicitly set the
+shell type you know is correct, or disable shell access until you know.
.SS Checksum
.PP
SFTP does not natively support checksums (file hash), but rclone is able
@@ -50085,19 +51877,25 @@ Default: 1m0s
.PP
Upload and download chunk size.
.PP
-This controls the maximum packet size used in the SFTP protocol.
-The RFC limits this to 32768 bytes (32k), however a lot of servers
-support larger sizes and setting it larger will increase transfer speed
-dramatically on high latency links.
+This controls the maximum size of payload in SFTP protocol packets.
+The RFC limits this to 32768 bytes (32k), which is the default.
+However, a lot of servers support larger sizes, typically limited to a
+maximum total packet size of 256k, and setting it larger will increase
+transfer speed dramatically on high latency links.
+This includes OpenSSH, and, for example, using the value of 255k works
+well, leaving plenty of room for overhead while still being within a
+total packet size of 256k.
.PP
-Only use a setting higher than 32k if you always connect to the same
-server or after sufficiently broad testing.
-.PP
-For example using the value of 252k with OpenSSH works well with its
-maximum packet size of 256k.
-.PP
-If you get the error \[dq]failed to send packet header: EOF\[dq] when
-copying a large file, try lowering this number.
+Make sure to test thoroughly before using a value higher than 32k, and
+only use it if you always connect to the same server or after
+sufficiently broad testing.
+If you get errors such as \[dq]failed to send packet payload: EOF\[dq],
+lots of \[dq]connection lost\[dq], or \[dq]corrupted on transfer\[dq],
+when copying a larger file, try lowering the value.
+The server run by rclone serve sftp sends packets with standard 32k
+maximum payload so you must not set a different chunk_size when
+downloading files, but it accepts packets up to the 256k total size, so
+for uploads the chunk_size can be set as for the OpenSSH example above.
.PP
Properties:
.IP \[bu] 2
@@ -50202,6 +52000,267 @@ Hetzner Storage Boxes are supported through the SFTP backend on port 23.
.PP
See Hetzner\[aq]s documentation for
details (https://docs.hetzner.com/robot/storage-box/access/access-ssh-rsync-borg#rclone)
+.SH SMB
+.PP
+SMB is a communication protocol for sharing files over a
+network (https://en.wikipedia.org/wiki/Server_Message_Block).
+.PP
+This backend relies on the go-smb2
+library (https://github.com/hirochachacha/go-smb2/) for communication
+over the SMB protocol.
+.PP
+Paths are specified as \f[C]remote:sharename\f[R] (or \f[C]remote:\f[R]
+for the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
+\f[C]remote:sharename/path/to/dir\f[R].
+.SS Notes
+.PP
+The first path segment must be the name of the share, which you entered
+when you started to share on Windows.
+On smbd, it\[aq]s the section title in the \f[C]smb.conf\f[R] file
+(usually found in \f[C]/etc/samba/\f[R]).
+You can find shares by querying the root if you\[aq]re unsure (e.g.
+\f[C]rclone lsd remote:\f[R]).
+.PP
+You can\[aq]t access shared printers from rclone.
+.PP
+You can\[aq]t use Anonymous access for logging in.
+You have to use the \f[C]guest\f[R] user with an empty password
+instead.
+.PP
+The rclone client tries to avoid 8.3 names when uploading files by
+encoding trailing spaces and periods.
+.PP
+Alternatively, the local
+backend (https://rclone.org/local/#paths-on-windows) on Windows can
+access SMB servers using UNC paths, by
+\f[C]\[rs]\[rs]server\[rs]share\f[R].
+This doesn\[aq]t apply to non-Windows OSes, such as Linux and macOS.
+.SS Configuration
+.PP
+Here is an example of making a SMB configuration.
+.PP
+First run
+.IP
+.nf
+\f[C]
+rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process.
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+XX / SMB / CIFS
+ \[rs] (smb)
+Storage> smb
+
+Option host.
+Samba hostname to connect to.
+E.g. \[dq]example.com\[dq].
+Enter a value.
+host> localhost
+
+Option user.
+Samba username.
+Enter a string value. Press Enter for the default (lesmi).
+user> guest
+
+Option port.
+Samba port number.
+Enter a signed integer. Press Enter for the default (445).
+port>
+
+Option pass.
+Samba password.
+Choose an alternative below. Press Enter for the default (n).
+y) Yes, type in my own password
+g) Generate random password
+n) No, leave this optional password blank (default)
+y/g/n> g
+Password strength in bits.
+64 is just about memorable
+128 is secure
+1024 is the maximum
+Bits> 64
+Your password is: XXXX
+Use this password? Please note that an obscured version of this
+password (and not the password itself) will be stored under your
+configuration file, so keep this generated password in a safe place.
+y) Yes (default)
+n) No
+y/n> y
+
+Option domain.
+Domain name for NTLM authentication.
+Enter a string value. Press Enter for the default (WORKGROUP).
+domain>
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: smb
+- host: localhost
+- user: guest
+- pass: *** ENCRYPTED ***
+Keep this \[dq]remote\[dq] remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
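+.PP
+Once configured you can then use rclone like this (the share name is
+illustrative):
+.IP
+.nf
+\f[C]
+rclone lsd remote:
+rclone ls remote:sharename
+rclone copy /home/source remote:sharename/backup
+\f[R]
+.fi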
+.SS Standard options
+.PP
+Here are the Standard options specific to smb (SMB / CIFS).
+.SS --smb-host
+.PP
+SMB server hostname to connect to.
+.PP
+E.g.
+\[dq]example.com\[dq].
+.PP
+Properties:
+.IP \[bu] 2
+Config: host
+.IP \[bu] 2
+Env Var: RCLONE_SMB_HOST
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --smb-user
+.PP
+SMB username.
+.PP
+Properties:
+.IP \[bu] 2
+Config: user
+.IP \[bu] 2
+Env Var: RCLONE_SMB_USER
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]$USER\[dq]
+.SS --smb-port
+.PP
+SMB port number.
+.PP
+Properties:
+.IP \[bu] 2
+Config: port
+.IP \[bu] 2
+Env Var: RCLONE_SMB_PORT
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 445
+.SS --smb-pass
+.PP
+SMB password.
+.PP
+\f[B]NB\f[R] Input to this must be obscured - see rclone
+obscure (https://rclone.org/commands/rclone_obscure/).
+.PP
+Properties:
+.IP \[bu] 2
+Config: pass
+.IP \[bu] 2
+Env Var: RCLONE_SMB_PASS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
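+.PP
+For example, a sketch of supplying an obscured password via the
+environment in a POSIX shell (the password shown is a placeholder):
+.IP
+.nf
+\f[C]
+RCLONE_SMB_PASS=$(rclone obscure \[aq]yourpassword\[aq]) rclone lsd remote:
+\f[R]
+.fi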
+.SS --smb-domain
+.PP
+Domain name for NTLM authentication.
+.PP
+Properties:
+.IP \[bu] 2
+Config: domain
+.IP \[bu] 2
+Env Var: RCLONE_SMB_DOMAIN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]WORKGROUP\[dq]
+.SS Advanced options
+.PP
+Here are the Advanced options specific to smb (SMB / CIFS).
+.SS --smb-idle-timeout
+.PP
+Max time before closing idle connections.
+.PP
+If no connections have been returned to the connection pool in the time
+given, rclone will empty the connection pool.
+.PP
+Set to 0 to keep connections indefinitely.
+.PP
+Properties:
+.IP \[bu] 2
+Config: idle_timeout
+.IP \[bu] 2
+Env Var: RCLONE_SMB_IDLE_TIMEOUT
+.IP \[bu] 2
+Type: Duration
+.IP \[bu] 2
+Default: 1m0s
+.SS --smb-hide-special-share
+.PP
+Hide special shares (e.g.
+print$) which users aren\[aq]t supposed to access.
+.PP
+Properties:
+.IP \[bu] 2
+Config: hide_special_share
+.IP \[bu] 2
+Env Var: RCLONE_SMB_HIDE_SPECIAL_SHARE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
+.SS --smb-case-insensitive
+.PP
+Whether the server is configured to be case-insensitive.
+.PP
+Always true on Windows shares.
+.PP
+Properties:
+.IP \[bu] 2
+Config: case_insensitive
+.IP \[bu] 2
+Env Var: RCLONE_SMB_CASE_INSENSITIVE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
+.SS --smb-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_SMB_ENCODING
+.IP \[bu] 2
+Type: MultiEncoder
+.IP \[bu] 2
+Default:
+Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot
.SH Storj
.PP
Storj (https://storj.io) is an encrypted, secure, and cost-effective
@@ -53965,6 +56024,354 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
+.SS v1.60.0 - 2022-10-21
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.59.0...v1.60.0)
+.IP \[bu] 2
+New backends
+.RS 2
+.IP \[bu] 2
+Oracle object storage (https://rclone.org/oracleobjectstorage/) (Manoj
+Ghosh)
+.IP \[bu] 2
+SMB (https://rclone.org/smb/) / CIFS (Windows file sharing) (Lesmiscore)
+.IP \[bu] 2
+New S3 providers
+.RS 2
+.IP \[bu] 2
+IONOS Cloud Storage (https://rclone.org/s3/#ionos) (Dmitry Deniskin)
+.IP \[bu] 2
+Qiniu KODO (https://rclone.org/s3/#qiniu) (Bachue Zhou)
+.RE
+.RE
+.IP \[bu] 2
+New Features
+.RS 2
+.IP \[bu] 2
+build
+.RS 2
+.IP \[bu] 2
+Update to go1.19 and make go1.17 the minimum required version (Nick
+Craig-Wood)
+.IP \[bu] 2
+Install.sh: fix arm-v7 download (Ole Frost)
+.RE
+.IP \[bu] 2
+fs: Warn the user when using an existing remote name without a colon
+(Nick Craig-Wood)
+.IP \[bu] 2
+httplib: Add \f[C]--xxx-min-tls-version\f[R] option to select minimum
+TLS version for HTTP servers (Robert Newson)
+.IP \[bu] 2
+librclone: Add PHP bindings and test program (Jordi Gonzalez Mu\[~n]oz)
+.IP \[bu] 2
+operations
+.RS 2
+.IP \[bu] 2
+Add \f[C]--server-side-across-configs\f[R] global flag for any backend
+(Nick Craig-Wood)
+.IP \[bu] 2
+Optimise \f[C]--copy-dest\f[R] and \f[C]--compare-dest\f[R] (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+rc: add \f[C]job/stopgroup\f[R] to stop group (Evan Spensley)
+.IP \[bu] 2
+serve dlna
+.RS 2
+.IP \[bu] 2
+Add \f[C]--announce-interval\f[R] to control SSDP Announce Interval
+(YanceyChiew)
+.IP \[bu] 2
+Add \f[C]--interface\f[R] to specify SSDP interface names (Simon Bos)
+.IP \[bu] 2
+Add support for more external subtitles (YanceyChiew)
+.IP \[bu] 2
+Add verification of addresses (YanceyChiew)
+.RE
+.IP \[bu] 2
+sync: Optimise \f[C]--copy-dest\f[R] and \f[C]--compare-dest\f[R] (Nick
+Craig-Wood)
+.IP \[bu] 2
+doc updates (albertony, Alexander Knorr, anonion, Jo\[~a]o Henrique
+Franco, Josh Soref, Lorenzo Milesi, Marco Molteni, Mark Trolley, Ole
+Frost, partev, Ryan Morey, Tom Mombourquette, YFdyh000)
+.RE
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+filter
+.RS 2
+.IP \[bu] 2
+Fix incorrect filtering with \f[C]UseFilter\f[R] context flag and
+wrapping backends (Nick Craig-Wood)
+.IP \[bu] 2
+Make sure we check \f[C]--files-from\f[R] when looking for a single file
+(Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+rc
+.RS 2
+.IP \[bu] 2
+Fix \f[C]mount/listmounts\f[R] not returning the full Fs entered in
+\f[C]mount/mount\f[R] (Tom Mombourquette)
+.IP \[bu] 2
+Handle external unmount when mounting (Isaac Aymerich)
+.IP \[bu] 2
+Validate Daemon option is not set when mounting a volume via RC (Isaac
+Aymerich)
+.RE
+.IP \[bu] 2
+sync: Update docs and error messages to reflect fixes to overlap checks
+(Nick Naumann)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Reduce memory use by embedding \f[C]sync.Cond\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Reduce memory usage by re-ordering commonly used structures (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix excess CPU used by VFS cache cleaner looping (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Local
+.RS 2
+.IP \[bu] 2
+Obey file filters in listing to fix errors on excluded files (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix \[dq]Failed to read metadata: function not implemented\[dq] on old
+Linux kernels (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Compress
+.RS 2
+.IP \[bu] 2
+Fix crash due to nil metadata (Nick Craig-Wood)
+.IP \[bu] 2
+Fix error handling to not use or return nil objects (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Drive
+.RS 2
+.IP \[bu] 2
+Make \f[C]--drive-stop-on-upload-limit\f[R] obey quota exceeded error
+(Steve Kowalik)
+.RE
+.IP \[bu] 2
+FTP
+.RS 2
+.IP \[bu] 2
+Add \f[C]--ftp-force-list-hidden\f[R] option to show hidden items
+(\[/O]yvind Heddeland Instefjord)
+.IP \[bu] 2
+Fix hang when using ExplicitTLS to certain servers (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Google Cloud Storage
+.RS 2
+.IP \[bu] 2
+Add \f[C]--gcs-endpoint\f[R] flag and config parameter (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Hubic
+.RS 2
+.IP \[bu] 2
+Remove backend as service has now shut down (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Rename Onedrive(cn) 21Vianet to Vnet Group (Yen Hu)
+.IP \[bu] 2
+Disable change notify in China region since it is not supported (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Implement \f[C]--s3-versions\f[R] flag to show old versions of objects
+if enabled (Nick Craig-Wood)
+.IP \[bu] 2
+Implement \f[C]--s3-version-at\f[R] flag to show versions of objects at
+a particular time (Nick Craig-Wood)
+.IP \[bu] 2
+Implement \f[C]backend versioning\f[R] command to get/set bucket
+versioning (Nick Craig-Wood)
+.IP \[bu] 2
+Implement \f[C]Purge\f[R] to purge versions and
+\f[C]backend cleanup-hidden\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Add \f[C]--s3-decompress\f[R] flag to decompress gzip-encoded files
+(Nick Craig-Wood)
+.IP \[bu] 2
+Add \f[C]--s3-sse-customer-key-base64\f[R] to supply keys with binary
+data (Richard Bateman)
+.IP \[bu] 2
+Try to keep the maximum precision in ModTime with
+\f[C]--use-server-modtime\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Drop binary metadata with an ERROR message as it can\[aq]t be stored
+(Nick Craig-Wood)
+.IP \[bu] 2
+Add \f[C]--s3-no-system-metadata\f[R] to suppress read and write of
+system metadata (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+SFTP
+.RS 2
+.IP \[bu] 2
+Fix directory creation races (Lesmiscore)
+.RE
+.IP \[bu] 2
+Swift
+.RS 2
+.IP \[bu] 2
+Add \f[C]--swift-no-large-objects\f[R] to reduce HEAD requests (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Union
+.RS 2
+.IP \[bu] 2
+Propagate SlowHash feature to fix hasher interaction (Lesmiscore)
+.RE
+.SS v1.59.2 - 2022-09-15
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.59.1...v1.59.2)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+config: Move locking to fix fatal error: concurrent map read and map
+write (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Local
+.RS 2
+.IP \[bu] 2
+Disable xattr support if the filesystem indicates it is not supported
+(Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Azure Blob
+.RS 2
+.IP \[bu] 2
+Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+B2
+.RS 2
+.IP \[bu] 2
+Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+.RE
+.SS v1.59.1 - 2022-08-08
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.59.0...v1.59.1)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+accounting: Fix panic in core/stats-reset with unknown group (Nick
+Craig-Wood)
+.IP \[bu] 2
+build: Fix android build after GitHub actions change (Nick Craig-Wood)
+.IP \[bu] 2
+dlna: Fix SOAP action header parsing (Joram Schrijver)
+.IP \[bu] 2
+docs: Fix links to mount command from install docs (albertony)
+.IP \[bu] 2
+dropbox: Fix ChangeNotify \[dq]was unable to decrypt\[dq] errors (Nick
+Craig-Wood)
+.IP \[bu] 2
+fs: Fix parsing of times and durations of the form \[dq]YYYY-MM-DD
+HH:MM:SS\[dq] (Nick Craig-Wood)
+.IP \[bu] 2
+serve sftp: Fix checksum detection (Nick Craig-Wood)
+.IP \[bu] 2
+sync: Add accidentally missed filter-sensitivity to the
+\f[C]--backup-dir\f[R] option (Nick Naumann)
+.RE
+.IP \[bu] 2
+Combine
+.RS 2
+.IP \[bu] 2
+Fix docs showing \f[C]remote=\f[R] instead of \f[C]upstreams=\f[R] (Nick
+Craig-Wood)
+.IP \[bu] 2
+Throw error if duplicate directory name is specified (Nick Craig-Wood)
+.IP \[bu] 2
+Fix errors with backends shutting down while in use (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Dropbox
+.RS 2
+.IP \[bu] 2
+Fix hang on quit with \f[C]--dropbox-batch-mode off\f[R] (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix infinite loop on uploading a corrupted file (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Internetarchive
+.RS 2
+.IP \[bu] 2
+Ignore checksums for files using a different method (Lesmiscore)
+.IP \[bu] 2
+Handle hash symbol in the middle of filename (Lesmiscore)
+.RE
+.IP \[bu] 2
+Jottacloud
+.RS 2
+.IP \[bu] 2
+Fix working with whitelabel Elgiganten Cloud
+.IP \[bu] 2
+Do not store username in config when using standard auth (albertony)
+.RE
+.IP \[bu] 2
+Mega
+.RS 2
+.IP \[bu] 2
+Fix nil pointer exception when bad node received (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Fix \f[C]--s3-no-head\f[R] panic: reflect: Elem of invalid type
+s3.PutObjectInput (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+SFTP
+.RS 2
+.IP \[bu] 2
+Fix issue with WS_FTP by working around failing RealPath (albertony)
+.RE
+.IP \[bu] 2
+Union
+.RS 2
+.IP \[bu] 2
+Fix duplicated files when using directories with leading / (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix multiple files being uploaded when roots don\[aq]t exist (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix panic due to misalignment of struct field on 32-bit architectures
+(r-ricci)
+.RE
.SS v1.59.0 - 2022-07-09
.PP
See commits (https://github.com/rclone/rclone/compare/v1.58.0...v1.59.0)
@@ -54610,8 +57017,8 @@ Hard fork \f[C]github.com/jlaffaye/ftp\f[R] to fix
\f[C]go get github.com/rclone/rclone\f[R] (Nick Craig-Wood)
.RE
.IP \[bu] 2
-oauthutil: Fix crash when webrowser requests \f[C]/robots.txt\f[R] (Nick
-Craig-Wood)
+oauthutil: Fix crash when webbrowser requests \f[C]/robots.txt\f[R]
+(Nick Craig-Wood)
.IP \[bu] 2
operations: Fix goroutine leak in case of copy retry (Ankur Gupta)
.IP \[bu] 2
@@ -54874,7 +57281,7 @@ Craig-Wood)
Fix timeout on hashing large files by sending keepalives (Nick
Craig-Wood)
.IP \[bu] 2
-Fix unecessary seeking when uploading and downloading files (Nick
+Fix unnecessary seeking when uploading and downloading files (Nick
Craig-Wood)
.IP \[bu] 2
Update docs on how to create \f[C]known_hosts\f[R] file (Nick
@@ -56486,11 +58893,11 @@ Pl\['a]nsk\['y])
.IP \[bu] 2
Add empty folder flag into ncdu browser (Adam Pl\['a]nsk\['y])
.IP \[bu] 2
-Add \f[C]!\f[R] (errror) and \f[C].\f[R] (unreadable) file flags to go
+Add \f[C]!\f[R] (error) and \f[C].\f[R] (unreadable) file flags to go
with \f[C]e\f[R] (empty) (Nick Craig-Wood)
.RE
.IP \[bu] 2
-obscure: Make \f[C]rclone osbcure -\f[R] ignore newline at end of line
+obscure: Make \f[C]rclone obscure -\f[R] ignore newline at end of line
(Nick Craig-Wood)
.IP \[bu] 2
operations
@@ -56563,7 +58970,7 @@ move: Fix data loss when source and destination are the same object
operations
.RS 2
.IP \[bu] 2
-Fix \f[C]--cutof-mode\f[R] hard not cutting off immediately (Nick
+Fix \f[C]--cutoff-mode\f[R] hard not cutting off immediately (Nick
Craig-Wood)
.IP \[bu] 2
Fix \f[C]--immutable\f[R] error message (Nick Craig-Wood)
@@ -56698,7 +59105,7 @@ Box
.IP \[bu] 2
Fix NewObject for files that differ in case (Nick Craig-Wood)
.IP \[bu] 2
-Fix finding directories in a case insentive way (Nick Craig-Wood)
+Fix finding directories in a case insensitive way (Nick Craig-Wood)
.RE
.IP \[bu] 2
Chunker
@@ -56920,7 +59327,7 @@ Sugarsync
.IP \[bu] 2
Fix NewObject for files that differ in case (Nick Craig-Wood)
.IP \[bu] 2
-Fix finding directories in a case insentive way (Nick Craig-Wood)
+Fix finding directories in a case insensitive way (Nick Craig-Wood)
.RE
.IP \[bu] 2
Swift
@@ -57102,7 +59509,7 @@ See commits (https://github.com/rclone/rclone/compare/v1.53.1...v1.53.2)
Bug Fixes
.RS 2
.IP \[bu] 2
-acounting
+accounting
.RS 2
.IP \[bu] 2
Fix incorrect speed and transferTime in core/stats (Nick Craig-Wood)
@@ -59710,7 +62117,7 @@ rcd: Fix permissions problems on cache directory with web gui download
Mount
.RS 2
.IP \[bu] 2
-Default \f[C]--daemon-timout\f[R] to 15 minutes on macOS and FreeBSD
+Default \f[C]--daemon-timeout\f[R] to 15 minutes on macOS and FreeBSD
(Nick Craig-Wood)
.IP \[bu] 2
Update docs to show mounting from root OK for bucket-based (Nick
@@ -60583,7 +62990,7 @@ HTTP
Add an example with username and password which is supported but
wasn\[aq]t documented (Nick Craig-Wood)
.IP \[bu] 2
-Fix backend with \f[C]--files-from\f[R] and non-existent files (Nick
+Fix backend with \f[C]--files-from\f[R] and nonexistent files (Nick
Craig-Wood)
.RE
.IP \[bu] 2
@@ -61860,7 +64267,7 @@ Work around strange response from box FTP server
.IP \[bu] 2
More workarounds for FTP servers to fix mkParentDir error
.IP \[bu] 2
-Fix no error on listing non-existent directory
+Fix no error on listing nonexistent directory
.RE
.IP \[bu] 2
Google Cloud Storage
@@ -62057,7 +64464,7 @@ requests)
Bug Fixes
.RS 2
.IP \[bu] 2
-config: fixes errors on non existing config by loading config file only
+config: fixes errors on nonexistent config by loading config file only
on first access
.IP \[bu] 2
config: retry saving the config after failure (Mateusz)
@@ -63440,7 +65847,7 @@ S3
Command line and config file support for
.RS 2
.IP \[bu] 2
-Setting/overriding ACL - thanks Radek Senfeld
+Setting/overriding ACL - thanks Radek \[vS]enfeld
.IP \[bu] 2
Setting storage class - thanks Asko Tamm
.RE
@@ -66077,6 +68484,56 @@ Lorenzo Maiorfi
Claudio Maradonna
.IP \[bu] 2
Ovidiu Victor Tatar
+.IP \[bu] 2
+Evan Spensley
+.IP \[bu] 2
+Yen Hu <61753151+0x59656e@users.noreply.github.com>
+.IP \[bu] 2
+Steve Kowalik
+.IP \[bu] 2
+Jordi Gonzalez Mu\[~n]oz
+.IP \[bu] 2
+Joram Schrijver
+.IP \[bu] 2
+Mark Trolley
+.IP \[bu] 2
+Jo\[~a]o Henrique Franco
+.IP \[bu] 2
+anonion
+.IP \[bu] 2
+Ryan Morey <4590343+rmorey@users.noreply.github.com>
+.IP \[bu] 2
+Simon Bos
+.IP \[bu] 2
+YFdyh000
+.IP \[bu] 2
+Josh Soref <2119212+jsoref@users.noreply.github.com>
+.IP \[bu] 2
+\[/O]yvind Heddeland Instefjord
+.IP \[bu] 2
+Dmitry Deniskin <110819396+ddeniskin@users.noreply.github.com>
+.IP \[bu] 2
+Alexander Knorr <106825+opexxx@users.noreply.github.com>
+.IP \[bu] 2
+Richard Bateman
+.IP \[bu] 2
+Dimitri Papadopoulos Orfanos
+<3234522+DimitriPapadopoulos@users.noreply.github.com>
+.IP \[bu] 2
+Lorenzo Milesi
+.IP \[bu] 2
+Isaac Aymerich
+.IP \[bu] 2
+YanceyChiew <35898533+YanceyChiew@users.noreply.github.com>
+.IP \[bu] 2
+Manoj Ghosh
+.IP \[bu] 2
+Bachue Zhou
+.IP \[bu] 2
+Tom Mombourquette
+.IP \[bu] 2
+Robert Newson
.SH Contact the rclone project
.SS Forum
.PP