From 169990e270b2977c39bd6ecd8a2921cf30a6d2b7 Mon Sep 17 00:00:00 2001 From: Nick Craig-Wood Date: Mon, 1 Nov 2021 15:42:05 +0000 Subject: [PATCH] Version v1.57.0 --- MANUAL.html | 5883 ++++++++----- MANUAL.md | 6631 +++++++++------ MANUAL.txt | 6620 ++++++++++----- bin/make_manual.py | 2 + docs/content/alias.md | 3 +- docs/content/amazonclouddrive.md | 14 +- docs/content/azureblob.md | 57 +- docs/content/b2.md | 28 +- docs/content/box.md | 39 +- docs/content/cache.md | 29 +- docs/content/changelog.md | 173 + docs/content/chunker.md | 35 +- docs/content/commands/rclone.md | 5 +- docs/content/commands/rclone_about.md | 33 +- docs/content/commands/rclone_backend.md | 4 +- docs/content/commands/rclone_cat.md | 10 +- docs/content/commands/rclone_check.md | 2 +- docs/content/commands/rclone_checksum.md | 2 +- docs/content/commands/rclone_completion.md | 34 + .../commands/rclone_completion_bash.md | 48 + .../commands/rclone_completion_fish.md | 42 + .../commands/rclone_completion_powershell.md | 40 + .../content/commands/rclone_completion_zsh.md | 47 + docs/content/commands/rclone_config.md | 3 +- docs/content/commands/rclone_config_create.md | 16 +- docs/content/commands/rclone_config_delete.md | 6 +- .../commands/rclone_config_password.md | 2 +- docs/content/commands/rclone_config_paths.md | 27 + docs/content/commands/rclone_config_update.md | 16 +- docs/content/commands/rclone_copy.md | 46 +- docs/content/commands/rclone_copyto.md | 10 +- docs/content/commands/rclone_dedupe.md | 2 +- docs/content/commands/rclone_listremotes.md | 2 +- docs/content/commands/rclone_lsd.md | 2 +- docs/content/commands/rclone_lsf.md | 14 +- docs/content/commands/rclone_lsjson.md | 25 +- docs/content/commands/rclone_mount.md | 201 +- docs/content/commands/rclone_move.md | 19 - docs/content/commands/rclone_moveto.md | 6 +- docs/content/commands/rclone_ncdu.md | 1 + docs/content/commands/rclone_rc.md | 16 +- docs/content/commands/rclone_selfupdate.md | 4 +- docs/content/commands/rclone_serve_dlna.md | 135 +- docs/content/commands/rclone_serve_docker.md | 172 +- docs/content/commands/rclone_serve_ftp.md | 141 +- docs/content/commands/rclone_serve_http.md | 146 +- docs/content/commands/rclone_serve_restic.md | 18 +- docs/content/commands/rclone_serve_sftp.md | 145 +- docs/content/commands/rclone_serve_webdav.md | 141 +- docs/content/commands/rclone_size.md | 2 +- docs/content/commands/rclone_sync.md | 8 +- .../commands/rclone_test_changenotify.md | 2 +- docs/content/commands/rclone_test_info.md | 14 +- docs/content/commands/rclone_touch.md | 23 +- docs/content/commands/rclone_tree.md | 30 +- docs/content/commands/rclone_version.md | 2 +- docs/content/compress.md | 28 +- docs/content/crypt.md | 21 +- docs/content/drive.md | 83 +- docs/content/dropbox.md | 27 +- docs/content/fichier.md | 14 +- docs/content/filefabric.md | 19 +- docs/content/flags.md | 901 +- docs/content/ftp.md | 66 +- docs/content/googlecloudstorage.md | 95 +- docs/content/googlephotos.md | 25 +- docs/content/hasher.md | 16 +- docs/content/hdfs.md | 27 +- docs/content/http.md | 21 +- docs/content/hubic.md | 14 +- docs/content/jottacloud.md | 5 +- docs/content/koofr.md | 20 +- docs/content/local.md | 33 +- docs/content/mailru.md | 21 +- docs/content/mega.md | 8 +- docs/content/onedrive.md | 27 +- docs/content/opendrive.md | 8 +- docs/content/pcloud.md | 14 +- docs/content/premiumizeme.md | 6 +- docs/content/putio.md | 4 +- docs/content/qingstor.md | 32 +- docs/content/rc.md | 225 +- docs/content/s3.md | 267 +- docs/content/seafile.md | 28 +- 
docs/content/sftp.md | 45 +- docs/content/sharefile.md | 14 +- docs/content/sia.md | 5 +- docs/content/sugarsync.md | 20 +- docs/content/swift.md | 39 +- docs/content/tardigrade.md | 14 +- docs/content/union.md | 8 +- docs/content/uptobox.md | 10 +- docs/content/webdav.md | 29 +- docs/content/yandex.md | 14 +- docs/content/zoho.md | 14 +- rclone.1 | 7539 +++++++++++------ 96 files changed, 19756 insertions(+), 11228 deletions(-) create mode 100644 docs/content/commands/rclone_completion.md create mode 100644 docs/content/commands/rclone_completion_bash.md create mode 100644 docs/content/commands/rclone_completion_fish.md create mode 100644 docs/content/commands/rclone_completion_powershell.md create mode 100644 docs/content/commands/rclone_completion_zsh.md create mode 100644 docs/content/commands/rclone_config_paths.md diff --git a/MANUAL.html b/MANUAL.html index db69f2569..51a73d999 100644 --- a/MANUAL.html +++ b/MANUAL.html @@ -12,12 +12,75 @@ span.underline{text-decoration: underline;} div.column{display: inline-block; vertical-align: top; width: 50%;} +

rclone(1) User Manual

Nick Craig-Wood

-

Jul 20, 2021

+

Nov 01, 2021

Rclone syncs your files to cloud storage

rclone logo

@@ -35,7 +98,7 @@

Rclone has powerful cloud equivalents to the unix commands rsync, cp, mv, mount, ls, ncdu, tree, rm, and cat. Rclone's familiar syntax includes shell pipeline support, and --dry-run protection. It is used at the command line, in scripts or via its API.

Users call rclone "The Swiss army knife of cloud storage", and "Technology indistinguishable from magic".

Rclone really looks after your data. It preserves timestamps and verifies checksums at all times. Transfers over limited bandwidth, intermittent connections, or subject to quota can be restarted from the last good file transferred. You can check the integrity of your files. Where possible, rclone employs server-side transfers to minimise local bandwidth use and transfers from one provider to another without using local disk.

-

Virtual backends wrap local and cloud file systems to apply encryption, compression chunking and joining.

+

Virtual backends wrap local and cloud file systems to apply encryption, compression, chunking, hashing and joining.

Rclone mounts any local, cloud or virtual filesystem as a disk on Windows, macOS, linux and FreeBSD, and also serves these over SFTP, HTTP, WebDAV, FTP and DLNA.

Rclone is mature, open source software originally inspired by rsync and written in Go. The friendly support community are familiar with varied use cases. Official Ubuntu, Debian, Fedora, Brew and Chocolatey repos include rclone. For the latest version, downloading from rclone.org is recommended.

Rclone is widely used on Linux, Windows and Mac. Third party developers create innovative backup, restore, GUI and business process solutions using the rclone command line or API.

@@ -118,6 +181,7 @@
  • Seafile
  • SeaweedFS
  • SFTP
  • +
  • Sia
  • StackPath
  • SugarSync
  • Tardigrade
  • @@ -141,12 +205,12 @@

    Quickstart

    See below for some expanded Linux / macOS instructions.

    -

    See the Usage section of the docs for how to use rclone, or run rclone -h.

    +

    See the usage docs for how to use rclone, or run rclone -h.

    Already installed rclone can be easily updated to the latest version using the rclone selfupdate command.

    Script installation

    To install rclone on Linux/macOS/BSD systems, run:

    @@ -171,6 +235,7 @@ sudo mandb
    rclone config

    macOS installation with brew

    brew install rclone
    +

    NOTE: This version of rclone will not support mount any more (see #5373). If mounting is wanted on macOS, either install a precompiled binary or enable the relevant option when installing from source.

    macOS installation from precompiled binary, using curl

To avoid problems with the macOS Gatekeeper enforcing the binary to be signed and notarized, it is enough to download with curl.

    Download the latest version of rclone.

    @@ -239,11 +304,12 @@ docker run --rm \ ls ~/data/mount kill %1

    Install from source

    -

    Make sure you have at least Go go1.13 installed. Download go if necessary. The latest release is recommended. Then

    -
    git clone https://github.com/rclone/rclone.git
    -cd rclone
    -go build
    -./rclone version
    +

Make sure you have at least Go 1.14 installed. Download Go if necessary. The latest release is recommended. Then

    +
    git clone https://github.com/rclone/rclone.git
    +cd rclone
    +go build
    +# If on macOS and mount is wanted, instead run: make GOTAGS=cmount
    +./rclone version

This will leave you with a checked out version of rclone which you can modify and send pull requests with. If you use make instead of go build then the rclone build will have the correct version information in it.

    You can also build the latest stable rclone with:

    go get github.com/rclone/rclone
    @@ -260,38 +326,44 @@ go build
        - hosts: rclone-hosts
           roles:
               - rclone
    -

    Autostart

    +

    Portable installation

    +

As mentioned above, rclone is a single executable (rclone, or rclone.exe on Windows) that you can download as a zip archive and extract into a location of your choosing. When executing different commands, it may create files in different locations, such as a configuration file and various temporary files. By default the locations for these are according to your operating system, e.g. configuration file in your user profile directory and temporary files in the standard temporary directory, but you can customize all of them, e.g. to make a completely self-contained, portable installation.

    +

    Run the config paths command to see the locations that rclone will use.

    +

To override them set the corresponding options (as command-line arguments, or as environment variables):
- --config
- --cache-dir
- --temp-dir
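For example, a completely portable setup could keep everything in the directory of the executable. The paths below are illustrative, and the same overrides can alternatively be given as the RCLONE_CONFIG, RCLONE_CACHE_DIR and RCLONE_TEMP_DIR environment variables:

rclone sync source: dest: --config ./rclone.conf --cache-dir ./cache --temp-dir ./temp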

    +

    Autostart

    After installing and configuring rclone, as described above, you are ready to use rclone as an interactive command line utility. If your goal is to perform periodic operations, such as a regular sync, you will probably want to configure your rclone command in your operating system's scheduler. If you need to expose service-like features, such as remote control, GUI, serve or mount, you will often want an rclone command always running in the background, and configuring it to run in a service infrastructure may be a better option. Below are some alternatives on how to achieve this on different operating systems.

    NOTE: Before setting up autorun it is highly recommended that you have tested your command manually from a Command Prompt first.

    -

    Autostart on Windows

    +

    Autostart on Windows

The most relevant alternatives for autostart on Windows are:
- Run at user log on using the Startup folder
- Run at user log on, at system startup or at schedule using Task Scheduler
- Run at system startup using Windows service

    -

    Running in background

    +

    Running in background

Rclone is a console application, so if not started from an existing Command Prompt, e.g. when starting rclone.exe from a shortcut, it will open a Command Prompt window. When configuring rclone to run from Task Scheduler or as a Windows service you are able to set it to run hidden in the background. From rclone version 1.54 you can also make it run hidden from anywhere by adding the option --no-console (it may still flash briefly when the program starts). Since rclone normally writes information and any error messages to the console, you must redirect this to a file to be able to see it. Rclone has a built-in option --log-file for that.

    Example command to run a sync in background:

    c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclone\logs\sync_files.txt
    -

    User account

    +

    User account

    As mentioned in the mount documentation, mounted drives created as Administrator are not visible to other accounts, not even the account that was elevated as Administrator. By running the mount command as the built-in SYSTEM user account, it will create drives accessible for everyone on the system. Both scheduled task and Windows service can be used to achieve this.

NOTE: Remember that when rclone runs as the SYSTEM user, the user profile that it sees will not be yours. This means that if you normally run rclone with the configuration file in the default location, to be able to use the same configuration when running as the system user you must explicitly tell rclone where to find it with the --config option, or else it will look in the system user's profile path (C:\Windows\System32\config\systemprofile). To test your command manually from a Command Prompt, you can run it with the PsExec utility from Microsoft's Sysinternals suite, which takes option -s to execute commands as the SYSTEM user.

    -

    Start from Startup folder

    +

    Start from Startup folder

    To quickly execute an rclone command you can simply create a standard Windows Explorer shortcut for the complete rclone command you want to run. If you store this shortcut in the special "Startup" start-menu folder, Windows will automatically run it at login. To open this folder in Windows Explorer, enter path %APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup, or C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp if you want the command to start for every user that logs in.

This is the easiest approach to autostarting rclone, but it offers no functionality to set it to run as a different user, or to set conditions or actions on certain events. Setting up a scheduled task as described below will often give you better results.

    -

    Start from Task Scheduler

    +

    Start from Task Scheduler

Task Scheduler is an administrative tool built into Windows, and it can be used to configure rclone to be started automatically in a highly configurable way, e.g. periodically on a schedule, on user log on, or at system startup. It can be configured to run as the current user, or, for a mount command that needs to be available to all users, as the SYSTEM user. For technical information, see https://docs.microsoft.com/windows/win32/taskschd/task-scheduler-start-page.
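As a sketch of the non-GUI route (task name, paths and schedule below are examples only), a daily sync could be registered from an elevated Command Prompt with the built-in schtasks utility:

schtasks /Create /TN "RcloneSync" /SC DAILY /ST 06:00 /RU SYSTEM /TR "c:\rclone\rclone.exe sync c:\files remote:files --no-console --log-file c:\rclone\logs\task_sync.txt"

The same options can be set interactively in the Task Scheduler GUI.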

    -

    Run as service

    +

    Run as service

For running rclone at system startup, you can create a Windows service that executes your rclone command, as an alternative to a scheduled task configured to run at startup.

    -

    Mount command built-in service integration

    +
    Mount command built-in service integration

For mount commands, Rclone has a built-in Windows service integration via the third party WinFsp library it uses. Registering as a regular Windows service is easy, as you just have to execute the built-in PowerShell command New-Service (requires administrative privileges).

    Example of a PowerShell command that creates a Windows service for mounting some remote:/files as drive letter X:, for all users (service will be running as the local system account):

    New-Service -Name Rclone -BinaryPathName 'c:\rclone\rclone.exe mount remote:/files X: --config c:\rclone\config\rclone.conf --log-file c:\rclone\logs\mount.txt'

    The WinFsp service infrastructure supports incorporating services for file system implementations, such as rclone, into its own launcher service, as kind of "child services". This has the additional advantage that it also implements a network provider that integrates into Windows standard methods for managing network drives. This is currently not officially supported by Rclone, but with WinFsp version 2019.3 B2 / v1.5B2 or later it should be possible through path rewriting as described here.

    -

    Third party service integration

    +
    Third party service integration

To run any rclone command as a Windows service, the excellent third party utility NSSM, the "Non-Sucking Service Manager", can be used. It includes some advanced features such as adjusting process priority, defining process environment variables, redirecting anything written to stdout to a file, and customized responses to different exit codes, with a GUI to configure everything from (although it can also be used from the command line).

There are also several other alternatives. To mention one more, WinSW, "Windows Service Wrapper", is worth checking out. It requires .NET Framework, but it is preinstalled on newer versions of Windows, and it also provides alternative standalone distributions which include the necessary runtime (.NET 5). WinSW is a command-line only utility, where you have to manually create an XML file with the service configuration. This may be a drawback for some, but it can also be an advantage as it is easy to back up and re-use the configuration settings, without having to go through manual steps in a GUI. One thing to note is that by default it does not restart the service on error; you have to explicitly enable this in the configuration file (via the "onfailure" parameter).
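As an illustrative sketch (service name and paths are examples only), registering a mount as a service with NSSM, with console output redirected to log files, could look like:

nssm install RcloneMount c:\rclone\rclone.exe mount remote:files X: --config c:\rclone\config\rclone.conf
nssm set RcloneMount AppStdout c:\rclone\logs\mount.log
nssm set RcloneMount AppStderr c:\rclone\logs\mount.err.log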

    -

    Autostart on Linux

    -

    Start as a service

    +

    Autostart on Linux

    +

    Start as a service

To always run rclone in the background, relevant for mount commands etc., you can use systemd to set up rclone as a system or user service. Running as a system service ensures that it is run at startup even if the user it is running as has no active session. Running rclone as a user service ensures that it only starts after the configured user has logged into the system.
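A minimal sketch of a user service unit for a mount might look like the following (unit name, remote and mount point are examples only). Save it as ~/.config/systemd/user/rclone-mount.service and run systemctl --user enable --now rclone-mount:

[Unit]
Description=rclone mount of remote:files

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount remote:files %h/mnt/files --vfs-cache-mode writes
ExecStop=/bin/fusermount -u %h/mnt/files
Restart=on-failure

[Install]
WantedBy=default.target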

    -

    Run periodically from cron

    +

    Run periodically from cron

    To run a periodic command, such as a copy/sync, you can set up a cron job.
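For example, the following crontab entry (edit with crontab -e; paths are illustrative) runs a copy every 30 minutes and logs to a file:

*/30 * * * * /usr/bin/rclone copy /home/user/Documents remote:backup/documents --log-file /home/user/rclone-cron.log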

    +

    Usage

    +

Rclone is a command line program to manage files on cloud storage. After download and install, continue here to learn how to use it: initial configuration, what the basic syntax looks like, the various subcommands, the various options, and more.

    Configure

    First, you'll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file. (See the --config entry for how to find the config file and choose its location.)

    The easiest way to make the config is to run rclone with the config option:

    @@ -315,10 +387,11 @@ go build
  • Google Cloud Storage
  • Google Drive
  • Google Photos
  • +
  • Hasher - to handle checksums for other remotes
  • HDFS
  • HTTP
  • Hubic
  • -
  • Jottacloud / GetSky.no
  • +
  • Jottacloud
  • Koofr
  • Mail.ru Cloud
  • Mega
  • @@ -333,6 +406,7 @@ go build
  • QingStor
  • Seafile
  • SFTP
  • +
  • Sia
  • SugarSync
  • Tardigrade
  • Union
  • @@ -342,7 +416,7 @@ go build
  • Zoho WorkDrive
  • The local filesystem
  • -

    Usage

    +

    Basic syntax

    Rclone syncs a directory tree from one storage system to another.

    Its syntax is like this

    Syntax: [options] subcommand <parameters> <parameters...>
    @@ -366,11 +440,12 @@ rclone sync -i /local/path remote:path # syncs /local/path to the remote<

    rclone copy

    -

    Copy files from source to dest, skipping already copied.

    +

    Copy files from source to dest, skipping identical files.

    Synopsis

    -

    Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Doesn't delete files from the destination.

    +

    Copy the source to the destination. Does not transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Doesn't delete files from the destination.

Note that it is always the contents of the directory that is synced, not the directory itself, so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents.

    If dest:path doesn't exist, it is created and the source:path contents go there.

    For example

    @@ -413,7 +488,7 @@ destpath/sourcepath/two.txt

    rclone sync

    Make source and dest identical, modifying destination only.

    Synopsis

    -

    Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary (except duplicate objects, see below).

    +

    Sync the source to the destination, changing the destination only. Doesn't transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary (except duplicate objects, see below).

    Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

    rclone sync -i SOURCE remote:DESTINATION

    Note that files in the destination won't be deleted if there were any errors at any point. Duplicate objects (files with the same name, on those providers that support it) are also not yet handled.

    @@ -530,7 +605,7 @@ rclone --dry-run --min-size 100M delete remote:path
      -C, --checkfile string        Treat source:path as a SUM file with hashes of given type
           --combined string         Make a combined report of changes to this file
           --differ string           Report all non-matching files to this file
    -      --download                Check by downloading rather than with hash.
    +      --download                Check by downloading rather than with hash
           --error string            Report all files with errors (hashing or reading) to this file
       -h, --help                    help for check
           --match string            Report all matching files to this file
    @@ -603,7 +678,7 @@ rclone --dry-run --min-size 100M delete remote:path
    rclone lsd remote:path [flags]

    Options

      -h, --help        help for lsd
    -  -R, --recursive   Recurse into the listing.
    + -R, --recursive Recurse into the listing

    See the global flags page for global options not listed here.

    SEE ALSO

    --drive-root-folder-id

    -

    ID of the root folder Leave blank normally.

    +

    ID of the root folder. Leave blank normally.

    Fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point.

    --drive-service-account-file

    -

    Service Account Credentials JSON file path Leave blank normally. Needed only if you want use SA instead of interactive login.

    +

    Service Account Credentials JSON file path.

    +

Leave blank normally. Needed only if you want to use SA instead of interactive login.

    Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.
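E.g. a config file entry using this could look like (path is illustrative):

service_account_file = ~/keys/drive-sa.json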

    --drive-alternate-export

    -

    Deprecated: no longer needed

    +

    Deprecated: No longer needed.

    -

    Advanced Options

    +

    Advanced options

    Here are the advanced options specific to drive (Google Drive).

    --drive-token

    OAuth Access Token as a JSON blob.

    @@ -14825,7 +15356,8 @@ trashed=false and 'c' in parents
  • Default: ""
  • --drive-auth-url

    -

    Auth server URL. Leave blank to use the provider defaults.

    +

    Auth server URL.

    +

    Leave blank to use the provider defaults.

    --drive-token-url

    -

    Token server url. Leave blank to use the provider defaults.

    +

    Token server url.

    +

    Leave blank to use the provider defaults.

    --drive-service-account-credentials

    -

    Service Account Credentials JSON blob Leave blank normally. Needed only if you want use SA instead of interactive login.

    +

    Service Account Credentials JSON blob.

    +

Leave blank normally. Needed only if you want to use SA instead of interactive login.

    --drive-team-drive

    -

    ID of the Shared Drive (Team Drive)

    +

    ID of the Shared Drive (Team Drive).

    --drive-use-trash

    -

    Send files to the trash instead of deleting permanently. Defaults to true, namely sending files to the trash. Use --drive-use-trash=false to delete files permanently instead.

    +

    Send files to the trash instead of deleting permanently.

    +

    Defaults to true, namely sending files to the trash. Use --drive-use-trash=false to delete files permanently instead.

    --drive-skip-gdocs

    -

    Skip google documents in all listings. If given, gdocs practically become invisible to rclone.

    +

    Skip google documents in all listings.

    +

    If given, gdocs practically become invisible to rclone.

    --drive-trashed-only

    -

    Only show files that are in the trash. This will show trashed files in their original directory structure.

    +

    Only show files that are in the trash.

    +

    This will show trashed files in their original directory structure.

    --drive-formats

    -

    Deprecated: see export_formats

    +

    Deprecated: See export_formats.

    --drive-allow-import-name-change

    -

    Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.

    +

    Allow the filetype to change when uploading Google docs.

    +

    E.g. file.doc to file.docx. This will confuse sync and reupload every time.

    --drive-use-created-date

    -

    Use file created date instead of modified date.,

    +

    Use file created date instead of modified date.

    Useful when downloading data and you want the creation date used in place of the last modified date.

    WARNING: This flag may have some unexpected consequences.

    When uploading to your drive all files will be overwritten unless they haven't been modified since their creation. And the inverse will occur while downloading. This side effect can be avoided by using the "--checksum" flag.

    @@ -14973,7 +15511,7 @@ trashed=false and 'c' in parents
  • Default: false
  • --drive-list-chunk

    -

    Size of listing chunk 100-1000. 0 to disable.

    +

    Size of listing chunk 100-1000, 0 to disable.

    --drive-upload-cutoff

    -

    Cutoff for switching to chunked upload

    +

    Cutoff for switching to chunked upload.

    --drive-chunk-size

    -

    Upload chunk size. Must a power of 2 >= 256k.

    +

    Upload chunk size.

    +

Must be a power of 2 >= 256k.

Making this larger will improve performance, but note that each chunk is buffered in memory, one per transfer.

    Reducing this will reduce memory usage but decrease performance.
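For example, if you have memory to spare, a bulk upload could raise the chunk size like this (values are illustrative; with --transfers 4 up to four 64 MiB chunks are buffered at once):

rclone copy /local/backup drive:backup --drive-chunk-size 64M --transfers 4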

    --drive-disable-http2

    -

    Disable drive using http2

    +

    Disable drive using http2.

    There is currently an unsolved issue with the google drive backend and HTTP/2. HTTP/2 is therefore disabled by default for the drive backend but can be re-enabled here. When the issue is solved this flag will be removed.

    See: https://github.com/rclone/rclone/issues/3631

    --drive-stop-on-upload-limit

    -

    Make upload limit errors be fatal

    +

    Make upload limit errors be fatal.

    At the time of writing it is only possible to upload 750 GiB of data to Google Drive a day (this is an undocumented limit). When this limit is reached Google Drive produces a slightly different error message. When this flag is set it causes these errors to be fatal. These will stop the in-progress sync.

    Note that this detection is relying on error message strings which Google don't document so it may break in the future.

    See: https://github.com/rclone/rclone/issues/3857

    @@ -15090,7 +15629,7 @@ trashed=false and 'c' in parents
  • Default: false
  • --drive-stop-on-download-limit

    -

    Make download limit errors be fatal

    +

    Make download limit errors be fatal.

    At the time of writing it is only possible to download 10 TiB of data from Google Drive a day (this is an undocumented limit). When this limit is reached Google Drive produces a slightly different error message. When this flag is set it causes these errors to be fatal. These will stop the in-progress sync.

    Note that this detection is relying on error message strings which Google don't document so it may break in the future.

    --drive-skip-shortcuts

    -

    If set skip shortcut files

    +

    If set skip shortcut files.

    Normally rclone dereferences shortcut files making them appear as if they are the original file (see the shortcuts section). If this flag is set then rclone will ignore shortcut files completely.

    --drive-encoding

    This sets the encoding for the backend.

    -

    See: the encoding section in the overview for more info.

    +

    See the encoding section in the overview for more info.

    -

    Backend commands

    +

    Backend commands

    Here are the commands specific to the drive backend.

    Run them with

    rclone backend COMMAND remote:

    The help below will explain what arguments each command takes.

    See the "rclone backend" command for more info on how to pass options and arguments.

    These can be run on a running backend using the rc command backend/command.

    -

    get

    +

    get

    Get command for fetching the drive config parameters

    rclone backend get remote: [options] [<arguments>+]

    This is a get command which will be used to fetch the various drive config parameters

    @@ -15136,7 +15675,7 @@ rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o ch
  • "chunk_size": show the current upload chunk size
  • "service_account_file": show the current service account file
  • -

    set

    +

    set

    Set command for updating the drive config parameters

    rclone backend set remote: [options] [<arguments>+]

    This is a set command which will be used to update the various drive config parameters

    @@ -15148,7 +15687,7 @@ rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json
  • "chunk_size": update the current upload chunk size
  • "service_account_file": update the current service account file
  • -

    shortcut

    +

    shortcut

    Create shortcuts from files or directories

    rclone backend shortcut remote: [options] [<arguments>+]

    This command creates shortcuts from files or directories.

    @@ -15161,12 +15700,12 @@ rclone backend shortcut drive: source_item -o target=drive2: destination_shortcu -

    drives

    +

    drives

    List the Shared Drives available to this account

    rclone backend drives remote: [options] [<arguments>+]

    This command lists the Shared Drives (Team Drives) available to this account.

    Usage:

    -
    rclone backend drives drive:
    +
    rclone backend [-o config] drives drive:

    This will return a JSON list of objects like this

    [
         {
    @@ -15180,7 +15719,16 @@ rclone backend shortcut drive: source_item -o target=drive2: destination_shortcu
             "name": "Test Drive"
         }
     ]
    -

    untrash

    +

    With the -o config parameter it will output the list in a format suitable for adding to a config file to make aliases for all the drives found.

    +
    [My Drive]
    +type = alias
    +remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
    +
    +[Test Drive]
    +type = alias
    +remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
    +

    Adding this to the rclone config file will cause those team drives to be accessible with the aliases shown. This may require manual editing of the names.

    +

    untrash

    Untrash files and directories

    rclone backend untrash remote: [options] [<arguments>+]

    This command untrashes all the files and directories in the directory passed in recursively.

    @@ -15194,7 +15742,7 @@ rclone backend -i untrash drive:directory subdir "Untrashed": 17, "Errors": 0 } -

    copyid

    +

    copyid

    Copy files by ID

    rclone backend copyid remote: [options] [<arguments>+]

    This command copies files by ID

    @@ -15205,10 +15753,10 @@ rclone backend copyid drive: ID1 path1 ID2 path2

    The path should end with a / to indicate copy the file as named to this directory. If it doesn't end with a / then the last path component will be used as the file name.

    If the destination is a drive backend then server-side copying will be attempted if possible.

    Use the -i flag to see what would be copied before copying.

    -

    Limitations

    -

    Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MiByte/s but lots of small files can take a long time.

    +

    Limitations

    +

    Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MiB/s but lots of small files can take a long time.

Server-side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server-side copies with --disable copy to download and upload the files if you prefer.
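For example, to force downloading and re-uploading instead of server-side copying (remote names are illustrative):

rclone copy drive:source-dir drive2:dest-dir --disable copy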

    -

    Limitations of Google Docs

    +

    Limitations of Google Docs

    Google docs will appear as size -1 in rclone ls and as size 0 in anything which uses the VFS layer, e.g. rclone mount, rclone serve.

    This is because rclone can't find out the size of the Google docs without downloading them.

    Google docs will transfer correctly with rclone sync, rclone copy etc as rclone knows to ignore the size when doing the transfer.

    @@ -15222,7 +15770,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2

    The most likely cause of this is the duplicated file issue above - run rclone dedupe and check your logs for duplicate object or directory messages.

    This can also be caused by a delay/caching on google drive's end when comparing directory listings. Specifically with team drives used in combination with --fast-list. Files that were uploaded recently may not appear on the directory list sent to rclone when using --fast-list.

    Waiting a moderate period of time between attempts (estimated to be approximately 1 hour) and/or not using --fast-list both seem to be effective in preventing the problem.

    -

    Making your own client_id

    +

    Making your own client_id

    When you use rclone with Google drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.

It is strongly recommended to use your own client ID as the default rclone ID is heavily used. If you have multiple services running, it is recommended to use an API key for each service. The default Google quota is 10 transactions per second, so it is recommended to stay under that number; if you use more than that, it will cause rclone to be rate limited and make things slower.

    Here is how to create your own Google Drive client ID for rclone:

    @@ -15236,10 +15784,11 @@ rclone backend copyid drive: ID1 path1 ID2 path2

    (PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this has not been tested/documented so far).

    1. Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".

    2. -
    3. Choose an application type of "Desktop app" if you using a Google account or "Other" if you using a GSuite account and click "Create". (the default name is fine)

    4. +
    5. Choose an application type of "Desktop app" and click "Create". (the default name is fine)

    6. It will show you a client ID and client secret. Make a note of these.

    7. Go to "Oauth consent screen" and press "Publish App"

    8. Provide the noted client ID and client secret to rclone.

    9. +
    10. Click "OAuth consent screen", then click "PUBLISH APP" button and confirm, or add your account under "Test users".

    Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for verification" and then wait a few weeks(!) for their response; in practice, you can go right ahead and use the client ID and client secret with rclone, the only issue will be a very scary confirmation screen shown when you connect via your browser for rclone to be able to get its token-id (but as this only happens during the remote configuration, it's not such a big deal).

    (Thanks to @balazer on github for these instructions.)

    @@ -15247,7 +15796,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2

    Google Photos

    The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos.

    NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use.

    -

    Configuring Google Photos

    +

    Configuration

    The initial setup for google cloud storage involves getting a token from Google Photos which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -15321,7 +15870,7 @@ y/e/d> y
    rclone ls remote:album/newAlbum

    Sync /home/local/images to the Google Photos, removing any excess files in the album.

    rclone sync -i /home/local/image remote:album/newAlbum
    -

    Layout

    +

    Layout

    As Google Photos is not a general purpose cloud storage system the backend is laid out to help you navigate it.

    The directories under media show different ways of categorizing the media. Each file will appear multiple times. So if you want to make a backup of your google photos you might choose to backup remote:media/by-month. (NB remote:media/by-day is rather slow at the moment so avoid for syncing.)

    Note that all your photos and videos will appear somewhere under media, but they may not appear under album unless you've put them into albums.

    @@ -15400,38 +15949,11 @@ y/e/d> y

    This means that you can use the album path pretty much like a normal filesystem and it is a good target for repeated syncing.

    The shared-album directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.

    -

    Limitations

    -

    Only images and videos can be uploaded. If you attempt to upload non videos or images or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is put turned into a media item.

    -

    Note that all media items uploaded to Google Photos through the API are stored in full resolution at "original quality" and will count towards your storage quota in your Google Account. The API does not offer a way to upload in "high quality" mode..

    -

    rclone about is not supported by the Google Photos backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    -

    See List of backends that do not support rclone about See rclone about

    -

    Downloading Images

    -

    When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.

    -

    The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort

    -

    Downloading Videos

    -

    When videos are downloaded they are downloaded in a really compressed version of the video compared to downloading it via the Google Photos web interface. This is covered by bug #113672044.

    -

    Duplicates

    -

    If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called file.jpg would then appear as file {123456}.jpg and file {ABCDEF}.jpg (the actual IDs are a lot longer alas!).

    -

    If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload then uploaded the same image to album/my_album the filename of the image in album/my_album will be what it was uploaded with initially, not what you uploaded it with to album. In practise this shouldn't cause too many problems.

    -

    Modified time

    -

    The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known.

    -

    This is not changeable by rclone and is not the modification date of the media on local disk. This means that rclone cannot use the dates from Google Photos for syncing purposes.

    -

    Size

    -

    The Google Photos API does not return the size of media. This means that when syncing to Google Photos, rclone can only do a file existence check.

    -

    It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is very slow and uses up a lot of transactions. This can be enabled with the --gphotos-read-size option or the read_size = true config parameter.

    -

    If you want to use the backend with rclone mount you may need to enable this flag (depending on your OS and application using the photos) otherwise you may not be able to read media off the mount. You'll need to experiment to see if it works for you without the flag.

    -

    Albums

    -

    Rclone can only upload files to albums it created. This is a limitation of the Google Photos API.

    -

    Rclone can remove files it uploaded from albums it created only.

    -

    Deleting files

    -

    Rclone can remove files from albums it created, but note that the Google Photos API does not allow media to be deleted permanently so this media will still remain. See bug #109759781.

    -

    Rclone cannot delete files anywhere except under album.

    -

    Deleting albums

    -

    The Google Photos API does not support deleting albums - see bug #135714733.

    -

    Standard Options

    +

    Standard options

    Here are the standard options specific to google photos (Google Photos).

    --gphotos-client-id

    -

    OAuth Client Id Leave blank normally.

    +

    OAuth Client Id.

    +

    Leave blank normally.

    --gphotos-client-secret

    -

    OAuth Client Secret Leave blank normally.

    +

    OAuth Client Secret.

    +

    Leave blank normally.

    -

    Advanced Options

    +

    Advanced options

    Here are the advanced options specific to google photos (Google Photos).

    --gphotos-token

    OAuth Access Token as a JSON blob.

    @@ -15466,7 +15989,8 @@ y/e/d> y
  • Default: ""
  • --gphotos-auth-url

    -

    Auth server URL. Leave blank to use the provider defaults.

    +

    Auth server URL.

    +

    Leave blank to use the provider defaults.

    --gphotos-token-url

    -

    Token server url. Leave blank to use the provider defaults.

    +

    Token server url.

    +

    Leave blank to use the provider defaults.

    --gphotos-start-year

    -

    Year limits the photos to be downloaded to those which are uploaded after the given year

    +

    Year limits the photos to be downloaded to those which are uploaded after the given year.

    +

    --gphotos-encoding

    +

    This sets the encoding for the backend.

    +

    See the encoding section in the overview for more info.

    + +

    Limitations

    +

Only images and videos can be uploaded. If you attempt to upload non videos or images or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is turned into a media item.

    +

Note that all media items uploaded to Google Photos through the API are stored in full resolution at "original quality" and will count towards your storage quota in your Google Account. The API does not offer a way to upload in "high quality" mode.

    +

    rclone about is not supported by the Google Photos backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    +

    See List of backends that do not support rclone about See rclone about

    +

    Downloading Images

    +

    When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.

    +

The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort.

    +

    Downloading Videos

    +

    When videos are downloaded they are downloaded in a really compressed version of the video compared to downloading it via the Google Photos web interface. This is covered by bug #113672044.

    +

    Duplicates

    +

    If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called file.jpg would then appear as file {123456}.jpg and file {ABCDEF}.jpg (the actual IDs are a lot longer alas!).

    +

If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload then uploaded the same image to album/my_album the filename of the image in album/my_album will be what it was uploaded with initially, not what you uploaded it with to album. In practice this shouldn't cause too many problems.

    +

    Modified time

    +

    The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known.

    +

    This is not changeable by rclone and is not the modification date of the media on local disk. This means that rclone cannot use the dates from Google Photos for syncing purposes.

    +

    Size

    +

    The Google Photos API does not return the size of media. This means that when syncing to Google Photos, rclone can only do a file existence check.

    +

    It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is very slow and uses up a lot of transactions. This can be enabled with the --gphotos-read-size option or the read_size = true config parameter.

    +

    If you want to use the backend with rclone mount you may need to enable this flag (depending on your OS and application using the photos) otherwise you may not be able to read media off the mount. You'll need to experiment to see if it works for you without the flag.
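For example (mount point is illustrative):

rclone mount remote: ~/gphotos --gphotos-read-size

Equivalently, set read_size = true in the config section for the remote.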

    +

    Albums

    +

    Rclone can only upload files to albums it created. This is a limitation of the Google Photos API.

    +

    Rclone can remove files it uploaded from albums it created only.

    +

    Deleting files

    +

    Rclone can remove files from albums it created, but note that the Google Photos API does not allow media to be deleted permanently so this media will still remain. See bug #109759781.

    +

    Rclone cannot delete files anywhere except under album.

    +

    Deleting albums

    +

    The Google Photos API does not support deleting albums - see bug #135714733.

    +

    Hasher (EXPERIMENTAL)

    +

Hasher is a special overlay backend to create remotes which handle checksums for other remotes. Its main functions include:
- Emulate hash types unimplemented by backends
- Cache checksums to help with slow hashing of large local or (S)FTP files
- Warm up checksum cache from external SUM files

    +

    Getting started

    +

    To use Hasher, first set up the underlying remote following the configuration instructions for that remote. You can also use a local pathname instead of a remote. Check that your base remote is working.

    +

    Let's call the base remote myRemote:path here. Note that anything inside myRemote:path will be handled by hasher and anything outside won't. This means that if you are using a bucket based remote (S3, B2, Swift) then you should put the bucket in the remote s3:bucket.

    +

    Now proceed to interactive or manual configuration.

    +

    Interactive configuration

    +

    Run rclone config:

    +
    No remotes found - make a new one
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> Hasher1
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    +[snip]
    +XX / Handle checksums for other remotes
    +   \ "hasher"
    +[snip]
    +Storage> hasher
    +Remote to cache checksums for, like myremote:mypath.
    +Enter a string value. Press Enter for the default ("").
    +remote> myRemote:path
    +Comma separated list of supported checksum types.
    +Enter a string value. Press Enter for the default ("md5,sha1").
    +hashsums> md5
    +Maximum time to keep checksums in cache. 0 = no cache, off = cache forever.
    +max_age> off
    +Edit advanced config? (y/n)
    +y) Yes
    +n) No
    +y/n> n
    +Remote config
    +--------------------
    +[Hasher1]
    +type = hasher
    +remote = myRemote:path
    +hashsums = md5
    +max_age = off
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    Manual configuration

    +

Run rclone config path to see the path of the current active config file, usually YOURHOME/.config/rclone/rclone.conf. Open it in your favorite text editor, find the section for the base remote and create a new section for hasher like in the following examples:

    +
    [Hasher1]
    +type = hasher
    +remote = myRemote:path
    +hashes = md5
    +max_age = off
    +
    +[Hasher2]
    +type = hasher
    +remote = /local/path
    +hashes = dropbox,sha1
    +max_age = 24h
    +

Hasher takes basically the following parameters:
- remote is required,
- hashes is a comma separated list of supported checksums (by default md5,sha1),
- max_age - maximum time to keep a checksum value in the cache, 0 will disable caching completely, off will cache "forever" (that is until the files get changed).

    +

Make sure the remote has a : (colon) in it. If you specify the remote without a colon then rclone will use a local directory of that name. So if you use a remote of /local/path then rclone will handle hashes for that directory. If you use remote = name literally then rclone will put files in a directory called name located under the current directory.

    +

    Usage

    +

    Basic operations

    +

    Now you can use it as Hasher2:subdir/file instead of base remote. Hasher will transparently update cache with new checksums when a file is fully read or overwritten, like:

    +
    rclone copy External:path/file Hasher:dest/path
    +
    +rclone cat Hasher:path/to/file > /dev/null
    +

    The way to refresh all cached checksums (even unsupported by the base backend) for a subtree is to re-download all files in the subtree. For example, use hashsum --download using any supported hashsum on the command line (we just care to re-read):

    +
    rclone hashsum MD5 --download Hasher:path/to/subtree > /dev/null
    +
    +rclone backend dump Hasher:path/to/subtree
    +

    You can print or drop hashsum cache using custom backend commands:

    +
    rclone backend dump Hasher:dir/subdir
    +
    +rclone backend drop Hasher:
    +

    Pre-Seed from a SUM File

    +

    Hasher supports two backend commands: generic SUM file import and faster but less consistent stickyimport.

    +
    rclone backend import Hasher:dir/subdir SHA1 /path/to/SHA1SUM [--checkers 4]
    +

Instead of SHA1 it can be any hash supported by the remote. The last argument can point to either a local or an other-remote:path text file in SUM format. The command will parse the SUM file, then walk down the path given by the first argument, snapshot current fingerprints and fill in the cache entries correspondingly.
- Paths in the SUM file are treated as relative to hasher:dir/subdir.
- The command will not check that supplied values are correct. You must know what you are doing.
- This is a one-time action. The SUM file will not get "attached" to the remote. Cache entries can still be overwritten later, should the object's fingerprint change.
- The tree walk can take a long time depending on the tree size. You can increase --checkers to make it faster. Or use stickyimport if you don't care about fingerprints and consistency.

    +
    rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1
    +

    stickyimport is similar to import but works much faster because it does not need to stat existing files and skips initial tree walk. Instead of binding cache entries to file fingerprints it creates sticky entries bound to the file name alone ignoring size, modification time etc. Such hash entries can be replaced only by purge, delete, backend drop or by full re-read/re-write of the files.

    +

    Configuration reference

    +

    Standard options

    +

    Here are the standard options specific to hasher (Better checksums for other remotes).

    +

    --hasher-remote

    +

    Remote to cache checksums for (e.g. myRemote:path).

    + +

    --hasher-hashes

    +

    Comma separated list of supported checksum types.

    + +

    --hasher-max-age

    +

    Maximum time to keep checksums in cache (0 = no cache, off = cache forever).

    + +

    Advanced options

    +

    Here are the advanced options specific to hasher (Better checksums for other remotes).

    +

    --hasher-auto-size

    +

    Auto-update checksum for files smaller than this size (disabled by default).

    + +

    Backend commands

    +

    Here are the commands specific to the hasher backend.

    +

    Run them with

    +
    rclone backend COMMAND remote:
    +

    The help below will explain what arguments each command takes.

    +

    See the "rclone backend" command for more info on how to pass options and arguments.

    +

    These can be run on a running backend using the rc command backend/command.

    +

    drop

    +

    Drop cache

    +
    rclone backend drop remote: [options] [<arguments>+]
    +

    Completely drop checksum cache. Usage Example: rclone backend drop hasher:

    +

    dump

    +

    Dump the database

    +
    rclone backend dump remote: [options] [<arguments>+]
    +

    Dump cache records covered by the current remote

    +

    fulldump

    +

    Full dump of the database

    +
    rclone backend fulldump remote: [options] [<arguments>+]
    +

    Dump all cache records in the database

    +

    import

    +

    Import a SUM file

    +
    rclone backend import remote: [options] [<arguments>+]
    +

    Amend hash cache from a SUM file and bind checksums to files by size/time. Usage Example: rclone backend import hasher:subdir md5 /path/to/sum.md5

    +

    stickyimport

    +

    Perform fast import of a SUM file

    +
    rclone backend stickyimport remote: [options] [<arguments>+]
    +

    Fill hash cache from a SUM file without verifying file fingerprints. Usage Example: rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5

    +

    Implementation details (advanced)

    +

    This section explains how various rclone operations work on a hasher remote.

    +

Disclaimer. This section describes the current implementation, which can change in future rclone versions!

    +

    Hashsum command

    +

    The rclone hashsum (or md5sum or sha1sum) command will:

    +
1. if requested hash is supported by lower level, just pass it.
2. if object size is below auto_size then download object and calculate requested hashes on the fly.
3. if unsupported and the size is big enough, build object fingerprint (including size, modtime if supported, first-found other hash if any).
4. if the strict match is found in cache for the requested remote, return the stored hash.
5. if remote found but fingerprint mismatched, then purge the entry and proceed to step 6.
6. if remote not found or had no requested hash type or after step 5: download object, calculate all supported hashes on the fly and store in cache; return requested hash.

Other operations

Note that setting max_age = 0 will disable checksum caching completely.

If you set max_age = off, checksums in cache will never age, unless you fully rewrite or delete the file.
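
For a one-off run that bypasses the cache entirely, the option can be set on the command line; a sketch (remote name is illustrative):

    # compute checksums without consulting or populating the cache
    rclone hashsum sha1 hasher: --hasher-max-age 0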

Cache storage

Cached checksums are stored as bolt database files under the rclone cache directory, usually ~/.cache/rclone/kv/. Databases are maintained one per base backend, named like BaseRemote~hasher.bolt. Checksums for multiple aliases into a single base backend will be stored in a single database. All local paths are treated as aliases into the local backend (unless crypted or chunked) and stored in ~/.cache/rclone/kv/local~hasher.bolt. Databases can be shared between multiple rclone processes.
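
To see what is actually cached you can list the database files and dump the records for a remote; a sketch (paths assume the Linux defaults mentioned above):

    # one bolt file per base backend
    ls ~/.cache/rclone/kv/

    # print the cache records covered by this remote
    rclone backend dump hasher: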

    HDFS

    HDFS is a distributed file-system, part of the Apache Hadoop framework.

    Paths are specified as remote: or remote:path/to/dir.


    Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    ...
    type = hdfs
    namenode = 127.0.0.1:8020
    username = root

    You can stop this image with docker kill rclone-hdfs (NB it does not use volumes, so all data uploaded will be lost.)

Modified time

    Time accurate to 1 second is stored.

    Checksum

    No checksums are implemented.


    Invalid UTF-8 bytes will also be replaced.

Standard options

Here are the standard options specific to hdfs (Hadoop distributed file system).

--hdfs-namenode

Hadoop name node and port.

E.g. "namenode:8020" to connect to host namenode at port 8020.

--hdfs-username

Hadoop user name.

Advanced options

Here are the advanced options specific to hdfs (Hadoop distributed file system).

--hdfs-service-principal-name

Kerberos service principal name for the namenode.

Enables KERBEROS authentication. Specifies the Service Principal Name (SERVICE/FQDN) for the namenode. E.g. "hdfs/namenode.hadoop.docker" for namenode running as service 'hdfs' with FQDN 'namenode.hadoop.docker'.

--hdfs-data-transfer-protection

Kerberos data transfer protection: authentication|integrity|privacy.

Specifies whether authentication, data signature integrity checks, and wire encryption are required when communicating with the datanodes. Possible values are 'authentication', 'integrity' and 'privacy'. Used only with KERBEROS enabled.

--hdfs-encoding

This sets the encoding for the backend.

See the encoding section in the overview for more info.

Limitations

    HTTP

    The HTTP remote is a read only remote for reading files of a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn't then please file an issue, or send a pull request!)

    Paths are specified as remote: or remote:path/to/dir.


    Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    rclone sync -i remote:directory /home/local/directory

    Read only

    This remote is read only - you can't upload files to an HTTP server.

Modified time

    Most HTTP servers store time accurate to 1 second.

    Checksum

    No checksums are stored.

    Usage without a config file

    Since the http remote only has one config parameter it is easy to use without a config file:

    rclone lsd --http-url https://beta.rclone.org :http:
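
The same connection-string style works for other commands too; for example, to mirror part of the listing to disk (source directory and destination are illustrative):

    rclone copy --http-url https://beta.rclone.org :http:path/to/dir /tmp/dir
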
Standard options

Here are the standard options specific to http (http Connection).

--http-url

URL of http host to connect to.

E.g. "https://example.com", or "https://user:pass@example.com" to use a username and password.

Advanced options

Here are the advanced options specific to http (http Connection).

--http-headers

Set HTTP headers for all transactions.

Use this to set additional HTTP headers for all transactions.

The input format is a comma separated list of key,value pairs. Standard CSV encoding may be used.

For example, to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.

You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.
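
Putting that together on the command line (URL and header values are placeholders):

    # send a cookie and an extra header with every request
    rclone lsd :http: --http-url https://example.com \
        --http-headers "Cookie,name=value,Authorization,xxx"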

    @@ -15799,7 +16508,7 @@ e/n/d/r/c/s/q> q
--http-no-slash

Set this if the site doesn't end directories with /.

Use this if your target website does not use / on the end of directories.

A / on the end of a path is how rclone normally tells the difference between files and directories. If this flag is set, then rclone will treat all files with Content-Type: text/html as directories and read URLs from them rather than downloading them.

Note that this may cause rclone to confuse genuine HTML files with directories.

--http-no-head

Don't use HEAD requests to find file sizes in dir listing.

If your site is being very slow to load then you can try this option. Normally rclone does a HEAD request for each potential file in a directory listing to:

Limitations

    rclone about is not supported by the HTTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

See List of backends that do not support rclone about and rclone about.

    Hubic

    Paths are specified as remote:path

    Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:container/path/to/dir.


    Configuration

    The initial setup for Hubic involves getting a token from Hubic which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    rclone copy /home/source remote:default/backup

    --fast-list

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

Modified time

    The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

    This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

Note that Hubic wraps the Swift backend, so most of the properties are the same.

Standard options

Here are the standard options specific to hubic (Hubic).

--hubic-client-id

OAuth Client Id.

Leave blank normally.

--hubic-client-secret

OAuth Client Secret.

Leave blank normally.

Advanced options

    Here are the advanced options specific to hubic (Hubic).

    --hubic-token

    OAuth Access Token as a JSON blob.

--hubic-auth-url

Auth server URL.

Leave blank to use the provider defaults.

--hubic-token-url

Token server url.

Leave blank to use the provider defaults.

    --hubic-encoding

    This sets the encoding for the backend.

See the encoding section in the overview for more info.

Limitations

    This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.

    The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

    Jottacloud

Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway. In addition to the official service at jottacloud.com, it also provides white-label solutions to different companies, such as:

- Telia
  - Telia Cloud (cloud.telia.se)
  - Telia Sky (sky.telia.no)
- Tele2
  - Tele2 Cloud (mittcloud.tele2.se)
- Elkjøp (with subsidiaries):
  - Elkjøp Cloud (cloud.elkjop.no)
  - Elgiganten Sweden (cloud.elgiganten.se)
  - Elgiganten Denmark (cloud.elgiganten.dk)
  - Giganti Cloud (cloud.gigantti.fi)
  - ELKO Cloud (cloud.elko.is)

Most of the white-label versions are supported by this backend, although they may require a different authentication setup - described below.

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

Authentication types

Some of the whitelabel versions use a different authentication method than the official service, and you have to choose the correct one when setting up the remote.

Standard authentication

To configure Jottacloud you will need to generate a personal security token in the Jottacloud web interface. You will find the option to do this in your account security settings (for a whitelabel version you need to find this page in its web interface). Note that the web interface may refer to this token as a JottaCli token.

Legacy authentication

If you are using one of the whitelabel versions (e.g. from Elkjøp or Tele2) you may not have the option to generate a CLI token. In this case you'll have to use the legacy authentication. To do this select yes when the setup asks for legacy authentication and enter your username and password. The rest of the setup is identical to the default setup.

Telia Cloud authentication

Similar to other whitelabel versions Telia Cloud doesn't offer the option of creating a CLI token, and additionally uses a separate authentication flow where the username is generated internally. To set up rclone to use Telia Cloud, choose Telia Cloud authentication in the setup. The rest of the setup is identical to the default setup.

Configuration

    Here is an example of how to make a remote called remote with the default setup. First run:

    rclone config

    This will guide you through an interactive setup process:


    Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

    Jottacloud supports MD5 type hashes, so you can use the --checksum flag.

    Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (wherever the TMPDIR environment variable points to) before it is uploaded. Small files will be cached in memory - see the --jottacloud-md5-memory-limit flag. When uploading from local disk the source checksum is always available, so this does not apply. Starting with rclone version 1.52 the same is true for crypted remotes (in older versions the crypt backend would not calculate hashes for uploads from local disk, so the Jottacloud backend had to do it as described above).
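
If you routinely upload large files from sources without MD5 sums, it may help to raise the in-memory limit and point TMPDIR at fast storage; a sketch (paths and size are illustrative):

    # compute MD5s in memory for files up to 64 MiB, spill bigger ones to fast scratch
    TMPDIR=/mnt/fast-scratch rclone copy /data remote:backup \
        --jottacloud-md5-memory-limit 64M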

Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:


Versioning can be disabled by the --jottacloud-no-versions option. This is achieved by deleting the remote file prior to uploading a new version. If the upload fails, no version of the file will be available in the remote.

    Quota information

    To view your current quota you can use the rclone about remote: command which will display your usage limit (unless it is unlimited) and the current usage.

Advanced options

    Here are the advanced options specific to jottacloud (Jottacloud).

    --jottacloud-md5-memory-limit

    Files bigger than this will be cached on disk to calculate the MD5 if required.

--jottacloud-trashed-only

Only show files that are in the trash.

This will show trashed files in their original directory structure.

    --jottacloud-encoding

    This sets the encoding for the backend.

See the encoding section in the overview for more info.

Limitations

    Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ? in it, it will be mapped to ？ instead.

    Jottacloud only supports filenames up to 255 characters in length.

Troubleshooting

    Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove operations to previously deleted paths to fail. Emptying the trash should help in such cases.

    Koofr

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.


    Configuration

    The initial setup for Koofr involves creating an application password for rclone. You can do that by opening the Koofr web application, giving the password a nice name like rclone and clicking on generate.

    Here is an example of how to make a remote called koofr. First run:

     rclone config
    rclone ls koofr:

To copy a local directory to a Koofr directory called backup

    rclone copy /home/source remote:backup
Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:


    Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

Standard options

Here are the standard options specific to koofr (Koofr).

--koofr-user

Your Koofr user name.

--koofr-password

Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).

NB Input to this must be obscured - see rclone obscure.
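
A scripted setup could then look like this sketch (the user name and password are placeholders; the --obscure flag asks rclone to obscure the plaintext password for you):

    rclone config create koofr koofr \
        user me@example.com password YOUR_APP_PASSWORD --obscure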

Advanced options

Here are the advanced options specific to koofr (Koofr).

--koofr-endpoint

The Koofr API endpoint to use.

--koofr-mountid

Mount ID of the mount to use.

If omitted, the primary mount is used.

--koofr-setmtime

Does the backend support setting modification time.

Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.

    --koofr-encoding

    This sets the encoding for the backend.

See the encoding section in the overview for more info.

Limitations

    Note that Koofr is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    Mail.ru Cloud

    Mail.ru Cloud is a cloud storage provided by a Russian internet company Mail.Ru Group. The official desktop client is Disk-O:, available on Windows and Mac OS.

    Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclone until it gets eventually implemented.

Features highlights

Configuration

    Here is an example of making a mailru configuration. First create a Mail.ru Cloud account and choose a tariff, then run

    rclone config

    This will guide you through an interactive setup process:

    rclone ls remote:directory

    Sync /home/local/directory to the remote path, deleting any excess files in the path.

    rclone sync -i /home/local/directory remote:directory
Modified time

    Files support a modification time attribute with up to 1 second precision. Directories do not have a modification time, which is shown as "Jan 1 1970".

    Hash checksums

    Hash sums use a custom Mail.ru algorithm based on SHA1. If file size is less than or equal to the SHA1 block size (20 bytes), its hash is simply its data right-padded with zero bytes. Hash sum of a larger file is computed as a SHA1 sum of the file data bytes concatenated with a decimal representation of the data length.
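
Following that description, the large-file case can be reproduced with standard shell tools; a sketch (GNU stat assumed, file name illustrative):

    # mailru-style hash for a file larger than 20 bytes:
    # SHA1 over the raw bytes followed by the decimal length
    f=big.bin
    { cat "$f"; printf '%s' "$(stat -c%s "$f")"; } | sha1sum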


    Removing a file or directory actually moves it to the trash, which is not visible to rclone but can be seen in a web browser. The trashed file still occupies part of total quota. If you wish to empty your trash and free some quota, you can use the rclone cleanup remote: command, which will permanently delete all your trashed files. This command does not take any path arguments.

    Quota information

    To view your current quota you can use the rclone about remote: command which will display your usage limit (quota) and the current usage.

    -

    Restricted filename characters

    +

    Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:

    @@ -16446,13 +17165,10 @@ y/e/d> y

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.


Standard options

Here are the standard options specific to mailru (Mail.ru Cloud).

--mailru-user

User name (usually email).

--mailru-pass

Password.

NB Input to this must be obscured - see rclone obscure.

--mailru-speedup-enable

Skip full upload if there is another file with same data hash.

This feature is called "speedup" or "put by hash". It is especially efficient in case of generally available files like popular books, video or audio clips, because files are searched by hash in all accounts of all mailru users. It is meaningless and ineffective if source file is unique or encrypted. Please note that rclone may need local memory and disk space to calculate content hash in advance and decide whether full upload is required. Also, if rclone does not know file size in advance (e.g. in case of streaming or partial uploads), it will not even try this optimization.
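
To turn the optimization off for a particular transfer, for example when uploading unique encrypted archives, something like this sketch should work (paths are illustrative):

    # skip the hash lookup - unique data cannot be "put by hash"
    rclone copy /backups/unique.tar remote:archive --mailru-speedup-enable=false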

Advanced options

Here are the advanced options specific to mailru (Mail.ru Cloud).

--mailru-speedup-file-patterns

Comma separated list of file name patterns eligible for speedup (put by hash).

Patterns are case insensitive and can contain '*' or '?' meta characters.

--mailru-speedup-max-disk

This option allows you to disable speedup (put by hash) for large files.

Reason is that preliminary hashing can exhaust your RAM or disk space.

--mailru-check-hash

What should copy do if file checksum is mismatched or invalid.

--mailru-user-agent

HTTP user agent used internally by client.

Defaults to "rclone/VERSION" or "--user-agent" provided on command line.

--mailru-quirks

Comma separated list of internal maintenance flags.

This option must not be used by an ordinary user. It is intended only to facilitate remote troubleshooting of backend issues. Strict meaning of flags is not documented and not guaranteed to persist between releases. Quirks will be removed when the backend grows stable. Supported quirks: atomicmkdir binlist unknowndirs

    --mailru-encoding

    This sets the encoding for the backend.

See the encoding section in the overview for more info.

Limitations

File size limits depend on your account. A single file size is limited by 2G for a free account and unlimited for paid tariffs. Please refer to the Mail.ru site for the total uploaded size limits.

Note that Mailru is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    Mega

    Mega is a cloud storage and file hosting service known for its security feature where all files are encrypted locally before they are uploaded. This prevents anyone (including employees of Mega) from accessing the files without knowledge of the key used for encryption.

    This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption.

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.


    Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    rclone copy /home/source remote:backup

    Modified time and hashes

    Mega does not support modification times or hashes yet.

Restricted filename characters


Note that once blocked, the use of other tools (such as megacmd) is not a sure workaround: the following megacmd login times have been observed in succession for a blocked remote: 7 min, 20 min, 30 min, 30 min, 30 min. Web access looks unaffected though.

    Investigation is continuing in relation to workarounds based on timeouts, pacers, retrials and tpslimits - if you discover something relevant, please post on the forum.

    So, if rclone was working nicely and suddenly you are unable to log-in and you are sure the user and the password are correct, likely you have got the remote blocked for a while.

Standard options

Here are the standard options specific to mega (Mega).

--mega-user

User name.

Advanced options

    Here are the advanced options specific to mega (Mega).

    --mega-debug

    Output more debug from Mega.


    --mega-encoding

    This sets the encoding for the backend.

See the encoding section in the overview for more info.

Limitations

    This backend uses the go-mega go library which is an opensource go library implementing the Mega API. There doesn't appear to be any documentation for the mega protocol beyond the mega C++ SDK source code so there are likely quite a few errors still remaining in this library.

    Mega allows duplicate files which may confuse rclone.

    Memory

    The memory backend is an in RAM backend. It does not persist its data - use the local backend for that.

    The memory backend behaves like a bucket based remote (e.g. like s3). Because it has no parameters you can just use it with the :memory: remote name.


    Configuration

    You can configure it as a remote like this with rclone config too if you want to:

    No remotes found - make a new one
     n) New remote
 rclone serve webdav :memory:
     rclone serve sftp :memory:

    Modified time and hashes

    The memory backend supports MD5 hashes and modification times accurate to 1 nS.

Restricted filename characters

    The memory backend replaces the default restricted characters set.

    Microsoft Azure Blob Storage

    Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:container/path/to/dir.


    Configuration

    Here is an example of making a Microsoft Azure Blob Storage configuration. For a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    rclone sync -i /home/local/directory remote:container

    --fast-list

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

Modified time

    The modified time is stored as metadata on the object with the mtime key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no overhead to using it.

    Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:


    Note that you can't see or access any other containers - this will fail

    rclone ls azureblob:othercontainer

    Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment such as a CI build server.
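
For example, a remote restricted to a single container could be created non-interactively like this sketch (the SAS URL is a placeholder):

    # the SAS URL embeds the container and the credentials
    rclone config create azsas azureblob \
        sas_url "https://ACCOUNT.blob.core.windows.net/CONTAINER?sv=...&sig=..."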

Standard options

Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).

--azureblob-account

Storage Account Name.

Leave blank to use SAS URL or Emulator.

--azureblob-key

Storage Account Key.

Leave blank to use SAS URL or Emulator.

--azureblob-sas-url

SAS URL for container level access only.

Leave blank if using account/key or Emulator.

--azureblob-use-msi

Use a managed service identity to authenticate (only works in Azure).

When true, use a managed service identity to authenticate to Azure Storage instead of a SAS token or account key.

If the VM(SS) on which this program is running has a system-assigned identity, it will be used by default. If the resource has no system-assigned but exactly one user-assigned identity, the user-assigned identity will be used by default. If the resource has multiple user-assigned identities, the identity to use must be explicitly specified using exactly one of the msi_object_id, msi_client_id, or msi_mi_res_id parameters.

--azureblob-use-emulator

Uses local storage emulator if provided as 'true'.

Leave blank if using real azure storage endpoint.

Advanced options

Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).

--azureblob-msi-object-id

Object ID of the user-assigned MSI to use, if any.

Leave blank if msi_client_id or msi_mi_res_id specified.

--azureblob-msi-client-id

Object ID of the user-assigned MSI to use, if any.

Leave blank if msi_object_id or msi_mi_res_id specified.

--azureblob-msi-mi-res-id

Azure resource ID of the user-assigned MSI to use, if any.

Leave blank if msi_client_id or msi_object_id specified.

--azureblob-endpoint

Endpoint for the service.

Leave blank normally.

--azureblob-upload-cutoff

Cutoff for switching to chunked upload (<= 256 MiB) (deprecated).

--azureblob-memory-pool-flush-time

How often internal memory buffer pools will be flushed.

Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool.

    --azureblob-encoding

    This sets the encoding for the backend.

See the encoding section in the overview for more info.

--azureblob-public-access

Public access level of a container: blob or container.

--azureblob-no-head-object

If set, do not do HEAD before GET when getting objects.

Limitations

    MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.

    rclone about is not supported by the Microsoft Azure Blob storage backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

See List of backends that do not support rclone about and rclone about.

Azure Storage Emulator Support

You can test rclone with the storage emulator locally. To do this make sure the Azure storage emulator is installed locally, and set up a new remote with rclone config following the instructions described in the introduction. Set the use_emulator config option to true. You do not need to provide a default account name or key if using the emulator.

    Microsoft OneDrive

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.


    Configuration

    The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
1. Open https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and then click New registration.
2. Enter a name for your app, choose account type Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox), select Web in Redirect URI, then type (do not copy and paste) http://localhost:53682/ and click Register. Copy and keep the Application (client) ID under the app name for later use.
3. Under manage select Certificates & secrets, click New client secret. Enter a description (can be anything) and set Expires to 24 months. Copy and keep that secret Value for later use (you won't be able to see this value afterwards).
4. Under manage select API permissions, click Add a permission and select Microsoft Graph then select delegated permissions.
5. Search and select the following permissions: Files.Read, Files.ReadWrite, Files.Read.All, Files.ReadWrite.All, offline_access, User.Read. Once selected click Add permissions at the bottom.

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    Deleting files

    Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.

Standard options

Here are the standard options specific to onedrive (Microsoft OneDrive).

--onedrive-client-id

OAuth Client Id.

Leave blank normally.

--onedrive-client-secret

OAuth Client Secret.

Leave blank normally.

Advanced options

    Here are the advanced options specific to onedrive (Microsoft OneDrive).

    --onedrive-token

    OAuth Access Token as a JSON blob.

--onedrive-auth-url

Auth server URL.

Leave blank to use the provider defaults.

--onedrive-token-url

Token server url.

Leave blank to use the provider defaults.

--onedrive-drive-id

The ID of the drive to use.

--onedrive-drive-type

The type of the drive (personal | business | documentLibrary).

--onedrive-no-versions

Remove all versions on modifying operations.

    Onedrive for business creates versions when rclone uploads new files overwriting an existing one and when it sets the modification time.

    These versions take up space out of the quota.

    This flag checks for versions after file upload and setting modification time and removes all but the last version.


    --onedrive-encoding

    This sets the encoding for the backend.

See the encoding section in the overview for more info.

Limitations

    If you don't use rclone for 90 days the refresh token will expire. This will result in authorization problems. This is easy to fix by running the rclone config reconnect remote: command to get a new token and refresh token.

Naming

    Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ? in it, it will be mapped to ？ instead.

File sizes

The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business (Updated 13 Jan 2021).

Path length

The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones.

Number of files

OneDrive seems to be OK with at least 50,000 files in a folder, but at 100,000 rclone will get errors listing the directory like couldn’t list files: UnknownError:. See #2707 for more info.

An official document about the limitations for different types of OneDrive can be found here.

Versions

Every change in a file on OneDrive causes the service to create a new version of the file. This counts against a user's quota. For example changing the modification time of a file creates a second version, so the file apparently uses twice the space.

    For example the copy command is affected by this as rclone copies the file and then afterwards sets the modification time to match the source file which uses another version.

    You can use the rclone cleanup command (see below) to remove all old versions.

  • Use rclone to upload or modify files. (I also use the --no-update-modtime flag)
  • Restore the versioning settings after using rclone. (Optional)
Cleanup

    OneDrive supports rclone cleanup which causes rclone to look through every file under the path supplied and delete all version but the current version. Because this involves traversing all the files, then querying each file for versions it can be quite slow. Rclone does --checkers tests in parallel. The command also supports -i which is a great way to see what it would do.

    rclone cleanup -i remote:path/subdir # interactively remove all old version for path/subdir
     rclone cleanup remote:path/subdir    # unconditionally remove all old version for path/subdir

    NB Onedrive personal can't currently delete versions

Troubleshooting

Excessive throttling or blocked on SharePoint

If you experience excessive throttling or are being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: --user-agent "ISV|rclone.org|rclone/v1.55.1"

    The specific details can be found in the Microsoft document: Avoid getting throttled or blocked in SharePoint Online

Unexpected file size/hash differences on Sharepoint

    It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and hash checks to fail. There are also other situations that will cause OneDrive to report inconsistent file sizes. To use rclone with such affected files on Sharepoint, you may disable these checks with the following command line arguments:

    --ignore-checksum --ignore-size

    Alternatively, if you have write access to the OneDrive files, it may be possible to fix this problem for certain files, by attempting the steps below. Open the web interface for OneDrive and find the affected files (which will be in the error messages/log for rclone). Simply click on each of these files, causing OneDrive to open them on the web. This will cause each file to be converted in place to a format that is functionally equivalent but which will no longer trigger the size discrepancy. Once all problematic files are converted you will no longer need the ignore options above.

Replacing/deleting existing files on Sharepoint gets "item not found"

    It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) may return "item not found" errors when users try to replace or delete uploaded files; this seems to mainly affect Office files (.docx, .xlsx, etc.). As a workaround, you may use the --backup-dir <BACKUP_DIR> command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into the directory rclone-backup-dir on backend mysharepoint, you may use:

    --backup-dir mysharepoint:rclone-backup-dir
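
A complete sync invocation using this workaround might then look like the following sketch (source and library paths are illustrative):

    # replaced/deleted files are moved aside instead of deleted in place
    rclone sync -i /data mysharepoint:library \
        --backup-dir mysharepoint:rclone-backup-dir
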
access_denied (AADSTS65005)

    Error: access_denied
     Code: AADSTS65005
     Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.

    This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins.

    However, there are other ways to interact with your OneDrive account. Have a look at the webdav backend: https://rclone.org/webdav/#sharepoint

invalid_grant (AADSTS50076)

    Error: invalid_grant
     Code: AADSTS50076
     Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.

    If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run rclone config, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: Already have a token - refresh?. For this question, answer y and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.


    On Sharepoint and OneDrive for Business, rclone link may return an "Invalid request" error. A possible cause is that the organisation admin didn't allow public links to be made for the organisation/sharepoint library. To fix the permissions as an admin, take a look at the docs: 1, 2.

    OpenDrive

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.


    Configuration

    Here is an example of how to make a remote called remote. First run:

     rclone config

    This will guide you through an interactive setup process:

    rclone copy /home/source remote:backup

    Modified time and MD5SUMs

    OpenDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.

Restricted filename characters


    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard options

Here are the standard options specific to opendrive (OpenDrive).

--opendrive-username

Username.

Advanced options

Here are the advanced options specific to opendrive (OpenDrive).

--opendrive-encoding

This sets the encoding for the backend.

See the encoding section in the overview for more info.

Limitations

    Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ? in it, it will be mapped to ？ instead.

    rclone about is not supported by the OpenDrive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

See List of backends that do not support rclone about and rclone about.

    QingStor

    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.


    Configuration

    Here is an example of making an QingStor configuration. First run

    rclone config

    This will guide you through an interactive setup process.


    Restricted filename characters

    The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced.

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Standard options

Here are the standard options specific to qingstor (QingCloud Object Storage).

--qingstor-env-auth

Get QingStor credentials from runtime.

Only applies if access_key_id and secret_access_key are blank.

--qingstor-access-key-id

QingStor Access Key ID.

Leave blank for anonymous access or runtime credentials.

--qingstor-secret-access-key

QingStor Secret Access Key (password).

Leave blank for anonymous access or runtime credentials.

--qingstor-endpoint

Enter an endpoint URL to connect to the QingStor API.

Leave blank to use the default value "https://qingstor.com:443".

--qingstor-zone

Zone to connect to.

Default is "pek3a".

Advanced options

    Here are the advanced options specific to qingstor (QingCloud Object Storage).

    --qingstor-connection-retries

    Number of connection retries.

--qingstor-upload-cutoff

Cutoff for switching to chunked upload.

    Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB.
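
For instance, to force multipart uploads for anything over 100 MiB using 16 MiB parts, a sketch (values and remote name are illustrative):

    rclone copy big.iso remote:bucket \
        --qingstor-upload-cutoff 100M --qingstor-chunk-size 16M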

    --qingstor-encoding

    This sets the encoding for the backend.

See the encoding section in the overview for more info.

Limitations

    rclone about is not supported by the qingstor backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

See List of backends that do not support rclone about and rclone about.

Sia

Sia (sia.tech) is a decentralized cloud storage platform based on blockchain technology. With rclone you can use it like any other remote filesystem or mount Sia folders locally. The technology behind it involves a number of new concepts such as Siacoins and Wallet, Blockchain and Consensus, Renting and Hosting, and so on. If you are new to it, you should first familiarize yourself with their excellent support documentation.

Introduction

Before you can use rclone with Sia, you will need to have a running copy of Sia-UI or siad (the Sia daemon) locally on your computer or on your local network (e.g. a NAS). Please follow the Get started guide and install one.

rclone interacts with the Sia network by talking to the Sia daemon via HTTP API, which is usually available on port 9980. By default you will run the daemon locally on the same computer, so it's safe to leave the API password blank (the API URL will be http://127.0.0.1:9980 making external access impossible).

However, if you want to access a Sia daemon running on another node, for example due to memory constraints or because you want to share a single daemon between several rclone and Sia-UI instances, you'll need to make a few more provisions:

- Ensure you have the Sia daemon installed directly or in a docker container because Sia-UI does not support this mode natively.
- Run it on an externally accessible port, for example provide --api-addr :9980 and --disable-api-security arguments on the daemon command line.
- Enforce API password for the siad daemon via environment variable SIA_API_PASSWORD or a text file named apipassword in the daemon directory.
- Set the rclone backend option api_password taking it from the above locations.

Notes:

1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via command line siac wallet unlock. Alternatively you can make siad unlock your wallet automatically upon startup by running it with environment variable SIA_WALLET_PASSWORD.
2. If siad cannot find the SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR directory, it will generate a random password and store it in the text file named apipassword under YOUR_HOME/.sia/ directory on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows. Remember this when you configure password in rclone.
3. The only way to use siad without API password is to run it on localhost with command line argument --authorize-api=false, but this is insecure and strongly discouraged.
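
Once a remote is configured (see below), a quick connectivity check against a daemon on another host can override the API URL per command; a sketch (address is hypothetical):

    # list top-level Sia directories via a remote daemon
    rclone lsd mySia: --sia-api-url http://192.168.1.11:9980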

Configuration

Here is an example of how to make a sia remote called mySia. First, run:

    rclone config

This will guide you through an interactive setup process:

    No remotes found - make a new one
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> mySia
    Type of storage to configure.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    ...
    29 / Sia Decentralized Cloud
       \ "sia"
    ...
    Storage> sia
    Sia daemon API URL, like http://sia.daemon.host:9980.
    Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
    Keep default if Sia daemon runs on localhost.
    Enter a string value. Press Enter for the default ("http://127.0.0.1:9980").
    api_url> http://127.0.0.1:9980
    Sia Daemon API Password.
    Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
    y) Yes type in my own password
    g) Generate random password
    n) No leave this optional password blank (default)
    y/g/n> y
    Enter the password:
    password:
    Confirm the password:
    password:
    Edit advanced config?
    y) Yes
    n) No (default)
    y/n> n
    --------------------
    [mySia]
    type = sia
    api_url = http://127.0.0.1:9980
    api_password = *** ENCRYPTED ***
    --------------------
    y) Yes this is OK (default)
    e) Edit this remote
    d) Delete this remote
    y/e/d> y

Once configured, you can then use rclone like this:

    rclone lsd mySia:

    rclone ls mySia:

    rclone copy /home/source mySia:backup

Standard options

Here are the standard options specific to sia (Sia Decentralized Cloud).

--sia-api-url

Sia daemon API URL, like http://sia.daemon.host:9980.

Note that siad must run with --disable-api-security to open API port for other hosts (not recommended). Keep default if Sia daemon runs on localhost.

--sia-api-password

Sia Daemon API Password.

Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.

NB Input to this must be obscured - see rclone obscure.

    Advanced options

    +

    Here are the advanced options specific to sia (Sia Decentralized Cloud).

    +

    --sia-user-agent

    +

Siad User Agent.

+

Sia daemon requires the 'Sia-Agent' user agent by default for security.

    + +

    --sia-encoding

    +

    This sets the encoding for the backend.

    +

    See the encoding section in the overview for more info.

    + +

    Limitations

    +

    Swift

Swift refers to OpenStack Object Storage, with commercial implementations including Rackspace Cloud Files, Memset Memstore, and OVH.

Paths are specified as remote:container (or remote: for the lsd command). You may put subdirectories in too, e.g. remote:container/path/to/dir.

    +

    Configuration

    Here is an example of making a swift configuration. First run

    rclone config

    This will guide you through an interactive setup process.

    @@ -18112,7 +18991,33 @@ rclone lsd myremote:

    --update and --use-server-modtime

    As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.

    For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
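A sketch of such an upload, with a hypothetical container name:

    rclone copy --update --use-server-modtime /home/source remote:container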

    -

    Standard Options

    +

    Modified time

    +

    The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

    +

    This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
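For illustration, a hypothetical object might carry metadata like:

    X-Object-Meta-Mtime: 1415129024.897642000

(the value being floating point seconds since the epoch, as described above).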

    +

    Restricted filename characters

Character   Value   Replacement
---------   -----   -----------
NUL         0x00    ␀
/           0x2F    ／
    +

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    +

    Standard options

    Here are the standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).

    --swift-env-auth

    Get swift credentials from environment variables in standard OpenStack form.

@@ -18125,11 +19030,12 @@ rclone lsd myremote:
@@ -18201,7 +19107,7 @@ rclone lsd myremote:
  • Default: ""
  • --swift-tenant

    -

    Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)

    +

    Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME).

    --swift-tenant-id

    -

    Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)

    +

    Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID).

    --swift-tenant-domain

    -

    Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)

    +

    Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME).

    --swift-region

    -

    Region name - optional (OS_REGION_NAME)

    +

    Region name - optional (OS_REGION_NAME).

    --swift-storage-url

    -

    Storage URL - optional (OS_STORAGE_URL)

    +

    Storage URL - optional (OS_STORAGE_URL).

    --swift-auth-token

    -

    Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)

    +

    Auth Token from alternate authentication - optional (OS_AUTH_TOKEN).

    --swift-application-credential-id

    -

    Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)

    +

    Application Credential ID (OS_APPLICATION_CREDENTIAL_ID).

    --swift-application-credential-name

    -

    Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)

    +

    Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME).

    --swift-application-credential-secret

    -

    Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)

    +

    Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET).

    --swift-auth-version

    -

    AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)

    +

    AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION).

    --swift-endpoint-type

    -

    Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)

    +

    Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE).

    --swift-storage-policy

    -

    The storage policy to use when creating a new container

    +

    The storage policy to use when creating a new container.

    This applies the specified storage policy when creating a new container. The policy cannot be changed afterwards. The allowed configuration values and their meaning depend on your Swift storage provider.

    -

    Advanced Options

    +

    Advanced options

    Here are the advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).

    --swift-leave-parts-on-error

    -

    If true avoid calling abort upload on a failure. It should be set to true for resuming uploads across different sessions.

    +

    If true avoid calling abort upload on a failure.

    +

    It should be set to true for resuming uploads across different sessions.

    --swift-encoding

    This sets the encoding for the backend.

    -

    See: the encoding section in the overview for more info.

    +

    See the encoding section in the overview for more info.

    -

    Modified time

    -

    The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.

    -

    This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.

    -

    Restricted filename characters

Character   Value   Replacement
---------   -----   -----------
NUL         0x00    ␀
/           0x2F    ／
    -

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Limitations

    +

    Limitations

    The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

    -

    Troubleshooting

    -

    Rclone gives Failed to create file system for "remote:": Bad Request

    +

    Troubleshooting

    +

    Rclone gives Failed to create file system for "remote:": Bad Request

    Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift.

    So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies flag.

    This may also be caused by specifying the region when you shouldn't have (e.g. OVH).

    -

    Rclone gives Failed to create file system: Response didn't have storage url and auth token

    +

    Rclone gives Failed to create file system: Response didn't have storage url and auth token

    This is most likely caused by forgetting to specify your tenant when setting up a swift remote.

    +

    OVH Cloud Archive

    +

    To use rclone with OVH cloud archive, first use rclone config to set up a swift backend with OVH, choosing pca as the storage_policy.

    +

    Uploading Objects

    +

Uploading objects to OVH cloud archive is no different to object storage; you simply run the command you like (move, copy or sync) to upload the objects. Once uploaded the objects will show in a "Frozen" state within the OVH control panel.

    +

    Retrieving Objects

    +

    To retrieve objects use rclone copy as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following:

    +

    2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)

    +

    Rclone will wait for the time specified then retry the copy.
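As a sketch, with hypothetical container and directory names, a retrieval would simply be:

    rclone copy --progress remote:archive-container/dir /local/restore

The first invocation requests the unfreeze and waits as described above; once the objects are unfrozen the copy proceeds as normal.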

    pCloud

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    +

    Configuration

    The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -18457,9 +19347,8 @@ y/e/d> y
    rclone copy /home/source remote:backup

    Modified time and hashes

pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a modification time pCloud requires the object to be re-uploaded.

    -

    pCloud supports MD5 and SHA1 type hashes in the US region but and SHA1 only in the EU region, so you can use the --checksum flag.

    -

    (Note that pCloud also support SHA256 in the EU region, but rclone does not have support for that yet.)

    -

    Restricted filename characters

    +

    pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256 hashes in the EU region, so you can use the --checksum flag.

    +

    Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:

    @@ -18486,10 +19375,11 @@ y/e/d> y

    However you can set this to restrict rclone to a specific folder hierarchy.

    In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the folder field of the URL when you open the relevant folder in the pCloud web interface.

    So if the folder you want rclone to use has a URL which looks like https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid in the browser, then you use 5xxxxxxxx8 as the root_folder_id in the config.
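The resulting config section would then look something like this (a sketch reusing the example ID above):

    [remote]
    type = pcloud
    root_folder_id = 5xxxxxxxx8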

    -

    Standard Options

    +

    Standard options

    Here are the standard options specific to pcloud (Pcloud).

    --pcloud-client-id

    -

    OAuth Client Id Leave blank normally.

    +

    OAuth Client Id.

    +

    Leave blank normally.

    --pcloud-client-secret

    -

    OAuth Client Secret Leave blank normally.

    +

    OAuth Client Secret.

    +

    Leave blank normally.

    -

    Advanced Options

    +

    Advanced options

    Here are the advanced options specific to pcloud (Pcloud).

    --pcloud-token

    OAuth Access Token as a JSON blob.

    @@ -18515,7 +19406,8 @@ y/e/d> y
  • Default: ""
  • --pcloud-auth-url

    -

    Auth server URL. Leave blank to use the provider defaults.

    +

    Auth server URL.

    +

    Leave blank to use the provider defaults.

    --pcloud-token-url

    -

    Token server url. Leave blank to use the provider defaults.

    +

    Token server url.

    +

    Leave blank to use the provider defaults.

    --pcloud-encoding

    This sets the encoding for the backend.

    -

    See: the encoding section in the overview for more info.

    +

    See the encoding section in the overview for more info.

    @@ -18645,7 +19539,7 @@ y/e/d>

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Standard Options

    +

    Standard options

    Here are the standard options specific to premiumizeme (premiumize.me).

    --premiumizeme-api-key

    API Key.

    @@ -18656,24 +19550,25 @@ y/e/d>
  • Type: string
  • Default: ""
  • -

    Advanced Options

    +

    Advanced options

    Here are the advanced options specific to premiumizeme (premiumize.me).

    --premiumizeme-encoding

    This sets the encoding for the backend.

    -

    See: the encoding section in the overview for more info.

    +

    See the encoding section in the overview for more info.

    -

    Limitations

    +

    Limitations

    Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

premiumize.me file names can't have the \ or " characters in. rclone maps these to and from the identical looking unicode equivalents ＼ and ＂.

    premiumize.me only supports filenames up to 255 characters in length.

    put.io

    Paths are specified as remote:path

    put.io paths may be as deep as required, e.g. remote:directory/subdirectory.

    +

    Configuration

    The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -18736,7 +19631,7 @@ e/n/d/r/c/s/q> q
    rclone ls remote:

    To copy a local directory to a put.io directory called backup

    rclone copy /home/source remote:backup
    -

    Restricted filename characters

    +

    Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:

    @@ -18755,11 +19650,11 @@ e/n/d/r/c/s/q> q

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Advanced Options

    +

    Advanced options

    Here are the advanced options specific to putio (Put.io).

    --putio-encoding

    This sets the encoding for the backend.

    -

    See: the encoding section in the overview for more info.

    +

    See the encoding section in the overview for more info.

    Seafile

This is a backend for the Seafile storage service:
- It works with both the free community edition and the professional edition.
- Seafile versions 6.x and 7.x are both supported.
- Encrypted libraries are also supported.
- It supports 2FA enabled users.

    -

    Root mode vs Library mode

    +

    Configuration

There are two distinct modes you can set up your remote in:
- You point your remote to the root of the server, meaning you don't specify a library during the configuration. Paths are specified as remote:library. You may put subdirectories in too, e.g. remote:library/path/to/dir.
- You point your remote to a specific library during the configuration. Paths are specified as remote:path/to/dir. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode.)

    Configuration in root mode

    Here is an example of making a seafile configuration for a user with no two-factor authentication. First run

    @@ -18927,7 +19822,7 @@ y/e/d> y
    rclone sync -i /home/local/directory seafile:

    --fast-list

Seafile version 7+ supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Please note this is not supported on seafile server version 6.x.

    -

    Restricted filename characters

    +

    Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:

    @@ -18968,10 +19863,10 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/

    Compatibility

It has been actively tested using the seafile docker image of these versions:
- 6.3.4 community edition
- 7.0.5 community edition
- 7.1.3 community edition

    Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.

    -

    Standard Options

    +

    Standard options

    Here are the standard options specific to seafile (seafile).

    --seafile-url

    -

    URL of seafile host to connect to

    +

    URL of seafile host to connect to.

    --seafile-user

    -

    User name (usually email address)

    +

    User name (usually email address).

    --seafile-pass

    -

    Password

    +

    Password.

    NB Input to this must be obscured - see rclone obscure.

    --seafile-2fa

    -

    Two-factor authentication ('true' if the account has 2FA enabled)

    +

    Two-factor authentication ('true' if the account has 2FA enabled).

    --seafile-library

    -

    Name of the library. Leave blank to access all non-encrypted libraries.

    +

    Name of the library.

    +

    Leave blank to access all non-encrypted libraries.

    --seafile-library-key

    -

    Library password (for encrypted libraries only). Leave blank if you pass it through the command line.

    +

    Library password (for encrypted libraries only).

    +

    Leave blank if you pass it through the command line.

    NB Input to this must be obscured - see rclone obscure.

    --seafile-auth-token

    -

    Authentication token

    +

    Authentication token.

    -

    Advanced Options

    +

    Advanced options

    Here are the advanced options specific to seafile (seafile).

    --seafile-create-library

    -

    Should rclone create a library if it doesn't exist

    +

    Should rclone create a library if it doesn't exist.

    --seafile-encoding

    This sets the encoding for the backend.

    -

    See: the encoding section in the overview for more info.

    +

    See the encoding section in the overview for more info.

    SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.

Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory. For example, rclone lsd remote: would list the home directory of the user configured in the rclone remote config (i.e. /home/sftpuser). However, rclone lsd remote:/ would list the root directory of the remote machine (i.e. /).

    -

    "Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net, on the other hand, requires users to OMIT the leading /.

    +

    Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net, on the other hand, requires users to OMIT the leading /.

    +

    Configuration

    Here is an example of making an SFTP configuration. First run

    rclone config

    This will guide you through an interactive setup process.

    @@ -19131,14 +20029,17 @@ y/e/d> y

    Key files should be PEM-encoded private key files. For instance /home/$USER/.ssh/id_rsa. Only unencrypted OpenSSH or PEM encrypted files are supported.

The key file can be specified in either an external file (key_file) or contained within the rclone config file (key_pem). If using key_pem in the config file, the entry should be on a single line with new line ('\n' or '\r\n') separating lines. i.e.

    -

    key_pem = -----BEGIN RSA PRIVATE KEY-----0gAMbMbaSsd-----END RSA PRIVATE KEY-----

    +
    key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----

    This will generate it correctly for key_pem for use in the config:

    awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa
    -

    If you don't specify pass, key_file, or key_pem then rclone will attempt to contact an ssh-agent.

    -

    You can also specify key_use_agent to force the usage of an ssh-agent. In this case key_file or key_pem can also be specified to force the usage of a specific key in the ssh-agent.

    +

If you don't specify pass, key_file, key_pem, or ask_password then rclone will attempt to contact an ssh-agent. You can also specify key_use_agent to force the usage of an ssh-agent. In this case key_file or key_pem can also be specified to force the usage of a specific key in the ssh-agent.

    Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.

    -

    If you set the --sftp-ask-password option, rclone will prompt for a password when needed and no password has been configured.

    -

    If you have a certificate then you can provide the path to the public key that contains the certificate. For example:

    +

    If you set the ask_password option, rclone will prompt for a password when needed and no password has been configured.

    +

    Certificate-signed keys

    +

    With traditional key-based authentication, you configure your private key only, and the public key built into it will be used during the authentication process.

    +

    If you have a certificate you may use it to sign your public key, creating a separate SSH user certificate that should be used instead of the plain public key extracted from the private key. Then you must provide the path to the user certificate public key file in pubkey_file.

    +

    Note: This is not the traditional public key paired with your private key, typically saved as /home/$USER/.ssh/id_rsa.pub. Setting this path in pubkey_file will not work.

    +

    Example:

    [remote]
     type = sftp
     host = example.com
    @@ -19178,29 +20079,23 @@ known_hosts_file = ~/.ssh/known_hosts

    And then at the end of the session

    eval `ssh-agent -k`

    These commands can be used in scripts of course.

    -

    Modified time

    +

    Modified time

    Modified times are stored on the server to 1 second precision.

    Modified times are used in syncing and are fully supported.

    Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false in your RClone backend configuration to disable this behaviour.
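For example, a minimal sketch of such a backend section (the host value is a placeholder) with modification time setting disabled:

    [remote]
    type = sftp
    host = example.com
    set_modtime = false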

    -

    Standard Options

    +

    Standard options

    Here are the standard options specific to sftp (SSH/SFTP Connection).

    --sftp-host

    -

    SSH host to connect to

    +

    SSH host to connect to.

    +

    E.g. "example.com".

    --sftp-user

    -

    SSH username, leave blank for current username, $USER

    +

    SSH username, leave blank for current username, $USER.

    --sftp-port

    -

    SSH port, leave blank to use default (22)

    +

    SSH port, leave blank to use default (22).

    --sftp-key-pem

    -

    Raw PEM-encoded private key, If specified, will override key_file parameter.

    +

    Raw PEM-encoded private key.

    +

    If specified, will override key_file parameter.

    --sftp-key-file

    -

    Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.

    +

    Path to PEM-encoded private key file.

    +

    Leave blank or set key-use-agent to use ssh-agent.

    Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.

    --sftp-disable-hashcheck

    -

    Disable the execution of SSH commands to determine if remote file hashing is available. Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.

    +

    Disable the execution of SSH commands to determine if remote file hashing is available.

    +

    Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.

    -

    Advanced Options

    +

    Advanced options

    Here are the advanced options specific to sftp (SSH/SFTP Connection).

    --sftp-known-hosts-file

    Optional path to known_hosts file.

@@ -19322,7 +20220,7 @@ known_hosts_file = ~/.ssh/known_hosts
@@ -19357,7 +20255,8 @@ known_hosts_file = ~/.ssh/known_hosts
  • Default: true
  • --sftp-md5sum-command

    -

    The command used to read md5 hashes. Leave blank for autodetect.

    +

    The command used to read md5 hashes.

    +

    Leave blank for autodetect.

    --sftp-sha1sum-command

    -

    The command used to read sha1 hashes. Leave blank for autodetect.

    +

    The command used to read sha1 hashes.

    +

    Leave blank for autodetect.

    --sftp-use-fstat

    -

    If set use fstat instead of stat

    +

    If set use fstat instead of stat.

Some servers limit the number of open files and calling Stat after opening the file will throw an error from the server. Setting this flag will call Fstat instead of Stat which is called on an already open file handle.

    It has been found that this helps with IBM Sterling SFTP servers which have "extractability" level set to 1 which means only 1 file can be opened at any given time.

    --sftp-disable-concurrent-reads

    -

    If set don't use concurrent reads

    +

    If set don't use concurrent reads.

    Normally concurrent reads are safe to use and not using them will degrade performance, so this option is disabled by default.

Some servers limit the number of times a file can be downloaded. Using concurrent reads can trigger this limit, so if you have a server which returns

    Failed to copy: file does not exist
    @@ -19421,7 +20321,7 @@ known_hosts_file = ~/.ssh/known_hosts
  • Default: false
  • --sftp-disable-concurrent-writes

    -

    If set don't use concurrent writes

    +

    If set don't use concurrent writes.

    Normally rclone uses concurrent writes to upload files. This improves the performance greatly, especially for distant servers.

    This option disables concurrent writes should that be necessary.

    --sftp-idle-timeout

    -

    Max time before closing idle connections

    +

    Max time before closing idle connections.

    If no connections have been returned to the connection pool in the time given, rclone will empty the connection pool.

    Set to 0 to keep connections indefinitely.

    -

    Limitations

    +

    Limitations

    SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH. This remote checksumming (file hashing) is recommended and enabled by default. Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited. Set the configuration option disable_hashcheck to true to disable checksumming.

SFTP also supports about if the same login has shell access and df is in the remote's PATH. about will return the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote. about will fail if it does not have shell access or if df is not in the remote's PATH.

Note that on some SFTP servers (e.g. Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using disable_hashcheck is a good idea.

    @@ -19457,6 +20357,7 @@ known_hosts_file = ~/.ssh/known_hosts

    See rsync.net's documentation of rclone examples.

    SugarSync

    SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.

    +

    Configuration

    The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

     rclone config
    @@ -19523,13 +20424,13 @@ y/e/d> y

NB you can't create files in the top level folder: you have to create a folder, which rclone will create as a "Sync Folder" with SugarSync.

    Modified time and hashes

    SugarSync does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work as rclone can read the time files were uploaded.
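A sketch of a sync that relies on upload times instead:

    rclone sync -i --update /home/local/directory remote:folder

(remote:folder stands in for one of your Sync Folders.)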

    -

    Restricted filename characters

    +

    Restricted filename characters

    SugarSync replaces the default restricted characters set except for DEL.

    Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

    Deleting files

    Deleted files will be moved to the "Deleted items" folder by default.

    However you can supply the flag --sugarsync-hard-delete or set the config parameter hard_delete = true if you would like files to be deleted straight away.
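For example (a sketch; the path is a placeholder):

    rclone delete --sugarsync-hard-delete remote:folder/file.txt

or, equivalently, in the config file:

    [remote]
    type = sugarsync
    hard_delete = true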

    -

    Standard Options

    +

    Standard options

    Here are the standard options specific to sugarsync (Sugarsync).

    --sugarsync-app-id

    Sugarsync App ID.

    @@ -19550,7 +20451,7 @@ y/e/d> y
  • Default: ""
  • --sugarsync-private-access-key

    -

    Sugarsync Private Access Key

    +

    Sugarsync Private Access Key.

    Leave blank to use rclone's.

    -

    Advanced Options

    +

    Advanced options

    Here are the advanced options specific to sugarsync (Sugarsync).

    --sugarsync-refresh-token

    -

    Sugarsync refresh token

    +

    Sugarsync refresh token.

    Leave blank normally, will be auto configured by rclone.

    --sugarsync-authorization

    -

    Sugarsync authorization

    +

    Sugarsync authorization.

    Leave blank normally, will be auto configured by rclone.

    --sugarsync-authorization-expiry

    -

    Sugarsync authorization expiry

    +

    Sugarsync authorization expiry.

    Leave blank normally, will be auto configured by rclone.

    --sugarsync-user

    -

    Sugarsync user

    +

    Sugarsync user.

    Leave blank normally, will be auto configured by rclone.

    --sugarsync-root-id

    -

    Sugarsync root id

    +

    Sugarsync root id.

    Leave blank normally, will be auto configured by rclone.

    --sugarsync-deleted-id

    -

    Sugarsync deleted folder id

    +

    Sugarsync deleted folder id.

    Leave blank normally, will be auto configured by rclone.

    --sugarsync-encoding

    This sets the encoding for the backend.

    -

    See: the encoding section in the overview for more info.

    +

    See the encoding section in the overview for more info.

    -

    Limitations

    +

    Limitations

    rclone about is not supported by the SugarSync backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

See List of backends that do not support rclone about, and see rclone about.

    Tardigrade

    Tardigrade is an encrypted, secure, and cost-effective object storage service that enables you to store, back up, and archive large amounts of data in a decentralized manner.

    -

    Setup

    +

    Configuration

To make a new Tardigrade configuration you need one of the following:
* Access Grant that someone else shared with you.
* API Key of a Tardigrade project you are a member of.

    Here is an example of how to make a remote called remote. First run:

     rclone config
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
-

    Usage

    +

    Standard options

    +

    Here are the standard options specific to tardigrade (Tardigrade Decentralized Cloud Storage).

    +

    --tardigrade-provider

    +

    Choose an authentication method.

    + +

    --tardigrade-access-grant

    +

    Access grant.

    + +

    --tardigrade-satellite-address

    +

    Satellite address.

    +

    Custom satellite address should match the format: <nodeid>@<address>:<port>.

    + +

    --tardigrade-api-key

    +

    API key.

    + +

    --tardigrade-passphrase

    +

    Encryption passphrase.

    +

    To access existing objects enter passphrase used for uploading.

    + +

    Usage

Paths are specified as remote:bucket (or remote: for the lsf command). You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

    Once configured you can then use rclone like this.

    Create a new bucket

    @@ -19787,87 +20758,18 @@ y/e/d> y
    rclone sync -i --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/

    Or even between another cloud storage and Tardigrade.

    rclone sync -i --progress s3:bucket/path/to/dir/ tardigrade:bucket/path/to/dir/
    -

    Standard Options

    -

    Here are the standard options specific to tardigrade (Tardigrade Decentralized Cloud Storage).

    -

    --tardigrade-provider

    -

    Choose an authentication method.

    - -

    --tardigrade-access-grant

    -

    Access Grant.

    - -

    --tardigrade-satellite-address

    -

    Satellite Address. Custom satellite address should match the format: <nodeid>@<address>:<port>.

    - -

    --tardigrade-api-key

    -

    API Key.

    - -

    --tardigrade-passphrase

    -

    Encryption Passphrase. To access existing objects enter passphrase used for uploading.

    - -

    Limitations

    +

    Limitations

    rclone about is not supported by the rclone Tardigrade backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

See List of backends that do not support rclone about, and see rclone about.

    -

    Known issues

    +

    Known issues

    If you get errors like too many open files this usually happens when the default ulimit for system max open files is exceeded. Native Storj protocol opens a large number of TCP connections (each of which is counted as an open file). For a single upload stream you can expect 110 TCP connections to be opened. For a single download stream you can expect 35. This batch of connections will be opened for every 64 MiB segment and you should also expect TCP connections to be reused. If you do many transfers you eventually open a connection to most storage nodes (thousands of nodes).

To fix these, please raise your system limits. You can do this by issuing a ulimit -n 65536 just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. $HOME/.bashrc, or change the system-wide configuration, usually /etc/sysctl.conf and/or /etc/security/limits.conf, but please refer to your operating system manual.

    Uptobox

This is a backend for the Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional cloud storage provider and therefore not suitable for long term storage.

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Setup

    +

    Configuration

To configure an Uptobox backend you'll need your personal api token. You'll find it in your account settings.

    -

    Example

    Here is an example of how to make a remote called remote with the default setup. First run:

    rclone config

    This will guide you through an interactive setup process:

    @@ -19922,7 +20824,7 @@ y/e/d>
    rclone copy /home/source remote:backup

    Modified time and hashes

    Uptobox supports neither modified times nor checksums.

    -

    Restricted filename characters

    +

    Restricted filename characters

    In addition to the default restricted characters set the following characters are also replaced:

    @@ -19946,28 +20848,29 @@ y/e/d>

    Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

    -

    Standard Options

    +

    Standard options

    Here are the standard options specific to uptobox (Uptobox).

    --uptobox-access-token

    -

    Your access Token, get it from https://uptobox.com/my_account

    +

    Your access token.

    +

    Get it from https://uptobox.com/my_account.

    -

    Advanced Options

    +

    Advanced options

    Here are the advanced options specific to uptobox (Uptobox).

    --uptobox-encoding

    This sets the encoding for the backend.

    -

    See: the encoding section in the overview for more info.

    +

    See the encoding section in the overview for more info.

    -

    Limitations

    +

    Limitations

    Uptobox will delete inactive files that have not been accessed in 60 days.

rclone about is not supported by this backend, but an overview of used space can be seen in the Uptobox web interface.

    Union

    @@ -19977,9 +20880,73 @@ y/e/d>

The attributes :ro and :nc can be attached to the end of a path to tag the remote as read only or no create, e.g. remote:directory/subdirectory:ro or remote:directory/subdirectory:nc.

    Subfolders can be used in upstream remotes. Assume a union remote named backup with the remotes mydrive:private/backup. Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop.

    There will be no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop.

    +

    Configuration

    +

    Here is an example of how to make a union called remote for local folders. First run:

    +
     rclone config
    +

    This will guide you through an interactive setup process:

    +
    No remotes found - make a new one
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +name> remote
    +Type of storage to configure.
    +Choose a number from below, or type in your own value
    +[snip]
    +XX / Union merges the contents of several remotes
    +   \ "union"
    +[snip]
    +Storage> union
    +List of space separated upstreams.
    +Can be 'upstreama:test/dir upstreamb:', '\"upstreama:test/space:ro dir\" upstreamb:', etc.
    +Enter a string value. Press Enter for the default ("").
    +upstreams> remote1:dir1 remote2:dir2 remote3:dir3
    +Policy to choose upstream on ACTION class.
    +Enter a string value. Press Enter for the default ("epall").
    +action_policy>
    +Policy to choose upstream on CREATE class.
    +Enter a string value. Press Enter for the default ("epmfs").
    +create_policy>
    +Policy to choose upstream on SEARCH class.
    +Enter a string value. Press Enter for the default ("ff").
    +search_policy>
    +Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used.
    +Enter a signed integer. Press Enter for the default ("120").
    +cache_time>
    +Remote config
    +--------------------
    +[remote]
    +type = union
    +upstreams = remote1:dir1 remote2:dir2 remote3:dir3
    +--------------------
    +y) Yes this is OK
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +Current remotes:
    +
    +Name                 Type
    +====                 ====
    +remote               union
    +
    +e) Edit existing remote
    +n) New remote
    +d) Delete remote
    +r) Rename remote
    +c) Copy remote
    +s) Set configuration password
    +q) Quit config
    +e/n/d/r/c/s/q> q
    +

    Once configured you can then use rclone like this,

    +

    List directories in top level in remote1:dir1, remote2:dir2 and remote3:dir3

    +
    rclone lsd remote:
    +

    List all the files in remote1:dir1, remote2:dir2 and remote3:dir3

    +
    rclone ls remote:
    +

    Copy another local directory to the union directory called source, which will be placed into remote3:dir3

    +
    rclone copy C:\source remote:source

    Behavior / Policies

    The behavior of union backend is inspired by trapexit/mergerfs. All functions are grouped into 3 categories: action, create and search. These functions and categories can be assigned a policy which dictates what file or directory is chosen when performing that behavior. Any policy can be assigned to a function or category though some may not be very useful in practice. For instance: rand (random) may be useful for file creation (create) but could lead to very odd behavior if used for delete if there were more than one copy of the file.
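As a sketch, policies can be assigned per category in the config file; the values below are the defaults shown in the configuration example above:

    [remote]
    type = union
    upstreams = remote1:dir1 remote2:dir2 remote3:dir3
    action_policy = epall
    create_policy = epmfs
    search_policy = ff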

    -

    Function / Category classifications

    +

    Function / Category classifications

    @@ -20016,12 +20983,12 @@ y/e/d>
    -

    Path Preservation

    +

    Path Preservation

    Policies, as described below, are of two basic types. path preserving and non-path preserving.

    All policies which start with ep (epff, eplfs, eplus, epmfs, eprand) are path preserving. ep stands for existing path.

    A path preserving policy will only consider upstreams where the relative path being accessed already exists.

    When using non-path preserving policies paths will be created in target upstreams as necessary.

    -

    Quota Relevant Policies

    +

    Quota Relevant Policies

    Some policies rely on quota information. These policies should be used only if your upstreams support the respective quota fields.

    @@ -20050,7 +21017,7 @@ y/e/d>

    To check if your upstream supports the field, run rclone about remote: [flags] and see if the required field exists.
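For instance, the output of rclone about remote: looks something like this (the values here are purely illustrative); the presence of the Free line means free-space based policies can be used:

    Total:   17 GiB
    Used:    7.444 GiB
    Free:    1.315 GiB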

    -

    Filters

    +

    Filters

    Policies basically search upstream remotes and create a list of files / paths for functions to work on. The policy is responsible for filtering and sorting. The policy type defines the sorting but filtering is mostly uniform as described below.

    If all remotes are filtered an error will be returned.

    -

    Policy descriptions

    +

    Policy descriptions

The policy definitions are inspired by trapexit/mergerfs but not exactly the same. Some policy definitions could be different due to the much larger latency of remote file systems.

    @@ -20134,74 +21101,11 @@ y/e/d>
    -

    Setup

    -

    Here is an example of how to make a union called remote for local folders. First run:

    -
     rclone config
    -

    This will guide you through an interactive setup process:

    -
    No remotes found - make a new one
    -n) New remote
    -s) Set configuration password
    -q) Quit config
    -n/s/q> n
    -name> remote
    -Type of storage to configure.
    -Choose a number from below, or type in your own value
    -[snip]
    -XX / Union merges the contents of several remotes
    -   \ "union"
    -[snip]
    -Storage> union
    -List of space separated upstreams.
    -Can be 'upstreama:test/dir upstreamb:', '\"upstreama:test/space:ro dir\" upstreamb:', etc.
    -Enter a string value. Press Enter for the default ("").
    -upstreams> remote1:dir1 remote2:dir2 remote3:dir3
    -Policy to choose upstream on ACTION class.
    -Enter a string value. Press Enter for the default ("epall").
    -action_policy>
    -Policy to choose upstream on CREATE class.
    -Enter a string value. Press Enter for the default ("epmfs").
    -create_policy>
    -Policy to choose upstream on SEARCH class.
    -Enter a string value. Press Enter for the default ("ff").
    -search_policy>
    -Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used.
    -Enter a signed integer. Press Enter for the default ("120").
    -cache_time>
    -Remote config
    ---------------------
    -[remote]
    -type = union
    -upstreams = remote1:dir1 remote2:dir2 remote3:dir3
    ---------------------
    -y) Yes this is OK
    -e) Edit this remote
    -d) Delete this remote
    -y/e/d> y
    -Current remotes:
    -
    -Name                 Type
    -====                 ====
    -remote               union
    -
    -e) Edit existing remote
    -n) New remote
    -d) Delete remote
    -r) Rename remote
    -c) Copy remote
    -s) Set configuration password
    -q) Quit config
    -e/n/d/r/c/s/q> q
    -

    Once configured you can then use rclone like this,

    -

    List directories in top level in remote1:dir1, remote2:dir2 and remote3:dir3

    -
    rclone lsd remote:
    -

    List all the files in remote1:dir1, remote2:dir2 and remote3:dir3

    -
    rclone ls remote:
    -

    Copy another local directory to the union directory called source, which will be placed into remote3:dir3

    -
    rclone copy C:\source remote:source
    -

    Standard Options

    +

    Standard options

    Here are the standard options specific to union (Union merges the contents of several upstream fs).

    --union-upstreams

    -

    List of space separated upstreams. Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc.

    +

    List of space separated upstreams.

    +

    Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc.

    --union-cache-time

    -

    Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used.

    +

    Cache time of usage and free space (in seconds).

    +

    This option is only useful when a path preserving policy is used.

    --webdav-user

    -

User name. In case NTLM authentication is used, the username should be in the format 'Domain\User'.

    +

    User name.

    +

In case NTLM authentication is used, the username should be in the format 'Domain\User'.

    --webdav-bearer-token

    -

    Bearer token instead of user/pass (e.g. a Macaroon)

    +

    Bearer token instead of user/pass (e.g. a Macaroon).

    -

    Advanced Options

    +

    Advanced options

    Here are the advanced options specific to webdav (Webdav).

    --webdav-bearer-token-command

    -

    Command to run to get a bearer token

    +

    Command to run to get a bearer token.

    --webdav-encoding

    This sets the encoding for the backend.

    -

    See: the encoding section in the overview for more info.

    +

    See the encoding section in the overview for more info.

    Default encoding is Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8 for sharepoint-ntlm or identity otherwise.

    --webdav-headers

    -

    Set HTTP headers for all transactions

    +

    Set HTTP headers for all transactions.

Use this to set additional HTTP headers for all transactions.

The input format is a comma separated list of key,value pairs. Standard CSV encoding may be used.

    For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
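As a sketch, the same header could be supplied on the command line (header name and value are placeholders):

    rclone lsd remote: --webdav-headers "Cookie,name=value"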

    @@ -20492,6 +21393,7 @@ vendor = other bearer_token_command = oidc-token XDC

    Yandex Disk

    Yandex Disk is a cloud storage solution created by Yandex.

    +

    Configuration

    Here is an example of making a yandex configuration. First run

    rclone config

    This will guide you through an interactive setup process:

    @@ -20544,7 +21446,7 @@ y/e/d> y

    Sync /home/local/directory to the remote path, deleting any excess files in the path.

    rclone sync -i /home/local/directory remote:directory

    Yandex paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Modified time

    +

    Modified time

    Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified in RFC3339 with nanoseconds format.

    MD5 checksums

    MD5 checksums are natively supported by Yandex Disk.

    @@ -20552,15 +21454,14 @@ y/e/d> y

    If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments.

    Quota information

    To view your current quota you can use the rclone about remote: command which will display your usage limit (quota) and the current usage.

    -

    Restricted filename characters

    +

    Restricted filename characters

    The default restricted characters set are replaced.

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Limitations

    -

    When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GiB should be enough, so if you want to upload a 30 GiB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.

    -

    Standard Options

    +

    Standard options

    Here are the standard options specific to yandex (Yandex Disk).

    --yandex-client-id

    -

    OAuth Client Id Leave blank normally.

    +

    OAuth Client Id.

    +

    Leave blank normally.

    --yandex-client-secret

    -

    OAuth Client Secret Leave blank normally.

    +

    OAuth Client Secret.

    +

    Leave blank normally.

    -

    Advanced Options

    +

    Advanced options

    Here are the advanced options specific to yandex (Yandex Disk).

    --yandex-token

    OAuth Access Token as a JSON blob.

    @@ -20586,7 +21488,8 @@ y/e/d> y
  • Default: ""
  • --yandex-auth-url

    -

    Auth server URL. Leave blank to use the provider defaults.

    +

    Auth server URL.

    +

    Leave blank to use the provider defaults.

    --yandex-token-url

    -

    Token server url. Leave blank to use the provider defaults.

    +

    Token server url.

    +

    Leave blank to use the provider defaults.

    --yandex-encoding

    This sets the encoding for the backend.

    -

    See: the encoding section in the overview for more info.

    +

    See the encoding section in the overview for more info.

    +

    Limitations

    +

    When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GiB should be enough, so if you want to upload a 30 GiB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
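A sketch of the 30 GiB example above (the file path is a placeholder):

    rclone copy --timeout 60m /path/to/30GiB-file remote:backup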

    +

    Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription. Token generation will work without a mail account, but Rclone won't be able to complete any actions.

    +
    [403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.

    Zoho Workdrive

    Zoho WorkDrive is a cloud storage solution created by Zoho.

    +

    Configuration

    Here is an example of making a zoho configuration. First run

    rclone config

    This will guide you through an interactive setup process:

    @@ -20683,18 +21592,19 @@ y/e/d>

    Sync /home/local/directory to the remote path, deleting any excess files in the path.

    rclone sync -i /home/local/directory remote:directory

    Zoho paths may be as deep as required, eg remote:directory/subdirectory.

    -

    Modified time

    +

    Modified time

Modified times are currently not supported for Zoho Workdrive.

    Checksums

    No checksums are supported.

    Usage information

    To view your current quota you can use the rclone about remote: command which will display your current usage.

    -

    Restricted filename characters

    +

    Restricted filename characters

    Only control characters and invalid UTF-8 are replaced. In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload.

    -

    Standard Options

    +

    Standard options

    Here are the standard options specific to zoho (Zoho).

    --zoho-client-id

    -

    OAuth Client Id Leave blank normally.

    +

    OAuth Client Id.

    +

    Leave blank normally.

    --zoho-client-secret

    -

    OAuth Client Secret Leave blank normally.

    +

    OAuth Client Secret.

    +

    Leave blank normally.

    -

    Advanced Options

    +

    Advanced options

    Here are the advanced options specific to zoho (Zoho).

    --zoho-token

    OAuth Access Token as a JSON blob.

    @@ -20748,7 +21659,8 @@ y/e/d>
  • Default: ""
  • --zoho-auth-url

    -

    Auth server URL. Leave blank to use the provider defaults.

    +

    Auth server URL.

    +

    Leave blank to use the provider defaults.

    --zoho-token-url

    -

    Token server url. Leave blank to use the provider defaults.

    +

    Token server url.

    +

    Leave blank to use the provider defaults.

    --zoho-encoding

    This sets the encoding for the backend.

    -

    See: the encoding section in the overview for more info.

    +

    See the encoding section in the overview for more info.